list of number 1 overall mlb draft picks | List of first overall Major League Baseball draft picks - wikipedia
The First-Year Player Draft, also known as the Rule 4 Draft, is Major League Baseball's (MLB) primary mechanism for assigning amateur baseball players from high schools, colleges, and other amateur baseball clubs to its teams. Unlike most professional sports, MLB does not permit the trading of draft picks, so the draft order is determined solely by the previous season's standings; the team with the worst record receives the first pick. If two teams have identical records, the team with the worse record in the season before that receives the higher pick. In addition, teams that lost free agents in the previous off-season may be awarded "compensatory" picks. The first draft took place in 1965; it was introduced to prevent richer teams from negotiating wealthier contracts with top-level prospects and thereby monopolizing the player market. Originally, three drafts were held each year. The first draft took place in June and involved high-school graduates and college seniors who had just finished their seasons. The second draft took place in January for high school and college players who had graduated in December. The third draft took place in August and was for players who participated in American amateur summer leagues. The August draft was eliminated after two years, and the January draft lasted until 1986.
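The ordering rule described above is essentially a two-key sort. The following is a minimal illustrative sketch in Python using made-up team records (not actual standings), showing the worst-record-first ordering with the previous season's record as the tiebreaker:

```python
# Illustrative sketch of the draft-order rule described above (hypothetical data).
# Teams are sorted by current-season wins ascending (worst record picks first);
# ties are broken by the previous season's wins, also ascending.

teams = [
    # (name, current-season wins, previous-season wins) -- made-up numbers
    ("Team A", 60, 70),
    ("Team B", 60, 65),
    ("Team C", 95, 90),
]

draft_order = sorted(teams, key=lambda t: (t[1], t[2]))

for pick, (name, wins, prev_wins) in enumerate(draft_order, start=1):
    print(f"Pick {pick}: {name} ({wins} wins; {prev_wins} the year before)")
# Team B picks ahead of Team A: same record, but a worse previous season.
```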
In 1965, Rick Monday became MLB's first draft pick after being selected by the Kansas City Athletics. Royce Lewis is the most recent first overall pick; he was drafted by the Minnesota Twins in 2017. Overall, 23 of the 50 picks before 2015 have participated in the All-Star Game, and three (Bob Horner, Darryl Strawberry, and Bryce Harper) have won the Rookie of the Year Award. Twenty-five of the fifty picks before 2015 were drafted from high schools, one was drafted out of the independent American Association, and the others were drafted from universities. To date, Arizona State University and Vanderbilt University are the only schools from which multiple number-one overall draft picks have been chosen. No first overall pick was inducted into the Major League Baseball Hall of Fame until 2016, when Ken Griffey Jr. was inducted with a record 99.3% of votes cast.
In the 54 drafts that have taken place through 2018, 22 of the 30 MLB franchises have had the first pick at least once. The Toronto Blue Jays, St. Louis Cardinals, Los Angeles Dodgers, San Francisco Giants, Cleveland Indians, Cincinnati Reds, Boston Red Sox, and Colorado Rockies have never had the first pick. The Montreal Expos never had the first pick, but their successors the Washington Nationals have had it twice. The Oakland Athletics have never had the first pick, but the Kansas City Athletics had the very first pick in MLB Draft history. The New York Mets, San Diego Padres, and Houston Astros have each had the first pick five times, and the Seattle Mariners, Pittsburgh Pirates, and Tampa Bay Rays have each had it four times.
Goodwin chose to attend university instead of signing with the Chicago White Sox, and re-entered the draft once he graduated in 1975. Hochevar played college baseball for the University of Tennessee and was originally drafted by the Los Angeles Dodgers in 2005, but did not agree to a contract. He re-entered the draft in 2006 after spending the previous year with the independent Fort Worth Cats.
ctrl + page up will take you to | Page Up and Page Down keys - Wikipedia
The Page Up and Page Down keys (sometimes abbreviated as PgUp and PgDn) are two keys commonly found on computer keyboards.
The two keys are primarily used to scroll up or down in documents, but the scrolling distance varies between applications. In word processors, for instance, they may jump by an emulated physical page or by a screen view that may show only part of one page or many pages at once, depending on the zoom factor. In cases where the document is shorter than one screenful, Page Up and Page Down often have no visible effect at all.
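As a rough illustration of the clamping behavior described above, here is a minimal sketch in Python; it is not any particular application's actual logic, just one plausible way to model paging:

```python
def page_scroll(offset, viewport_height, doc_height, direction):
    """Return the new scroll offset after a Page Up/Down press.

    direction is +1 for Page Down, -1 for Page Up. The offset is clamped so
    the view never scrolls past the start or end of the document.
    """
    max_offset = max(0, doc_height - viewport_height)
    new_offset = offset + direction * viewport_height
    return min(max(new_offset, 0), max_offset)

# A document shorter than one screenful: Page Down has no visible effect.
print(page_scroll(0, 800, 500, +1))     # -> 0
# A longer document: Page Down advances one full view, then clamps at the end.
print(page_scroll(0, 800, 2000, +1))    # -> 800
print(page_scroll(800, 800, 2000, +1))  # -> 1200 (doc_height - viewport_height)
```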
Operating systems differ as to whether the keys (pressed without a modifier) move just the view -- e.g. in Mac OS X -- or also the input caret -- e.g. in Microsoft Windows. In right-to-left settings, PgUp will move either upwards or rightwards (instead of left) and PgDn will move down or leftwards (instead of right). The keys have accordingly been dubbed previous page and next page.
The arrow keys and the scroll wheel can also be used to scroll a document, although usually by smaller incremental distances. Used together with a modifier key, such as Alt, ⌥ Opt, ^ Ctrl or a combination thereof, they may act the same as the Page keys.
In most operating systems, if the Page Up or Page Down key is pressed along with the ⇧ Shift key in editable text, all the text scrolled over will be highlighted.
In some applications, the Page Up and Page Down keys behave differently in caret navigation (toggled with the F7 function key in Windows). For a claimed 30% of people, the paging keys move the text in the opposite direction to what they find natural, and software may contain settings to reverse the operation of these keys to accommodate that.
In August 2008, Microsoft received patent #7,415,666 for the functions of the two keys, Page Up and Page Down.
who were the voices in the emoji movie | The Emoji Movie - wikipedia
The Emoji Movie is a 2017 American 3D computer-animated science fiction comedy film directed by Tony Leondis, and written by Leondis, Eric Siegel and Mike White, based on the trend of emojis. It stars the voices of T.J. Miller, James Corden, Anna Faris, Maya Rudolph, Steven Wright, Rob Riggle, Jennifer Coolidge, Christina Aguilera, Sofía Vergara, Sean Hayes and Patrick Stewart. The film centers on Gene, a multi-expressional emoji who lives in a teenager's phone, and who sets out on a journey to become a normal meh emoji like his parents.
Produced by Sony Pictures Animation and distributed by Columbia Pictures, The Emoji Movie premiered on July 23, 2017, at the Regency Village Theatre and was theatrically released in the United States on July 28, 2017. The film grossed $217 million worldwide but was panned by critics. At the 38th Golden Raspberry Awards, it won in four categories: Worst Picture, Worst Director, Worst Screen Combo and Worst Screenplay, making it the first animated film to receive nominations and wins in those categories at those awards.
Gene is an emoji who lives in Textopolis, a digital city inside the phone of his user Alex. He is the son of two meh emojis named Mel and Mary, and is able to make multiple expressions despite his parents' upbringing. His parents are hesitant about him going to work, but Gene insists so that he can feel useful. Upon receiving a text from his crush Addie, Alex decides to send her an emoji. When Gene is selected, he panics, makes a panicked expression, and wrecks the text center. Gene is called in by Smiler, a smiley emoji and leader of the text center, who concludes that Gene is a "malfunction" and therefore must be deleted. Gene is chased by bots but is rescued by Hi-5, a once popular emoji who has since lost his fame due to lack of use. Hi-5 tells Gene that he can be fixed if they find a hacker, and accompanies him so that he can reclaim his own fame.
Smiler sends more bots to look for Gene when she finds out that he has left Textopolis, as his actions have caused Alex to think that his phone needs to be fixed. Gene and Hi-5 come to a piracy app where they meet a hacker emoji named Jailbreak, who wants to reach Dropbox so that she can live in the cloud. The trio is attacked by Smiler's bots but manages to escape into the game Candy Crush. Jailbreak reveals that Gene can be fixed in the cloud, and the group goes off into the Just Dance app. While there, Jailbreak is revealed to be a princess emoji who fled home after tiring of being stereotyped. They are once again attacked by bots, and their actions cause Alex to delete the Just Dance app. Gene and Jailbreak escape, but Hi-5 is taken along with the app and ends up in the trash.
Mel and Mary go searching for Gene and have a lethargic argument along the way. They make up in the Instagram app when Mel reveals that he, too, is a malfunction, which explains Gene's behavior. While traveling through Spotify, Jailbreak admits that she likes Gene just the way he is, and that he should not be ashamed of his malfunction. They make it to the trash and rescue Hi-5, but are soon attacked by an upgraded bot. They evade it and enter Dropbox, where they encounter a firewall. The gang get past it with a password, Addie's name, and make it to the cloud, where Jailbreak prepares to reprogram Gene. Gene admits his feelings for Jailbreak, but she wishes to stick to her plan of venturing into the cloud, unintentionally causing Gene to revert to his apathetic programming out of heartbreak. The upgraded bot takes Gene, and Hi-5 and Jailbreak race after them on a Twitter bird summoned by Jailbreak.
As Smiler prepares to delete Gene, Mel and Mary arrive and are also threatened. Jailbreak and Hi-5 arrive and disable the bot, which falls on top of Smiler. Alex, meanwhile, has taken his phone to the store and asked to have it erased to fix the problem. Out of desperation, Gene has himself texted to Addie, making numerous faces to express himself. Realizing that Addie has received a text from him, Alex stops his phone from being erased, saving the emojis, and finally gets to speak with Addie. Gene accepts himself for who he is and is celebrated by all of the emojis.
In a mid-credits scene, Smiler has been relegated to the "loser lounge" with the other forgotten and unused emojis for her crimes, wearing numerous braces because her teeth were cracked by the bot, and playing and losing a game of Go Fish.
The film was inspired by director Tony Leondis' love of Toy Story. Wanting to make a new take on the concept, he began asking himself, "What is the new toy out there that hasn't been explored?" At the same time, Leondis received a text message with an emoji, which helped him realize that this was the world he wanted to explore. In fleshing out the story, Leondis considered having the emojis visit the real world. However, his producer felt that the world inside a phone was much more interesting, which inspired Leondis to create the story of where and how the emojis lived. As Leondis is gay, he connected to Gene's plight of "being different in a world that expects you to be one thing," and in eventually realizing that the feeling held true for most people, Leondis has said the film "was very personal". The movie was fast-tracked into production because of concerns that it would otherwise become outdated.
In July 2015, it was announced that Sony Pictures Animation had won the bidding war against Warner Bros. and Paramount Pictures over production rights to the film, with the official announcement occurring at the 2016 CinemaCon. Singer Ricky Reed recorded an original song, "Good Vibrations", for the film.
On World Emoji Day, July 17, 2016, Miller was announced as the lead. Leondis created the part with Miller in mind, although the actor was initially hesitant to play the role, only accepting after Leondis briefed him on the story. Leondis chose Miller because "when you think of irrepressible, you think of TJ. But he also has this surprising ability to break your heart". In addition, Miller contributed some rewrites. In October 2016, it was announced that Ilana Glazer and Corden would join the cast as well. Glazer was later replaced by Anna Faris. According to Jordan Peele, he was initially offered the role of "Poop", which he would go on to state led to his decision to retire from acting. The part ultimately went to Patrick Stewart.
In November 2015, Sony scheduled the film to be released on August 11, 2017. A year later, it was moved to August 4, 2017, with Baby Driver initially taking its previous date. In late March 2017, the film was moved one week earlier, to July 28, 2017, switching places with Sony Pictures' The Dark Tower.
On December 20, 2016, a teaser trailer for the film was released. A second trailer was later released on May 16, 2017. Sony promoted the release of the latter trailer by hosting a press conference in Cannes, the day before the 2017 Cannes Film Festival, which featured T.J. Miller parasailing in. Variety called the event "slightly awkward", and The Hollywood Reporter described it as "promotional ridiculousness".
Days prior to the film's release, Sony Pictures was criticized after the film's official Twitter account posted a promotional picture parodying The Handmaid's Tale, featuring Smiler. The parody was considered "tasteless" given the themes of that work, and the image was deleted afterward.
The film's theatrical release was preceded by Puppy!, a Hotel Transylvania short directed by Genndy Tartakovsky.
The Emoji Movie was released on Blu-ray and DVD on October 24, 2017, by Sony Pictures Home Entertainment.
The Emoji Movie has grossed $86.1 million in the United States and Canada and $130.9 million in other territories, for a worldwide total of $217 million, against a production budget of $50 million.
The Emoji Movie was released alongside Atomic Blonde, and was projected to gross around $20 million from 4,075 theaters in its opening weekend. The film made $900,000 from Thursday night previews and $10.1 million on its first day. It went on to debut to $24.5 million, finishing second at the box office behind Dunkirk.
Review embargoes for the film were lifted midday July 27, only a few hours before the film premiered to the general public, a move considered one of several tactics studios use to try to curb bad Rotten Tomatoes ratings. Speaking of the effect that embargoing reviews until the last minute had on the film's debut, Josh Greenstein, Sony Pictures president of worldwide marketing and distribution, said, "The Emoji Movie was built for people under 18... so we wanted to give the movie its best chance. What other wide release with a score under 8 percent has opened north of $20 million? I don't think there is one." In the film's second weekend, it dropped by nearly 50%, grossing $12.4 million and finishing third.
Critics panned The Emoji Movie, calling it "unfunny and a waste of time". Several major outlets, including BBC News, called The Emoji Movie one of the worst movies of 2017. On review aggregator Rotten Tomatoes, the film has an approval rating of 9% based on 108 reviews, with an average rating of 2.7/10. The site's critical consensus displays a no symbol emoji ("🚫") in place of text. On Metacritic, which assigns a normalized rating to reviews, the film has a weighted average score of 12 out of 100, based on 26 critics, indicating "overwhelming dislike". Audiences polled by CinemaScore gave the film an average grade of "B" on an A+ to F scale, while IMDb users gave the film a 3.1 out of 10.
David Ehrlich of IndieWire gave the film a D, writing: "Make no mistake, The Emoji Movie is very, very, very bad (we're talking about a hyperactive piece of corporate propaganda in which Spotify saves the world and Sir Patrick Stewart voices a living turd), but real life is just too hard to compete with right now." Alonso Duralde of TheWrap was also critical of the film, calling it "a soul-crushing disaster because it lacks humor, wit, ideas, visual style, compelling performances, a point of view or any other distinguishing characteristic that would make it anything but a complete waste of your time".
Glenn Kenny of The New York Times described the film as "nakedly idiotic", stating that the film plays off a Hollywood idea that the "panderingly, trendily idiotic can be made to seem less so". Owen Gleiberman of Variety lambasted the film as "hectic situational overkill" and "lazy", writing: "There have been worse ideas, but in this case the execution isn't good enough to bring the notion of an emoji movie to funky, surprising life." Writing in The Guardian, Charles Bramesco called the film "insidious evil" and wrote that it was little more than an exercise in advertising smartphone downloads to children. Writing for the Hindustan Times, Aditya Dogra noted that viewers had pointed out the similarities of The Emoji Movie to Inside Out, The Lego Movie, and Wreck-It Ralph.
why does a gas show a colour only when electricity is passed through the discharge tube | Electric discharge in gases - wikipedia
Electric discharge in gases occurs when electric current flows through a gaseous medium due to ionization of the gas. Depending on several factors, the discharge may radiate visible light. The properties of electric discharges in gases are studied in connection with design of lighting sources and in the design of high voltage electrical equipment.
In cold cathode tubes, the electric discharge in gas has three regions with distinct current-voltage characteristics: the Townsend (dark) discharge, the glow discharge, and the arc discharge.
Glow discharge is facilitated by electrons striking the gas atoms and ionizing them. For a glow discharge to form, the mean free path of the electrons has to be reasonably long but shorter than the distance between the electrodes; glow discharges therefore do not readily occur at gas pressures that are either too low or too high.
The breakdown voltage for the glow discharge depends nonlinearly on the product of gas pressure and electrode distance according to Paschen's law. For a certain pressure × distance value, there is a lowest breakdown voltage. The increase of strike voltage for shorter electrode distances is caused by the mean free path of the electrons becoming too long in comparison with the electrode distance.
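One common form of Paschen's law is V_b = B·pd / ln(A·pd / ln(1 + 1/γ)), where pd is the pressure-distance product and γ is the secondary-electron emission coefficient of the cathode. The following is a minimal sketch using commonly quoted textbook constants for air; the constants vary between sources and are assumptions here, not values from this article:

```python
import math

# Commonly quoted textbook constants for air (they vary between sources):
A = 15.0      # saturation ionization term, 1/(cm*Torr)
B = 365.0     # excitation/ionization energy term, V/(cm*Torr)
GAMMA = 0.01  # secondary-electron emission coefficient of the cathode

def breakdown_voltage(pd):
    """Paschen's law: breakdown voltage (V) for pressure*distance pd in Torr*cm.

    Only valid to the right of the singularity at A*pd = ln(1 + 1/gamma),
    where the electron mean free path becomes too long for breakdown.
    """
    return B * pd / math.log(A * pd / math.log(1.0 + 1.0 / GAMMA))

# The curve has a single minimum: the lowest possible breakdown voltage.
pd_min = math.e * math.log(1.0 + 1.0 / GAMMA) / A
print(f"minimum at pd ~ {pd_min:.2f} Torr*cm, Vb ~ {breakdown_voltage(pd_min):.0f} V")
for pd in (0.5, 1.0, 5.0, 20.0):
    print(f"pd = {pd:5.1f} Torr*cm -> Vb ~ {breakdown_voltage(pd):.0f} V")
```

With these assumed constants the minimum falls near 0.8 Torr·cm at roughly 300 V, and the voltage rises on both sides of the minimum, matching the nonlinear dependence described above.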
A small amount of a radioactive element may be added into the tube, either as a separate piece of material (e.g. nickel-63 in krytrons) or as an addition to the alloy of the electrodes (e.g. thorium), to preionize the gas and increase the reliability of electrical breakdown and glow or arc discharge ignition. A gaseous radioactive isotope, e.g. krypton-85, can also be used. Ignition electrodes and keep-alive discharge electrodes can also be employed.
The E/N ratio between the electric field E and the concentration of neutral particles N is often used, because the mean energy of the electrons (and therefore many other properties of the discharge) is a function of E/N. Increasing the electric field intensity E by some factor q has the same consequences as lowering the gas density N by the factor q.
Its SI unit is V·m², but the Townsend unit (Td), equal to 10⁻²¹ V·m², is frequently used.
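As a worked example of the definitions above, E/N can be computed from a field strength and gas conditions, with N obtained from the ideal gas law. This is a small illustrative sketch; the example field and conditions are arbitrary choices:

```python
K_BOLTZMANN = 1.380649e-23  # J/K
TD_IN_SI = 1e-21            # 1 Townsend = 1e-21 V*m^2

def reduced_field_td(E_volts_per_m, pressure_pa, temperature_k):
    """Reduced electric field E/N in Townsend units.

    The neutral particle density follows from the ideal gas law:
    N = p / (k_B * T).
    """
    N = pressure_pa / (K_BOLTZMANN * temperature_k)  # neutral density, 1/m^3
    return (E_volts_per_m / N) / TD_IN_SI

# e.g. 1 kV/cm at atmospheric pressure and room temperature: about 4.1 Td
print(f"{reduced_field_td(1e5, 101325.0, 300.0):.1f} Td")
```

Note how the scaling property described above falls out of the definition: doubling E or halving N (for example by halving the pressure) gives the same E/N, and hence the same mean electron energy.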
The use of a glow discharge for solution of certain mapping problems was described in 2002. According to a Nature news article describing the work, researchers at Imperial College London demonstrated how they built a mini-map that gives tourists luminous route indicators. To make the one - inch London chip, the team etched a plan of the city centre on a glass slide. Fitting a flat lid over the top turned the streets into hollow, connected tubes. They filled these with helium gas, and inserted electrodes at key tourist hubs. When a voltage is applied between two points, electricity naturally runs through the streets along the shortest route from A to B -- and the gas glows like a tiny glowing strip light. The approach itself provides a novel visible analog computing approach for solving a wide class of maze searching problems based on the properties of lighting up of a glow discharge in a microfluidic chip.
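The discharge chip is a physical analog of a shortest-path search. The same class of problem is solved digitally with, for example, Dijkstra's algorithm; the sketch below runs on a made-up miniature street graph (not the actual London map used in the experiment):

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm: a digital counterpart of the glowing route.

    graph maps a node to a list of (neighbour, distance) pairs.
    Returns (total distance, list of nodes along the shortest route).
    """
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node == goal:
            return dist, path
        if node in seen:
            continue
        seen.add(node)
        for neighbour, step in graph[node]:
            if neighbour not in seen:
                heapq.heappush(queue, (dist + step, neighbour, path + [neighbour]))
    return float("inf"), []

# A made-up street grid between two "tourist hubs" A and D.
streets = {
    "A": [("B", 1.0), ("C", 4.0)],
    "B": [("A", 1.0), ("C", 1.5), ("D", 5.0)],
    "C": [("A", 4.0), ("B", 1.5), ("D", 1.0)],
    "D": [("B", 5.0), ("C", 1.0)],
}
print(shortest_path(streets, "A", "D"))  # -> (3.5, ['A', 'B', 'C', 'D'])
```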
who sits on the woolsack in the house of lords | Woolsack - wikipedia
The Woolsack is the seat of the Lord Speaker in the House of Lords, the Upper House of the Parliament of the United Kingdom.
In the 14th century King Edward III (1327 -- 1377) commanded that his Lord Chancellor, whilst in council, should sit on a wool bale, now known as "The Woolsack", in order to symbolise the central nature and huge importance of the wool trade to the economy of England in the Middle Ages. Indeed, it was largely to protect the vital English wool trade routes with continental Europe that the Battle of Crécy was fought with the French in 1346. From the Middle Ages until 2006, the presiding officer in the House of Lords was the Lord Chancellor, and the Woolsack was usually mentioned in association with the office of Lord Chancellor. In July 2006, the function of Lord Speaker was split from that of Lord Chancellor pursuant to the Constitutional Reform Act 2005.
The Woolsack is a large, wool-stuffed cushion or seat covered with red cloth; it has neither a back nor arms, though in the centre of the Woolsack there is a back-rest. The Lords' Mace is placed on the rear part of the Woolsack.
In 1938, it was discovered that the Woolsack was, in fact, stuffed with horsehair. When the Woolsack was remade it was re-stuffed with wool from all over the Commonwealth as a symbol of unity.
The Lord Speaker may speak from the Woolsack when speaking in his or her capacity as Speaker of the House, but must, if he or she seeks to debate, deliver his or her remarks either from the left side of the Woolsack, or from the normal seats of the Lords.
If a Deputy Speaker presides in the absence of the Lord Speaker, then that individual uses the Woolsack. However, when the House meets in the "Committee of the Whole", the Woolsack remains unoccupied, and the presiding officer, the Chairman or Deputy Chairman, occupies a Chair at the front of the table of the House.
In front of the Woolsack is an even larger cushion known as the Judges' Woolsack. During the State Opening of Parliament, the Judges' Woolsack was historically occupied by the Law Lords. Now the Attorney General, the Solicitor General, the Lord Chief Justice, the Master of the Rolls, the President of the Family Division, the Vice-Chancellor, the Justices of the Supreme Court, the Lords Justices of Appeal and the Justices of the High Court attend Parliament only for the State Opening.
Coordinates: 51°29′55.7″N 0°07′29.5″W (51.498806°N, 0.124861°W)
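The decimal values on the line above follow directly from the degrees-minutes-seconds form; a quick sketch of the conversion:

```python
def dms_to_decimal(degrees, minutes, seconds, hemisphere):
    """Convert degrees/minutes/seconds to signed decimal degrees."""
    sign = -1.0 if hemisphere in ("S", "W") else 1.0
    return sign * (degrees + minutes / 60.0 + seconds / 3600.0)

# The Woolsack's coordinates, as given above:
print(round(dms_to_decimal(51, 29, 55.7, "N"), 6))  # -> 51.498806
print(round(dms_to_decimal(0, 7, 29.5, "W"), 6))    # -> -0.124861
```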
who was the first navy chief to be awarded the medal of honor | John William Finn - wikipedia
John William Finn (23 July 1909 -- 27 May 2010) was a sailor in the United States Navy who, as a chief petty officer, received the United States military's highest decoration, the Medal of Honor, for his actions during the attack on Pearl Harbor in World War II. As a chief aviation ordnanceman stationed at Naval Air Station Kaneohe Bay, he earned the medal by manning a machine gun from an exposed position throughout the attack, despite being repeatedly wounded. He continued to serve in the Navy and in 1942 was commissioned an ensign. In 1947 he reverted to chief petty officer, eventually rising to lieutenant before his 1956 retirement. In his later years he made many appearances at events celebrating veterans. At the time of his death, Finn was the oldest living Medal of Honor recipient, the last living recipient from the attack on Pearl Harbor, and the last living United States Navy recipient from World War II.
Born on 23 July 1909, in Compton, California, Finn dropped out of school after the seventh grade. He enlisted in the Navy in July 1926, shortly before his seventeenth birthday, and completed recruit training in San Diego. After a brief stint with a ceremonial guard company, he attended General Aviation Utilities Training at Naval Station Great Lakes, graduating in December. By April 1927 he was back in the San Diego area, having been assigned to Naval Air Station North Island. He initially worked in aircraft repair before becoming an aviation ordnanceman and working on anti-aircraft guns. He then served on a series of ships: the USS Lexington (CV-2), the USS Houston (CA-30), the USS Jason (AC-12), the USS Saratoga (CV-3), and the USS Cincinnati (CL-6).
Finn was promoted to chief petty officer (E-7, the highest enlisted rank in the Navy at that time) in 1935, after only nine years of active duty. He later commented on his promotions: "Everybody thought I was a boy wonder. I was just in the right place at the right time." As a chief, Finn served with patrol squadrons in San Diego, Washington, and Panama.
By December 1941, Finn was stationed at Naval Air Station Kaneohe Bay on the island of Oahu in Hawaii. As a chief aviation ordnanceman, he was in charge of twenty men whose primary task was to maintain the weapons of VP-11, a PBY Catalina flying boat squadron. At 7:48 a.m. on the morning of Sunday, 7 December 1941, Finn was at his home, about a mile from the aircraft hangars, when he heard the sound of gunfire. Finn recalled how a neighbor was the first to alert him, when she knocked on his door saying, "They want you down at the squadron right away!" He drove to the hangars, catching sight of Japanese planes in the sky on the way, and found that the airbase was being attacked, with most of the PBYs already on fire.
Finn's men were trying to fight back by using the machine guns mounted in the PBYs, either by firing from inside the flaming planes or by detaching the guns and mounting them on improvised stands. Finn later explained that one of the first things he did was to take control of a machine gun from his squadron's painter: "I said, 'Alex, let me take that gun'... I knew that I had more experience firing a machine gun than a painter."
Finding a movable tripod platform used for gunnery training, Finn attached the .50 caliber machine gun and pushed the platform into an open area, from which he had a clear view of the attacking aircraft. He fired on the Japanese planes for the next two hours, even after being seriously wounded, until the attack had ended. In total, he received 21 distinct wounds, including a bullet through his right foot and an injury to his left shoulder, which caused him to lose feeling in his left arm.
"I got that gun and I started shooting at Jap planes, '' Finn said in a 2009 interview. "I was out there shooting the Jap planes and just every so often I was a target for some, '' he said, "In some cases, I could see their (the Japanese pilots ') faces. ''
Despite his wounds, Finn returned to the hangars later that day. After receiving medical treatment, he helped arm the surviving American planes. His actions earned him the first Medal of Honor to be awarded in World War II. He was formally presented with the decoration on 14 September 1942, by Admiral Chester Nimitz, for courage and valor beyond the call of duty. The ceremony took place in Pearl Harbor on board the USS Enterprise (CV - 6).
In 1942 Finn was commissioned, and served as a Limited Duty Officer with the rank of ensign. In 1947 he reverted to his enlisted rank of chief petty officer, eventually becoming a lieutenant with Bombing Squadron VB-102 and aboard the USS Hancock (CV-19). He retired from the Navy as a lieutenant in September 1956.
From 1956 until shortly before his death, Finn resided on a 90-acre (0.36 km²) ranch in Live Oak Springs, near Pine Valley, California. He and his wife became foster parents to five Native American children, which led to him being embraced by the Campo Band of Diegueño Mission Indians, a tribe of Kumeyaay people in San Diego. His wife, Alice Finn, died in 1998. John Finn was a member of the John Birch Society.
In his retirement he made many appearances at events honoring veterans. On 25 March 2009, he attended National Medal of Honor Day ceremonies at Arlington National Cemetery. With the aid of walking sticks, he stood beside U.S. President Barack Obama during a wreath-laying ceremony at the Tomb of the Unknown Soldier. Later that day, Finn was a guest at the White House. It was his first visit to the White House, and his first time meeting a sitting President.
On June 27, 2009, a crowd of over 2,000 family members, friends and well-wishers came to Pine Valley to celebrate Finn's 100th birthday. The Association of Aviation Ordnancemen presented him with an American flag which had flown on each of the 11 aircraft carriers then in active service.
When called a hero during a 2009 interview, Finn responded:
That damned hero stuff is a bunch (of) crap, I guess. (...) You gotta understand that there's all kinds of heroes, but they never get a chance to be in a hero's position.
Finn died at age 100 on the morning of 27 May 2010, at the Chula Vista Veterans Home. He was buried beside his wife at the Campo Indian Reservation's cemetery, after a memorial service in El Cajon. He was the last surviving Medal of Honor recipient from the attack on Pearl Harbor, the oldest living recipient, and the only aviation ordnanceman ever to have received the medal. Upon his death, fellow World War II veteran Barney F. Hajiro became the oldest living Medal of Honor recipient.
The headquarters building for Commander, Patrol and Reconnaissance Force, United States Pacific Fleet at Marine Corps Base Hawaii Kaneohe was named in Finn's honor, and in 2009 a boat used to bring visitors to the USS Arizona Memorial was also named after him. In that same year, part of Historic U.S. Route 80 was named "John Finn Route". Three buildings in the former Naval Training Center San Diego were named the John and Alice Finn Office Plaza. On 15 February 2012, U.S. Secretary of the Navy Ray Mabus announced that an Arleigh Burke-class destroyer would be named the USS John Finn (DDG-113) in his honor.
Finn received the following awards and decorations:
For extraordinary heroism, distinguished service, and devotion above and beyond the call of duty. During the first attack by Japanese airplanes on the Naval Air Station, Kaneohe Bay, Territory of Hawaii, on December 7, 1941, he promptly secured and manned a .50 caliber machine gun mounted on an instruction stand in a completely exposed section of the parking ramp, which was under heavy enemy machine gun strafing fire. Although painfully wounded many times, he continued to man this gun and to return the enemy's fire vigorously and with telling effect throughout the enemy strafing and bombing attacks and with complete disregard for his own personal safety. It was only by specific orders that he was persuaded to leave his post to seek medical attention. Following first-aid treatment, although obviously suffering much pain and moving with great difficulty, he returned to the squadron area and actively supervised the rearming of returning planes. His extraordinary heroism and conduct in this action are considered to be in keeping with the highest traditions of the Naval Service.
who was killed in the tower of london | List of prisoners of the Tower of London - wikipedia
From an early stage of its history, one of the functions of the Tower of London has been to act as a prison, though it was not designed as one. The earliest known prisoner was Ranulf Flambard in 1100, who, as Bishop of Durham, was found guilty of extortion. He had been responsible for various improvements to the design of the Tower after the first architect, Gundulf, moved back to Rochester. He escaped from the White Tower by climbing down a rope which had been smuggled into his cell in a wine casket.
Other prisoners include:
who passed the civil constitution of the clergy | Civil Constitution of the Clergy - wikipedia
The Civil Constitution of the Clergy (French: Constitution civile du clergé) was a law passed on 12 July 1790 during the French Revolution that caused the immediate subordination of the Catholic Church in France to the French government.
Earlier legislation had already arranged the confiscation of the Catholic Church's French land holdings and banned monastic vows. This new law completed the destruction of the monastic orders, outlawing "all regular and secular chapters for either sex, abbacies and priorships, both regular and in commendam, for either sex", etc. It also sought to settle the chaos caused by the earlier confiscation of Church lands and the abolition of the tithe. Additionally, the Civil Constitution of the Clergy regulated the existing dioceses so that they could become more uniform and aligned with the administrative districts that had recently been created. It emphasised that officials of the Church could not commit themselves to anything outside France, specifically the Pope (because of his power and influence), who was outside France. Lastly, the Civil Constitution of the Clergy made bishops and priests elected. With members of the clergy elected, the Church lost much of the authority it had to govern itself and was now subject to the people, since the people would vote on priests and bishops, as opposed to these individuals being appointed by the Church and its hierarchy.
Some of the support for the Civil Constitution of the Clergy came from figures within the Church, such as the priest and parliamentarian Pierre Claude François Daunou and, above all, the revolutionary priest Henri Grégoire, who was the first French Catholic priest to take the obligatory oath. The measure was opposed, but ultimately acquiesced to, by King Louis XVI.
The Civil Constitution of the Clergy is divided into four titles, each containing a number of articles.
Even before the Revolution and the Civil Constitution of the Clergy, the Catholic Church in France (the Gallican Church) had a status that tended to subordinate the Church to the State. Under the Declaration of the Clergy of France (1682), the privileges of the French monarch included the right to assemble church councils in their dominions and to make laws and regulations touching ecclesiastical matters of the Gallican Church, or to have recourse to the "appeal as from an abuse" ("appel comme d'abus") against acts of the ecclesiastical power.
Even prior to the Civil Constitution of the Clergy:
The following interlinked factors appear to have been the causes of agitation for the confiscation of church lands and for the adoption of the Civil Constitution of the Clergy:
On 6 February 1790, one week before banning monastic vows, the National Constituent Assembly asked its ecclesiastical committee to prepare the reorganization of the clergy. No doubt, those who hoped to reach a solution amenable to the papacy were discouraged by the consistorial address of March 22 in which Pius VI spoke out against measures already passed by the Assembly; also, the election of the Protestant Jean-Paul Rabaut Saint-Étienne to the presidency of the Assembly brought about "commotions" at Toulouse and Nîmes, suggesting that at least some Catholics would accept nothing less than a return to the ancien régime practice under which only Catholics could hold office.
The Civil Constitution of the Clergy came before the Assembly on 29 May 1790. François de Bonal, Bishop of Clermont, and some members of the Right requested that the project be submitted to a national council or to the Pope, but did not carry the day. Joining them in their opposition to the legislation was Abbé Sieyès, one of the chief political theorists of the French Revolution and author of the 1789 pamphlet "What Is the Third Estate?"
Conversely, the Jansenist theologian Armand-Gaston Camus argued that the plan was in perfect harmony with the New Testament and the councils of the fourth century.
The Assembly passed the Civil Constitution on 12 July 1790, two days before the anniversary of the storming of the Bastille. On that anniversary, the Fête de la Fédération, Talleyrand and three hundred priests officiated at the "altar of the nation" erected on the Champ de Mars, wearing tricolor waistbands over their priestly vestments and calling down God's blessing upon the Revolution.
In 1793, the outbreak of the War in the Vendée was influenced by the passage of the Constitution, owing to the region's population being devoted to the Church, among other social factors.
As noted above, even prior to the Civil Constitution of the Clergy, church property was nationalized and monastic vows were forbidden. Under the Civil Constitution of the Clergy:
The tone of the Civil Constitution can be gleaned from Title II, Article XXI:
In short, new bishops were required to swear loyalty to the State in far stronger terms than to any religious doctrine. Note also that, even in this revolutionary legislation, there are strong remnants of Gallican royalism.
The law also included some reforms supported even by many within the Church. For example, Title IV, Article I states, "The law requiring the residence of ecclesiastics in the districts under their charge shall be strictly observed. All vested with an ecclesiastical office or function shall be subject to this, without distinction or exception." In effect, this banned the practice by which younger sons of noble families would be appointed to a bishopric or other high church position and live off its revenues without ever moving to the region in question and taking up the duties of the office. The abuse of bishoprics by the nobility was further reduced by Title II, Article XI: "Bishoprics and cures shall be looked upon as vacant until those elected to fill them shall have taken the oath above mentioned." This unified state control over both the nobility and the Church through the use of elected bishops and the oath of loyalty.
For some time, Louis XVI delayed signing the Civil Constitution, saying that he needed "official word from Rome" before doing so. Pope Pius VI broke the logjam on 9 July 1790, writing a letter to Louis rejecting the arrangement. On 28 July, 6 September, and 16 December 1790, Louis XVI wrote letters to Pius VI, complaining that the National Assembly was forcing him to publicly accept the Civil Constitution, and suggesting that Pius VI appease its members by accepting a few selected articles. On 10 July, Pius VI wrote to Louis XVI, indicating to the king that the Church could not accept any of the provisions of the Constitution: the Constitution attempted to change the internal government of the Church, and no political regime had the right to unilaterally change the internal structure of the Church. On 17 August, Pius VI wrote to Louis XVI of his intent to consult with the cardinals about this, but on 10 October Cardinal Rochefoucauld, the Archbishop of Aix, and 30 of France's 131 bishops sent their negative evaluation of the main points of the Civil Constitution to the Pope. Only four bishops actively dissented. On 30 October, the same 30 bishops restated their view to the public, signing a document known as the Exposition of Principles ("Exposition des principes sur la constitution civile du clergé"), written by Jean de Dieu-Raymond de Cucé de Boisgelin.
On 27 November 1790, still lacking the king's signature on the law of the Civil Constitution, the National Assembly voted to require the clergy to sign an oath of loyalty to the Constitution. During the debate on that matter, on 25 November, Cardinal de Lomenie wrote a letter claiming that the clergy could be excused from taking the oath if they lacked mental assent; that stance was to be rejected by the Pope on 23 February 1791. On 26 December 1790, Louis XVI finally granted his public assent to the Civil Constitution, allowing the process of administering the oaths to proceed in January and February 1791.
Pope Pius VI's 23 February rejection of Cardinal de Lomenie's position of withholding "mental assent" guaranteed that this would become a schism. The Pope's subsequent condemnation of the revolutionary regime and repudiation of all clergy who had complied with the oath completed the schism.
The Civil Constitution of the Clergy contained a clause requiring the clergy to take an oath declaring their allegiance to France. The oath was essentially an oath of fidelity, and it required every priest in France to make a public choice about whether the nation of France had authority over all religious matters. The oath was very controversial because many clergy believed that they could not put their loyalty to France before their loyalty to God. A clergyman who refused to take the oath was challenging the Civil Constitution of the Clergy and the validity of the Assembly that had established it. By 16 January 1791, approximately 50% of those required to take the oath had done so; the other half decided to wait for Pope Pius VI to provide instruction, since he had not yet made clear what the oath signified and how the clergy should respond to it. Notably, all but seven of the bishops in France decided not to take the oath, and as a reprimand those governing France began to replace them with men who had taken it. In March 1791, Pope Pius VI finally ruled that the oath was against the beliefs of the Church. This ruling cemented two groups, defined by whether or not they had taken the oath: the "jurors", who had, and the "non-jurors" (the "refractory priests"), who had not. The Pope condemned those who took the oath, going as far as saying that they were absolutely separated from the Church. Additionally, the Pope expressed disapproval of and chastised King Louis XVI for signing the document that required the oath to be taken. Since the Pope had expressed disapproval, those who had not taken the oath remained unwilling to take it and, as a result, were replaced by those who had. In addition to being rejected by approximately half of the clergy, the oath was also disliked by part of France's population. Those who opposed it claimed that the Revolution was destroying their "true" faith, and the same divide ran through the two groups the oath created: those who believed the Revolution was destroying their "true" faith sided with the non-jurors, while those who believed the French government should have a say in religion sided with the jurors.
The American scholar Timothy Tackett believes that the required oath determined which individuals would let the Revolution bring change and allow revolutionary reform, and which would remain true to their beliefs for many years to come. Beyond Tackett's view, it can be said that the obligatory oath marked a key point in the French Revolution, since it was the first piece of revolutionary legislation to meet massive pushback and resistance.
As noted above, the government required all clergy to swear an oath of loyalty to the Civil Constitution of the Clergy. Only seven bishops and about half of the clergy agreed, while the rest refused; the latter became known as "non-jurors" or "refractory priests". In areas where a majority had taken the oath, such as Paris, the refractory minority could be victimized by society at large: nuns from the Hôtel-Dieu de Paris, for example, were subjected to humiliating public spankings.
While there was a higher rate of rejection in urban areas, most of these refractory priests (like most of the population) lived in the countryside, and the Civil Constitution generated considerable resentment among religious peasants. Meanwhile, the Pope repudiated the "jurors" who had signed the oath, especially bishops who had ordained new, elected clergy, and above all Bishop Louis-Alexandre Expilly de la Poipe. In May 1791, France recalled its ambassador to the Vatican, and the Papal Nuncio was recalled from Paris. On June 9, the Assembly forbade the publication of Papal Bulls or Decrees unless they had been approved by the Assembly as well.
The Constituent Assembly went back and forth on the exact status of non-juring priests. On 5 February 1791, non-juring priests were banned from preaching in public. By not allowing them to preach, the National Assembly was trying to silence them: the ban meant that refractory priests could no longer perform marriages and baptisms, which were public ceremonies. However, non-juring clergy continued to celebrate the Mass and attract crowds, because the Assembly feared that stripping them of all of their powers would create chaos and would be ineffective at silencing them. Although the Assembly allowed them to continue officiating at ceremonies that were not public, it stated that they could do so only until they had been replaced by a clergyman who had taken the oath (a juror). A large percentage of the refractory priests were not replaced until 10 August 1792, more than a year after the original 50% had taken the oath; by the time they began to be replaced, the Assembly had made some changes and it mattered less that they were celebrating the Mass.
At the beginning, while the Assembly was stripping the clergy of their titles, it tried to ignore how the extreme anti-clerical elements were responding with violence against those who attended these Masses and against nuns who would not renounce their vocation. Ultimately the Assembly had to recognize the schism that was occurring, because it was extremely evident; even as the replacements took place, juror priests often faced a hostile and violent reception in their old parishes. On 7 May 1791, the Assembly reversed itself, deciding that the non-juring priests, referred to as prêtres habitués ("habitual priests"), could say Mass and conduct services in other churches on condition that they respect the laws and not stir up revolt against the Civil Constitution. The Assembly had to allow this change to contain the schism, and in part because the "constitutional clergy" (those who had taken the oath) were unable to properly conduct their services. The constitutional clergy often required the assistance of the National Guard because of the mayhem that would occur.
The division in France was at an all-time high, with even families split in their views on juring and non-juring priests. The difference within families was seen primarily when the women attended Masses held by those who had defied the oath while the men attended Masses provided by clergy members who had taken it. Even though priests who had not taken the oath had the right to use the churches, many were not allowed to use the buildings (this was done by priests who had sworn their allegiance), which further demonstrated the division in the state. On 29 November 1791, the Legislative Assembly, which had replaced the National Constituent Assembly, decreed that refractory priests could only exacerbate factionalism and aggravate extremists in the Assembly. The November 29 decree declared that no refractory priest could invoke the rights in the Constitution of the Clergy and that all such priests were suspect and so to be arrested. Louis XVI vetoed this decree (as he also did with another text concerning the creation of an army of 20,000 men on the orders of the Assembly, precipitating the monarchy's fall), which was toughened and re-issued a year later. Anti-Catholic persecution by the State would intensify into de-Christianization and propagation of the Cult of Reason and the Cult of the Supreme Being in 1793 -- 1794. During this time countless non-juring priests were interned in chains on prison ships in French harbors, where most died within a few months from the appalling conditions.
After the Thermidorian Reaction, the Convention repealed the Civil Constitution of the Clergy; however, the schism between the civilly constituted French Church and the Papacy was only resolved when the Concordat of 1801 was agreed. The Concordat was reached on July 15, 1801, and was made widely known the following year, on Easter. It was an agreement executed by Napoleon Bonaparte and clerical and papal representatives from Rome and Paris, and it determined the role and status of the Roman Catholic Church in France; moreover, it concluded the confiscations and church reforms that had been implemented over the course of the Revolution. The agreement also gave the First Consul (Napoleon) the authority and right to nominate bishops, redistribute the parishes and bishoprics, and allow seminaries to be established. In an effort to please Pius VII, it was agreed that suitable salaries would be provided for bishops and curés, and in return the Pope would condone the acquisition of church lands.
where and when do the events of the odyssey take place | Odyssey - wikipedia
The Odyssey (/ˈɒdəsi/; Greek: Ὀδύσσεια Odýsseia, pronounced [o.dýs.sej.ja] in Classical Attic) is one of two major ancient Greek epic poems attributed to Homer. It is, in part, a sequel to the Iliad, the other work ascribed to Homer. The Odyssey is fundamental to the modern Western canon; it is the second-oldest extant work of Western literature, while the Iliad is the oldest. Scholars believe the Odyssey was composed near the end of the 8th century BC, somewhere in Ionia, the Greek coastal region of Anatolia.
The poem mainly focuses on the Greek hero Odysseus (known as Ulysses in Roman myths), king of Ithaca, and his journey home after the fall of Troy. It takes Odysseus ten years to reach Ithaca after the ten-year Trojan War. In his absence, it is assumed Odysseus has died, and his wife Penelope and son Telemachus must deal with a group of unruly suitors, the Mnesteres (Greek: Μνηστῆρες) or Proci, who compete for Penelope's hand in marriage.
The Odyssey continues to be read in Homeric Greek and translated into modern languages around the world. Many scholars believe the original poem was composed in an oral tradition by an aoidos (epic poet/singer), perhaps a rhapsode (professional performer), and was more likely intended to be heard than read. The details of the ancient oral performance and the story's conversion to a written work inspire continual debate among scholars. The Odyssey was written in a poetic dialect of Greek -- a literary amalgam of Aeolic Greek, Ionic Greek, and other Ancient Greek dialects -- and comprises 12,110 lines of dactylic hexameter. Among the most noteworthy elements of the text are its non-linear plot and the influence on events of choices made by women and slaves, besides the actions of fighting men. In the English language, as well as many others, the word odyssey has come to refer to an epic voyage.
The Odyssey has a lost sequel, the Telegony, which was not written by Homer. It was usually attributed in antiquity to Cinaethon of Sparta. In one source, the Telegony was said to have been stolen from Musaeus of Athens by either Eugamon or Eugammon of Cyrene (see Cyclic poets).
The Odyssey begins after the end of the ten-year Trojan War (the subject of the Iliad), and Odysseus has still not returned home from the war. Odysseus' son Telemachus is about 20 years old and is sharing his absent father's house on the island of Ithaca with his mother Penelope and a crowd of 108 boisterous young men, "the Suitors", whose aim is to persuade Penelope to marry one of them, all the while reveling in Odysseus' palace and eating up his wealth.
Odysseus' protectress, the goddess Athena, asks Zeus, king of the gods, to finally allow Odysseus to return home while Odysseus' enemy, the sea god Poseidon, is absent from Mount Olympus to accept a sacrifice in Ethiopia. Then, disguised as a Taphian chieftain named Mentes, she visits Telemachus to urge him to search for news of his father. He offers her hospitality; they observe the suitors dining rowdily while the bard Phemius performs a narrative poem for them. Penelope objects to Phemius' theme, the "Return from Troy", because it reminds her of her missing husband, but Telemachus rebuts her objections, asserting his role as head of the household.
That night Athena, disguised as Telemachus, finds a ship and crew for the true prince. The next morning, Telemachus calls an assembly of citizens of Ithaca to discuss what should be done with the suitors. He is scoffed at by the insolent suitors, particularly by their leaders Antinous, Eurymachus, and Leiocritus. Accompanied by Athena (now disguised as Mentor), he departs for the Greek mainland and the household of Nestor, most venerable of the Greek warriors at Troy, now at home in Pylos.
From there, Telemachus rides overland, accompanied by Nestor's son Peisistratus, to Sparta, where he finds Menelaus and Helen, who are now reconciled. While Helen laments the fit of lust brought on by Aphrodite that sent her to Troy with Paris, Menelaus recounts how she betrayed the Greeks by attempting to imitate the voices of the soldiers' wives while they were inside the Trojan Horse. Telemachus also hears from Helen, who is the first to recognize him; she pities him because Odysseus was not there for him in his childhood, having gone to Troy to fight for her, and she tells of Odysseus' exploit of stealing the Palladium, or the Luck of Troy, when she was the only one to recognize him. Menelaus, meanwhile, praises Odysseus as an irreproachable comrade and friend, lamenting the fact that not only were they unable to return together from Troy but that Odysseus is yet to return.
Both Helen and Menelaus also say that they returned to Sparta after a long voyage by way of Egypt. There, on the island of Pharos, Menelaus encountered the old sea-god Proteus, who told him that Odysseus was a captive of the nymph Calypso. Incidentally, Telemachus learns the fate of Menelaus' brother Agamemnon, king of Mycenae and leader of the Greeks at Troy: he was murdered on his return home by his wife Clytemnestra and her lover Aegisthus. The story briefly shifts to the suitors, who have only just now realized that Telemachus is gone. Angry, they formulate a plan to ambush his ship and kill him as he sails back home. Penelope overhears their plot and worries for her son's safety.
The second part recounts the story of Odysseus. In the course of his seven years in captivity on Ogygia, the island of Calypso, she has fallen deeply in love with him, even though he has consistently spurned her offer of immortality as her husband and still mourns for home. She is ordered to release him by the messenger god Hermes, who has been sent by Zeus in response to Athena's plea. Odysseus builds a raft and is given clothing, food, and drink by Calypso. When Poseidon learns that Odysseus has escaped, he wrecks the raft, but, helped by a veil given by the sea nymph Ino, Odysseus swims ashore on Scherie, the island of the Phaeacians. Naked and exhausted, he hides in a pile of leaves and falls asleep. The next morning, awakened by the laughter of girls, he sees the young Nausicaä, who has gone to the seashore with her maids to wash clothes after Athena told her in a dream to do so. He appeals to her for help. She encourages him to seek the hospitality of her parents, Arete and Alcinous (or Alkinous). Odysseus is welcomed and is not at first asked for his name, but Alcinous promises to provide him a ship to return him to his home country. He remains for several days, and is goaded into taking part in a discus throw by the taunts of Euryalus, impressing the Phaeacians with his incredible athletic ability. Afterwards, he hears the blind singer Demodocus perform two narrative poems. The first is an otherwise obscure incident of the Trojan War, the "Quarrel of Odysseus and Achilles"; the second is the amusing tale of a love affair between two Olympian gods, Ares and Aphrodite. Finally, Odysseus asks Demodocus to return to the Trojan War theme and tell of the Trojan Horse, a stratagem in which Odysseus had played a leading role. Unable to hide his emotion as he relives this episode, Odysseus at last reveals his identity. He then begins to tell the story of his return from Troy.
Odysseus goes back in time and recounts his story to the Phaeacians. After a failed piratical raid on Ismaros in the land of the Cicones, Odysseus and his twelve ships were driven off course by storms. Odysseus visited the lethargic Lotus-Eaters, who gave his men their fruit, which would have caused them to forget their homecoming had Odysseus not dragged them back to the ship by force. Afterwards, Odysseus and his men landed on a lush, uninhabited island near the land of the Cyclopes. The men then landed on shore and entered the cave of Polyphemus, where they found all the cheeses and meat they desired. Upon returning home, Polyphemus sealed the entrance with a massive boulder and proceeded to eat Odysseus' men. Odysseus devised an escape plan in which he, identifying himself as "Nobody", plied Polyphemus with wine and blinded him with a wooden stake. When Polyphemus cried out, his neighbors left after Polyphemus claimed that "Nobody" had attacked him. Odysseus and his men finally left the cave by hiding on the underbellies of the sheep as they were let out of the cave. While they were escaping, however, Odysseus foolishly taunted Polyphemus and revealed his true identity. Recalling that such a blinding had been prophesied, Polyphemus appealed to his father Poseidon. Poseidon then cursed Odysseus to wander the sea for ten years, during which he would lose all his crew and return home through the aid of others. After the escape, Odysseus and his crew stayed with Aeolus, a king endowed by the gods with command of the winds. He gave Odysseus a leather bag containing all the winds except the west wind, a gift that should have ensured a safe return home. Just as Ithaca came into sight, the greedy sailors naively opened the bag while Odysseus slept, thinking it contained gold. All of the winds flew out, and the resulting storm drove the ships back the way they had come. Aeolus, recognizing that Odysseus had drawn the ire of the gods, refused to further assist him.
The men then re-embarked and encountered the cannibalistic Laestrygonians. All of Odysseus ' ships except his own entered the harbor of the Laestrygonians ' Island and were immediately destroyed. He sailed on and reached the island of Aeaea, where he visited the witch - goddess Circe, daughter of the sun - god Helios. She turned half of his men into swine after feeding them drugged cheese and wine. Hermes warned Odysseus about Circe and gave Odysseus an herb called moly, which gave him resistance to Circe 's magic. Odysseus forced the now - powerless Circe to change his men back to their human form, and was subsequently seduced by her. They remained with her on the island for one year, while they feasted and drank. Finally, guided by Circe 's instructions, Odysseus and his crew crossed the ocean and reached a harbor at the western edge of the world, where Odysseus sacrificed to the dead. He first encountered the spirit of Elpenor, a crewman who had gotten drunk and fallen from a roof to his death on Aeaea. Elpenor 's ghost told Odysseus to bury his body, which Odysseus promised to do. Odysseus then summoned the spirit of the prophet Tiresias for advice on how to appease Poseidon upon his return home, and was told that he could return home if he restrained himself and his crew from eating the sacred livestock of Helios on the island of Thrinacia, and that failure to do so would result in the loss of his ship and his entire crew. Next Odysseus met the spirit of his own mother, Anticlea, who had died of grief during his long absence. From her, he got his first news of his own household, threatened by the greed of the Suitors. Finally, he met the spirits of famous men and women. Notably, he encountered the spirit of Agamemnon, of whose murder he now learned, and Achilles, who lamented the woes of the land of the dead but was comforted in hearing of the success of his son Neoptolemus (for Odysseus ' encounter with the dead, see also Nekuia).
Returning to Aeaea, they buried Elpenor and were advised by Circe on the remaining stages of the journey. They skirted the land of the Sirens, who sang an enchanting song that normally caused passing sailors to steer toward the rocks, only to hit them and sink. All of the sailors had their ears plugged up with beeswax, except for Odysseus, who was tied to the mast as he wanted to hear the song. He told his sailors not to untie him, as it would only make him want to drown himself. They then passed between the six - headed monster Scylla and the whirlpool Charybdis, narrowly avoiding death, even though Scylla snatched up six men. Next, they landed on the island of Thrinacia, with the crew overriding Odysseus ' wishes to remain away from the island. Zeus caused a storm which prevented them from leaving, causing them to deplete the food given to them by Circe. While Odysseus was away praying, his men ignored the warnings of Tiresias and Circe and hunted the sacred cattle of Helios. The Sun God insisted that Zeus punish the men for this sacrilege. They suffered a shipwreck as they were driven towards Charybdis. All but Odysseus were drowned. Odysseus clung to a fig tree above Charybdis. Washed ashore on the island of Ogygia, he was compelled to remain there as Calypso 's lover, bored, homesick and trapped on her small island, until she was ordered by Zeus, via Hermes, to release Odysseus. Odysseus did not realize how long it would take to get home to his family.
Having listened with rapt attention to his story, the Phaeacians agree to provide Odysseus with more treasure than he would have received from the spoils of Troy. They deliver him at night, while he is fast asleep, to a hidden harbour on Ithaca. Poseidon, offended that the Phaeacians have returned Odysseus home, destroys the Phaeacian ship on its return voyage, and the city sacrifices to Poseidon and agrees to stop giving escorts to strangers to appease him. Odysseus awakens and believes that he has been dropped on a distant land before Athena appears to him and reveals that he is indeed on Ithaca. She then hides his treasure in a nearby cave and disguises him as an elderly beggar so he can see how things stand in his household. He finds his way to the hut of one of his own slaves, the swineherd Eumaeus, who treats him hospitably and speaks favorably of Odysseus. After dinner, the disguised Odysseus tells the farm laborers a fictitious tale of himself: he was born in Crete, had led a party of Cretans to fight alongside other Greeks in the Trojan War, and had then spent seven years at the court of the king of Egypt, finally shipwrecking in Thesprotia and crossing from there to Ithaca. He further assures the men that Odysseus will return, but his assurances are wearily dismissed.
Meanwhile, Telemachus sails home from Sparta, evading an ambush set by the Suitors. He disembarks on the coast of Ithaca and makes for Eumaeus 's hut. Father and son meet; Odysseus identifies himself to Telemachus (but still not to Eumaeus), and they decide that the Suitors must be killed. Telemachus goes home first. Accompanied by Eumaeus, Odysseus returns to his own house, still pretending to be a beggar. When Odysseus ' dog, who was a puppy before he left, sees him, the dog becomes so excited that he dies. Odysseus is ridiculed by the Suitors in his own home, especially by one extremely impertinent man named Antinous. Odysseus meets Penelope and tests her intentions by saying he once met Odysseus in Crete. Closely questioned, he adds that he had recently been in Thesprotia and had learned something there of Odysseus 's recent wanderings.
Odysseus 's identity is discovered by the housekeeper, Eurycleia, when she recognizes an old scar as she is washing his feet. Eurycleia tries to tell Penelope about the beggar 's true identity, but Athena makes sure that Penelope can not hear her. Odysseus then swears Eurycleia to secrecy.
The next day, at Athena 's prompting, Penelope maneuvers the Suitors into competing for her hand with an archery competition using Odysseus ' bow. The man who could string the bow and shoot an arrow through a dozen axe heads would win. Odysseus takes part in the competition himself: he alone is strong enough to string the bow and shoot the arrow through the dozen axe heads, making him the winner. He then throws off his rags and kills Antinous with his next arrow. Then, with the help of Athena, Odysseus, Telemachus, Eumaeus, and Philoetius the cowherd kill the other Suitors, first using the rest of the arrows and then swords and spears once both sides had armed themselves. Once the battle is won, Odysseus and Telemachus also hang twelve of their household maids whom Eurycleia identifies as guilty of betraying Penelope or having sex with the Suitors. They mutilate and kill the goatherd Melanthius, who had mocked and abused Odysseus and brought weapons and armor to the suitors. Now, at last, Odysseus identifies himself to Penelope. She is hesitant but recognizes him when he mentions that he made their bed from an olive tree still rooted to the ground. Many modern and ancient scholars take this to be the original ending of the Odyssey, and the rest to be an interpolation.
The next day he and Telemachus visit the country farm of his old father Laertes, who likewise accepts his identity only when Odysseus correctly describes the orchard that Laertes had previously given him.
The citizens of Ithaca have followed Odysseus on the road, planning to avenge the killing of the Suitors, their sons. Their leader points out that Odysseus has now caused the deaths of two generations of the men of Ithaca: his sailors, not one of whom survived; and the Suitors, whom he has now executed (albeit rightly). Athena intervenes as a dea ex machina and persuades both sides to give up the vendetta. After this, Ithaca is at peace once more, concluding the Odyssey.
Odysseus ' name means "trouble '' in Greek, referring to both the giving and receiving of trouble -- as is often the case in his wanderings. An early example of this is the boar hunt that gave Odysseus the scar by which Eurycleia recognizes him; Odysseus is injured by the boar and responds by killing it. Odysseus ' heroic trait is his mētis, or "cunning intelligence ''. He is often described as the "Peer of Zeus in Counsel ''. This intelligence is most often manifested by his use of disguise and deceptive speech. His disguises take forms both physical (altering his appearance) and verbal, such as telling the Cyclops Polyphemus that his name is Οὖτις, "Nobody '', then escaping after blinding Polyphemus. When asked by other Cyclopes why he is screaming, Polyphemus replies that "Nobody '' is hurting him, so the others assume that "If alone as you are (Polyphemus) none uses violence on you, why, there is no avoiding the sickness sent by great Zeus; so you had better pray to your father, the lord Poseidon ''. Odysseus ' most evident flaw is his arrogance and pride, or hubris. As he sails away from the island of the Cyclopes, he shouts his name and boasts that nobody can defeat the "Great Odysseus ''. The Cyclops then throws the top half of a mountain at him and prays to his father, Poseidon, saying that Odysseus has blinded him. This enrages Poseidon, causing the god to thwart Odysseus ' homecoming for a very long time.
The Odyssey is written in dactylic hexameter. It opens in medias res, in the middle of the overall story, with prior events described through flashbacks or storytelling. This device is also used by later authors of literary epics, such as Virgil in the Aeneid, Luís de Camões in Os Lusíadas and Alexander Pope in The Rape of the Lock.
The first four books of the poem trace Telemachus ' efforts to assert control of the household, and then, at Athena 's advice, his efforts to search for news of his long - lost father. Then the scene shifts: Odysseus has been a captive of the beautiful nymph Calypso, with whom he has spent seven of his ten lost years. Released by the intercession of his patroness Athena, through the aid of Hermes, he departs, but his raft is destroyed by his divine enemy Poseidon, who is angry because Odysseus blinded his son, Polyphemus. When Odysseus washes up on Scherie, home to the Phaeacians, he is assisted by the young Nausicaä and is treated hospitably. In return, he satisfies the Phaeacians ' curiosity, telling them, and the reader, of all his adventures since departing from Troy. The shipbuilding Phaeacians then loan him a ship to return to Ithaca, where he is aided by the swineherd Eumaeus, meets Telemachus, regains his household by killing the Suitors, and is reunited with his faithful wife, Penelope.
All ancient and nearly all modern editions and translations of the Odyssey are divided into 24 books. This division is convenient, but it may not be original. Many scholars believe it was developed by Alexandrian editors of the 3rd century BC. In the Classical period, moreover, several of the books (individually and in groups) were given their own titles: the first four books, focusing on Telemachus, are commonly known as the Telemachy. Odysseus ' narrative, Book 9, featuring his encounter with the cyclops Polyphemus, is traditionally called the Cyclopeia. Book 11, the section describing his meeting with the spirits of the dead is known as the Nekuia. Books 9 through 12, wherein Odysseus recalls his adventures for his Phaeacian hosts, are collectively referred to as the Apologoi: Odysseus ' "stories ''. Book 22, wherein Odysseus kills all the Suitors, has been given the title Mnesterophonia: "slaughter of the Suitors ''. This concludes the Greek Epic Cycle, though fragments remain of the "alternative ending '' of sorts known as the Telegony.
Telegony aside, the last 548 lines of the Odyssey, corresponding to Book 24, are believed by many scholars to have been added by a slightly later poet. Several passages in earlier books seem to be setting up the events of Book 24, so if it were indeed a later addition, the offending editor would seem to have changed earlier text as well. For more about varying views on the origin, authorship and unity of the poem see Homeric scholarship.
The events in the main sequence of the Odyssey (excluding Odysseus ' embedded narrative of his wanderings) take place in the Peloponnese and in what are now called the Ionian Islands. There are difficulties in the apparently simple identification of Ithaca, the homeland of Odysseus, which may or may not be the same island that is now called Ithakē (Ιθάκη). The wanderings of Odysseus as told to the Phaeacians, and the location of the Phaeacians ' own island of Scheria, pose more fundamental problems, if geography is to be applied: scholars, both ancient and modern, are divided as to whether or not any of the places visited by Odysseus (after Ismaros and before his return to Ithaca) are real.
Scholars have seen strong influences from Near Eastern mythology and literature in the Odyssey. Martin West has noted substantial parallels between the Epic of Gilgamesh and the Odyssey. Both Odysseus and Gilgamesh are known for traveling to the ends of the earth, and on their journeys go to the land of the dead. On his voyage to the underworld, Odysseus follows instructions given to him by Circe. Her island, Aeaea, is located at the edges of the world and seems to have close associations with the sun. Like Odysseus, Gilgamesh gets directions on how to reach the land of the dead from a divine helper: in this case, the goddess Siduri, who, like Circe, dwells by the sea at the ends of the earth. Her home is also associated with the sun: Gilgamesh reaches Siduri 's house by passing through a tunnel underneath Mt. Mashu, the high mountain from which the sun comes into the sky. West argues that the similarity of Odysseus ' and Gilgamesh 's journeys to the edges of the earth are the result of the influence of the Gilgamesh epic upon the Odyssey.
In 1914, paleontologist Othenio Abel surmised the origins of the cyclops to be the result of ancient Greeks finding an elephant skull. The enormous nasal passage in the middle of the forehead could have looked like the eye socket of a giant, to those who had never seen a living elephant. Classical scholars, on the other hand, have long realized that the story of the cyclops was originally a Greek folk tale, which existed independently of the Odyssey and which only became embedded in it at a later date. Similar stories are found in cultures across Europe and the Middle East. According to this explanation, the cyclops was originally simply a giant or ogre, much like Humbaba in the Epic of Gilgamesh. The detail about it having one eye was simply invented in order to explain how the creature was so easily blinded.
The oldest known extract of the Odyssey was found near the remains of the Temple of Zeus, on an engraved clay plaque in Olympia, Greece. It is believed to date from the 3rd century AD.
An important factor to consider about Odysseus ' homecoming is the hint at potential endings to the epic by using other characters as parallels for his journey. One example is that of Agamemnon 's homecoming versus Odysseus ' homecoming. Upon Agamemnon 's return, his wife Clytemnestra and her lover, Aegisthus, kill Agamemnon. Agamemnon 's son, Orestes, out of vengeance for his father 's death, kills Aegisthus. This parallel compares the death of the suitors to the death of Aegisthus and sets Orestes up as an example for Telemachus. Also, because Odysseus knows about Clytemnestra 's betrayal, Odysseus returns home in disguise in order to test the loyalty of his own wife, Penelope. Later, Agamemnon praises Penelope for not killing Odysseus. It is because of Penelope that Odysseus has fame and a successful homecoming. This successful homecoming is unlike Achilles, who has fame but is dead, and Agamemnon, who had an unsuccessful homecoming resulting in his death.
Only two of Odysseus ' adventures are described by the poet; the rest are recounted by Odysseus himself. The two scenes that the poet describes are Odysseus on Calypso 's island and Odysseus ' encounter with the Phaeacians. These scenes are told by the poet to represent an important transition in Odysseus ' journey: from concealment to homecoming. Calypso 's name means "concealer '' or "one who conceals, '' and that is exactly what she does with Odysseus. Calypso keeps Odysseus concealed from the world and unable to return home. After leaving Calypso 's island, the poet describes Odysseus ' encounters with the Phaeacians -- those who "convoy without hurt to all men '' -- which represent his transition from not returning home to returning home. Also, during Odysseus ' journey, he encounters many beings that are close to the gods. These encounters are useful in understanding that Odysseus is in a world beyond the world of men, which helps explain why he can not return home. These beings close to the gods include the Phaeacians, who lived near the Cyclopes and whose king, Alcinous, is the great - grandson of the king of the giants, Eurymedon, and the grandson of Poseidon. Some of the other characters that Odysseus encounters are Polyphemus, the cyclops son of Poseidon, god of oceans; Circe, the sorceress daughter of the Sun who turns men into animals; Calypso, a goddess; and the Laestrygonians, cannibalistic giants.
Throughout the course of the epic, Odysseus encounters several examples of xenia ("guest - friendship ''), which provide models of how hosts should and should not act. The Phaeacians demonstrate exemplary guest - friendship by feeding Odysseus, giving him a place to sleep, and granting him a safe voyage home, which are all things a good host should do. Polyphemus demonstrates poor guest - friendship. His only "gift '' to Odysseus is that he will eat him last. Calypso also exemplifies poor guest - friendship because she does not allow Odysseus to leave her island. Another important aspect of guest - friendship is that kingship implies generosity. It is assumed that a king has the means to be a generous host and is more generous with his own property. This is best seen when Odysseus, disguised as a beggar, begs Antinous, one of the suitors, for food and Antinous denies his request. Odysseus essentially says that while Antinous may look like a king, he is far from a king since he is not generous.
Another theme throughout the Odyssey is testing. This occurs in two distinct ways. Odysseus tests the loyalty of others, and others test Odysseus ' identity. An example of Odysseus testing the loyalties of others is when he returns home. Instead of immediately revealing his identity, he arrives disguised as a beggar and then proceeds to determine who in his house has remained loyal to him and who has helped the suitors. After Odysseus reveals his true identity, the characters test Odysseus ' identity to see if he really is who he says he is. For instance, Penelope tests Odysseus ' identity by saying that she will move the bed into the other room for him. This is a difficult task since the bed is made out of a living tree that would have to be cut down, a fact that only the real Odysseus would know, thus proving his identity. For more on the progression of testing type scenes, see below.
Omens occur frequently throughout the Odyssey, as well as in many other epics. Within the Odyssey, omens frequently involve birds. It is important to note who receives the omens and what these omens mean to the characters and to the epic as a whole. For instance, bird omens are shown to Telemachus, Penelope, Odysseus, and the suitors. Telemachus and Penelope receive their omens as well in the form of words, sneezes, and dreams. However, Odysseus is the only character who receives thunder or lightning as an omen. This is important to note because the thunder came from Zeus, the king of the gods. This direct relationship between Zeus and Odysseus represents the kingship of Odysseus.
Finding scenes occur in the Odyssey when a character discovers another character within the epic. Finding scenes proceed as follows:
These finding scenes can be identified several times throughout the epic, including when Telemachus and Pisistratus find Menelaus, when Calypso finds Odysseus on the beach, and when the suitor Amphimedon finds Agamemnon in Hades.
Guest - friendship is also a theme in the Odyssey, but it too follows a very specific pattern. This pattern is:
Another important aspect of guest - friendship is not keeping the guest longer than they wish, as well as ensuring their safety while they are a guest within the host 's home.
While testing is a theme within the epic, it is also accompanied by a very specific type scene. Throughout the epic, the testing of others follows a typical pattern. This pattern is:
Omens are another example of a type scene in the Odyssey. Two important parts of an omen type scene are the recognition of the omen and then the interpretation. In the Odyssey specifically, there are several omens involving birds. All of the bird omens -- with the exception of the first one in the epic -- show large birds attacking smaller birds. Accompanying each omen is a wish which can be either explicitly stated or only implied. For example, Telemachus wishes for vengeance and for Odysseus to be home, Penelope wishes for Odysseus ' return, and the suitors wish for the death of Telemachus. The omens seen in the Odyssey are also a recurring theme throughout the epic.
The Odyssey is regarded as one of the most important foundational works of western literature, and western literary critics widely consider it a timeless classic.
Straightforward retellings of the Odyssey have flourished ever since the Middle Ages. Merugud Uilix maicc Leirtis ("On the Wandering of Ulysses, son of Laertes '') is an eccentric Old Irish version of the material; the work exists in a 12th - century AD manuscript, which linguists believe is based on an 8th - century original. Il ritorno d'Ulisse in patria, first performed in 1640, is an opera by Claudio Monteverdi based on the second half of Homer 's Odyssey. The first canto of Ezra Pound 's The Cantos (1917) is both a translation and a retelling of Odysseus ' journey to the underworld. The poem "Ulysses '' by Alfred, Lord Tennyson is narrated by an aged Ulysses who is determined to continue to live life to the fullest. The Odyssey (1997), a made - for - TV movie directed by Andrei Konchalovsky, is a slightly abbreviated version of the epic.
Other authors have composed more creative reworkings of the poem, often updated to address contemporary themes and concerns. Cyclops by Euripides, the only fully extant satyr play, retells the episode involving Polyphemus with a humorous twist. A True Story, written by Lucian of Samosata in the 2nd century AD, is a satire on the Odyssey and on ancient travel tales, describing a journey sailing westward, beyond the Pillars of Hercules and to the Moon, the first known text that could be called science fiction.
James Joyce 's modernist novel Ulysses (1922) is a retelling of the Odyssey set in modern - day Dublin. Each chapter in the book has an assigned theme, technique, and correspondences between its characters and those of Homer 's Odyssey. Homer 's Daughter by Robert Graves is a novel imagining how the version we have might have been invented out of older tales. The Japanese - French anime Ulysses 31 (1981) updates the ancient setting into a 31st - century space opera. Omeros (1991), an epic poem by Derek Walcott, is in part a retelling of the Odyssey, set on the Caribbean island of St. Lucia. The film Ulysses ' Gaze (1995) directed by Theo Angelopoulos has many of the elements of the Odyssey set against the backdrop of the most recent and previous Balkan Wars.
Daniel Wallace 's Big Fish: A Novel of Mythic Proportions (1998) adapts the epic to the American South, while also incorporating tall tales into its first - person narrative much as Odysseus does in the Apologoi (Books 9 - 12). The Coen Brothers ' 2000 film O Brother, Where Art Thou? is loosely based on Homer 's poem. Margaret Atwood 's 2005 novella The Penelopiad is an ironic rewriting of the Odyssey from Penelope 's perspective. Zachary Mason 's The Lost Books of the Odyssey (2007) is a series of short stories that rework Homer 's original plot in a contemporary style reminiscent of Italo Calvino. The Heroes of Olympus (2010 -- 2014) by Rick Riordan is based entirely on Greek mythology and includes many aspects and characters from the Odyssey.
Authors have sought to imagine new endings for the Odyssey. In canto XXVI of the Inferno, Dante Alighieri meets Odysseus in the eighth circle of hell, where Odysseus himself appends a new ending to the Odyssey in which he never returns to Ithaca and instead continues his restless adventuring. Nikos Kazantzakis aspires to continue the poem and explore more modern concerns in his epic poem The Odyssey: A Modern Sequel, which was first published in 1938 in modern Greek.
In 2018, BBC Culture polled experts around the world to nominate the stories they felt had shaped mindsets or influenced history. The Odyssey topped the list.
This is a partial list of translations into English of Homer 's Odyssey.
where does the issues with food security take place | Food security - wikipedia
Food security is a condition related to the supply of food, and individuals ' access to it. There is evidence of granaries being in use over 10,000 years ago, with central authorities in civilizations such as ancient China and ancient Egypt being known to release food from storage in times of famine. At the 1974 World Food Conference the term "food security '' was defined with an emphasis on supply. Food security, they said, is the "availability at all times of adequate, nourishing, diverse, balanced and moderate world food supplies of basic foodstuffs to sustain a steady expansion of food consumption and to offset fluctuations in production and prices ''. Later definitions added demand and access issues to the definition. The final report of the 1996 World Food Summit states that food security "exists when all people, at all times, have physical and economic access to sufficient, safe and nutritious food to meet their dietary needs and food preferences for an active and healthy life ''.
Household food security exists when all members, at all times, have access to enough food for an active, healthy life. Individuals who are food secure do not live in hunger or fear of starvation. Food insecurity, on the other hand, is a situation of "limited or uncertain availability of nutritionally adequate and safe foods or limited or uncertain ability to acquire acceptable foods in socially acceptable ways '', according to the United States Department of Agriculture (USDA). Food security incorporates a measure of resilience to future disruption or unavailability of critical food supply due to various risk factors including droughts, shipping disruptions, fuel shortages, economic instability, and wars. In the years 2011 -- 2013, an estimated 842 million people were suffering from chronic hunger. The Food and Agriculture Organization of the United Nations, or FAO, identified the four pillars of food security as availability, access, utilization, and stability. The United Nations (UN) recognized the Right to Food in the Declaration of Human Rights in 1948, and has since noted that it is vital for the enjoyment of all other rights.
The 1996 World Food Summit declared that "food should not be used as an instrument for political and economic pressure ''. According to the International Centre for Trade and Sustainable Development, failed agriculture market regulation and the lack of anti-dumping mechanisms cause much of the world 's food scarcity and malnutrition.
Food security can be measured by calorie intake per person per day, available on a household budget. In general, the objective of food security indicators and measures is to capture some or all of the main components of food security in terms of food availability, access, and utilization or adequacy. While availability (production and supply) and utilization / adequacy (nutritional status / anthropometric measures) seem much easier to estimate, and are thus more popular, access (the ability to acquire a sufficient quantity and quality of food) remains largely elusive. The factors influencing household food access are often context specific.
Several measures have been developed that aim to capture the access component of food security, with some notable examples developed by the USAID - funded Food and Nutrition Technical Assistance (FANTA) project, collaborating with Cornell and Tufts University and Africare and World Vision. These include:
Food insecurity is measured in the United States by questions in the Census Bureau 's Current Population Survey. The questions asked are about anxiety that the household budget is inadequate to buy enough food, inadequacy in the quantity or quality of food eaten by adults and children in the household, and instances of reduced food intake or consequences of reduced food intake for adults and for children. A National Academy of Sciences study commissioned by the USDA criticized this measurement and the relationship of "food security '' to hunger, adding "it is not clear whether hunger is appropriately identified as the extreme end of the food security scale. ''
The FAO, World Food Programme (WFP), and International Fund for Agricultural Development (IFAD) collaborate to produce The State of Food Insecurity in the World. The 2012 edition described improvements made by the FAO to the prevalence of undernourishment (PoU) indicator that is used to measure rates of food insecurity. New features include revised minimum dietary energy requirements for individual countries, updates to the world population data, and estimates of food losses in retail distribution for each country. Measurements that factor into the indicator include dietary energy supply, food production, food prices, food expenditures, and volatility of the food system. The stages of food insecurity range from food secure situations to full - scale famine. A new peer - reviewed journal, Food Security: The Science, Sociology and Economics of Food Production and Access to Food, began publishing in 2009.
With its prevalence of undernourishment (PoU) indicator, the FAO reported that almost 870 million people were chronically undernourished in the years 2010 -- 2012. This represents 12.5 % of the global population, or 1 in 8 people. Higher rates occur in developing countries, where 852 million people (about 15 % of the population) are chronically undernourished. The report noted that Asia and Latin America have achieved reductions in rates of undernourishment that put these regions on track for achieving the Millennium Development Goal of halving the prevalence of undernourishment by 2015. The UN noted that about 2 billion people do not consume a sufficient amount of vitamins and minerals. In India, the second-most populous country in the world, 30 million people have been added to the ranks of the hungry since the mid-1990s and 46 % of children are underweight.
Famines have been frequent in world history. Some have killed millions and substantially diminished the population of a large area. The most common causes have been drought and war, but the greatest famines in history were caused by economic policy.
Afghanistan
In Afghanistan, about 35 % of households are food insecure. The prevalence of underweight, stunting, and wasting in children under 5 years of age is also very high.
Food insecurity has distressed Mexico throughout its history and continues to do so in the present. Food availability is not the issue; rather, severe deficiencies in the accessibility of food contribute to the insecurity. Between 2003 and 2005, the total Mexican food supply was well above what was sufficient to meet the requirements of the Mexican population, averaging 3,270 kilocalories per capita daily, higher than the minimum requirement of 1,850 kilocalories per capita daily. However, at least 10 percent of the population in every Mexican state suffers from inadequate food access. In nine states, 25 -- 35 percent live in food - insecure households. More than 10 percent of the populations of seven Mexican states fall into the category of Serious Food Insecurity.
The issue of food inaccessibility is magnified by chronic child malnutrition, as well as obesity in children, adolescents, and families.
Mexico is vulnerable to drought which can further cripple agriculture.
The United States Department of Agriculture defines food insecurity as "limited or uncertain availability of nutritionally adequate and safe foods or limited or uncertain ability to acquire acceptable foods in socially acceptable ways. '' Food security is defined by the USDA as "access by all people at all times to enough food for an active, healthy life. ''
National Food Security Surveys are the main survey tool used by the USDA to measure food security in the United States. Based on respondents ' answers to survey questions, the household can be placed on a continuum of food security defined by the USDA. This continuum has four categories: high food security, marginal food security, low food security, and very low food security. Economic Research Service report number 155 (ERS - 155) estimates that 14.5 percent (17.6 million) of US households were food insecure at some point in 2012. The prevalence of food insecurity has been relatively stable in the United States since the 2008 economic recession.
In 2016:
Source: https://www.ers.usda.gov/topics/food-nutrition-assistance/food-security-in-the-us/key-statistics-graphics.aspx
In 2010 the government of the United States began the Feed the Future Initiative. This initiative is expected to work on the basis of country - led priorities that call for consistent support by the governments, donor organizations, the private sector, and the civil society to accomplish its long - term goals.
The World Food Summit, held in Rome in 1996, aimed to renew a global commitment to the fight against hunger. The Food and Agriculture Organization of the United Nations (FAO) called the summit in response to widespread under - nutrition and growing concern about the capacity of agriculture to meet future food needs. The conference produced two key documents, the Rome Declaration on World Food Security and the World Food Summit Plan of Action.
The Rome Declaration called for the members of the United Nations to work to halve the number of chronically undernourished people on the Earth by the year 2015. The Plan of Action set a number of targets for government and non-governmental organizations for achieving food security, at the individual, household, national, regional and global levels.
A follow - up World Summit on Food Security took place at the FAO 's headquarters in Rome between November 16 and 18, 2009. The decision to convene the summit was taken by the Council of FAO in June 2009, at the proposal of FAO Director - General Dr Jacques Diouf. Heads of state and government attended this summit.
The WHO states that there are three pillars that determine food security: food availability, food access, and food use and misuse. The FAO adds a fourth pillar: the stability of the first three dimensions of food security over time. In 2009, the World Summit on Food Security stated that the "four pillars of food security are availability, access, utilization, and stability ''.
Food availability relates to the supply of food through production, distribution, and exchange. Food production is determined by a variety of factors including land ownership and use; soil management; crop selection, breeding, and management; livestock breeding and management; and harvesting. Crop production can be affected by changes in rainfall and temperatures. The use of land, water, and energy to grow food often competes with other uses, which can affect food production. Land used for agriculture can be used for urbanization or lost to desertification, salinization, and soil erosion due to unsustainable agricultural practices. Crop production is not required for a country to achieve food security. Nations do not have to have the natural resources required to produce crops in order to achieve food security, as seen in the examples of Japan and Singapore.
Because food consumers outnumber producers in every country, food must be distributed to different regions or nations. Food distribution involves the storage, processing, transport, packaging, and marketing of food. Food - chain infrastructure and storage technologies on farms can also affect the amount of food wasted in the distribution process. Poor transport infrastructure can increase the price of supplying water and fertilizer as well as the price of moving food to national and global markets. Around the world, few individuals or households are continuously self - reliant for food. This creates the need for a bartering, exchange, or cash economy to acquire food. The exchange of food requires efficient trading systems and market institutions, which can affect food security. Per capita world food supplies are more than adequate to provide food security to all, and thus food accessibility is a greater barrier to achieving food security.
Food access refers to the affordability and allocation of food, as well as the preferences of individuals and households. The UN Committee on Economic, Social, and Cultural Rights noted that the causes of hunger and malnutrition are often not a scarcity of food but an inability to access available food, usually due to poverty. Poverty can limit access to food, and can also increase how vulnerable an individual or household is to food price spikes. Access depends on whether the household has enough income to purchase food at prevailing prices or has sufficient land and other resources to grow its own food. Households with enough resources can overcome unstable harvests and local food shortages and maintain their access to food.
There are two distinct types of access to food: direct access, in which a household produces food using human and material resources, and economic access, in which a household purchases food produced elsewhere. Location can affect access to food and which type of access a family will rely on. The assets of a household, including income, land, products of labor, inheritances, and gifts, can determine a household 's access to food. However, the ability to access sufficient food may not lead to the purchase of food over other materials and services. Demographics and education levels of members of the household, as well as the gender of the household head, determine the preferences of the household, which influences the types of food that are purchased. A household 's access to enough and nutritious food may not assure adequate food intake of all household members, as intrahousehold food allocation may not sufficiently meet the requirements of each member of the household. The USDA adds that access to food must be available in socially acceptable ways, without, for example, resorting to emergency food supplies, scavenging, stealing, or other coping strategies.
The next pillar of food security is food utilization, which refers to the metabolism of food by individuals. Once food is obtained by a household, a variety of factors affect the quantity and quality of food that reaches members of the household. In order to achieve food security, the food ingested must be safe and must be enough to meet the physiological requirements of each individual. Food safety affects food utilization, and can be affected by the preparation, processing, and cooking of food in the community and household. Nutritional values of the household determine food choice, and whether food meets cultural preferences is important to utilization in terms of psychological and social well - being. Access to healthcare is another determinant of food utilization, since the health of individuals controls how the food is metabolized. For example, intestinal parasites can take nutrients from the body and decrease food utilization. Sanitation can also decrease the occurrence and spread of diseases that can affect food utilization. Education about nutrition and food preparation can affect food utilization and improve this pillar of food security.
Food stability refers to the ability to obtain food over time. Food insecurity can be transitory, seasonal, or chronic. In transitory food insecurity, food may be unavailable during certain periods of time. At the food production level, natural disasters and drought result in crop failure and decreased food availability. Civil conflicts can also decrease access to food. Instability in markets resulting in food - price spikes can cause transitory food insecurity. Other factors that can temporarily cause food insecurity are loss of employment or productivity, which can be caused by illness. Seasonal food insecurity can result from the regular pattern of growing seasons in food production.
Chronic (or permanent) food insecurity is defined as the long - term, persistent lack of adequate food. In this case, households are constantly at risk of being unable to acquire food to meet the needs of all members. Chronic and transitory food insecurity are linked, since the recurrence of transitory food insecurity can make households more vulnerable to chronic food insecurity.
Famine and hunger are both rooted in food insecurity. Chronic food insecurity translates into a high degree of vulnerability to famine and hunger; ensuring food security presupposes elimination of that vulnerability.
Many countries experience ongoing food shortages and distribution problems. These result in chronic and often widespread hunger amongst significant numbers of people. Human populations can respond to chronic hunger and malnutrition by decreasing body size, known in medical terms as stunting or stunted growth. This process starts in utero if the mother is malnourished and continues through approximately the third year of life. It leads to higher infant and child mortality, but at rates far lower than during famines. Once stunting has occurred, improved nutritional intake after the age of about two years is unable to reverse the damage. Stunting itself can be viewed as a coping mechanism, bringing body size into alignment with the calories available during adulthood in the location where the child is born. Limiting body size as a way of adapting to low levels of energy (calories) adversely affects health in three ways:
Water deficits, which are already spurring heavy grain imports in numerous smaller countries, may soon do the same in larger countries, such as China or India. The water tables are falling in scores of countries (including northern China, the US, and India) due to widespread overpumping using powerful diesel and electric pumps. Other countries affected include Pakistan, Afghanistan, and Iran. This will eventually lead to water scarcity and cutbacks in grain harvest. Even with the overpumping of its aquifers, China is developing a grain deficit. When this happens, it will almost certainly drive grain prices upward. Most of the 3 billion people projected to be born worldwide by mid-century will be born in countries already experiencing water shortages. After China and India, there is a second tier of smaller countries with large water deficits -- Afghanistan, Algeria, Egypt, Iran, Mexico, and Pakistan. Four of these already import a large share of their grain. Only Pakistan remains self - sufficient. But with a population expanding by 4 million a year, it will likely soon turn to the world market for grain.
Regionally, Sub-Saharan Africa has the largest number of water - stressed countries of any place on the globe; of an estimated 800 million people who live in Africa, 300 million live in a water - stressed environment. It is estimated that by 2030, 75 million to 250 million people in Africa will be living in areas of high water stress, which will likely displace anywhere between 24 million and 700 million people as conditions become increasingly unlivable. Because the majority of Africa remains dependent on an agricultural lifestyle and 80 to 90 percent of all families in rural Africa rely upon producing their own food, water scarcity translates to a loss of food security.
Multimillion - dollar investments beginning in the 1990s by the World Bank have reclaimed desert and turned the Ica Valley in Peru, one of the driest places on earth, into the largest supplier of asparagus in the world. However, the constant irrigation has caused a rapid drop in the water table, in some places as much as eight meters per year, one of the fastest rates of aquifer depletion in the world. The wells of small farmers and local people are beginning to run dry and the water supply for the main city in the valley is under threat. As a cash crop, asparagus has provided jobs for local people, but most of the money goes to the buyers, mainly the British. A 2010 report concluded that the industry is not sustainable and accuses investors, including the World Bank, of failing to take proper responsibility for the effect of their decisions on the water resources of poorer countries. Diverting water from the headwaters of the Ica River to asparagus fields has also led to a water shortage in the mountain region of Huancavelica, where indigenous communities make a marginal living herding.
Intensive farming often leads to a vicious cycle of exhaustion of soil fertility and decline of agricultural yields. Approximately 40 percent of the world 's agricultural land is seriously degraded. In Africa, if current trends of soil degradation continue, the continent might be able to feed just 25 percent of its population by 2025, according to UNU 's Ghana - based Institute for Natural Resources in Africa.
Extreme events, such as droughts and floods, are forecast to increase as climate change and global warming take hold. Ranging from overnight floods to gradually worsening droughts, these will have a range of effects on the agricultural sector. According to the Climate & Development Knowledge Network report Managing Climate Extremes and Disasters in the Agriculture Sectors: Lessons from the IPCC SREX Report, the effects will include changing productivity and livelihood patterns, economic losses, and effects on infrastructure, markets and food security. Food security in the future will be linked to our ability to adapt agricultural systems to extreme events. An example of a shifting weather pattern would be a rise in temperatures. As temperatures rise due to climate change, there is a risk of a diminished food supply due to heat damage.
Approximately 2.4 billion people live in the drainage basin of the Himalayan rivers. India, China, Pakistan, Afghanistan, Bangladesh, Nepal and Myanmar could experience floods followed by severe droughts in coming decades. In India alone, the Ganges provides water for drinking and farming for more than 500 million people. The west coast of North America, which gets much of its water from glaciers in mountain ranges such as the Rocky Mountains and Sierra Nevada, also would be affected. Glaciers are not the only worry for the developing nations; sea level is reported to rise as climate change progresses, reducing the amount of land available for agriculture.
In other parts of the world, a big effect will be low yields of grain, according to the World Food Trade Model, specifically in the low - latitude regions where much of the developing world is located. From this, the price of grain will rise, along with the costs to the developing nations trying to grow the grain. Due to this, every 2 -- 2.5 % price hike will increase the number of hungry people by 1 %. Low crop yields are just one of the problems facing farmers in the low latitudes and tropical regions. The timing and length of the growing seasons, when farmers plant their crops, are going to change dramatically, per the USDA, due to unknown changes in soil temperature and moisture conditions.
Another way of thinking about food security and climate change comes from Evan Fraser, a geographer working at the University of Guelph in Ontario, Canada. His approach is to explore the vulnerability of food systems to climate change, and he defines vulnerability to climate change as situations that occur when relatively minor environmental problems cause major effects on food security. Examples of this include the Irish Potato Famine, which was caused by a rainy year that created ideal conditions for the fungal blight to spread in potato fields, or the Ethiopian Famine in the early 1980s. Three factors stand out as common in such cases, and these three factors act as a diagnostic "tool kit '' through which to identify cases where food security may be vulnerable to climate change. These factors are: (1) specialized agro-ecosystems; (2) households with very few livelihood options other than farming; (3) situations where formal institutions do not provide adequate safety nets to protect people. "The International Food Policy Research Institute (IFPRI) estimates that an additional US $ 7.1 -- 7.3 billion per year are needed in agricultural investments to offset the negative effect of climate change on nutrition for children by 2050 (Table 6). ''
"Results show that climate change is likely to reduce agricultural production, thus reducing food availability '' (Brown etal., 2008.) "The food security threat posed by climate change is greatest for Africa, where agricultural yields and per capita food production has been steadily declining, and where population growth will double the demand for food, water, and livestock forage in the next 30 years '' (Devereux et al., 2004). In 2060, the hungry population could range from 641 million to 2087 million with climate change (Chen et al., 1994). By the year 2030, Cereal crops will decrease from 15 to 19 percent, temperatures are estimated to rise from 1 degrees Celsius to 2. 75 degrees Celsius, which will lead to less rainfall, which will all result in an increase in food insecurity in 2030 (Devereux etal, 2004). In prediction farming countries will be the worst sectors hit, hot countries and drought countries will reach even higher temperatures and richer countries will be hit the least as they have more access to more resources (Devereux et al. 2004). From a food security perspective, climate change is the dominant rationale to the increase in recent years and predicted years to come.
Diseases affecting livestock or crops can have devastating effects on food availability especially if there are no contingency plans in place. For example, Ug99, a lineage of wheat stem rust which can cause up to 100 % crop losses, is present in wheat fields in several countries in Africa and the Middle East and is predicted to spread rapidly through these regions and possibly further afield, potentially causing a wheat production disaster that would affect food security worldwide.
The genetic diversity of the crop wild relatives of wheat can be used to improve modern varieties to be more resistant to rust. In their centers of origin, wild wheat plants are screened for resistance to rust; their genetic information is then studied, and finally wild plants and modern varieties are crossed by means of modern plant breeding in order to transfer the resistance genes from the wild plants to the modern varieties.
Farmland and other agricultural resources have long been used to produce non-food crops, including industrial materials such as cotton, flax, and rubber; drug crops such as tobacco and opium; and biofuels such as firewood. In the 21st century the production of fuel crops has increased, adding to this diversion. However, technologies are also being developed to produce food commercially from energy sources such as natural gas and electrical energy, with a tiny water and land footprint.
Nobel Prize - winning economist Amartya Sen observed that "there is no such thing as an apolitical food problem. '' While drought and other naturally occurring events may trigger famine conditions, it is government action or inaction that determines its severity, and often even whether or not a famine will occur. The 20th century offers examples of governments, as with collectivization in the Soviet Union or the Great Leap Forward in the People 's Republic of China, undermining the food security of their own nations. Mass starvation is frequently a weapon of war, as in the blockade of Germany, the Battle of the Atlantic, and the blockade of Japan during World War I and World War II, and in the Hunger Plan enacted by Nazi Germany.
Governments sometimes have a narrow base of support, built upon cronyism and patronage. Fred Cuny pointed out in 1999 that under these conditions: "The distribution of food within a country is a political issue. Governments in most countries give priority to urban areas, since that is where the most influential and powerful families and enterprises are usually located. The government often neglects subsistence farmers and rural areas in general. The more remote and underdeveloped the area the less likely the government will be to effectively meet its needs. Many agrarian policies, especially the pricing of agricultural commodities, discriminate against rural areas. Governments often keep prices of basic grains at such artificially low levels that subsistence producers can not accumulate enough capital to make investments to improve their production. Thus, they are effectively prevented from getting out of their precarious situation. ''
Dictators and warlords have used food as a political weapon, rewarding supporters while denying food supplies to areas that oppose their rule. Under such conditions food becomes a currency with which to buy support and famine becomes an effective weapon against opposition.
Governments with strong tendencies towards kleptocracy can undermine food security even when harvests are good. When government monopolizes trade, farmers may find that they are free to grow cash crops for export, but under penalty of law only able to sell their crops to government buyers at prices far below the world market price. The government then is free to sell their crop on the world market at full price, pocketing the difference.
When the rule of law is absent, or private property is non-existent, farmers have little incentive to improve their productivity. If a farm becomes noticeably more productive than neighboring farms, it may become the target of individuals well connected to the government. Rather than risk being noticed and possibly losing their land, farmers may be content with the perceived safety of mediocrity.
As pointed out by William Bernstein in The Birth of Plenty: "Individuals without property are susceptible to starvation, and it is much easier to bend the fearful and hungry to the will of the state. If a (farmer 's) property can be arbitrarily threatened by the state, that power will inevitably be employed to intimidate those with divergent political and religious opinions. ''
The approach known as food sovereignty views the business practices of multinational corporations as a form of neocolonialism. It contends that multinational corporations have the financial resources available to buy up the agricultural resources of impoverished nations, particularly in the tropics. They also have the political clout to convert these resources to the exclusive production of cash crops for sale to industrialized nations outside of the tropics, and in the process to squeeze the poor off of the more productive lands. Under this view, subsistence farmers are left to cultivate only lands that are so marginal in terms of productivity as to be of no interest to the multinational corporations. Likewise, food sovereignty holds it to be true that communities should be able to define their own means of production and that food is a basic human right. With several multinational corporations now pushing agricultural technologies on developing countries, technologies that include improved seeds, chemical fertilizers, and pesticides, crop production has become an increasingly analyzed and debated issue. Many communities calling for food sovereignty are protesting the imposition of Western technologies onto their indigenous systems and agency.
Current UN projections show a continued increase in population in the future (but a steady decline in the population growth rate), with the global population expected to reach 9.8 billion in 2050 and 11.2 billion by 2100. Estimates by the UN Population Division for the year 2150 range between 3.2 and 24.8 billion; mathematical modeling supports the lower estimate. Some analysts have questioned the sustainability of further world population growth, highlighting the growing pressures on the environment, global food supplies, and energy resources. Solutions for feeding the extra billions in the future are being studied and documented. One out of every seven people on our planet goes to sleep hungry; 25,000 people die of malnutrition and hunger - related diseases every day.
While agricultural output has increased, energy consumption to produce a crop has also increased at a greater rate, so that the ratio of crops produced to energy input has decreased over time. Green Revolution techniques also heavily rely on chemical fertilizers, pesticides and herbicides, many of which are petroleum products, making agriculture increasingly reliant on petroleum.
Between 1950 and 1984, as the Green Revolution transformed agriculture around the globe, world grain production increased by 250 %. The energy for the Green Revolution was provided by fossil fuels in the form of fertilizers (natural gas), pesticides (oil), and hydrocarbon fueled irrigation.
David Pimentel, professor of ecology and agriculture at Cornell University, and Mario Giampietro, senior researcher at the National Research Institute on Food and Nutrition (NRIFN), place the maximum U.S. population for a sustainable economy at 210 million in their study Food, Land, Population and the U.S. Economy. To achieve a sustainable economy and avert disaster, the study says, the United States must reduce its population by at least one - third, and world population will have to be reduced by two - thirds.
The authors of this study believe that the mentioned agricultural crisis will only begin to affect us after 2020, and will not become critical until 2050. The oncoming peaking of global oil production (and subsequent decline of production), along with the peak of North American natural gas production will very likely precipitate this agricultural crisis much sooner than expected. Geologist Dale Allen Pfeiffer claims that coming decades could see spiraling food prices without relief and massive starvation on a global level such as never experienced before.
Since 1961, human diets across the world have become more diverse in the consumption of major commodity staple crops, with a corollary decline in consumption of local or regionally important crops, and thus have become more homogeneous globally. The differences between the foods eaten in different countries were reduced by 68 % between 1961 and 2009. The modern "global standard '' diet contains an increasingly large percentage of a relatively small number of major staple commodity crops, which have increased substantially in the share of the total food energy (calories), protein, fat, and food weight that they provide to the world 's human population, including wheat, rice, sugar, maize, soybean (by + 284 %), palm oil (by + 173 %), and sunflower (by + 246 %). Whereas nations used to consume greater proportions of locally or regionally important crops, wheat has become a staple in over 97 % of countries, with the other global staples showing similar dominance worldwide. Other crops have declined sharply over the same period, including rye, yam, sweet potato (by − 45 %), cassava (by − 38 %), coconut, sorghum (by − 52 %) and millets (by − 45 %). Such crop diversity change in the human diet is associated with mixed effects on food security, improving under - nutrition in some regions but contributing to the diet - related diseases caused by over-consumption of macronutrients.
On April 30, 2008, Thailand, one of the world 's biggest rice exporters, announced the creation of the Organisation of Rice Exporting Countries, with the potential to develop into a price - fixing cartel for rice. The project aims to organize 21 rice - exporting countries into a body that controls the price of rice; the group is mainly made up of Thailand, Vietnam, Cambodia, Laos and Myanmar. The organization attempts to serve the purpose of making a "contribution to ensuring food stability, not just in an individual country but also to address food shortages in the region and the world ''. However, it is still questionable whether this organization could serve as an effective rice price - fixing cartel, similar to OPEC 's mechanism for managing petroleum. Economic analysts and traders said the proposal would go nowhere because of the inability of governments to cooperate with each other and control farmers ' output. Moreover, the countries involved expressed concern that this could only worsen food security.
China needs not less than 120 million hectares of arable land for its food security. China has recently reported a surplus of 15 million hectares; on the other side of the coin, some 4 million hectares of conversion to urban use and 3 million hectares of contaminated land have been reported as well. Furthermore, a survey found that 2.5 % of China 's arable land is too contaminated to grow food without harm. In Europe, the conversion of agricultural soil has implied a net loss of production potential, but this rapid loss in the area of arable soils appears economically meaningless because the EU is no longer perceived to be dependent on internal food supply. During the period 2000 -- 2006 the European Union lost 0.27 % of its cropland and 0.26 % of its crop productive potential. The loss of agricultural land during the same time was the highest in the Netherlands, which lost 1.57 % of its crop production potential within six years. The figures are quite alarming for Cyprus (0.84 %), Ireland (0.77 %) and Spain (0.49 %) as well. In Italy, in the Emilia - Romagna plain (ERP), the conversion of 15,000 hectares of agricultural soil (period 2003 - 2008) implied a net loss of 109,000 Mg per year of wheat, which accounts for the calories needed by 14 % of the ERP population (425,000 people). Such a loss in wheat production is just 0.02 % of the gross domestic product (GDP) of the Emilia - Romagna region, which is actually a minor effect in financial terms. Additionally, the income from the new land use is often much higher than the one guaranteed by agriculture, as in the case of urbanisation or extraction of raw materials.
As anthropogenic greenhouse gas emissions reduce the stability of the global climate, abrupt climate change could become more intense. The impact of an asteroid or comet larger than about 1 km in diameter has the potential to block the sun globally, causing impact winter. Particles in the troposphere would quickly rain out, but particles in the stratosphere, especially sulfate, could remain there for years. Similarly, a supervolcanic eruption would reduce the potential of agricultural production from solar photosynthesis, causing volcanic winter. The Toba supervolcanic eruption approximately 70,000 years ago may have nearly caused the extinction of humans (see Toba catastrophe theory). Again, primarily sulfate particles could block the sun for years. Solar blocking is not limited to natural causes, as nuclear winter is also possible: in that scenario, widespread nuclear war and the burning of cities release soot into the stratosphere, where it would stay for about 10 years. The high stratospheric temperatures produced by soot absorbing solar radiation would create near - global ozone hole conditions even for a regional nuclear conflict.
Agricultural subsidies are paid to farmers and agribusinesses to supplement their income, manage the supply of their commodities and influence the cost and supply of those commodities. In the United States, the main crops the government subsidizes contribute to the obesity problem; since 1995, $300 billion have gone to crops that are used to create junk food.
Taxpayers heavily subsidize corn and soy, which are main ingredients in the processed and fatty foods that the government does not encourage, and which are used to fatten livestock. Half of farmland is devoted to corn and soy; the rest is wheat. Soy and corn can be found in sweeteners like high - fructose corn syrup. Over $19 billion during the 18 years prior to 2013 was spent to incentivize farmers to grow these crops, raising the price of fruits and vegetables by about 40 % and lowering the price of dairy and other animal products. Little land is used for fruit and vegetable farming.
Corn, a pillar of American agriculture for years, is now mainly used for ethanol, high - fructose corn syrup and bio-based plastics. About 40 % of corn is used for ethanol and 36 % is used as animal feed. Only a tiny fraction of corn is used as a food source, and much of that fraction goes into high - fructose corn syrup, a main ingredient in processed, unhealthy junk food.
People who ate the most subsidized food had a 37 % higher risk of being obese compared to people who ate the least amount of subsidized food. This brings up the concern that minority communities are more prone to risks of obesity due to financial limitations. The subsidies result in those commodities being cheap to the public, compared to those recommended by dietary guidelines.
President Trump proposed a 21 % cut to government discretionary spending in the agriculture sector, which has met partisan resistance. This budget proposal would also reduce spending on the Special Supplemental Nutrition Program for Women, Infants and Children, albeit by less than President Obama did.
On April 29, 2008, a UNICEF UK report found that the world 's poorest and most vulnerable children are being hit the hardest by climate change. The report, "Our Climate, Our Children, Our Responsibility: The Implications of Climate Change for the World 's Children '', says that access to clean water and food supplies will become more difficult, particularly in Africa and Asia.
By way of comparison, in one of the largest food producing countries in the world, the United States, approximately one out of six people are "food insecure '', including 17 million children, according to the U.S. Department of Agriculture. A 2012 study in the Journal of Applied Research on Children found that rates of food security varied significantly by race, class and education. In both kindergarten and third grade, 8 % of the children were classified as food insecure, but only 5 % of white children were food insecure, while 12 % and 15 % of black and Hispanic children were food insecure, respectively. In third grade, 13 % of black and 11 % of Hispanic children are food insecure compared to 5 % of white children.
There are also striking regional variations in food security. Although food insecurity can be difficult to measure, 45 % of elementary and secondary students in Maine qualify for free or reduced - price school lunch; by some measures Maine has been declared the most food - insecure of the New England states. Transportation challenges and distance are common barriers to families in rural areas who seek food assistance. Social stigma is another important consideration, and for children, sensitively administering in - school programs can make the difference between success and failure. For instance, when John Woods, co-founder of Full Plates, Full Potential, learned that embarrassed students were shying away from the free breakfasts being distributed at a school he was working with, he made arrangements to provide breakfast free of charge to all of the students there.
According to a 2015 Congressional Budget Office report on child nutrition programs, food - insecure children are more likely to participate in school nutrition programs than children from food - secure families. School nutrition programs, such as the National School Lunch Program (NSLP) and the School Breakfast Program (SBP), have provided millions of children access to healthier lunch and breakfast meals since their inceptions in the mid-1900s. According to the Centers for Disease Control and Prevention, the NSLP has served over 300 million, while the SBP has served about 10 million students each day. Nevertheless, far too many qualifying students still fail to receive these benefits simply because the necessary paperwork is never submitted. Multiple studies have reported that school nutrition programs play an important role in ensuring students are accessing healthy meals. Students who ate school lunches provided by the NSLP showed higher diet quality than if they had brought their own lunches. Moreover, the USDA improved standards for school meals, which ultimately led to positive impacts on children 's food selection and eating habits.
Countless partnerships have emerged in the quest for food security. A number of federal nutrition programs exist to provide food specifically for children, including the Summer Food Service Program, the Special Milk Program (SMP) and the Child and Adult Care Food Program (CACFP), and community and state organizations often network with these programs. The Summer Food Program in Bangor, Maine, is run by the Bangor Housing Authority and sponsored by Good Shepherd Food Bank. In turn, Waterville, Maine 's Thomas College, for example, is among the organizations holding food drives to collect donations for Good Shepherd. Children whose families qualify for the Supplemental Nutrition Assistance Program (SNAP) or Women, Infants, and Children (WIC) may also receive food assistance. WIC alone has served approximately 7.6 million participants, 75 % of whom are children and infants.
Despite the sizable populations served by these programs, Conservatives have regularly targeted these programs for defunding. Conservatives ' arguments against school nutrition programs include fear of wasting food and fraud from applications. On January 23, 2017, H.R. 610 was introduced to the House by Republican Representative Steve King. The bill seeks to repeal a rule set by the Food and Nutrition Service of the Department of Agriculture, which mandates schools to provide more nutritious and diverse foods across the food plate. Two months later, the Trump administration released a preliminary 2018 budget that proposed a $2 billion cut from WIC.
Food insecurity in children can lead to developmental impairments and long term consequences such as weakened physical, intellectual and emotional development.
Food insecurity is also related to obesity for people living in neighborhoods where nutritious food is unavailable or unaffordable.
Gender inequality both leads to and is a result of food insecurity. According to estimates, women and girls make up 60 % of the world 's chronically hungry, and little progress has been made in ensuring the equal right to food for women enshrined in the Convention on the Elimination of All Forms of Discrimination against Women. Women face discrimination both in education and employment opportunities and within the household, where their bargaining power is lower. Women 's employment is essential not only for advancing gender equality within the workforce, but for ensuring a sustainable future, as it means less pressure for high birth rates and net migration. On the other hand, gender equality is described as instrumental to ending malnutrition and hunger. Women tend to be responsible for food preparation and childcare within the family and are more likely to spend their income on food and their children 's needs. Women also play an important role in food production, processing, distribution and marketing. They often work as unpaid family workers, are involved in subsistence farming and represent about 43 % of the agricultural labor force in developing countries, varying from 20 % in Latin America to 50 % in Eastern and Southeastern Asia and Sub-Saharan Africa. However, women face discrimination in access to land, credit, technologies, finance and other services. Empirical studies suggest that if women had the same access to productive resources as men, they could boost their yields by 20 -- 30 %, raising overall agricultural output in developing countries by 2.5 to 4 %. While those are rough estimates, the significant benefit of closing the gender gap on agricultural productivity cannot be denied. The gendered aspects of food security are visible along the four pillars of food security: availability, access, utilization and stability, as defined by the Food and Agriculture Organization.
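To see how a 20 -- 30 % farm - level yield gain can translate into only a 2.5 -- 4 % gain in aggregate output, the farm - level gain has to be weighted by the share of total output produced by the women who currently face the access gap. The short Python sketch below makes that weighting explicit; the share values in it are illustrative assumptions chosen for the arithmetic, not figures from the studies cited above.

# Illustrative arithmetic only: female_output_share and gap_share are
# assumed values, not data from the estimates quoted in the text.
def aggregate_gain(farm_level_gain, female_output_share, gap_share):
    """Aggregate output gain if the women facing the access gap closed it."""
    return farm_level_gain * female_output_share * gap_share

# A 25% farm-level gain, women producing ~30% of output,
# and half of that output affected by the access gap:
print(f"{aggregate_gain(0.25, 0.30, 0.50):.1%}")  # prints 3.8%

Multiplying 0.25 by 0.30 and 0.50 gives 0.0375, i.e. roughly 3.8 %, which sits inside the 2.5 -- 4 % range quoted above; smaller assumed shares land at the bottom of that range.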
The number of people affected by hunger is extremely high, with enormous effects on women and girls. Making this trend disappear "must be a top priority for governments and international institutions ''. Actions governments take must recognize that food insecurity is an issue of "equality, rights and social justice ''. "Food and nutrition insecurity is a political and economic phenomenon fuelled by inequitable global and national processes ''. Factors like capitalism and the exploitation of Indigenous lands all contribute to food insecurity for minorities and for the most oppressed people in various countries, women being one of these oppressed groups. To emphasize, "food and nutrition insecurity is a gender justice issue ''. The fact that women and girls are the most oppressed by "the inequitable global economic processes that govern food systems and by global trends such as climate change '' shows how institutions continue to place women in positions of disadvantage and impoverishment in order to profit from, and capitalize on, the food system. When a government withholds food by raising its prices to amounts only privileged people can afford, it both benefits and is able to control the "lower - class '' / marginalized people via the food market. Notably, "despite rapid economic growth in India, thousands of women and girls still lack food and nutrition security as a direct result of their lower status compared with men and boys ''. "Such inequalities are compounded by women and girls ' often limited access to productive resources, education and decision - making, by the ' normalised ' burden of unpaid work -- including care work -- and by the endemic problems of gender - based violence (GBV), HIV and AIDS ''.
One of the most promising emerging techniques for ensuring global food security is the use of genetically modified (GM) crops. The genome of these crops can be altered to address one or more traits that may be preventing a plant from being grown in a given region under certain conditions. Many of these alterations can address the challenges mentioned above, including the water crisis, land degradation, and the ever - changing climate.
In agriculture and animal husbandry, the Green Revolution popularized the use of conventional hybridization to increase yield by creating "high - yielding varieties ''. Often the handful of hybridized breeds originated in developed countries and were further hybridized with local varieties in the rest of the developing world to create high yield strains resistant to local climate and diseases.
The area sown to genetically engineered crops in developing countries is rapidly catching up with the area sown in industrial nations. According to the International Service for the Acquisition of Agri - biotech Applications (ISAAA), GM crops were grown by approximately 8.5 million farmers in 21 countries in 2005; up from 8.25 million farmers in 17 countries in 2004. However, the ISAAA is funded by organisations including prominent agricultural biotechnology corporations, such as Monsanto and Bayer, and there have been several challenges made to the accuracy of ISAAA 's global figures.
Some scientists question the safety of biotechnology as a panacea; agroecologists Miguel Altieri and Peter Rosset have enumerated ten reasons why biotechnology will not ensure food security, protect the environment, or reduce poverty. Reasons include:
Based on evidence from previous attempts, there is a likely lack of transferability of one type of GM crop from one region to another. For example, modified crops that have proven successful in Asia from the Green Revolution have failed when tried in regions of Africa. More research must be done regarding the specific requirements of growing a specific crop in a specific region.
There is also a drastic lack of education given to governments, farmers, and the community about the science behind GM crops, as well as suitable growing practices. In most relief programs, farmers are given seeds with little explanation and little attention is paid to the resources available to them or even laws that prohibit them from distributing produce. Governments are often not advised on the economic and health implications that come with growing GM crops, and are then left to make judgments on their own. Because they have so little information regarding these crops, they usually shy away from allowing them or do not take the time and effort required to regulate their use. Members of the community that will then consume the produce from these crops are also left in the dark about what these modifications mean and are often scared off by their ' unnatural ' origins. This has resulted in failure to properly grow crops as well as strong opposition to the unknown practices.
A study published in June 2016 evaluated the status of the implementation of Golden Rice, which was first developed in the 1990s to produce higher levels of Vitamin A than its non-GMO counterparts. This strain of rice was designed so that malnourished women and children in third - world countries, who are more susceptible to deficiencies, could easily improve their Vitamin A intake and prevent blindness, a common result of the deficiency. Golden Rice production was centralized in the Philippines, yet there have been many hurdles to clear in order to get production moving. The study showed that the project is far behind schedule and is not living up to its expectations. Although research on Golden Rice still continues, the country has moved forward with other non-GMO initiatives to address the Vitamin A deficiency problem that is so pervasive in that region.
Many anti-GMO activists argue that the use of GM crops decreases biodiversity amongst plants. Livestock biodiversity is also threatened by the modernization of agriculture and the focus on more productive major breeds. Therefore, efforts have been made by governments and non-governmental organizations to conserve livestock biodiversity through strategies such as Cryoconservation of animal genetic resources.
Many GM crop success stories exist, primarily in developed nations like the USA, China, and various countries in Europe. Common GM crops include cotton, maize, and soybeans, all of which are grown throughout North and South America as well as regions of Asia. Modified cotton crops, for example, have been altered such that they are resistant to pests, can grow in more extreme heat, cold, or drought, and produce longer, stronger fibers to be used in textile production.
One of the biggest threats to rice, which is a staple food crop especially in India and other countries within Asia, is blast disease which is a fungal infection that causes lesions to form on all parts of the plant. A genetically engineered strain of rice has been developed so that it is resistant to blast, greatly improving the crop yield of farmers and allowing rice to be more accessible to everyone. Some other crops have been modified such that they produce higher yields per plant or that they require less land for growing. The latter can be helpful in extreme climates with little arable land and also decreases deforestation, as fewer trees need to be cut down in order to make room for crop fields. Others yet have been altered such that they do not require the use of insecticides or fungicides. This addresses various health concerns associated with such pesticides and can also work to improve biodiversity within the area in which these crops are grown.
In a review of Borlaug 's 2000 publication entitled Ending world hunger: the promise of biotechnology and the threat of antiscience zealotry, the authors argued that Borlaug 's warnings were still true in 2010:
GM crops are as natural and safe as today 's bread wheat, opined Dr. Borlaug, who also reminded agricultural scientists of their moral obligation to stand up to the antiscience crowd and warn policy makers that global food insecurity will not disappear without this new technology, and that ignoring this reality would make future solutions all the more difficult to achieve.
Research conducted by the GMO Risk Assessment and Communication of Evidence (GRACE) program through the EU between 2007 and 2013 focused on many uses of GM crops and evaluated many facets of their effects on human, animal, and environmental health.
The body of scientific evidence concluding that GM foods are safe to eat and do not pose environmental risks is extensive. Findings from the International Council of Scientists (2003), which analyzed a selection of approximately 50 science - based reviews, concluded that "currently available genetically modified foods are safe to eat, '' and "there is no evidence of any deleterious environmental effects having occurred from the trait / species combinations currently available. '' The United Nations Food and Agriculture Organization (FAO) supported the same consensus a year later, in addition to recommending the extension of biotechnology to the developing world. Similarly, the Royal Society (2003) and the British Medical Association (2004) found no adverse health effects of consuming genetically modified foods. These findings supported the conclusions of earlier studies by the European Union Research Directorate, a compendium of 81 scientific studies conducted by more than 400 research teams that did not show "any new risks to human health or the environment, beyond the usual uncertainties of conventional plant breeding. '' Likewise, the Organization for Economic Cooperation and Development in Europe (OECD) and the Nuffield Council on Bioethics (1999) did not find that genetically modified foods posed a health risk.
The UN Millennium Development Goals are one of the initiatives aimed at achieving food security in the world. The first Millennium Development Goal states that the UN "is to eradicate extreme hunger and poverty '' by 2015. Olivier De Schutter, the UN Special Rapporteur on the Right to Food, advocates for a multidimensional approach to food security challenges. This approach emphasizes the physical availability of food; the social, economic and physical access people have to food; and the nutrition, safety and cultural appropriateness or adequacy of food.
The Food and Agriculture Organization of the United Nations stated in The State of Food Insecurity in the World 2003 that countries that have reduced hunger often had rapid economic growth, specifically in their agricultural sectors. These countries were also characterized as having slower population growth, lower HIV rates, and higher rankings in the Human Development Index. At that time, the FAO considered addressing agriculture and population growth vital to achieving food security. In The State of Food Insecurity in the World 2012, the FAO restated its focus on economic growth and agricultural growth to achieve food security and added a focus on the poor and on "nutrition - sensitive '' growth. For example, economic growth should be used by governments to provide public services to benefit poor and hungry populations. The FAO also cited smallholders, including women, as groups that should be involved in agricultural growth to generate employment for the poor. For economic and agricultural growth to be "nutrition - sensitive '', resources should be utilized to improve access to diverse diets for the poor as well as access to a safe water supply and to healthcare. The FAO has proposed a "twin track '' approach to fight food insecurity that combines sustainable development and short - term hunger relief. Development approaches include investing in rural markets and rural infrastructure. In general, the FAO proposes the use of public policies and programs that promote long - term economic growth that will benefit the poor. To obtain short - term food security, vouchers for seeds, fertilizer, or access to services could promote agricultural production. The use of conditional or unconditional food or cash transfers was another approach the FAO noted. Conditional transfers could include school feeding programs, while unconditional transfers could include general food distribution, emergency food aid or cash transfers. A third approach is the use of subsidies as safety nets to increase the purchasing power of households. The FAO stated that "approaches should be human rights - based, target the poor, promote gender equality, enhance long - term resilience and allow sustainable graduation out of poverty. ''
The FAO noted that some countries have been successful in fighting food insecurity and decreasing the number of people suffering from undernourishment. Bangladesh is an example of a country that has met the Millennium Development Goal hunger target. The FAO credited growth in agricultural productivity and macroeconomic stability for the rapid economic growth in the 1990s that resulted in an increase in food security. Irrigation systems were established through infrastructure development programs. Two programs, HarvestPlus and the Golden Rice Project, provided biofortified crops in order to decrease micronutrient deficiencies.
World Food Day was established on October 16, in honor of the date on which the FAO was founded in 1945. On this day, the FAO hosts a variety of events at its headquarters in Rome and around the world, as well as seminars with UN officials.
The World Food Programme (WFP) is an agency of the United Nations that uses food aid to promote food security and eradicate hunger and poverty. In particular, the WFP provides food aid to refugees and to others experiencing food emergencies. It also seeks to improve nutrition and quality of life to the most vulnerable populations and promote self - reliance. An example of a WFP program is the "Food For Assets '' program in which participants work on new infrastructure, or learn new skills, that will increase food security, in exchange for food. The WFP and the Government of Kenya have partnered in the Food For Assets program in hopes of increasing the resilience of communities to shocks.
In April 2012, the Food Assistance Convention was signed, the world 's first legally binding international agreement on food aid. The May 2012 Copenhagen Consensus recommended that efforts to combat hunger and malnutrition should be the first priority for politicians and private sector philanthropists looking to maximize the effectiveness of aid spending. They put this ahead of other priorities, like the fight against malaria and AIDS.
The main global policy framework for reducing hunger and poverty is the recently approved Sustainable Development Goals. In particular, Goal 2: Zero Hunger sets globally agreed targets to end hunger, achieve food security and improved nutrition, and promote sustainable agriculture by 2030. A number of organizations have formed initiatives with the more ambitious goal of achieving this outcome in only 10 years, by 2025:
The United States Agency for International Development (USAID) proposes several key steps to increasing agricultural productivity which is in turn key to increasing rural income and reducing food insecurity. They include:
Since the 1960s, the U.S. has been implementing a food stamp program (now called the Supplemental Nutrition Assistance Program) to directly target consumers who lack the income to purchase food. According to Tim Josling, a Senior Fellow at the Freeman Spogli Institute for International Studies, Stanford University, food stamps or other methods of distribution of purchasing power directly to consumers might fit into the range of international programs under consideration to tackle food insecurity.
There are strong, direct relationships between agricultural productivity, hunger, poverty, and sustainability. Three - quarters of the world 's poor live in rural areas and make their living from agriculture. Hunger and child malnutrition are greater in these areas than in urban areas. Moreover, the higher the proportion of the rural population that obtains its income solely from subsistence farming (without the benefit of pro-poor technologies and access to markets), the higher the incidence of malnutrition. Therefore, improvements in agricultural productivity aimed at small - scale farmers will benefit the rural poor first. Food and feed crop demand is likely to double in the next 50 years, as the global population approaches nine billion. Growing sufficient food will require people to make changes such as increasing productivity in areas dependent on rainfed agriculture; improving soil fertility management; expanding cropped areas; investing in irrigation; conducting agricultural trade between countries; and reducing gross food demand by influencing diets and reducing post-harvest losses.
According to the Comprehensive Assessment of Water Management in Agriculture, a major study led by the International Water Management Institute (IWMI), managing rainwater and soil moisture more effectively, and using supplemental and small - scale irrigation, hold the key to helping the greatest number of poor people. The study calls for a new era of water investments and policies for upgrading rainfed agriculture that would go beyond controlling field - level soil and water to bring new freshwater sources through better local management of rainfall and runoff. Increased agricultural productivity enables farmers to grow more food, which translates into better diets and, under market conditions that offer a level playing field, into higher farm incomes. With more money, farmers are more likely to diversify production and grow higher - value crops, benefiting not only themselves but the economy as a whole.
Researchers suggest forming an alliance between the emergency food program and community - supported agriculture, as some countries ' food stamps cannot be used at farmers ' markets and places where food is less processed and grown locally. The gathering of wild food plants appears to be an efficient alternative method of subsistence in tropical countries, which may play a role in poverty alleviation.
The minimum annual global wheat storage is approximately two months. To counteract the severe food security issues caused by global catastrophic risks, years of food storage has been proposed. Though this could ameliorate smaller scale problems like regional conflict and drought, it would exacerbate current food insecurity by raising food prices.
Insurance is a financial instrument that allows exposed individuals to pool resources to spread their risk. They do so by contributing premiums to an insurance fund, which will indemnify those who suffer an insured loss. This procedure reduces the risk for an individual by spreading his or her risk among the multiple fund contributors. Insurance can be designed to protect many types of individuals and assets against single or multiple perils and to buffer insured parties against sudden and dramatic income or asset loss.
Crop insurance is purchased by agricultural producers to protect themselves against the loss of their crops due to natural disasters. Two types of insurance are available: (1) claim - based insurance and (2) index - based insurance. In poor countries facing food security problems in particular, index - based insurance offers some interesting advantages: (1) indices can be derived from globally available satellite images that correlate well with what is insured; (2) these indices can be delivered at low cost; and (3) the insurance products open up new markets that are not served by claim - based insurance.
An advantage of index - based insurance is that it can potentially be delivered at lower cost. A significant barrier that hinders uptake of claim - based insurance is the high transaction cost for searching for prospective policyholders, negotiating and administering contracts, verifying losses and determining payouts. Index insurance eliminates the loss verification step, thereby mitigating a significant transaction cost. A second advantage of index - based insurance is that, because it pays an indemnity based on the reading of an index rather than individual losses, it eliminates much of the fraud, moral hazard and adverse selection, which are common in classical claim - based insurance. A further advantage of index insurance is that payments based on a standardized and indisputable index also allow for a fast indemnity payment. The indemnity payment could be automated, further reducing transaction costs.
Basis risk is a major disadvantage of index - based insurance. It is the situation in which an individual experiences a loss without receiving payment, or vice versa. Basis risk is a direct result of the strength of the relation between the index that estimates the average loss of the insured group and the loss of insured assets by an individual: the weaker this relation, the higher the basis risk. High basis risk undermines the willingness of potential clients to purchase insurance. It thus challenges insurance companies to design insurance products so as to minimize basis risk.
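To make the mechanics concrete, here is a minimal Python sketch of an index - based payout and the basis risk it can create. The trigger level, payout rate, regional index value and individual losses are all invented for the illustration; real products would derive them from satellite or weather - station data and actuarial analysis.

# Minimal sketch of an index-based crop insurance payout.
# All numbers (trigger, rate, index, losses) are illustrative assumptions.

def index_payout(index_value, trigger=100.0, rate=50.0):
    """Pay a fixed amount per unit the index falls below the trigger."""
    shortfall = max(0.0, trigger - index_value)
    return shortfall * rate

regional_index = 80.0  # e.g. seasonal rainfall (mm) estimated from satellite data

# Every policyholder receives the same index-derived payout, with no
# individual loss verification; the mismatch between payout and actual
# loss is the basis risk.
farmers = [
    ("A", 1000.0),  # loss tracks the regional index well: gap is zero
    ("B", 0.0),     # no loss (favourable microclimate), yet still paid
    ("C", 3000.0),  # localized pest damage the index misses: underpaid
]

for name, actual_loss in farmers:
    payout = index_payout(regional_index)  # 20 mm shortfall * 50 = 1000
    gap = actual_loss - payout             # > 0 under-, < 0 overcompensated
    print(f"farmer {name}: loss={actual_loss:.0f} payout={payout:.0f} gap={gap:+.0f}")

Because the payout depends only on the index, no loss verification is needed, which is exactly the transaction - cost advantage described above; farmers B and C illustrate the two directions in which basis risk can cut.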
The Food Justice Movement has been seen as a unique and multifaceted movement with relevance to the issue of food security. It has been described as a movement about social - economic and political problems in connection to environmental justice, improved nutrition and health, and activism. Today, a growing number of individuals and minority groups are embracing the food justice cause, due to the perceived increase in hunger within nations such as the United States as well as the amplified effect of food insecurity on many minority communities, particularly the Black and Latino communities. A number of organizations have either championed the food justice cause or greatly impacted the food justice space. One prominent organization within the food justice movement is the Coalition of Immokalee Workers, a worker - based human rights organization that has been recognized globally for its accomplishments in the areas of human trafficking, social responsibility and gender - based violence at work. The Coalition of Immokalee Workers ' most prominent accomplishment in the food justice space has been its part in implementing the Fair Food Program, which increased the pay and improved the working conditions of farm workers in the tomato industry who had been exploited for generations. This accomplishment provided over 30,000 workers more income and the ability to access better and healthier foods for themselves and their families. Another organization in the food justice space is the Fair Food Network, which has embraced the mission of helping families who need healthy food to gain access to it, while also increasing the livelihood of farmers in America and growing local economies. Started by Oran B. Hesterman, the Fair Food Network has invested over $200 million in various projects and initiatives, such as the Double Up Food Bucks program, to help low - income and minority communities access healthier food.
Bees and other pollinating insects are currently improving the food production of 2 billion small farmers worldwide, helping to ensure food security for the world 's population. Research shows that if pollination is managed well on small diverse farms, with all other factors being equal, crop yields can increase by a significant median of 24 percent.
How animal pollinators positively affect fruit condition and nutrient content is still being discovered.
As of 2015, the concept of food security has mostly focused on food calories rather than the quality and nutrition of food. The concept of nutrition security evolved over time; in 1995, it was defined as "adequate nutritional status in terms of protein, energy, vitamins, and minerals for all household members at all times ''.
Organizations:
The Global Food Security and Nutrition Forum (FSN Forum): http://www.fao.org/fsnforum/
|
what job does vanessa hold in the bee movie | Bee Movie - Wikipedia
Bee Movie is a 2007 American computer animated comedy film produced by DreamWorks Animation and distributed by Paramount Pictures. Directed by Simon J. Smith and Steve Hickner, the film stars Jerry Seinfeld and Renée Zellweger, with Matthew Broderick, Patrick Warburton, John Goodman and Chris Rock in supporting roles. Its story follows Barry B. Benson (Seinfeld), a honey bee who sues the human race for exploiting bees after learning from his florist friend Vanessa (Zellweger) that humans sell and consume honey.
Bee Movie is the first motion - picture script to be written by Seinfeld, who co-wrote the film with Spike Feresten, Barry Marder, and Andy Robin. The film was produced by Seinfeld, Christina Steinberg, and Cameron Stevning. The production was designed by Alex McDowell, and Christophe Lautrette was the art director. Nick Fletcher was the supervising editor and music for the film was composed by Rupert Gregson - Williams.
The cast and crew include some veterans of Seinfeld 's long - running NBC sitcom Seinfeld, including writer / producers Feresten and Robin, and actors Warburton (Seinfeld character David Puddy), Michael Richards (Seinfeld character Cosmo Kramer), and Larry Miller (who plays the title character on the Seinfeld episode "The Doorman ''). Coincidentally, NBC was host to the broadcast television premiere of the film on November 27, 2010.
Bee Movie opened on November 2, 2007. Upon release, the film was met with mixed reviews, with primary criticism directed at the film 's premise. While domestic box office performance failed to recoup its $150 million budget, it ultimately saw worldwide box office performance of $287.6 million and domestic video sales of $92.7 million.
A young honey bee named Barry B. Benson (Jerry Seinfeld) has recently graduated from college and is about to enter the hive 's Honex Industries honey - making workforce alongside his best friend Adam Flayman (Matthew Broderick). Barry is initially excited to join the workforce, but his courageous, non-conformist attitude emerges upon discovering that his choice of job will never change once picked. Later, the two bees run into a group of Pollen Jocks, bees who collect pollen from flowers outside the hive. The Jocks offer to take Barry outside the hive to a flower patch, and he accepts. While on his first pollen - gathering expedition in New York City, Barry gets lost in the rain, and ends up on the balcony of a human florist named Vanessa (Renée Zellweger). Upon noticing Barry, Vanessa 's boyfriend Ken (Patrick Warburton) attempts to squash him, but Vanessa gently catches and releases Barry outside the window, saving his life.
Barry later returns to express his gratitude to Vanessa, breaking the sacred rule that bees are not supposed to communicate with humans. Barry and Vanessa develop a close bond, bordering on attraction, and spend time together frequently. Later, while Barry and Vanessa are walking through a grocery store, Barry is horrified to discover that the humans have been stealing and eating the bees ' honey for centuries. He decides to journey to Honey Farms, which supplies the grocery store with its honey. Furious at the poor treatment of the bees in the hive, including the use of bee smokers to subdue the colony, Barry decides to sue the human race to put an end to the exploitation of bees.
Barry 's mission attracts wide attention from bees and humans alike, and hundreds of people show up to watch the trial. Although Barry is up against tough defense attorney Layton T. Montgomery (John Goodman), the trial 's first day goes well. That evening, Barry is having dinner with Vanessa when Ken shows up. Vanessa leaves the room, and Ken expresses to Barry that he hates the pair spending time together. When Barry leaves to use the restroom, Ken ambushes Barry and attempts to kill him, only for Vanessa to intervene and break up with Ken. The next day at the trial, Montgomery taunts the bees, which causes Adam to sting him. Adam 's actions jeopardize the bees ' credibility and put his life in danger, though he manages to survive. While visiting Adam in the hospital, Barry notices two people smoking outside, and is struck by inspiration. The next day, Barry wins the trial by exposing the jury to the cruel treatment bees are subjected to, particularly the smoker, and humans are banned from stealing honey from bees ever again.
Having lost the trial, Montgomery cryptically warns Barry that a negative shift in the balance of nature is imminent. As it turns out, the sudden, massive stockpile of honey has put every bee out of a job, including the vitally important Pollen Jocks. As a result, without anything to pollinate them, the world 's flowers slowly begin to die out. Before long, the only flowers left with healthy pollen are those in a flower parade called "The Tournament of Roses '' in Pasadena, California. Barry and Vanessa travel to the parade and steal a parade float, which they load onto a plane to be delivered to the bees so they can re-pollinate the world 's flowers. When the plane 's pilot and copilot are knocked unconscious, Vanessa is forced to land the plane, with help from Barry and the bees from Barry 's hive.
Armed with the pollen of the last flowers, Barry and the Pollen Jocks manage to reverse the damage and save the world 's flowers, restarting the bees ' honey production. Humans and bees are seen working together, and certain brands of honey are now "bee - approved ''. Barry becomes a member of the Pollen Jocks, helping to pollinate the world 's plants. Barry is also seen running a law firm inside Vanessa 's flower shop, titled "Insects at Law '', handling disputes between animals and humans. The film ends with Barry flying off to a flower patch with the Pollen Jocks.
All music composed by Rupert Gregson - Williams, except as noted.
Two teaser trailers were released for the film that feature Seinfeld dressed in a bee costume, trying to shoot the film in live action; Eddie Izzard portrays the director. Upon the release of the first trailer, it was announced that three of the live - action teasers would be released in total. In the second trailer, Steven Spielberg is shown taking a picture of himself and an assistant director, referencing the camera gag Ellen DeGeneres pulled on him during the 79th Academy Awards. After Seinfeld fails to do scenes in live action, Spielberg suggests to Seinfeld that the film can just be made as a cartoon. One of the crew members then announces that the film is a cartoon, prompting the crew to leave the stage, and the trailer finally shows the movie as an animated CGI feature. Also in the second trailer, the bear that jumps out at Barry is Vincent the Bear from Over the Hedge, another DreamWorks Animation SKG movie.
The third trailer was released with Shrek the Third, but this was an animated teaser. The fourth trailer was released on the Bee Movie official website, and revealed most of the film 's plot. In addition, two weeks before the release, NBC aired 22 behind - the - scenes skits called "Bee Movie TV Juniors, '' all of which are staged and tongue - in - cheek in nature. The popular internet site Gaia Online featured a great deal of promotional material for the film.
Ten books were released for the film: Bee Movie: The Story Book, Bee Movie: The Honey Disaster, The Art of Bee Movie, Bee Movie: Deluxe Sound Storybook, Bee Movie Ultimate Sticker Book, Bee Movie (I Can Find It), Bee Movie: The Junior Novel, Bee Movie: What 's the Buzz?, Bee Movie Mad Libs, and Bee Movie: Bee Meets Girl.
A video game titled Bee Movie Game was released on October 30, 2007 for Microsoft Windows, Xbox 360, Wii, PlayStation 2, and Nintendo DS.
Bee Movie was released on DVD on March 11, 2008 in both fullscreen and widescreen formats and a 2 - disc special edition DVD. The single - disc extras include the "Inside the Hive: The Cast of Bee Movie '' and "Tech of Bee Movie '' featurettes, "We Got the Bee '' music video, "Meet Barry B. Benson '' feature, and interactive games. The special edition DVD extras additionally include a filmmaker commentary, alternate endings, lost scenes with commentary, the live action trailers, and Jerry 's Flight Over Cannes. An HD DVD version was cancelled after the demise of HD DVD. Paramount released the movie on Blu - ray Disc on May 20, 2008.
The film received a 51 % approval rating on the review aggregator website Rotten Tomatoes, based on 169 reviews with an average rating of 5.7 / 10. The site 's critical consensus reads: "Bee Movie has humorous moments, but its awkward premise and tame delivery render it mostly forgettable. '' Another review aggregator, Metacritic, which assigns a normalized rating out of 100 to reviews from mainstream critics, calculated a score of 54 based on 34 reviews. Audiences polled by CinemaScore gave the film an average grade of "B + '' on an A+ to F scale.
Kyle Smith of the New York Post gave the film three out of four stars, saying "After Shrek the Third 's flatulence jokes, the return of that Seinfeldian wit brings animation up a level. '' Michael Phillips of the Chicago Tribune gave the film two and a half stars out of four, saying "It 's on the easygoing level of Surf 's Up, and a full tick up from, say, Over the Hedge or The Ant Bully. But given the Seinfeld pedigree it 's something of a disappointment. '' Peter Travers of Rolling Stone gave the film three out of four stars, saying "At its relaxed best, when it 's about, well, nothing, the slyly comic Bee Movie is truly beguiling. '' Desson Thomson of The Washington Post said, "Bee Movie feels phoned in on every level. The images, usually computer animation 's biggest draw, are disappointingly average. And as for the funny stuff, well, that 's where you were supposed to come in. ''
A.O. Scott of The New York Times gave the film three and a half stars out of four, saying "The most genuinely apian aspect of Bee Movie is that it spends a lot of its running time buzzing happily around, sniffing out fresh jokes wherever they may bloom. '' Claudia Puig gave the film one and a half stars out of four, saying "Bee Movie is certainly not low - budget, but it has all the staying power and creative value of a B - movie. The secret life of bees, as told by Seinfeld, is a bore with a capital B. '' Steven Rea of The Philadelphia Inquirer gave the film three stars out of four, saying "Bee Movie is not Shrek, and it is not Ratatouille either (by far the standout computer - animated feature of the year). But it has enough buzzing wit and eye - popping animation to win over the kids -- and probably more than a few parents, too. '' Richard Roeper gave the film a positive review, saying "This is a beautifully animated, cleverly executed, warm and funny adventure. ''
Roger Ebert of the Chicago Sun - Times gave the film two out of four stars, saying "All of this material, written by Seinfeld and writers associated with his television series, tries hard, but never really takes off. We learn at the outset of the movie that bees theoretically can not fly. Unfortunately, in the movie, that applies only to the screenplay. It is really, really, really hard to care much about a platonic romantic relationship between Renee Zellweger and a bee, although if anyone could pull it off, she could. '' Ty Burr of The Boston Globe gave the film three out of four stars, saying "The vibe is loose - limbed and fluky, and the gags have an extra snap that 's recognizably Seinfeldian. If I believed in a sitcom afterlife, I 'd swear the whole thing was cooked up by Kramer and George 's dad. '' Jack Mathews of the New York Daily News gave the film three out of four stars, saying "Watching this pun - filled cartoon is like falling into a tray of children 's watercolors -- the warm end, where oranges and yellows and ambers wave. ''
Stephen Whitty of the Newark Star - Ledger gave the film two and a half stars out of four, saying "The movie has some pretty pictures and a few good jokes, but not nearly enough. And the story suffers from sitcom attention - deficit disorder, veering off in a new direction every half - hour or so. '' David Botti of Newsweek said, "What I like about Bee Movie is its comfy, off - the - cuff charm: unlike a lot of animated family entertainment, it 's not all Thwack Smash Kaboom. '' Moira MacDonald of The Seattle Times gave the film two and a half stars out of four, saying "Bee Movie does n't touch the bar raised so high by Pixar, but it creates a little buzz of its own. '' Peter Howell of the Toronto Star gave the film two and a half stars out of four, saying "Bee Movie is a cute movie. Not that there 's anything... well, you know the rest. But cute is not what adults expect from Jerry Seinfeld, although children will be delighted. ''
The film opened in second place to American Gangster, but its gross of $38,021,044 was more in line with the studio 's lowest - grossing features, such as Shark Tale. The film averaged $9,679 from 3,928 theaters. In its second weekend, the film held well with a 33 % drop to $25,565,462, claiming the top spot and averaging $6,482 after expanding to 3,944 theaters. Its widest release was 3,984 theaters, and it closed on February 14, 2008 after 104 days of release, grossing $126,631,277 domestically along with an additional $160,963,300 overseas for a worldwide total of $287,594,577. Based on its domestic box office performance, the film failed to recoup its production budget of $150 million. Following the income from worldwide box office, home media, and pay television, the film ultimately turned a small profit for the studio.
Bee Movie was nominated for Best Animated Feature Film at the 65th Golden Globe Awards.
Barry B. Benson was the presenter of the Academy Award for Best Animated Short Film at the 80th Academy Awards in 2008. Beforehand, he showed the audience some of his "prior '' roles, including every bee in the swarm in The Swarm.
Bee Movie is alleged to be similar to a concept developed in 2000 by a team of Swedish animation students, which they claim was presented to DreamWorks in 2001 under the name Beebylon. The animation students say DreamWorks rejected the idea on the basis of it being "too childish ''. When Bee Movie was announced in 2003, the students claim they once again contacted DreamWorks to make sure the movie was not similar to their original concept, and were given a reassuring answer. When one of the members of the Beebylon team saw a trailer for the movie in 2007, he found it extremely similar and attempted to find a U.S. lawyer who could represent them. Jerry Seinfeld rejected the plagiarism claims during his PR tour for Bee Movie in Sweden: "I 'm doing my best not to laugh and I 'm taking it as serious as I can. But it 's a little bit hard. It is entirely possible that somebody else came up with an idea about making a movie about bees. I knew nothing of this until this very morning and I hope they are not too upset. ''
A Florida - based cosmetics company called Beeceuticals filed a lawsuit over the use of their trademarked phrase "Give Bees a Chance ''. The suit between the parties was settled out of court.
Several years after the film 's release, Bee Movie had an unexpected rise in popularity as an Internet meme.
In 2015, posts of the entire film screenplay spread across Facebook. In November 2016, YouTube user "Avoid at All Costs '' uploaded a video in which the entire film speeds up every time the word "bee '' is used. The video had gathered over 17 million views as of May 2017. The popularity of this video spawned several variants in which the movie or trailer is edited in unusual ways. Vanity Fair would later characterize the film 's late popularity as "totally bizarre. ''
There have been some attempts to explain the phenomenon: Jason Richards, whom Vanity Fair identified as one of the larger promoters of the meme via his @Seinfield2000 Twitter handle, has noted the "off - brand Pixar quality '' as a possible reason, while Barry Marder, one of the film 's script writers, identified "that odd relationship between an insect and a human woman '' as the possible cause. Inverse, meanwhile, writes that the film 's internet popularity "was a reaction not just to the movie itself but to the realization among millennials that they 'd been shown a truly odd movie as children and thought nothing of it. ''
Writing for New York magazine, Paris Martineau identified the meme as starting on Tumblr circa 2011, at which point users would, apparently in earnest, post the opening quotation and identify it as inspiring. By December 2012, however, these posts had become so ubiquitous that they inspired parodies. It has also been suggested that the spread of such videos was inspired by the preceding popularity of the "We Are Number One '' meme.
Seinfeld himself said that he has no interest in making a sequel to Bee Movie in the wake of its popularity. During a Reddit AMA in June 2016, a fan asked if a Bee Movie 2 would happen. Seinfeld had this to say:
I considered it this spring for a solid six hours. There 's a fantastic energy now for some reason, on the internet particularly. Tumblr, people brought my attention to. I actually did consider it, but then I realized it would make Bee Movie 1 less iconic. But my kids want me to do it, a lot of people want me to do it. A lot of people that do n't know what animation is want me to do it. If you have any idea what animation is, you 'd never do it.
|
what pokemon games can you play on wii u | List of Virtual Console games for Wii U (North America) - wikipedia
This list of Virtual Console games for Wii U in North America covers releases of vintage games. Emulated by the Wii U Virtual Console, these releases take advantage of the console 's unique features, such as Off TV Play with the Wii U GamePad and posting to Miiverse. Some of these games may already be available on the Wii Virtual Console, which can also be played through the Wii U 's Wii Mode, but those legacy versions lack some features of the Wii U Virtual Console. While Wii Virtual Console titles can not be played using the Wii U GamePad 's controls, a September 2013 system update enabled the use of the GamePad 's screen as a display. Although some Wii games are also available for download from the Wii U eShop, these are not designated as Virtual Console releases and lack Virtual Console features.
The following is a list of the 311 games available on the Virtual Console for the Wii U in North America, sorted by system and in the order they were added to Nintendo eShop.
These titles were originally released for use on the Nintendo Entertainment System, which was launched in 1985.
There are 94 games available to purchase.
These titles were originally released for use on the Super Nintendo Entertainment System, which was launched in 1991.
There are 51 games available to purchase.
These titles were originally released for use on the Nintendo 64, which was launched in 1996.
There are 21 games available to purchase.
These titles were originally released for use on the Game Boy Advance (GBA), which was launched in 2001.
There are 74 games available to purchase.
These titles were originally released for use on the Nintendo DS, which was launched in 2004.
There are 31 games available to purchase.
These titles were originally released for use on the TurboGrafx - 16, which was launched in 1989.
There are 40 games available to purchase.
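As a quick consistency check, the per - system counts listed above sum to the 311 - game total stated at the top of the list:

$$ 94 + 51 + 21 + 74 + 31 + 40 = 311 $$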
|
who played the guitar with a violin bow | Bowed guitar - wikipedia
Bowed guitar is a method of playing a guitar, acoustic or electric, in which the guitarist uses a bow to play the instrument, much as one would play a viola da gamba. Unlike other bowed instruments, the guitar has a flat bridge, making it difficult to bow individual notes on the middle strings. The technique is most closely associated with Jimmy Page of The Yardbirds and Led Zeppelin, and with Jónsi of Sigur Rós.
|
how far is reading pa from allentown pa | Reading, Pennsylvania - wikipedia
Reading (/ ˈrɛdɪŋ / RED - ing) (Pennsylvania German: Reddin) is a city in and the county seat of Berks County, Pennsylvania, United States. With a population of 87,575, it is the fifth - largest city in Pennsylvania. Located in the southeastern part of the state, it is the principal city of the Greater Reading Area.
The city, approximately halfway between the state 's most populous city, Philadelphia, and the state capital, Harrisburg (as well as about halfway between Allentown and Lancaster), is strategically situated along a major transportation route from Central to Eastern Pennsylvania. It lent its name to the now - defunct Reading Railroad, which transported anthracite coal from the Pennsylvania Coal Region to the eastern United States via the Port of Philadelphia. The Reading Railroad is one of the four railroad properties in the classic United States version of the Monopoly board game.
Reading was one of the first localities where outlet shopping became a tourist industry. It has been known as "The Pretzel City '', because of numerous local pretzel bakeries. Currently, Bachman, Dieffenbach, Tom Sturgis, and Unique Pretzel bakeries call the Reading area home.
According to the 2010 census, Reading has the highest share of citizens living in poverty in the nation.
In recent years, the Reading area has become a destination for cyclists. With more than 125 miles of trails in five major preserves, it is an International Mountain Bicycling Association Ride Center and held the Reading Radsport Festival on September 8 -- 9, 2017.
In April 2017, it was announced that an indoor velodrome, or cycling track, would be built in Reading, the first of its kind on the East Coast and only the second in the entire country. Albright College and the World Cycling League formally announced plans on April 6, 2017, to build the $20 million, 2,500 - seat facility, to be called the National Velodrome and Events Center at Albright College. It will also serve as the Cycling League 's world headquarters.
Lenni Lenape people, also known as "Delaware Indians '', were the original inhabitants of the Reading area.
The Colony of Pennsylvania was a 1680 land grant from King Charles II of England to William Penn. Comprising more than 45,000 square miles (120,000 km2), it was named for his father, Sir William Penn.
In 1743, Richard and Thomas Penn (sons of William Penn) mapped out the town of Reading with Conrad Weiser. Taking its name from Reading, Berkshire, England, the town was established in 1748. Upon the creation of Berks County in 1752, Reading became the county seat. The region was settled by emigrants from southern and western Germany, who bought land from the Penns. The first Amish community in the New World was established in Greater Reading, Berks County. The Pennsylvanian German dialect was spoken in the area well into the 1950s and later.
During the French and Indian War, Reading was a military base for a chain of forts along the Blue Mountain.
By the time of the American Revolution, the area 's iron industry had a total production exceeding England 's. That output helped supply George Washington 's troops with cannons, rifles, and ammunition in the Revolutionary War. During the early period of the conflict, Reading was again a depot for military supply. Hessian prisoners from the Battle of Trenton were also detained here.
Philadelphia, Pennsylvania was the capital of the United States at the time of the Yellow Fever Epidemic of 1793. President Washington traveled to Reading, and considered making it the emergency national capital, but chose Germantown instead.
Susanna Cox was tried and convicted for infanticide in Reading in 1809. Her case attracted tremendous sympathy; 20,000 viewers came to view her hanging, swamping the 3,000 inhabitants.
Census data showed that, from 1810 to 1950, Reading was among the nation 's top one hundred largest urban places.
The Schuylkill Canal, a north - south canal completed in 1825, paralleled the Schuylkill River and connected Reading with Philadelphia and the Delaware River. The Union Canal, an east - west canal completed in 1828, connected the Schuylkill and Susquehanna Rivers, and ran from Reading to Middletown, Pennsylvania, a few miles south of Harrisburg, Pennsylvania. Railroads forced the abandonment of the canals by the 1880s.
The Philadelphia and Reading Railroad (P&R) was incorporated in 1833. During the Long Depression following the Panic of 1873, a statewide railroad strike in 1877 over delayed wages led to a violent protest and clash with the National Guard in which six Reading men were killed. Following more than a century of prosperity, the Reading Company was forced to file for bankruptcy protection in 1971. The bankruptcy was a result of dwindling coal shipping revenues and strict government regulations that denied railroads the ability to set competitive prices, required high taxes, and forced the railroads to continue to operate money - losing passenger service lines. On April 1, 1976, the Reading Company sold its current railroad interests to the newly formed Consolidated Railroad Corporation (Conrail).
Early in the 20th century, the city participated in the burgeoning automobile and motorcycle industry as home to the pioneer "Brass Era '' companies, Daniels Motor Company, Duryea Motor Wagon Company and Reading - Standard Company.
Reading experienced continuous growth until the 1930s, when its population reached nearly 120,000. From the 1940s to the 1970s, however, the city saw a sharp downturn in prosperity, largely owing to the decline of the heavy industry and railroads, on which Reading had been built, and a national trend of urban decline.
In 1972, Hurricane Agnes caused extensive flooding in the city, not the last time the lower precincts of Reading were inundated by the Schuylkill River. A similar, though not as devastating, flood occurred during June 2006.
The 2000 census showed that Reading 's population decline had ceased. This was attributed to an influx of Hispanic residents from New York City, as well as from the extension of suburban sprawl from Philadelphia 's northwest suburbs.
Reading has its share of obstacles to overcome, namely crime. However, new crime fighting strategies appear to have had an impact. In 2006, the city dropped in the rankings of dangerous cities, and again in 2007.
In December 2007, NBC 's Today show featured Reading as one of the top four "Up and Coming Neighborhoods '' in the United States as showing potential for a real estate boom. The interviewee, Barbara Corcoran, chose the city by looking for areas of big change, renovations, cleanups of parks, waterfronts, and warehouses. Corcoran also noted Reading 's proximity to Philadelphia, New York, and other cities.
The climate in and around Reading is variable, but relatively mild. The Reading area is considered a humid subtropical climate, with areas just to the north designated as a humid continental climate. Summers are warm and humid with average July highs around 85 ° F. Extended periods of heat and high humidity occur. On average, there are 15 -- 20 days per year where the temperature exceeds 90 ° F. Reading becomes milder in the autumn, as the heat and humidity of summer relent to lower humidity and temperatures. The first killing frost generally occurs in mid to late October.
Winters bring freezing temperatures, but readings usually climb above freezing at the day 's warmest point. The average January high is 38 ° F and the average January low is 22 ° F, though it is not unusual for winter temperatures to be much lower or higher than the averages. The all - time record low (not including wind chill) was − 21 ° F, set during a widespread cold wave in January 1994. Snow is common in some winters, but the harsher winter conditions experienced to the north and west are not typical of Greater Reading. Annual snowfall is variable, but averages around 32 inches. Spring temperatures vary widely between freezing and the 80s or even 90s ° F later in the season. The last killing frost usually occurs in late April, but freezing temperatures have occurred in May. Total precipitation for the entire year is around 45 inches (112 cm).
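For readers who think in Celsius, the Fahrenheit figures quoted here convert by the standard formula, shown with two of the section 's own numbers as worked examples:

$$ T_{^\circ\mathrm{C}} = \left(T_{^\circ\mathrm{F}} - 32\right) \times \frac{5}{9}, \qquad \text{e.g. } 85\ ^\circ\mathrm{F} \approx 29\ ^\circ\mathrm{C}, \quad -21\ ^\circ\mathrm{F} \approx -29\ ^\circ\mathrm{C} $$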
Reading is located at 40 ° 20 ′ 30 ″ N, 75 ° 55 ′ 35 ″ W (40.341692, − 75.926301) in southeastern Pennsylvania, roughly 65 miles (105 km) northwest of Philadelphia. According to the United States Census Bureau, the city has a total area of 10.1 square miles (26 km2), of which 9.8 square miles (25 km2) is land and 0.2 square miles (0.52 km2), or 2.39 %, is water. The city is largely bounded on the west by the Schuylkill River, on the east by Mount Penn, and on the south by Neversink Mountain. The Reading Prong, the mountain formation stretching north into New Jersey, has come to be associated with naturally occurring radon gas; however, homes in Reading are not particularly affected. The surrounding county is home to a number of family - owned farms.
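For reference, the decimal coordinates follow from the standard degrees - minutes - seconds conversion (degrees + minutes / 60 + seconds / 3600):

$$ 40^\circ 20' 30'' = 40 + \frac{20}{60} + \frac{30}{3600} \approx 40.3417^\circ\ \text{N} \qquad 75^\circ 55' 35'' = 75 + \frac{55}{60} + \frac{35}{3600} \approx 75.9264^\circ\ \text{W} $$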
Companies based in Reading and surrounding communities include Boscov 's, Carpenter, GK Elite Sportswear, Penske Truck Leasing, and Redner 's Markets.
In 2012, The New York Times called Reading "the nation 's poorest city. ''
According to the Greater Reading Chamber of Commerce and Industry, the largest employers in the Berks county area are:
Jump Start Incubator, a program of Berks County Community Foundation and the Kutztown University Small Business Development Center, is intended to help entrepreneurs open new businesses in the area.
A number of federal and state highways allow entry to and egress from Reading. U.S. Route 422, the major east - west artery, circles the western edge of the city and is known locally as The West Shore Bypass. US 422 leads west to Lebanon and east to Pottstown. U.S. Route 222 bypasses the city to the west, leading southwest to Lancaster and northeast to Allentown. Interstate 176 heads south from US 422 near Reading and leads to the Pennsylvania Turnpike (Interstate 76) in Morgantown. Pennsylvania Route 12 is known as the Warren Street Bypass, as it bypasses the city to the north. PA 12 begins at US 222 / US 422 in Wyomissing and heads northeast on the Warren Street Bypass before becoming Pricetown Road and leading northeast to Pricetown. Pennsylvania Route 10 is known as Morgantown Road and heads south from Reading parallel to I - 176 to Morgantown. Pennsylvania Route 61 heads north from Reading on Centre Avenue and leads to Pottsville. Pennsylvania Route 183 heads northwest from Reading on Schuylkill Avenue and Bernville Road, leading to Bernville. U.S. Route 222 Business is designated as Lancaster Avenue, Bingaman Street, South 4th Street, and 5th Street through Reading. U.S. Route 422 Business is designated as Penn Street, Washington Street (westbound), Franklin Street (eastbound), and Perkiomen Avenue through Reading.
Public transit in Reading and its surrounding communities has been provided since 1973 by the Berks Area Regional Transportation Authority (BARTA). BARTA operates a fleet of 52 buses serving 19 routes, mostly originating at the BARTA Transportation Center in Downtown Reading. BARTA also provides paratransit service in addition to fixed route service. In addition, Greyhound and Bieber Transportation Group bus routes are available from the InterCity Bus Terminal. The former Reading Railroad Franklin Street Station was refurbished and reopened to bus service on September 9, 2013 with buses running the express route back and forth to Lebanon Transit. The route to Lebanon was discontinued after a short period, resulting in the refurbished station sitting vacant.
Reading and the surrounding area are served by Reading Regional Airport, a general aviation airfield. The three - letter airport code for Reading is RDG. Scheduled commercial airline service to Reading ended in 2004, when the last airline, USAir, stopped flying into Reading.
Freight rail service in Reading is provided by the Norfolk Southern Railway, the Reading Blue Mountain and Northern Railroad, and the East Penn Railroad. Norfolk Southern Railway serves Reading along the Harrisburg Line, which runs east to Philadelphia and west to Harrisburg, and the Reading Line, which runs northeast to Allentown. Norfolk Southern Railway operates the Reading Yard in Reading. The Reading Blue Mountain and Northern Railroad operates the Reading Division line from an interchange with the Norfolk Southern Railway in Reading north to Port Clinton and Packerton. The East Penn Railroad operates the Lancaster Northern line from Sinking Spring southwest to Ephrata, using trackage rights along Norfolk Southern Railway east from Sinking Spring to an interchange with the Norfolk Southern Railway in Reading.
Passenger trains ran between Pottsville, Reading, Pottstown, and Philadelphia along the Pottsville Line until July 27, 1981, when transit operator SEPTA curtailed commuter service to electrified lines. Since then, there have been repeated calls for the resumption of the services.
In the late 1990s and up to 2003, SEPTA, in cooperation with Reading - based BARTA, funded a study called the Schuylkill Valley Metro which included plans to extend both sides of SEPTA 's R6 passenger line to Pottstown, Reading, and Wyomissing, Pennsylvania. The project suffered a major setback when it was rejected by the Federal Transit Administration New Starts program, which cited doubts about the ridership projections and financing assumptions used by the study. With the recent surge in gasoline prices and ever - increasing traffic, the planning commissions of Montgomery County and Berks County have teamed to study the feasibility of a simple diesel shuttle train between the Manayunk / Norristown Line and Pottstown / Reading.
Electricity in Reading is provided by Met - Ed, a subsidiary of FirstEnergy. Natural gas service in Reading is provided by UGI Utilities. The Reading Area Water Authority provides water to the city, with the city 's water supply coming from Lake Ontelaunee and the city 's water treated at the Maidencreek Filter Plant. The Reading Water Company was founded in 1821 to supply water to the city. The Reading Area Water Authority was established on May 20, 1994 to take over the water system in the city. Sewer service is provided by the city 's Public Works department, with a wastewater treatment plant owned by the city located on Fritz Island. The city 's Public Works department provides trash and recycling collection to Reading.
Hospitals serving the Reading area include Reading Hospital in West Reading and Penn State Health St. Joseph in Bern Township and downtown Reading. Reading Hospital offers an emergency department with a Level II trauma center and various services including Cancer Care, Heart Center, Orthopedic Services, Pediatrics, Primary Care, and Women 's Health. Penn State Health St. Joseph offers an emergency department, heart institute, cancer center, stroke center, wound center, orthopedics, and primary care physicians.
As of the 2010 census, the city was 48.4 % White, 13.2 % Black or African American, 0.9 % Native American, 1.2 % Asian, 0.1 % Native Hawaiian, and 6.1 % were two or more races. 58.2 % of the population were of Hispanic or Latino ancestry.
As of the census of 2000, there were 30,113 households, out of which 33.7 % had children under the age of 18 living with them, 34.4 % were married couples living together, 20.2 % had a female householder with no husband present, and 38.8 % were non-families. 31.7 % of all households were made up of individuals, and 12.4 % had someone living alone who was 65 years of age or older. The average household size was 2.63 and the average family size was 3.33.
In the city, the population was spread out, with 29.9 % under the age of 18, 11.7 % from 18 to 24, 28.9 % from 25 to 44, 17.0 % from 45 to 64, and 12.4 % who were 65 years of age or older. The median age was 31 years. For every 100 females, there were 93.3 males. For every 100 females age 18 and over, there were 88.5 males.
The median income for a household in the city was $26,698, and the median income for a family was $31,067. Males had a median income of $28,114 versus $21,993 for females. The per capita income for the city was $13,086. 26.1 % of the population and 22.3 % of families were below the poverty line. 36.5 % of those under the age of 18 and 15.6 % of those 65 and older were living below the poverty line.
As of the American Community Survey 1 - Year Estimates, Reading had a population of 80,997. The racial makeup of the city was 48.8 % White, 14.0 % African American, 0.2 % Native American, 1.4 % Asian, 0.0 % Pacific Islander, 31.1 % from other races, and 4.5 % from two or more races. 56.3 % were Hispanic or Latino of any race, with 33.5 % being of Puerto Rican descent. 33.0 % of all people were living below the poverty line, including 42.0 % of those under 18.
According to the US Census Bureau, 32.9 % of all residents live below the poverty level, including 45.7 % of those under 18. Reading 's unemployment rate in May 2010 was 14.7 %, while Berks County 's unemployment rate was 9.9 %.
The city of Reading is protected by the 135 firefighters and paramedics of the Reading Fire and EMS Department (RFD). The RFD operates out of seven fire stations throughout the city. The RFD operates a fire apparatus fleet of five Engine Companies, three Ladder Companies, one Rescue Company, brush unit, and four front - line Medic Ambulances. In 2016, fire units responded to 9,751 incidents. EMS responses totaled 19,058 calls for service. Department staffing is two firefighters per apparatus.
The Reading School District provides elementary and middle schools for the city 's children. Numerous Catholic parochial schools are also available.
Press reports indicated that in 2012, about eight percent of Reading 's residents had a college degree, compared to a national average of 28 %.
Four institutions of higher learning are located in Reading:
Four high schools serve the city:
Reading is known for the Reading Fightin Phils, minor league affiliate of the Philadelphia Phillies, who play at FirstEnergy Stadium. Notable alumni are Larry Bowa, Ryne Sandberg, Mike Schmidt, Ryan Howard, and Jimmy Rollins.
The city has been the residence of numerous professional athletes. Among these native to Reading are Brooklyn Dodgers outfielder Carl Furillo, Baltimore Colts running back Lenny Moore, and Philadelphia 76ers forward Donyell Marshall. Pro golfer Betsy King, a member of the World Golf Hall of Fame, was born in Reading.
The open - wheel racing portion of Penske Racing had been based in Reading, Pennsylvania since 1973 with the cars, during the F1 and CART era, being constructed in Poole, Dorset, England as well as being the base for the F1 team. On October 31, 2005, Penske Racing announced after the 2006 IRL season, they would consolidate IRL and NASCAR operations at the team 's Mooresville, North Carolina facility; with the flooding in Pennsylvania in 2006, the team 's operations were moved to Mooresville earlier than expected. Penske Truck Leasing is still based in Reading.
Duryea Drive, which ascends Mount Penn in a series of switchbacks, was a testing place for early automobiles and was named for Charles Duryea. The Blue Mountain Region Sports Car Club of America hosts the Duryea Hill Climb, the longest in the Pennsylvania Hillclimb Association series, which follows the same route the automaker used to test his cars.
Reading played host to a stop on the PGA Tour, the Reading Open, in the late 1940s and early 1950s.
The city 's cultural institutions include the Reading Symphony Orchestra and its education project the Reading Symphony Youth Orchestra, the Reading Choral Society, Opus One: Berks Chamber Choir, the GoggleWorks Art Gallery, the Reading Public Museum and the Historical Society of Berks County.
Reading is the birthplace of graphic artist Jim Steranko, guitar virtuoso Richie Kotzen, novelist and poet John Updike, poet Wallace Stevens, and singer - songwriter Taylor Swift. Marching band composer and writer John Philip Sousa, the March King, died in Reading 's Abraham Lincoln Hotel in 1932. Artist Keith Haring was born in Reading.
Reading is home to the 15 - time world - champion drum and bugle corps, the Reading Buccaneers.
In 1914, one of the anchors of the battleship Maine was delivered from the Washington Navy Yard to City Park, off Perkiomen Avenue. The anchor was dedicated during a ceremony presided over by Franklin D. Roosevelt, who was then assistant secretary of the Navy.
Reading was home to several movie and theater palaces in the early 20th century. The Astor, Embassy, Loew 's Colonial, and Rajah Shrine Theater were grand monuments of architecture and entertainment. Today, after depression, recession, and urban renewal, the Rajah is the only one to remain. The Astor Theater was demolished in 1998 to make way for The Sovereign Center. Certain steps were taken to retain mementos of the Astor, including its ornate Art Deco chandelier and gates; these are on display and in use inside the arena corridors, offering a glimpse of the ambiance of the former movie house. In 2000, the Rajah was purchased from the Shriners and, after a much - needed restoration, was renamed the Sovereign Performing Arts Center.
The Mid-Atlantic Air Museum is a membership - supported museum and restoration facility located at Carl A. Spaatz Field. The museum actively displays and restores historic and rare war aircraft and civilian airliners. Most notable to their collection is a Northrop P - 61 Black Widow under active restoration since its recovery from Mount Cyclops, New Guinea in 1989. Beginning in 1990, the museum has hosted "World War II Weekend Air Show '', scheduled to coincide with D - Day. On display are period wartime aircraft (many of which fly throughout the show) vehicles, and weapons.
The mechanical ice cream scoop was invented in Reading by William Clewell in 1878. The 5th Ave Bar and York Peppermint Patty were invented in Reading.
The City of Reading and Reutlingen, Germany are sister cities which participate in student exchanges. Students from Reading High School can apply to become part of the exchange, traveling to Reutlingen for two weeks (mid-October to early November) and in return hosting German exchange students in the spring. Kutztown University also has a program with Reutlingen.
Reading is twinned with:
In 1908, a Japanese - style pagoda was built on Mount Penn, where it overlooks the city and is visible from almost everywhere in town. Locally, it is called the "Pagoda ''. It is currently the home of a café and a gift shop. It remains a popular tourist attraction.
Another fixture in Reading 's skyline is the William Penn Memorial Fire Tower, one mile from the Pagoda on Skyline Drive. Built in 1939 for fire department and forestry observation, the tower is 120 feet tall and rises 950 feet above the intersection of Fifth and Penn Streets. From the top of the tower is a 60 - mile panoramic view.
The Reading Glove and Mitten Manufacturing Company founded in 1899, just outside Reading city limits, in West Reading and Wyomissing boroughs changed its name to Vanity Fair in 1911 and is now the major clothing manufacturer VF Corp. In the early 1970s, the original factories were developed to create the VF Outlet Village, the first outlet mall in the United States.
The book and movie Rabbit, Run and the other three novels of the Rabbit series by John Updike were set in fictionalized versions of Reading and nearby Shillington, called Brewer and Olinger respectively. Updike was born in Reading and lived in nearby Shillington until he was thirteen. He also makes reference to the Brewer suburb of Mount Judge, equivalent to Mount Penn, east of Reading.
Filmmakers Gary Adelstein, Costa Mantis, and Jerry Orr created Reading 1974: Portrait of a City. Relying heavily on montage, the film is a cultural time capsule.
The play Sweat by Lynn Nottage is set in Reading.
The movie Goon: Last of the Enforcers features Reading as the home of rival team, the Reading Wolf Dogs.
|
what was the heir to the roman empire called | Roman emperor - wikipedia
The Roman Emperor was the ruler of the Roman Empire during the imperial period (starting in 27 BC). The emperors used a variety of different titles throughout history. Often when a given Roman is described as becoming "emperor '' in English, it reflects his taking of the title Augustus or Caesar. Another title often used was imperator, originally a military honorific. Early Emperors also used the title princeps (first citizen). Emperors frequently amassed republican titles, notably Princeps senatus, Consul and Pontifex Maximus.
The legitimacy of an emperor 's rule depended on his control of the army and recognition by the Senate; an emperor would normally be proclaimed by his troops, or invested with imperial titles by the Senate, or both. The first emperors reigned alone; later emperors would sometimes rule with co-Emperors and divide administration of the Empire between them.
The Romans considered the office of emperor to be distinct from that of a king. The first emperor, Augustus, resolutely refused recognition as a monarch. Although Augustus could claim that his power was authentically republican, his successor, Tiberius, could not convincingly make the same claim. Nonetheless, for the first three hundred years of Roman Emperors, from Augustus until Diocletian, a great effort was made to emphasize that the Emperors were the leaders of a Republic.
From Diocletian onwards, emperors ruled in an openly monarchic style and did not preserve the nominal principle of a republic, but the contrast with "kings '' was maintained: although the imperial succession was generally hereditary, it was only hereditary if there was a suitable candidate acceptable to the army and the bureaucracy, so the principle of automatic inheritance was not adopted. Elements of the Republican institutional framework (senate, consuls, and magistrates) were preserved until the very end of the Western Empire.
The Eastern (Byzantine) emperors ultimately adopted the title of "Basileus '' (βασιλεύς), which had meant king in Greek, but became a title reserved solely for the Roman Emperor and the ruler of the Sasanian Empire. Other kings were then referred to as rēgas.
In addition to their pontifical office, some emperors were given divine status after death. With the eventual hegemony of Christianity, the emperor came to be seen as God 's chosen ruler, as well as a special protector and leader of the Christian Church on Earth, although in practice an emperor 's authority on Church matters was subject to challenge.
The Western Roman Empire collapsed in the late 5th century. Romulus Augustulus is often considered to be the last emperor of the west after his forced abdication in 476, although Julius Nepos maintained a claim to the title until his death in 480. Meanwhile, in the east, emperors continued to rule from Constantinople ("New Rome ''); these are referred to in modern scholarship as "Byzantine emperor '' but they used no such title and called themselves "Emperor of the Romans '' (βασιλεύς Ῥωμαίων). Constantine XI Palaiologos was the last Byzantine Roman Emperor in Constantinople, dying in the Fall of Constantinople to the Ottomans in 1453.
Due to the cultural rupture of the Turkish conquest, most western historians treat Constantine XI as the last meaningful claimant to the title Roman Emperor, although from 1453 Ottoman rulers were titled "Caesar of Rome '' (Turkish: Kayser - i Rum) until the Ottoman Empire ended in 1922. A Byzantine group of claimant Roman Emperors existed in the Empire of Trebizond until its conquest by the Ottomans in 1461. In western Europe the title of Roman Emperor was revived by Germanic rulers, the "Holy Roman Emperors '', in 800, and was used until 1806.
Modern historians conventionally regard Augustus as the first Emperor whereas Julius Caesar is considered the last dictator of the Roman Republic, a view having its origins in the Roman writers Plutarch, Tacitus and Cassius Dio. However, the majority of Roman writers, including Josephus, Pliny the Younger, Suetonius and Appian, as well as most of the ordinary people of the Empire, thought of Julius Caesar as the first Emperor.
At the end of the Roman Republic no new, and certainly no single, title indicated the individual who held supreme power. Insofar as emperor could be seen as the English translation of imperator, Julius Caesar had been an emperor, like several Roman generals before him. Instead, by the end of the civil wars in which Julius Caesar had led his armies, it became clear that there was certainly no consensus to return to the old - style monarchy, but that the period when several officials, bestowed with equal power by the senate, would fight one another had come to an end.
Julius Caesar, and then Augustus after him, accumulated offices and titles of the highest importance in the Republic, making the power attached to those offices permanent, and preventing anyone with similar aspirations from accumulating or maintaining power for themselves. However, Julius Caesar, unlike those after him, did so without the Senate 's vote and approval.
Julius Caesar held the Republican offices of consul four times and dictator five times, was appointed dictator in perpetuity (dictator perpetuo) in 45 BC and had been "pontifex maximus '' for a long period. He gained these positions by senatorial consent. By the time of his assassination, he was the most powerful man in the Roman world.
In his will, Caesar appointed his adopted son Octavian as his heir. On Caesar 's death, Octavian inherited his adoptive father 's property and lineage, the loyalty of most of his allies and -- again through a formal process of senatorial consent -- an increasing number of the titles and offices that had accrued to Caesar. A decade after Caesar 's death, Octavian 's victory over his erstwhile ally Mark Antony at Actium put an end to any effective opposition and confirmed Octavian 's supremacy.
In 27 BC, Octavian appeared before the Senate and offered to retire from active politics and government; the Senate not only requested he remain, but increased his powers and made them lifelong, awarding him the title of Augustus (the elevated or divine one, somewhat less than a god but approaching divinity). Augustus stayed in office until his death; the sheer breadth of his superior powers as princeps and permanent imperator of Rome 's armies guaranteed the peaceful continuation of what nominally remained a republic. His "restoration '' of powers to the Senate and the people of Rome was a demonstration of his auctoritas and pious respect for tradition.
Some later historians such as Tacitus would say that even at Augustus ' death, the true restoration of the Republic might have been possible. Instead, Augustus actively prepared his adopted son Tiberius to be his successor and pleaded his case to the Senate for inheritance on merit. The Senate disputed the issue but eventually confirmed Tiberius as princeps. Once in power, Tiberius took considerable pains to observe the forms and day - to - day substance of republican government.
Rome had no single constitutional office, title or rank exactly equivalent to the English title "Roman emperor ''. Romans of the Imperial era used several titles to denote their emperors, and all were associated with the pre-Imperial, Republican era.
The emperor 's legal authority derived from an extraordinary concentration of individual powers and offices that were extant in the Republic rather than from a new political office; emperors were regularly elected to the offices of consul and censor. Among their permanent privileges were the traditional Republican title of princeps senatus (leader of the Senate) and the religious office of pontifex maximus (chief priest of the College of Pontiffs). Every emperor held the latter office and title until Gratian surrendered it in AD 382 to Pope Siricius; it eventually became an auxiliary honor of the Bishop of Rome.
These titles and offices conferred great personal prestige (dignitas) but the basis of an emperor 's powers derived from his auctoritas: this assumed his greater powers of command (imperium maius) and tribunician power (tribunicia potestas) as personal qualities, separate from his public office. As a result, he formally outranked provincial governors and ordinary magistrates. He had the right to enact or revoke sentences of capital punishment, was owed the obedience of private citizens (privati) and by the terms of the ius auxiliandi could save any plebeian from any patrician magistrate 's decision. He could veto any act or proposal of any magistrate, including the tribunes of the people (ius intercedendi or ius intercessionis). His person was held to be sacrosanct.
Roman magistrates on official business were expected to wear the form of toga associated with their office; different togas were worn by different ranks; senior magistrates had the right to togas bordered with purple. A triumphal imperator of the Republic had the right to wear the toga picta (of solid purple, richly embroidered) for the duration of the triumphal rite. During the Late Republic, the most powerful had this right extended. Pompey and Caesar are both thought to have worn the triumphal toga and other triumphal dress at public functions. Later emperors were distinguished by wearing togae purpurae, purple togas; hence the phrase "to don the purple '' for the assumption of imperial dignity.
The titles customarily associated with the imperial dignity are imperator ("commander ''), which emphasizes the emperor 's military supremacy and is the source of the English word emperor; Caesar, which was originally a name but came to be used for the designated heir (as Nobilissimus Caesar, "Most Noble Caesar '') and was retained upon accession. The ruling emperor 's title was the descriptive Augustus ("majestic '' or "venerable '', which had tinges of the divine), which was adopted upon accession. In Greek, these three titles were rendered as autokratōr ("Αὐτοκράτωρ ''), kaisar ("Καίσαρ ''), and augoustos ("Αὔγουστος '') or sebastos ("Σεβαστός '') respectively. In Diocletian 's Tetrarchy, the traditional seniorities were maintained: "Augustus '' was reserved for the two senior emperors and "Caesar '' for the two junior emperors -- each delegated a share of power and responsibility but each an emperor - in - waiting, should anything befall his senior.
As princeps senatus (lit., "first man of the senate ''), the emperor could receive foreign embassies to Rome; some emperors (such as Tiberius) are known to have delegated this task to the Senate. In modern terms these early emperors would tend to be identified as chiefs of state. The office of princeps senatus, however, was not a magistracy and did not entail imperium. At some points in the Empire 's history, the emperor 's power was nominal; powerful praetorian prefects, masters of the soldiers and on a few occasions, other members of the Imperial household including Imperial mothers and grandmothers were the true source of power.
The title imperator dates back to the Roman Republic, when a victorious commander could be hailed as imperator in the field by his troops. The Senate could then award or withhold the extraordinary honour of a triumph; the triumphal commander retained the title until the end of his magistracy. In Roman tradition, the first triumph was that of Romulus, but the first attested recipient of the title imperator in a triumphal context is Aemilius Paulus in 189 BC. It was a title held with great pride: Pompey was hailed imperator more than once, as was Sulla, but it was Julius Caesar who first used it permanently -- according to Dio, this was a singular and excessive form of flattery granted by the Senate, passed to Caesar 's adopted heir along with his name and virtually synonymous with it.
In 38 BC Agrippa refused a triumph for his victories under Octavian 's command, and this precedent established the rule that the princeps should assume both the salutation and title of imperator. It seems that from then on Octavian (later the first emperor Augustus) used imperator as a first name (praenomen): Imperator Caesar not Caesar imperator. From this the title came to denote the supreme power and was commonly used in that sense. Otho was the first to imitate Augustus, but only with Vespasian did imperator (emperor) become the official title by which the ruler of the Roman Empire was known.
The word princeps (plural principes), meaning "first '', was a republican term used to denote the leading citizen (s) of the state. It was a purely honorific title with no attached duties or powers. It was the title most preferred by Caesar Augustus as its use implies only primacy, as opposed to another of his titles, imperator, which implies dominance. Princeps, because of its republican connotation, was most commonly used to refer to the emperor in Latin (although the emperor 's actual constitutional position was essentially "pontifex maximus with tribunician power and imperium superseding all others '') as it was in keeping with the façade of the restored Republic; the Greek word basileus ("king '') was modified to be synonymous with emperor (and primarily came into favour after the reign of Heraclius) as the Greeks had no republican sensibility and openly viewed the emperor as a monarch.
In the era of Diocletian and beyond, princeps fell into disuse and was replaced with dominus ("lord ''); later emperors used the formula Imperator Caesar NN. Pius Felix (Invictus) Augustus: NN representing the individual 's personal name; Pius Felix meaning "Pious and Blest ''; and Invictus meaning "undefeated ''. The use of princeps and dominus broadly symbolise the differences in the empire 's government, giving rise to the era designations "Principate '' and "Dominate ''.
In 293, following the Crisis of the Third Century which had severely damaged Imperial administration, Emperor Diocletian enacted sweeping reforms that washed away many of the vestiges and façades of republicanism which had characterized the Augustan order in favor of a more frank autocracy. As a result, historians distinguish the Augustan period as the principate and the period from Diocletian to the 7th century reforms of Emperor Heraclius as the dominate (from the Latin for "lord ''.)
Reaching back to the oldest traditions of job - sharing in the republic, however, Diocletian established at the top of this new structure the Tetrarchy ("rule of four '') in an attempt to provide for smoother succession and greater continuity of government. Under the Tetrarchy, Diocletian set in place a system of co-emperors, styled "Augustus '', and junior emperors, styled "Caesar ''. When a co-emperor retired (as Diocletian and his co-emperor Maximian did in 305) or died, a junior "Caesar '' would succeed him and the co-emperors would appoint new Caesars as needed.
The four members of the Imperial college (as historians call the arrangement) shared military and administrative challenges, each being assigned specific geographic areas of the empire. From this innovation, often but not consistently repeated over the next 187 years, comes the notion of an east - west partition of the empire that became popular with historians long after the practice had stopped. The two halves of the empire, while often run as de facto separate entities day - to - day, were always considered and seen, legally and politically, as separate administrative divisions of a single, indissoluble imperium by the Romans of the time.
The final period of co-emperorship began in 395, when Emperor Theodosius I 's sons Arcadius and Honorius succeeded as co-emperors. Eighty - five years later, following Germanic migrations which had reduced the empire 's effective control across Britannia, Gaul and Hispania, and a series of military coups d'état which drove Emperor Nepos out of Italy, the idea of dividing the position of emperor was formally abolished by Emperor Zeno (480).
The Roman Empire survived in the east until 1453, but the marginalization of the former heartland of Italy to the empire had a profound cultural impact on the empire and the position of emperor. In 620, the official language was changed from Latin to Greek. The Greek - speaking inhabitants were Romaioi (Ῥωμαῖοι), and were still considered Romans by themselves and the populations of Eastern Europe, the Near East, India, and China. But many in Western Europe began to refer to the political entity as the "Greek Empire ''. The evolution of the church in the no - longer imperial city of Rome and the church in the now supreme Constantinople began to follow divergent paths, culminating in the schism between the Roman Catholic and Eastern Orthodox faiths. The position of emperor was increasingly influenced by Near Eastern concepts of kingship. Starting with Emperor Heraclius, Roman emperors styled themselves "King of Kings '' (from the imperial Persian Shahanshah) from 627 and "Basileus '' (from the title used by Alexander the Great) from 629. The later period of the empire is today called the Byzantine Empire as a matter of scholarly convention.
Although these are the most common offices, titles, and positions, not all Roman emperors used them, nor were all of them used at the same time in history. The consular and censorial offices especially were not an integral part of the Imperial dignity, and were usually held by persons other than the reigning emperor.
When Augustus established the Princeps, he turned down supreme authority in exchange for a collection of various powers and offices, which in itself was a demonstration of his auctoritas ("authority ''). As princeps senatus, the emperor declared the opening and closure of each Senate session, declared the Senate 's agenda, imposed rules and regulations for the Senate to follow, and met with foreign ambassadors in the name of the Senate. Being pontifex maximus made the emperor the chief administrator of religious affairs, granting him the power to conduct all religious ceremonies, consecrate temples, control the Roman calendar (adding or removing days as needed), appoint the vestal virgins and some flamens, lead the Collegium Pontificum, and summarize the dogma of the Roman religion.
While these powers granted the emperor a great deal of personal pride and influence, they did not include legal authority. In 23 BC, Augustus gave the emperorship its legal power. The first was Tribunicia Potestas, or the powers of the tribune of the plebs without actually holding the office (which would have been impossible, since a tribune was by definition a plebeian, whereas Augustus, although born into a plebeian family, had become a patrician when he was adopted into the gens Julia). This endowed the emperor with inviolability (sacrosanctity) of his person, and the ability to pardon any civilian for any act, criminal or otherwise. By holding the powers of the tribune, the emperor could prosecute anyone who interfered with the performance of his duties. The emperor 's tribuneship granted him the right to convene the Senate at his will and lay proposals before it, as well as the ability to veto any act or proposal by any magistrate, including the actual tribune of the plebeians. Also, as holder of the tribune 's power, the emperor would convoke the Council of the People, lay legislation before it, and served as the council 's president. But his tribuneship only granted him power within Rome itself. He would need another power to veto the act of governors and that of the consuls while in the provinces.
To solve this problem, Augustus managed to have the emperor be given the right to hold two types of imperium: consular imperium while he was in Rome, and imperium maius outside of Rome. While inside the walls of Rome, the reigning consuls and the emperor held equal authority, each being able to veto the other 's proposals and acts, with the emperor holding all of the consul 's powers. But outside of Rome, the emperor outranked the consuls and could veto them without the same effects on himself. Imperium maius also granted the emperor authority over all the provincial governors, making him the ultimate authority in provincial matters, and gave him supreme command of all of Rome 's legions. It likewise granted the emperor the power to appoint governors of imperial provinces without the interference of the Senate, and the right to veto the governors of the provinces and even the reigning consul while in the provinces.
The nature of the imperial office and the Principate was established under Julius Caesar 's heir and posthumously adopted son, Caesar Augustus, and his own heirs, the descendants of his wife Livia from her first marriage to a scion of the distinguished Claudian clan. This Julio - Claudian dynasty came to an end when the Emperor Nero -- a great - great - grandson of Augustus through his daughter and of Livia through her son -- was deposed in 68.
Nero was followed by a succession of usurpers throughout 69, commonly called the "Year of the Four Emperors ''. The last of these, Vespasian, established his own Flavian dynasty. Nerva, who replaced the last Flavian emperor, Vespasian 's son Domitian, in 96, was elderly and childless, and chose therefore to adopt an heir, Trajan, from outside his family. When Trajan acceded to the purple he chose to follow his predecessor 's example, adopting Hadrian as his own heir, and the practice then became the customary manner of imperial succession for the next century, producing the "Five Good Emperors '' and the Empire 's period of greatest stability.
The last of the Good Emperors, Marcus Aurelius, chose his natural son Commodus as his successor rather than adopting an heir. Commodus 's misrule led to his murder on 31 December 192, following which a brief period of instability quickly gave way to Septimius Severus, who established the Severan dynasty which, except for an interruption in 217 -- 218 when Macrinus was emperor, held the purple until 235.
The accession of Maximinus Thrax marks both the close and the opening of an era. It was one of the last attempts by the increasingly impotent Roman Senate to influence the succession. Yet it was the second time that a man had achieved the purple while owing his advancement purely to his military career; both Vespasian and Septimius Severus had come from noble or middle - class families, while Thrax was born a commoner. He never visited the city of Rome during his reign, which marks the beginning of a series of "barracks emperors '' who came from the army. Between 235 and 285 over a dozen emperors achieved the purple, but only Valerian and Carus managed to secure their own sons ' succession to the throne; both dynasties died out within two generations.
The accession on 20 November 284, of Diocletian, the lower - class, Greek - speaking Dalmatian commander of Carus 's and Numerian 's household cavalry (protectores domestici), marked major innovations in Rome 's government and constitutional theory. Diocletian, a traditionalist and religious conservative, attempted to secure efficient, stable government and a peaceful succession with the establishment of the Tetrarchy. The empire was divided into East and West, each ruled by an Augustus assisted by a Caesar as emperor - in - waiting. These divisions were further subdivided into new or reformed provinces, administered by a complex, hierarchic bureaucracy of unprecedented size and scope. Diocletian 's own court was based at Nicomedia. His co-Augustus, Maximian, was based at Mediolanum (modern Milan). Their courts were peripatetic, and Imperial progressions through the provinces made much use of the impressive, theatrical adventus, or "Imperial arrival '' ceremony, which employed an elaborate choreography of etiquette to emphasise the emperor 's elevation above other mortals. Hyperinflation of imperial honours and titles served to distinguish the Augusti from their Caesares, and Diocletian, as senior Augustus, from his colleague Maximian. The senior Augustus in particular was made a separate and unique being, accessible only through those closest to him. The overall unity of the Empire still required the highest investiture of power and status in one man.
The Tetrarchy ultimately degenerated into civil war, but the eventual victor, Constantine the Great, restored Diocletian 's division of Empire into East and West. He kept the East for himself and founded his city of Constantinople as its new capital. Constantine 's own dynasty was also soon swallowed up in civil war and court intrigue until it was replaced, briefly, by Julian the Apostate 's general Jovian and then, more permanently, by Valentinian I and the dynasty he founded in 364. Though a soldier from a low middle - class background, Valentinian was made emperor by a conclave of senior generals and civil officials.
Theodosius I acceded to the purple in the East in 379 and in the West in 394. He outlawed paganism and made Christianity the Empire 's official religion. He was the last emperor to rule over a united Roman Empire; the distribution of the East to his son Arcadius and the West to his son Honorius after his death in 395 represented a permanent division.
In the West, the office of emperor soon degenerated into little more than a puppet of a succession of Germanic tribal kings, until finally the Herulian Odoacer simply overthrew the child - emperor Romulus Augustulus in 476, shipped the imperial regalia to the Emperor Zeno in Constantinople and became King of Italy. Though during his own lifetime Odoacer maintained the legal fiction that he was actually ruling Italy as the viceroy of Zeno, historians mark 476 as the traditional date of the fall of the Roman Empire in the West. Large parts of Italy (Sicily, the south part of the peninsula, Ravenna, Venice, etc.), however, remained under actual imperial rule from Constantinople for centuries, with imperial control slipping or becoming nominal only as late as the 11th century. In the East, the Empire continued until the fall of Constantinople to the Ottoman Turks in 1453. Although known as the Byzantine Empire by contemporary historians, the Empire was simply known as the Roman Empire to its citizens and neighboring countries.
The line of Roman emperors in the Eastern Roman Empire continued unbroken at Constantinople until the capture of Constantinople in 1204 by the Fourth Crusade. In the wake of this action, four lines of Emperors emerged, each claiming to be the legal successor: the Empire of Thessalonica, evolving from the Despotate of Epirus, which was reduced to impotence when its founder Theodore Komnenos Doukas was defeated, captured and blinded by the Bulgarian Emperor Ivan Asen III; the Latin Empire, which came to an end when the Empire of Nicaea recovered Constantinople in 1261; the Empire of Trebizond, whose importance declined over the 13th century, and whose claims were simply ignored; and the Empire of Nicaea, whose claims based on kinship with the previous emperors, control of the Patriarch of Constantinople, and possession of Constantinople through military prowess, prevailed. The successors of the emperors of Nicaea continued until the fall of Constantinople in 1453 under Constantine XI Palaiologos.
These emperors eventually normalized the imperial dignity into the modern conception of an emperor, incorporated it into the constitutions of the state, and adopted the aforementioned title Basileus kai autokratōr Rhomaiōn ("Emperor and Autocrat of the Romans ''). They had also ceased to use Latin as the language of state after Emperor Heraclius (d. 641 AD). Historians have customarily treated the state of these later Eastern emperors under the name "Byzantine Empire ''. It is important to note, however, that the adjective Byzantine, although historically used by Eastern Roman authors in a metonymic sense, was never an official term.
Constantine XI Palaiologos was the last reigning Roman emperor. A member of the Palaiologos dynasty, he ruled the remnant of the Eastern Roman Empire from 1449 until his death in 1453 defending its capital Constantinople.
He was born in Mystra as the eighth of ten children of Manuel II Palaiologos and Helena Dragaš, the daughter of the Serbian prince Constantine Dragaš of Kumanovo. He spent most of his childhood in Constantinople under the supervision of his parents. During the absence of his older brother in Italy, Constantine was regent in Constantinople from 1437 -- 40.
Before the beginning of the siege, Mehmed the Conqueror made an offer to Constantine XI. In exchange for the surrender of Constantinople, the emperor 's life would be spared and he would continue to rule in Mystra. Constantine refused this offer. Instead he led the defense of the city and took an active part in the fighting along the land walls. At the same time, he used his diplomatic skills to maintain the necessary unity between the Genovese, Venetian, and Byzantine troops. As the city fell on May 29, 1453, Constantine is said to have remarked: "The city is fallen but I am alive. '' Realizing that the end had come, he reportedly discarded his purple cloak and led his remaining soldiers into a final charge, in which he was killed. With his death, Roman imperial succession came to an end, almost 1500 years after Augustus.
After the fall of Constantinople, Thomas Palaiologos, brother of Constantine XI, claimed the imperial title and tried to organize the remaining forces. His rule came to an end with the fall of the Morea, the last major Byzantine territory, in 1460. He then moved to Italy and continued to be recognized as Eastern emperor by the Christian powers.
His son Andreas Palaiologos continued claims on the Byzantine throne until he sold the title to Ferdinand of Aragon and Isabella of Castile, the grandparents of Holy Roman Emperor Charles V.
The concept of the Roman Empire was renewed in the West with the coronation of the king of the Franks, Charlemagne (Charles the Great), as Roman emperor by the Pope on Christmas Day, 800. This coronation had its roots in the decline of papal influence in the affairs of the Byzantine Empire, which coincided with the Byzantine Empire's own declining influence over politics in the West. The Pope saw no advantage to be derived from working with the Byzantine Empire, but as George Ostrogorsky points out, "an alliance with the famous conqueror of the Lombards, on the other hand... promised much".
The immediate response of the Eastern Roman Emperor was not welcoming. "At that time it was axiomatic that there could be only one Empire as there could be only one church '', writes Ostrogorsky. "The coronation of Charles the Great violated all traditional ideas and struck a hard blow at Byzantine interests, for hitherto Byzantium, the new Rome, had unquestionably been regarded as the sole Empire which had taken over the inheritance of the old Roman imperium. Conscious of its imperial rights, Byzantium could only consider the elevation of Charles the Great to be an act of usurpation. ''
Nikephoros I chose to ignore Charlemagne's claim to the imperial title, clearly recognizing the implications of this act. According to Ostrogorsky, "he even went so far as to refuse the Patriarch Nicephorus permission to dispatch the customary synodica to the Pope." Meanwhile, Charlemagne's power steadily increased: he subdued Istria and several Dalmatian cities during the reign of Irene, and his son Pepin brought Venice under Western hegemony, despite a successful counter-attack by the Byzantine fleet. Unable to counter this encroachment on Byzantine territory, Nikephoros' successor Michael I Rangabe capitulated; in return for the restoration of the captured territories, Michael sent Byzantine delegates to Aachen in 812 who recognized Charlemagne as Basileus. Michael did not, however, recognize him as Basileus of the Romans, a title he reserved for himself.
This line of Roman emperors was actually generally Germanic rather than Roman, but maintained their Roman - ness as a matter of principle. These emperors used a variety of titles (most frequently "Imperator Augustus '') before finally settling on Imperator Romanus Electus ("Elected Roman Emperor ''). Historians customarily assign them the title "Holy Roman Emperor '', which has a basis in actual historical usage, and treat their "Holy Roman Empire '' as a separate institution. To Latin Catholics of the time, the Pope was the temporal authority as well as spiritual authority, and as Bishop of Rome he was recognized as having the power to anoint or crown a new Roman emperor. The last man to be crowned by the pope (although in Bologna, not Rome) was Charles V. All his successors bore only a title of "Elected Roman Emperor ''.
This line of Emperors lasted until 1806 when Francis II dissolved the Empire during the Napoleonic Wars. Despite the existence of later potentates styling themselves "emperor '', such as the Napoleons, the Habsburg Emperors of Austria, and the Hohenzollern heads of the German Reich, this marked the end of the Western Empire. Although there is a living heir, Karl von Habsburg, to the Habsburg dynasty, as well as a Pope and pretenders to the positions of the electors, and although all the medieval coronation regalia are still preserved in Austria, the legal abolition of all aristocratic prerogatives of the former electors and the imposition of republican constitutions in Germany and Austria render quite remote any potential for a revival of the Holy Roman Empire.
|
what is the current version of internet explorer | Internet Explorer version History - wikipedia
Internet Explorer (formerly Microsoft Internet Explorer and Windows Internet Explorer, commonly abbreviated IE or MSIE) is a series of graphical web browsers developed by Microsoft and included as part of the Microsoft Windows line of operating systems, starting in 1995.
The first version of Internet Explorer, (at that time named Microsoft Internet Explorer, later referred to as Internet Explorer 1) made its debut on 17 August 1995. It was a reworked version of Spyglass Mosaic, which Microsoft licensed from Spyglass Inc., like many other companies initiating browser development. It was first released as part of the add - on package Plus! for Windows 95 that year. Later versions were available as free downloads, or in service packs, and included in the OEM service releases of Windows 95 and later versions of Windows.
Originally, Microsoft Internet Explorer ran only on Windows using the Intel 80386 (IA-32) processor. Current versions also run on x64, 32-bit ARMv7, PowerPC and IA-64. Versions on Windows have supported MIPS, Alpha AXP and 16-bit and 32-bit x86, but currently support only 32-bit or 64-bit. A version exists for Xbox 360 called Internet Explorer for Xbox using PowerPC, as well as an embedded OEM version called Pocket Internet Explorer, later rebranded Internet Explorer Mobile, which is currently based on Internet Explorer 9 and made for Windows Phone using ARMv7 and Windows CE; it was previously based on Internet Explorer 7 for Windows Mobile. It remains in development alongside the desktop versions.
Internet Explorer has supported other operating systems with Internet Explorer for Mac (using Motorola 68020 +, PowerPC) and Internet Explorer for UNIX (Solaris using SPARC and HP - UX using PA - RISC), which have been discontinued.
Since its first release, Microsoft has added features and technologies such as basic table display (in version 1.5); XMLHttpRequest (in version 5), which adds creation of dynamic web pages; and Internationalized Domain Names (in version 7), which allow Web sites to have native - language addresses with non-Latin characters. The browser has also received scrutiny throughout its development for use of third - party technology (such as the source code of Spyglass Mosaic, used without royalty in early versions) and security and privacy vulnerabilities, and both the United States and the European Union have alleged that integration of Internet Explorer with Windows has been to the detriment of other browsers.
The latest stable release has an interface that allows it to be used both as a desktop application and as a Windows 8 application.
IE versions, over time, have had widely varying OS compatibility, ranging from being available for many platforms and several versions of Windows to only a few versions of Windows. Many versions of IE had some support for an older OS but stopped getting updates. The increased growth of the Internet in the 1990s and 2000s means that current browsers with small market shares have more total users than the entire market early on. For example, 90 % market share in 1997 would be roughly 60 million users, but by the start of 2007 90 % market share would equate to over 900 million users. The result is that later versions of IE6 had many more users in total than all the early versions put together.
The release of IE7 at the end of 2006 resulted in a collapse of IE6 market share; by February 2007, market share statistics showed IE6 at about 50 % and IE7 at 29 %. Regardless of the actual market share, the most compatible version of IE across operating systems was 5.x, which had versions available and supported for Mac OS 9 and Mac OS X, Unix, and most Windows versions for a short period in the late 1990s (although 4.x had a more unified codebase across versions). By 2007, IE had much narrower OS support, with the latest versions supporting only Windows XP Service Pack 2 and above. Internet Explorer 5.0, 5.5, 6.0, and 7.0 (experimental) have also been unofficially ported to the Linux operating system by the IEs4Linux project.
Microsoft Internet Explorer (later referred to as Internet Explorer 1) made its debut on 17 August 1995. It was a reworked version of Spyglass Mosaic which Microsoft had licensed, like many other companies initiating browser development, from Spyglass Inc. It came with the purchase of Microsoft Plus! for Windows 95 and with at least some OEM releases of Windows 95 without Plus!. It was installed as part of the Internet Jumpstart Kit in Plus! for Windows 95. The Internet Explorer team began with about six people in early development. Microsoft Internet Explorer 1.5 was released several months later for Windows NT and added support for basic HTML table rendering. Because Microsoft included the browser free of charge with its operating system, it did not pay per-copy royalties to Spyglass Inc, resulting in a lawsuit and a US$8 million settlement on January 22, 1997.
Although not included, this software can also be installed on the original release of Windows 95.
Microsoft Internet Explorer (that is, version 1.x) is no longer supported or available for download from Microsoft. However, archived versions of the software can be found on various websites.
Microsoft Internet Explorer came with an install routine replacing a manual installation required by many of the existing web browsers.
Microsoft Internet Explorer 2 was released for Windows 95, Windows NT 3.51, and NT 4.0 on 22 November 1995 (following a 2.0 beta in October). It featured support for JavaScript, SSL, cookies, frames, VRML, RSA, and Internet newsgroups. Version 2 was also the first release for Windows 3.1 and Macintosh System 7.0.1 (PPC or 68k), although the Mac version was not released until January 1996 for PPC, and April for 68k. Version 2.1 for the Mac came out in August 1996, although by that time Windows users were already receiving version 3.0. Version 2 was included in Windows 95 OSR 1 and Microsoft's Internet Starter Kit for Windows 95 in early 1996. It launched with twelve languages, including English, but by April 1996 this had been expanded to 24, 20, and 9 languages for Windows 95, Windows 3.1, and Mac, respectively. The 2.0i version supported double-byte character sets.
Microsoft Internet Explorer 3 was released on 13 August 1996 and went on to be much more popular than its predecessors. It was the first major browser with CSS support, although this support was only partial. It also introduced support for ActiveX controls, Java applets, inline multimedia, and the PICS system for content metadata. Version 3 also came bundled with Internet Mail and News, NetMeeting, and an early version of the Windows Address Book, and was itself included with Windows 95 OSR 2. Version 3 proved to be the first widely popular version of Internet Explorer, bringing with it increased scrutiny. In the months following its release, a number of security and privacy vulnerabilities were found by researchers and hackers. This version of Internet Explorer was the first to have the 'blue e' logo. The Internet Explorer team had grown to roughly 100 people during three months of development. The first major IE security hole, the Princeton Word Macro Virus Loophole, was discovered on 22 August 1996 in IE3.
Backwards compatibility was handled by allowing users who upgraded to IE3 to still use the previous version, because the installation renamed the old version (incorporating the old version number) and stored it in the same directory.
Microsoft Internet Explorer 4, released in September 1997, deepened the level of integration between the web browser and the underlying operating system. Installing version 4 on Windows 95 or Windows NT 4.0 and choosing Windows Desktop Update would result in the traditional Windows Explorer being replaced by a version more akin to a web browser interface, as well as the Windows desktop itself being web - enabled via Active Desktop. The integration with Windows, however, was subject to numerous packaging criticisms (see United States v. Microsoft). This option was no longer available with the installers for later versions of Internet Explorer, but was not removed from the system if already installed. Microsoft Internet Explorer 4 introduced support for Group Policy, allowing companies to configure and lock down many aspects of the browser 's configuration as well as support for offline browsing. Internet Mail and News was replaced with Outlook Express, and Microsoft Chat and an improved NetMeeting were also included. This version was also included with Windows 98. New features that allowed users to save and retrieve posts in comment forms were added, but they are not used today. Microsoft Internet Explorer 4.5 offered new features such as easier 128 - bit encryption. It also offered a dramatic stability improvement over prior versions, particularly the 68k version, which was especially prone to freezing.
Microsoft Internet Explorer 5, launched on 18 March 1999, and subsequently included with Windows 98 Second Edition and bundled with Office 2000, was another significant release that supported bi-directional text, ruby characters, XML, XSLT, and the ability to save web pages in MHTML format. IE5 was bundled with Outlook Express 5. Also, with the release of Microsoft Internet Explorer 5.0, Microsoft released the first version of XMLHttpRequest, giving birth to Ajax (even though the term "Ajax '' was not coined until years later). It was the last with a 16 - bit version. Microsoft Internet Explorer 5.01, a bug fix version included in Windows 2000, was released in December 1999. Microsoft Internet Explorer 5.5 followed in July 2000, improving its print preview capabilities, CSS and HTML standards support, and developer APIs; this version was bundled with Windows ME. However, version 5 was the last version for Mac and UNIX. Version 5.5 was the last to have Compatibility Mode, which allowed Microsoft Internet Explorer 4 to be run side by side with the 5. x. The IE team consisted of over 1,000 people by 1999, with funding on the order of US $100 million per year.
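The Ajax pattern that XMLHttpRequest enabled is easier to see with a short example. The following is a minimal TypeScript sketch using the later standardized browser API rather than the form IE5 originally shipped (which exposed the object through ActiveX as "Microsoft.XMLHTTP"); the URL and handler are illustrative assumptions, not taken from any Microsoft documentation.

    // Minimal sketch of the Ajax pattern enabled by XMLHttpRequest:
    // fetch data asynchronously and update the page without a full reload.
    // Assumes a browser (DOM) environment; "/data.json" is a made-up URL.
    const xhr = new XMLHttpRequest();
    xhr.open("GET", "/data.json", true); // true = asynchronous
    xhr.onreadystatechange = () => {
      // readyState 4 means the response has fully arrived
      if (xhr.readyState === 4 && xhr.status === 200) {
        console.log(xhr.responseText); // use the response to update the page
      }
    };
    xhr.send();

In IE5 itself the equivalent object would have been created via the ActiveX route rather than a global constructor, which is why early cross-browser Ajax code typically wrapped object creation in a try/catch cascade.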
Microsoft Internet Explorer 6 was released on 27 August 2001, a few months before Windows XP. This version included DHTML enhancements, content-restricted inline frames, and partial support of CSS level 1, DOM level 1, and SMIL 2.0. The MSXML engine was also updated to version 3.0. Other new features included a new version of the Internet Explorer Administration Kit (IEAK), Media bar, Windows Messenger integration, fault collection, automatic image resizing, P3P, and a new look-and-feel in line with the Luna visual style of Windows XP, when used on Windows XP. Internet Explorer 6.0 SP1 offered several security enhancements and coincided with the Windows XP SP1 patch release. In 2002, the Gopher protocol was disabled, and support for it was dropped in Internet Explorer 7. Internet Explorer 6.0 SV1 came out on 6 August 2004 for Windows XP SP2 and offered various security enhancements and new colour buttons on the user interface. Internet Explorer 6 updated the original 'blue e' logo to a lighter blue and more 3D look. Microsoft now considers IE6 to be an obsolete product and recommends that users upgrade to Internet Explorer 8. Some corporate IT users have not upgraded despite this, in part because some still use Windows 2000, which will not run Internet Explorer 7 or above. Microsoft has launched a website, http://ie6countdown.com/, with the goal of getting Internet Explorer 6 usage to drop below 1 percent worldwide. Its global usage was about 6 % as of October 2012 and about 6.3 % as of June 2013, and it differs heavily by country: while usage in Norway was 0.1 %, it was 21.3 % in the People's Republic of China. On 3 January 2012, Microsoft announced that usage of IE6 in the United States had dropped below 1 %.
Windows Internet Explorer 7 was released on 18 October 2006. It includes bug fixes, enhancements to its support for web standards, tabbed browsing with tab preview and management, a multiple - engine search box, a web feeds reader, Internationalized Domain Name support (IDN), Extended Validation Certificate support, and an anti-phishing filter. With IE7, Internet Explorer has been decoupled from the Windows Shell -- unlike previous versions, the Internet Explorer ActiveX control is not hosted in the Windows Explorer process, but rather runs in a separate Internet Explorer process. It is included with Windows Vista and Windows Server 2008, and is available for Windows XP Service Pack 2 and later, and Windows Server 2003 Service Pack 1 and later. The original release of Internet Explorer 7 required the computer to pass a Windows Genuine Advantage validation check prior to installing, but on October 5, 2007, Microsoft removed this requirement. As some statistics show, by mid-2008, Internet Explorer 7 market share exceeded that of Internet Explorer 6 in a number of regions.
Windows Internet Explorer 8 was released on March 19, 2009. It is the first version of IE to pass the Acid2 test, and the last of the major browsers to do so (in the later Acid3 test, it scores only 24/100). According to Microsoft, security, ease of use, and improvements in RSS, CSS, and Ajax support were its priorities for IE8.
Internet Explorer 8 is the last version of Internet Explorer to run on Windows Server 2003 and Windows XP; the following version, Internet Explorer 9, works only on Windows Vista and later. Support for Internet Explorer 8 is bound to the lifecycle of the Windows version it is installed on as it is considered an OS component, thus it is unsupported on Windows XP due to the end of extended support for the latter in April 2014. Effective January 12, 2016, Internet Explorer 8 is no longer supported on any client or server version of Windows, due to new policies specifying that only the newest version of IE available for a supported version of Windows will be supported. However several Windows Embedded versions will remain supported until their respective EOL, unless otherwise specified.
Windows Internet Explorer 9 was released on March 14, 2011. Development for Internet Explorer 9 began shortly after the release of Internet Explorer 8. Microsoft first announced Internet Explorer 9 at PDC 2009, and spoke mainly about how it takes advantage of hardware acceleration in DirectX to improve the performance of web applications and quality of web typography. At MIX 10, Microsoft showed and publicly released the first Platform Preview for Internet Explorer 9, a frame for IE9 's engine not containing any UI of the browser. Leading up to the release of the final browser, Microsoft released updated platform previews, each featuring improved JavaScript compiling (32 - bit version), improved scores on the Acid3 test, as well as additional HTML5 standards support, approximately every 6 weeks. Ultimately, eight platform previews were released. The first public beta was released at a special event in San Francisco, which was themed around "the beauty of the web ''. The release candidate was released on February 10, 2011, and featured improved performance, refinements to the UI, and further standards support. The final version was released during the South by Southwest (SXSW) Interactive conference in Austin, Texas, on March 14, 2011.
Internet Explorer 9 is supported only on Windows 7, Windows Server 2008, and Windows Server 2008 R2, and was supported on Windows Vista SP2. It supports several CSS 3 properties (including border-radius, box-shadow, etc.), and supports embedded ICC v2 or v4 colour profiles via Windows Color System. The 32-bit version has faster JavaScript performance, due to a new JavaScript engine called "Chakra". It also features hardware-accelerated graphics rendering using Direct2D, hardware-accelerated text rendering using DirectWrite, hardware-accelerated video rendering using Media Foundation, imaging support provided by Windows Imaging Component, and high-fidelity printing powered by the XPS print pipeline. IE9 also supports the HTML5 video and audio tags and the Web Open Font Format. Internet Explorer 9 initially scored 95/100 on the Acid3 test, but has scored 100/100 since the test was updated in September 2011.
Internet Explorer was to be omitted from Windows 7 and Windows Server 2008 R2 in Europe, but Microsoft ultimately included it, with a browser option screen allowing users to select any of several web browsers (including Internet Explorer).
Internet Explorer is now available on Xbox 360 with Kinect support, as of October 2012.
Windows Internet Explorer 10 became generally available on October 26, 2012, alongside Windows 8 and Windows Server 2012; it remains supported on Windows Server 2012, while Windows Server 2012 R2 supports only Internet Explorer 11. It became available for Windows 7 on February 26, 2013. Microsoft announced Internet Explorer 10 in April 2011, at MIX 11 in Las Vegas, releasing the first Platform Preview at the same time; at the show, Internet Explorer 10 was said to have been in development for about three weeks. This release further improves upon standards support, including HTML5 drag and drop and CSS3 gradients. Internet Explorer 10 drops support for Windows Vista and runs only on Windows 7 Service Pack 1 and later. Internet Explorer 10 Release Preview was also released on the Windows 8 Release Preview platform.
Internet Explorer 11 is featured in a Windows 8.1 update which was released on October 17, 2013. It includes an incomplete mechanism for syncing tabs, a major update to the developer tools, enhanced scaling for high DPI screens, HTML5 prerender and prefetch, hardware-accelerated JPEG decoding, closed captioning, and HTML5 full screen, and it is the first version of Internet Explorer to support WebGL and Google's SPDY protocol (starting at v3). This version of IE has features dedicated to Windows 8.1, including cryptography (WebCrypto), adaptive bitrate streaming (Media Source Extensions) and Encrypted Media Extensions.
Internet Explorer 11 was made available for Windows 7 users to download on November 7, 2013, with Automatic Updates in the following weeks.
Internet Explorer 11 's user agent string now identifies the agent as "Trident '' (the underlying layout engine) instead of "MSIE ''. It also announces compatibility with Gecko (the layout engine of Firefox).
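As an illustration, the IE11 desktop user agent on Windows 8.1 took roughly the form "Mozilla/5.0 (Windows NT 6.3; Trident/7.0; rv:11.0) like Gecko"; exact tokens vary by platform and configuration. The TypeScript sketch below shows how a page might recognize IE11 through the Trident token now that "MSIE" is absent; the function name and regular expression are illustrative assumptions, not an official detection recipe.

    // Hedged sketch: recognize IE11 by its Trident engine token.
    // Legacy checks that look for "MSIE" no longer match IE11.
    function looksLikeIE11(userAgent: string): boolean {
      // Trident/7.0 is the IE11 layout engine; rv:11.0 is its revision marker.
      return /Trident\/7\.0/.test(userAgent) && /rv:11\.0/.test(userAgent);
    }

    // Example: true for the Windows 8.1 desktop user agent shown above.
    console.log(looksLikeIE11(
      "Mozilla/5.0 (Windows NT 6.3; Trident/7.0; rv:11.0) like Gecko"
    ));

User-agent sniffing of this kind is exactly what the "like Gecko" token was intended to defeat, which is why feature detection was generally recommended over string matching.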
Microsoft claimed that Internet Explorer 11, running the WebKit SunSpider JavaScript Benchmark, was the fastest browser as of October 15, 2013.
|
who did the us fight in the war of 1812 | War of 1812 - Wikipedia
Result: Treaty of Ghent
Belligerents: the United States against the British Empire, Tecumseh's Confederacy, and Bourbon Spain
Casualties: about 2,200 Americans and 1,160 British killed in action
Theatres: East Coast; Great Lakes / Saint Lawrence River; West Indies / Gulf Coast; Pacific Ocean
The War of 1812 (1812 -- 1815) was a conflict fought between the United States, the United Kingdom and their respective allies. Historians in Britain often see it as a minor theater of the Napoleonic Wars; in the United States and Canada, it is seen as a war in its own right.
Since the outbreak of war with Napoleonic France, Britain had enforced a naval blockade to choke off neutral trade to France, which the United States contested as illegal under international law. To man the blockade, Britain impressed American merchant sailors into the Royal Navy. Incidents such as the Chesapeake -- Leopard Affair inflamed anti-British sentiment. In 1811, the British were in turn outraged by the Little Belt Affair, which resulted in the deaths of 11 British sailors. British political support for a Native American buffer state, which conducted raids on American settlers on the frontier, hindered American expansion. On June 18, 1812, President James Madison, after receiving heavy pressure from the War Hawks in Congress, signed the American declaration of war into law. Senior figures such as Lord Liverpool and Lord Castlereagh believed it to have been an opportunistic ploy to annex Canada while Britain was fighting a war with France. The view was shared in much of New England, whose leaders bitterly disputed the numbers of US sailors the War Hawks claimed had been impressed by the British.
With the majority of its army in Europe fighting Napoleon, the British adopted a defensive strategy, though the war's first engagement was an ill-fated British assault on Sacket's Harbor, New York. American prosecution of the war effort suffered from its unpopularity, especially in New England, where it was derogatorily referred to as "Mr. Madison's War". American defeats at the Siege of Detroit and the Battle of Queenston Heights thwarted attempts to seize Upper Canada, improving British morale. American attempts to invade Montreal also failed. In 1813, the Americans won control of Lake Erie at the Battle of Lake Erie and shattered Tecumseh's Confederacy at the Battle of the Thames, securing a primary war goal. At sea, the powerful Royal Navy blockaded the American coast, allowing the British to strike American trade at will. In 1814, the Burning of Washington took place. The Americans repulsed the British at the Battle of Plattsburgh, ending an attempt to invade the north, and at the Battle of Baltimore the British threat to the mid-Atlantic states was defeated.
At home, the British faced mounting opposition to wartime taxation and demands to reopen trade with America. With the abdication of Napoleon, the blockade of France and the impressment of American sailors ceased to be live issues. The British were then able to increase the strength of the blockade on the United States coast, annihilating American maritime trade and bringing the United States government near to bankruptcy. Peace negotiations began in August 1814, and the Treaty of Ghent was signed on December 24, as neither side wanted to continue fighting. News of the peace took some time to reach America; unaware that the treaty had been signed, British forces invaded Louisiana and were defeated at the Battle of New Orleans in January 1815. The victory and the subsequent end of the war were seen as restoring American honour after the failure to invade Canada and the bottling up of most of the United States Navy, leading to the collapse of anti-war sentiment and ushering in the Era of Good Feelings. Propaganda within the country meant that the capture of the American flagship USS President the following week was largely overlooked. News of the treaty arrived shortly thereafter, halting military operations. The treaty was unanimously ratified by the United States on February 17, 1815, ending the war with status quo ante bellum (no boundary changes).
Historians have long debated the relative weight of the multiple reasons underlying the origins of the War of 1812. This section summarizes several contributing factors which resulted in the declaration of war by the United States.
As Risjord (1961) notes, a powerful motivation for the Americans was the desire to uphold national honour in the face of what they considered to be British insults such as the Chesapeake -- Leopard Affair. H.W. Brands says, "The other war hawks spoke of the struggle with Britain as a second war of independence; (Andrew) Jackson, who still bore scars from the first war of independence held that view with special conviction. The approaching conflict was about violations of American rights, but it was also about vindication of American identity. '' Americans at the time and historians since often called it the United States ' "Second War of Independence ''.
The British too were offended by what they considered insults such as the Little Belt Affair. This led to the British having a particular interest in capturing the United States flagship President which they would eventually succeed at doing in 1815.
In 1807, Britain introduced a series of trade restrictions via the Orders in Council to impede neutral trade with France, which Britain was then fighting in the Napoleonic Wars. The United States contested these restrictions as illegal under international law. Historian Reginald Horsman states, "a large section of influential British opinion, both in the government and in the country, thought that America presented a threat to British maritime supremacy. ''
The American merchant marine had come close to doubling between 1802 and 1810, making it by far the largest neutral fleet. Britain was the largest trading partner, receiving 80 % of U.S. cotton and 50 % of other U.S. exports. The British public and press were resentful of the growing mercantile and commercial competition. The United States ' view was that Britain 's restrictions violated its right to trade with others.
During the Napoleonic Wars, the Royal Navy expanded to 176 ships of the line and 600 ships overall, requiring 140,000 sailors to man them. While the Royal Navy could man its ships with volunteers in peacetime, in wartime it competed with merchant shipping and privateers for a small pool of experienced sailors, and it turned to impressment from ashore and from foreign or domestic shipping when it could not operate its ships with volunteers alone.
The United States believed that British deserters had a right to become U.S. citizens. Britain did not recognize a right whereby a British subject could relinquish his status as a British subject, emigrate and transfer his national allegiance as a naturalized citizen to any other country. This meant that in addition to recovering naval deserters, it considered any United States citizens who were born British liable for impressment. Aggravating the situation was the reluctance of the United States to issue formal naturalization papers and the widespread use of unofficial or forged identity or protection papers by sailors. This made it difficult for the Royal Navy to distinguish Americans from non-Americans and led it to impress some Americans who had never been British. Some gained freedom on appeal. Thus while the United States recognized British - born sailors on American ships as Americans, Britain did not. It was estimated by the Admiralty that there were 11,000 naturalized sailors on United States ships in 1805. U.S. Secretary of the Treasury Albert Gallatin stated that 9,000 U.S. sailors were born in Britain. Moreover, a great number of these British born sailors were Irish. An investigation by Captain Isaac Chauncey in 1808 found that 58 % of sailors based in New York City were either naturalized citizens or recent immigrants, the majority of these foreign born sailors (134 of 150) being from Britain. Moreover, 80 of the 134 British sailors were Irish.
American anger at impressment grew when British frigates were stationed just outside U.S. harbours in view of U.S. shores and searched ships for contraband and impressed men while within U.S. territorial waters. Well publicized impressment actions such as the Leander Affair and the Chesapeake -- Leopard Affair outraged the American public.
The British public in turn were outraged by the Little Belt Affair, in which a larger American ship clashed with a small British sloop, resulting in the deaths of 11 British sailors. Both sides claimed the other fired first, but the British public in particular blamed the U.S. for attacking a smaller vessel, with calls for revenge by some newspapers, while the U.S. was encouraged by the fact they had won a victory over the Royal Navy. The U.S. Navy also forcibly recruited British sailors but the British government saw impressment as commonly accepted practice and preferred to rescue British sailors from American impressment on a case - by - case basis.
The Northwest Territory, which consisted of the modern states of Ohio, Indiana, Illinois, Michigan, and Wisconsin, was the battleground for conflict between the Native American nations and the United States. The British Empire had ceded the area to the United States in the Treaty of Paris in 1783, both sides ignoring the fact that the land was already inhabited by various Native American nations. These included the Miami, Winnebago, Shawnee, Fox, Sauk, Kickapoo, Delaware and Wyandot. Some warriors, who had left their nations of origin, followed Tenskwatawa, the Shawnee Prophet and the brother of Tecumseh. Tenskwatawa had a vision of purifying his society by expelling the "children of the Evil Spirit": the American settlers. The Indians wanted to create their own state in the Northwest, which would end the American threat forever, as it became clear that the Americans wanted all of the land in the Old Northwest for themselves. Tenskwatawa and Tecumseh formed a confederation of numerous tribes to block American expansion. The British saw the Native American nations as valuable allies and a buffer to their Canadian colonies, and provided them with arms. Attacks on American settlers in the Northwest further aggravated tensions between Britain and the United States. Raiding grew more common in 1810 and 1811; Westerners in Congress found the raids intolerable and wanted them permanently ended. British policy towards the Indians of the Northwest was torn between the desire to keep the Americans occupied there and to preserve a region that provided rich profits for Canadian fur traders, and the fear that too much support for the Indians would cause a war with the United States. Though Tecumseh's plans for an Indian state in the Northwest would have benefited British North America by making it more defensible, the defeats suffered by Tecumseh's confederation made the British leery of going too far in support of what was probably a losing cause, and in the months leading up to the war, British diplomats attempted to defuse tensions on the frontier.
The confederation 's raids and existence hindered American expansion into rich farmlands in the Northwest Territory. Pratt writes:
There is ample proof that the British authorities did all in their power to hold or win the allegiance of the Indians of the Northwest with the expectation of using them as allies in the event of war. Indian allegiance could be held only by gifts, and to an Indian no gift was as acceptable as a lethal weapon. Guns and ammunition, tomahawks and scalping knives were dealt out with some liberality by British agents.
However, according to the U.S Army Center of Military History, the "land - hungry frontiersmen '', with "no doubt that their troubles with the Native Americans were the result of British intrigue '', exacerbated the problem by "(circulating stories) after every Native American raid of British Army muskets and equipment being found on the field ''. Thus, "the westerners were convinced that their problems could best be solved by forcing the British out of Canada ''.
The British had the long - standing goal of creating a large, "neutral '' Native American state that would cover much of Ohio, Indiana, and Michigan. They made the demand as late as the fall of 1814 at the peace conference, but lost control of western Ontario in 1813 at key battles on and around Lake Erie. These battles destroyed the Indian confederacy which had been the main ally of the British in that region, weakening its negotiating position. Although the area remained under British or British - allied Native Americans ' control until the end of the war, the British, at American insistence and with higher priorities, dropped the demands.
American expansion into the Northwest Territory was being obstructed by various Indian tribes since the end of the Revolution, who were supplied and encouraged by the British. Americans on the western frontier demanded that interference be stopped. There is dispute, however, over whether or not the American desire to annex Canada brought on the war. Several historians believe that the capture of Canada was intended only as a means to secure a bargaining chip, which would then be used to force Britain to back down on the maritime issues. It would also cut off food supplies for Britain 's West Indian colonies, and temporarily prevent the British from continuing to arm the Indians. However, many historians believe that a desire to annex Canada was a cause of the war. This view was more prevalent before 1940, but remains widely held today. Congressman Richard Mentor Johnson told Congress that the constant Indian atrocities along the Wabash River in Indiana were enabled by supplies from Canada and were proof that "the war has already commenced... I shall never die contented until I see England 's expulsion from North America and her territories incorporated into the United States. ''
Madison believed that British economic policies designed to foster imperial preference were harming the American economy, and that as long as British North America existed, there was a conduit for American smugglers who were undercutting his trade policies, which thus required that the United States annex British North America. Furthermore, Madison believed that the Great Lakes - St. Lawrence trade route might become the main trade route for the export of North American goods to Europe at the expense of the U.S. economy, and that if the United States controlled the resources of British North America, such as the timber the British needed for their navy, then Britain would be forced to change its maritime policies, which had so offended American public opinion. Many Americans believed it was only natural that their country should swallow up North America, with one Congressman, John Harper, saying in a speech that "the Author of Nature Himself had marked our limits in the south, by the Gulf of Mexico and on the north, by the regions of eternal frost". Upper Canada (modern southern Ontario) had been settled mostly by Revolution-era exiles from the United States (United Empire Loyalists) or postwar American immigrants. The Loyalists were hostile to union with the United States, while the immigrant settlers were generally uninterested in politics and remained neutral or supported the British during the war. The Canadian colonies were thinly populated and only lightly defended by the British Army. Americans then believed that many men in Upper Canada would rise up and greet an American invading army as liberators. That did not happen. One reason American forces retreated after one successful battle inside Canada was that they could not obtain supplies from the locals. But the Americans thought that the possibility of local support suggested an easy conquest, as former President Thomas Jefferson believed: "The acquisition of Canada this year, as far as the neighborhood of Quebec, will be a mere matter of marching, and will give us the experience for the attack on Halifax, the next and final expulsion of England from the American continent."
Annexation was supported by American border businessmen who wanted to gain control of Great Lakes trade.
Carl Benn noted that the War Hawks ' desire to annex the Canadas was similar to the enthusiasm for the annexation of Spanish Florida by inhabitants of the American South; both expected war to facilitate expansion into long - desired lands and end support for hostile Indian tribes (Tecumseh 's Confederacy in the North and the Creek in the South).
Tennessee Congressman Felix Grundy considered it essential to acquire Canada to preserve domestic political balance, arguing that annexing Canada would maintain the free state - slave state balance, which might otherwise be thrown off by the acquisition of Florida and the settlement of the southern areas of the new Louisiana Purchase. However historian Richard Maass argued in 2015 that the expansionist theme is a myth that goes against the "relative consensus among experts that the primary U.S. objective was the repeal of British maritime restrictions ''. He argues that consensus among scholars is that the United States went to war "because six years of economic sanctions had failed to bring Britain to the negotiating table, and threatening the Royal Navy 's Canadian supply base was their last hope. '' Maass agrees that theoretically expansionism might have tempted Americans, but finds that "leaders feared the domestic political consequences of doing so. Notably, what limited expansionism there was focused on sparsely populated western lands rather than the more populous eastern settlements (of Canada). ''
Horsman argued expansionism played a role as a secondary cause after maritime issues, noting that many historians have mistakenly rejected expansionism as a cause for the war. He notes that it was considered key to maintaining sectional balance between free and slave states thrown off by American settlement of the Louisiana Territory, and widely supported by dozens of War Hawk congressmen such as John A. Harper, Felix Grundy, Henry Clay, and Richard M. Johnson, who voted for war with expansion as a key aim.
In disagreeing with those interpretations that have simply stressed expansionism and minimized maritime causation, historians have ignored deep - seated American fears for national security, dreams of a continent completely controlled by the republican United States, and the evidence that many Americans believed that the War of 1812 would be the occasion for the United States to achieve the long - desired annexation of Canada... Thomas Jefferson well - summarized American majority opinion about the war... to say "that the cession of Canada... must be a sine qua non (i.e. indispensable condition) at a treaty of peace ''.
However, Horsman states that in his view "the desire for Canada did not cause the War of 1812 '' and that "The United States did not declare war because it wanted to obtain Canada, but the acquisition of Canada was viewed as a major collateral benefit of the conflict. ''
Historian Alan Taylor says that many Republican congressmen, such as Richard M. Johnson, John A. Harper and Peter B. Porter, "longed to oust the British from the continent and to annex Canada. '' A few Southerners opposed this, fearing an imbalance of free and slave states if Canada was annexed, while anti-Catholicism also caused many to oppose annexing mainly Catholic Lower Canada, believing its French - speaking inhabitants "unfit... for republican citizenship ''. Even major figures such as Henry Clay and James Monroe expected to keep at least Upper Canada in the event of an easy conquest. Notable American generals, like William Hull were led by this sentiment to issue proclamations to Canadians during the war promising republican liberation through incorporation into the United States; a proclamation the government never officially disavowed. General Alexander Smyth similarly declared to his troops that when they invaded Canada "You will enter a country that is to become one of the United States. You will arrive among a people who are to become your fellow - citizens. '' A lack of clarity about American intentions undercut these appeals, however.
David and Jeanne Heidler argue that "Most historians agree that the War of 1812 was not caused by expansionism but instead reflected a real concern of American patriots to defend United States ' neutral rights from the overbearing tyranny of the British Navy. That is not to say that expansionist aims would not potentially result from the war. ''
However, they also argue otherwise, saying that "acquiring Canada would satisfy America 's expansionist desires '', also describing it as a key goal of western expansionists, who, they argue, believed that "eliminating the British presence in Canada would best accomplish '' their goal of halting British support for Indian raids. They argue that the "enduring debate '' is over the relative importance of expansionism as a factor, and whether "expansionism played a greater role in causing the War of 1812 than American concern about protecting neutral maritime rights. ''
While the British government was largely oblivious to the deteriorating North American situation because of its involvement in a continent - wide European War, the U.S. was in a period of significant political conflict between the Federalist Party (based mainly in the Northeast), which favoured a strong central government and closer ties to Britain, and the Democratic - Republican Party (with its greatest power base in the South and West), which favoured a weak central government, preservation of states ' rights (including slavery), expansion into Indian land, and a stronger break with Britain. By 1812, the Federalist Party had weakened considerably, and the Republicans, with James Madison completing his first term of office and control of Congress, were in a strong position to pursue their more aggressive agenda against Britain. Throughout the war, support for the U.S. cause was weak (or sometimes non-existent) in Federalist areas of the Northeast. Few men volunteered to serve; the banks avoided financing the war. The negativism of the Federalists, especially as exemplified by the Hartford Convention of 1814 -- 15, ruined its reputation and the Party survived only in scattered areas. By 1815 there was broad support for the war from all parts of the country. This allowed the triumphant Democratic - Republicans to adopt some Federalist policies, such as a national bank, which Madison reestablished in 1816.
The United States Navy (USN) had 7,250 sailors and Marines in 1812. The American Navy was a well-trained, professional force that had fought well against the Barbary pirates and France in the Quasi-War. The USN had 13 ocean-going warships, three of them "super-frigates"; its principal problem was a lack of funding, as many in Congress did not see the need for a strong navy. The American warships were all well-built ships that were equal, if not superior, to British ships of a similar class (British shipbuilding emphasized quantity over quality). However, the biggest ships in the USN were frigates, and the Americans had no ships-of-the-line capable of engaging in a fleet action with the Royal Navy at sea.
On the high seas, the Americans could only pursue a strategy of guerre de course, taking British merchantmen with their frigates and privateers. Before the war, the USN was largely concentrated on the Atlantic coast, and at the war's outbreak it had only two gunboats on Lake Champlain, one brig on Lake Ontario and another brig on Lake Erie.
The United States Army was much larger than the British Army in North America, but leadership in the American officer corps was inconsistent, with some officers proving themselves outstanding but many others inept, owing their positions to political favors. American soldiers were well trained and brave, but in the early battles they were often led by officers of questionable ability. Congress was hostile to a standing army, and during the war the U.S. government called out 450,000 men from the state militias, a number slightly smaller than the entire population of British North America. However, the state militias were poorly trained, armed and led. After the Battle of Bladensburg in 1814, in which the Maryland and Virginia militias were soundly defeated by the British Army, President Madison commented: "I could never have believed so great a difference existed between regular troops and a militia force, if I had not witnessed the scenes of this day."
The British Royal Navy was a well-led, professional force, described by the Canadian historian Carl Benn as the world's most powerful navy. However, as long as the war with France continued, North America was a secondary concern. In 1813, France had 80 ships-of-the-line while building another 35, so containing the French fleet had to be the main British naval concern. In Upper Canada, the British had the Provincial Marine, which was essential for keeping the army supplied since the roads in Upper Canada were abysmal. On Lake Ontario and the St. Lawrence, the Royal Navy had two schooners, while the Provincial Marine maintained four small warships on Lake Erie. The British Army in North America was a very professional and well-trained force, but it suffered from being outnumbered.
The militias of Upper Canada and Lower Canada had a much lower level of military effectiveness. Nevertheless, Canadian militia (and locally recruited regular units known as "Fencibles") were often more reliable than American militia, particularly when defending their own territory. As such, they played pivotal roles in various engagements, including the Battle of Chateauguay, where Canadian and Indian forces alone stopped a much larger American force despite not having assistance from regular British units.
Because of their smaller population compared to the whites, and lacking artillery, the Indian allies of the British avoided pitched battles and instead relied on irregular warfare, including raids and ambushes. Given their low population, it was crucial to avoid heavy losses; in general, Indian chiefs sought to fight only under favorable conditions, and any battle that promised heavy losses was avoided if possible. The main Indian weapons were a mixture of tomahawks, knives, swords, rifles, clubs, arrows and muskets. Indian warriors were brave, but the need to preserve their numbers meant that their tactics favored a defensive rather than an offensive style of war.
In the words of Benn, those Indians fighting with the Americans provided the U.S with their "most effective light troops '' while the British desperately needed the Indian tribes to compensate for their numerical inferiority. The Indians, regardless of which side they fought for, saw themselves as allies, not subordinates and Indian chiefs did what they viewed as best for their tribes, much to the annoyance of both American and British generals, who often complained about their unreliability.
On June 1, 1812, President James Madison sent a message to Congress recounting American grievances against Great Britain, though not specifically calling for a declaration of war. After Madison 's message, the House of Representatives deliberated for four days behind closed doors before voting 79 to 49 (61 %) in favor of the first declaration of war. The Senate concurred in the declaration by a 19 to 13 (59 %) vote in favour. The conflict began formally on June 18, 1812, when Madison signed the measure into law and proclaimed it the next day. This was the first time that the United States had declared war on another nation, and the Congressional vote would prove to be the closest vote to formally declare war in American history. The Authorization for Use of Military Force Against Iraq Resolution of 1991, while not a formal declaration of war, was a closer vote. None of the 39 Federalists in Congress voted in favour of the war; critics of war subsequently referred to it as "Mr. Madison 's War. ''
Earlier in London on May 11, an assassin had killed Prime Minister Spencer Perceval, which resulted in Lord Liverpool coming to power. Liverpool wanted a more practical relationship with the United States. On June 23, he issued a repeal of the Orders in Council, but the United States was unaware of this, as it took three weeks for the news to cross the Atlantic. On June 28, 1812, HMS Colibri was despatched from Halifax under a flag of truce to New York. On July 9, she anchored off Sandy Hook, and three days later sailed on her return with a copy of the declaration of war, in addition to transporting the British ambassador to the United States, Mr. Foster and consul, Colonel Barclay. She arrived in Halifax, Nova Scotia eight days later. The news of the declaration took even longer to reach London.
However, the British commander in Upper Canada received news of the American declaration of war much faster. In response to the U.S. declaration of war, Isaac Brock issued a proclamation alerting the citizenry in Upper Canada of the state of war and urging all military personnel "to be vigilant in the discharge of their duty '' to prevent communication with the enemy and to arrest anyone suspected of helping the Americans. He also issued orders to the commander of the British post at Fort St. Joseph to initiate offensive operations against U.S. forces in northern Michigan, who it turned out, were not yet aware of their own government 's declaration of war. The resulting Siege of Fort Mackinac on July 17 was the first major land engagement of the war, and ended in an easy British victory.
The war was conducted in three theatres: at sea, where the warring parties' warships and privateers preyed on each other's merchant shipping; along the land and lake frontier between the United States and British North America, from the Great Lakes to the Saint Lawrence; and in the American South and along the Gulf Coast.
Although the outbreak of the war had been preceded by years of angry diplomatic dispute, neither side was ready for war when it came. Britain was heavily engaged in the Napoleonic Wars, most of the British Army was deployed in the Peninsular War (in Portugal and Spain), and the Royal Navy was compelled to blockade most of the coast of Europe. The number of British regular troops present in Canada in July 1812 was officially stated to be 6,034, supported by Canadian militia. Throughout the war, the British Secretary of State for War and the Colonies was Earl Bathurst. For the first two years of the war, he could spare few troops to reinforce North America and urged the commander - in - chief in North America (Lieutenant General Sir George Prévost) to maintain a defensive strategy. The naturally cautious Prévost followed these instructions, concentrating on defending Lower Canada at the expense of Upper Canada (which was more vulnerable to American attacks) and allowing few offensive actions.
The United States was not prepared to prosecute a war, for Madison had assumed that the state militias would easily seize Canada and that negotiations would follow. In 1812, the regular army consisted of fewer than 12,000 men. Congress authorized the expansion of the army to 35,000 men, but the service was voluntary and unpopular; it offered poor pay, and there were few trained and experienced officers, at least initially. The militia objected to serving outside their home states, were not open to discipline, and performed poorly against British forces when outside their home states. American prosecution of the war suffered from its unpopularity, especially in New England, where anti-war speakers were vocal. "Two of the Massachusetts members (of Congress), Seaver and Widgery, were publicly insulted and hissed on Change in Boston; while another, Charles Turner, member for the Plymouth district, and Chief - Justice of the Court of Sessions for that county, was seized by a crowd on the evening of August 3, (1812) and kicked through the town ''. The United States had great difficulty financing its war. It had disbanded its national bank, and private bankers in the Northeast were opposed to the war. The United States was able to obtain financing from London - based Barings Bank to cover overseas bond obligations. The failure of New England to provide militia units or financial support was a serious blow. Threats of secession by New England states were loud, as evidenced by the Hartford Convention. Britain exploited these divisions, blockading only southern ports for much of the war and encouraging smuggling.
American leaders assumed that Canada could be easily overrun. Former President Jefferson optimistically referred to the conquest of Canada as "a matter of marching ''. Many Loyalist Americans had migrated to Upper Canada after the Revolutionary War. There was also significant non-Loyalist American immigration to the area due to the offer of land grants to immigrants, and the U.S. assumed the latter would favour the American cause, but they did not. In prewar Upper Canada, General Prévost was in the unusual position of having to purchase many provisions for his troops from the American side. This peculiar trade persisted throughout the war in spite of an abortive attempt by the U.S. government to curtail it. In Lower Canada, which was much more populous, support for Britain came from the English elite with strong loyalty to the Empire, and from the Canadian elite, who feared American conquest would destroy the old order by introducing Protestantism, Anglicization, republican democracy, and commercial capitalism; and weakening the Catholic Church. The Canadian inhabitants feared the loss of a shrinking area of good lands to potential American immigrants.
In 1812 -- 13, British military experience prevailed over inexperienced American commanders. Geography dictated that operations would take place in the west: principally around Lake Erie, near the Niagara River between Lake Erie and Lake Ontario, and near the Saint Lawrence River area and Lake Champlain. This was the focus of the three - pronged attacks by the Americans in 1812. Although cutting the St. Lawrence River through the capture of Montreal and Quebec would have made Britain 's hold in North America unsustainable, the United States began operations first in the western frontier because of the general popularity there of a war with the British, who had sold arms to the Native Americans opposing the settlers.
The British scored an important early success when their detachment at St. Joseph Island, on Lake Huron, learned of the declaration of war before the nearby American garrison at the important trading post at Mackinac Island in Michigan. A scratch force landed on the island on July 17, 1812, and mounted a gun overlooking Fort Mackinac. After the British fired one shot from their gun, the Americans, taken by surprise, surrendered. This early victory encouraged the natives, and large numbers moved to help the British at Amherstburg. The island totally controlled access to the Old Northwest, giving the British nominal control of this area, and, more vitally, a monopoly on the fur trade.
An American army under the command of William Hull invaded Canada on July 12, with his forces chiefly composed of untrained and ill - disciplined militiamen. Once on Canadian soil, Hull issued a proclamation ordering all British subjects to surrender, or "the horrors, and calamities of war will stalk before you ''. This led many of the British forces to defect. John Bennett, printer and publisher of the York Gazette & Oracle, was a prominent defector. By contrast, Andrew Mercer, who had moved the publication 's production to his house, had his press and type destroyed during the American occupation, an example of what happened to resisters. Hull also threatened to kill any British prisoner caught fighting alongside a native. The proclamation helped stiffen resistance to the American attacks. Hull 's army was too weak in artillery and too badly supplied to achieve its objectives, and had to fight just to maintain its own lines of communication.
The senior British officer in Upper Canada, Major General Isaac Brock, felt that he should take bold measures to calm the settler population in Canada and to convince the aboriginals who were needed to defend the region that Britain was strong. He moved rapidly to Amherstburg near the western end of Lake Erie with reinforcements and immediately decided to attack Detroit. Hull, fearing that the British possessed superior numbers and that the Indians attached to Brock 's force would commit massacres if fighting began, surrendered Detroit without a fight on August 16. Knowing of British - instigated indigenous attacks on other locations, Hull had ordered the evacuation of the inhabitants of Fort Dearborn (Chicago) to Fort Wayne. After initially being granted safe passage, the inhabitants (soldiers and civilians) were attacked by Potawatomis on August 15 after travelling only 2 miles (3.2 km) in what is known as the Battle of Fort Dearborn. The fort was subsequently burned.
Brock promptly transferred himself to the eastern end of Lake Erie, where American General Stephen Van Rensselaer was attempting a second invasion. An armistice, arranged by Prévost in the hope that Britain 's renunciation of the Orders in Council to which the United States objected might lead to peace, prevented Brock from invading American territory. When the armistice ended, the Americans attempted an attack across the Niagara River on October 13, but suffered a crushing defeat at Queenston Heights. Brock was killed during the battle. While the professionalism of the American forces would improve by the war 's end, British leadership suffered after Brock 's death. A final attempt in 1812 by American General Henry Dearborn to advance north from Lake Champlain failed when his militia refused to advance beyond American territory.
In contrast to the American militia, the Canadian militia performed well. French Canadians, who found the anti-Catholic stance of most of the United States troublesome, and United Empire Loyalists, who had fought for the Crown during the American Revolutionary War, strongly opposed the American invasion. Many in Upper Canada were recent settlers from the United States who had no obvious loyalties to the Crown; nevertheless, while there were some who sympathized with the invaders, the American forces found strong opposition from men loyal to the Empire.
After Hull 's surrender of Detroit, General William Henry Harrison was given command of the U.S. Army of the Northwest. He set out to retake the city, which was now defended by Colonel Henry Procter in conjunction with Tecumseh. A detachment of Harrison 's army was defeated at Frenchtown along the River Raisin on January 22, 1813. Procter left the prisoners with an inadequate guard, who could not prevent some of his North American aboriginal allies from attacking and killing perhaps as many as sixty Americans, many of whom were Kentucky militiamen. The incident became known as the River Raisin Massacre. The defeat ended Harrison 's campaign against Detroit, and the phrase "Remember the River Raisin! '' became a rallying cry for the Americans.
In May 1813, Procter and Tecumseh laid siege to Fort Meigs in northwestern Ohio. American reinforcements arriving during the siege were defeated by the natives, but the fort held out. The Indians eventually began to disperse, forcing Procter and Tecumseh to return north to Canada. A second offensive against Fort Meigs also failed in July. In an effort to improve Indian morale, Procter and Tecumseh attempted to storm Fort Stephenson, a small American post on the Sandusky River, only to be repulsed with serious losses, marking the end of the Ohio campaign.
On Lake Erie, American commander Captain Oliver Hazard Perry fought the Battle of Lake Erie on September 10, 1813. His decisive victory at "Put - In - Bay '' ensured American military control of the lake, improved American morale after a series of defeats, and compelled the British to fall back from Detroit. This paved the way for General Harrison to launch another invasion of Upper Canada, which culminated in the U.S. victory at the Battle of the Thames on October 5, 1813, in which Tecumseh was killed.
Because of the difficulties of land communications, control of the Great Lakes and the St. Lawrence River corridor was crucial. When the war began, the British already had a small squadron of warships on Lake Ontario and had the initial advantage. To redress the situation, the Americans established a Navy yard at Sackett 's Harbor in northwestern New York. Commodore Isaac Chauncey took charge of the large number of sailors and shipwrights sent there from New York; they completed the second warship built there in a mere 45 days. Ultimately, almost 3,000 men worked at the naval shipyard, building eleven warships and many smaller boats and transports. Having regained the advantage by their rapid building program, Chauncey and Dearborn attacked York, the capital of Upper Canada on the northern shore of the lake, on April 27, 1813. The Battle of York was a "pyrrhic '' American victory, marred by looting and the burning of the small Provincial Parliament buildings and a library. The destruction provoked a desire for revenge among the British and Canadians; Governor George Prévost later demanded satisfaction, encouraging the British Admiralty to order officers operating in the Chesapeake Bay region to exact similar devastation on the American federal capital of Washington the following year. However, Kingston was strategically much more valuable to British supply and communications routes along the St. Lawrence corridor. Without control of Kingston, the U.S. Navy could not effectively control Lake Ontario or sever the British supply line from Lower Canada.
On May 25, 1813, the guns of the American Lake Ontario squadron, joined by those of Fort Niagara, began bombarding Fort George. On May 27, an American amphibious force from Lake Ontario assaulted Fort George on the northern end of the Niagara River and captured it without serious losses. The British also abandoned Fort Erie and headed towards Burlington Heights. With the British position in Upper Canada on the verge of collapse, the Iroquois Indians living along the banks of the Grand River considered changing sides and ignored a British appeal to come to their aid. The retreating British forces were not pursued, however, until they had largely escaped and organized a counteroffensive against the advancing Americans at the Battle of Stoney Creek on June 5. With Upper Canada on the line, the British launched a surprise attack at Stoney Creek at 2: 00 am, leading to much confused fighting. Though tactically a draw, the battle was a strategic British victory, as the Americans pulled back to Forty Mile Creek rather than continuing their advance into Upper Canada. At this point, the Six Nations living on the Grand River began to come out to fight for the British, as an American victory no longer seemed inevitable. The Iroquois ambushed an American patrol at Forty Mile Creek while the Royal Navy squadron based in Kingston came to bombard the American camp, leading General Dearborn to retreat to Fort George in the mistaken belief that he was outnumbered and outgunned. The British commander, General John Vincent, was heartened by the fact that more and more First Nations warriors were now arriving to assist him, providing about 800 additional men. On June 24, with the help of advance warning by Laura Secord, another American force was forced to surrender by a much smaller British and native force at the Battle of Beaver Dams, marking the end of the American offensive into Upper Canada. The British commander General Francis de Rottenberg did not have the strength to retake Fort George, so he instituted a blockade, hoping to starve the Americans into surrender. Meanwhile, Commodore James Lucas Yeo had taken charge of the British ships on the lake and mounted a counterattack, which was nevertheless repulsed at the Battle of Sackett 's Harbor. Thereafter, Chauncey 's and Yeo 's squadrons fought two indecisive actions, neither commander seeking a fight to the finish.
Late in 1813, the Americans abandoned the Canadian territory they occupied around Fort George. They set fire to the village of Newark (now Niagara - on - the - Lake) on December 10, 1813, incensing the Canadians and politicians in control. Many of the inhabitants were left without shelter, freezing to death in the snow. This led to British retaliation following the Capture of Fort Niagara on December 18, 1813. Early the next morning on December 19, the British and their native allies stormed the neighbouring town of Lewiston, New York, torching homes and buildings and killing about a dozen civilians. As the British were chasing the surviving residents out of town, a small force of Tuscarora natives intervened and stopped the pursuit, buying enough time for the locals to escape to safer ground. It is notable in that the Tuscaroras defended the Americans against their own Iroquois brothers, the Mohawks, who sided with the British. Later, the British attacked and burned Buffalo on December 30, 1813.
In 1814, the contest for Lake Ontario turned into a building race. Naval superiority shifted between the opposing fleets as each built new, bigger ships. However, neither was able to bring the other to battle when in a position of superiority, leaving the Engagements on Lake Ontario a draw. At war 's end, the British held the advantage with the 112 - gun HMS St Lawrence, but the Americans had laid down two even larger ships. The majority of these ships never saw action and were decommissioned after the war.
The British were potentially most vulnerable over the stretch of the St. Lawrence where it formed the frontier between Upper Canada and the United States. During the early days of the war, there was illicit commerce across the river. Over the winter of 1812 and 1813, the Americans launched a series of raids from Ogdensburg on the American side of the river, which hampered British supply traffic up the river. On February 21, Sir George Prévost passed through Prescott on the opposite bank of the river with reinforcements for Upper Canada. When he left the next day, the reinforcements and local militia attacked. At the Battle of Ogdensburg, the Americans were forced to retire.
For the rest of the year, Ogdensburg had no American garrison, and many residents of Ogdensburg resumed visits and trade with Prescott. This British victory removed the last American regular troops from the Upper St. Lawrence frontier and helped secure British communications with Montreal. Late in 1813, after much argument, the Americans made two thrusts against Montreal. Taking Montreal would have cut off the British forces in Upper Canada and thus potentially changed the war. The plan eventually agreed upon was for Major General Wade Hampton to march north from Lake Champlain and join a force under General James Wilkinson that would embark in boats and sail from Sackett 's Harbor on Lake Ontario and descend the St. Lawrence. Hampton was delayed by bad roads and supply problems, and his intense dislike of Wilkinson limited his willingness to support Wilkinson 's plan. On October 25, his 4,000 - strong force was defeated at the Chateauguay River by Charles de Salaberry 's smaller force of Canadian Voltigeurs and Mohawks. Salaberry 's force of Lower Canada militia and Indians numbered only 339, but had a strong defensive position. Wilkinson 's force of 8,000 set out on October 17, but was also delayed by bad weather. After learning that Hampton had been checked, Wilkinson heard that a British force under Captain William Mulcaster and Lieutenant Colonel Joseph Wanton Morrison was pursuing him, and by November 10, he was forced to land near Morrisburg, about 150 kilometres (90 mi) from Montreal. On November 11, Wilkinson 's rear guard, numbering 2,500, attacked Morrison 's force of 800 at Crysler 's Farm and was repulsed with heavy losses. After learning that Hampton could not renew his advance, Wilkinson retreated to the U.S. and settled into winter quarters. He resigned his command after a failed attack on a British outpost at Lacolle Mills. Had the Americans taken Montreal as planned, Upper Canada would almost certainly have been lost, and its capture would have been the greatest British defeat in the Canadas during the war; instead, the campaign ended in failure.
Rather than trying to take Montreal or Kingston, the Americans again chose to invade Upper Canada across the Niagara frontier, largely because they had occupied southwestern Upper Canada after their victory at Moraviantown, and it was believed in Washington that if the Americans could take the rest of Upper Canada, they could force the British to cede the province when it came time to negotiate the peace. The end of the war in Europe in April 1814 meant that the British could now redeploy their army to North America, so the Americans were anxious to hold Upper Canada in order to negotiate from a position of strength. The plan for 1814 was to invade Upper Canada via the Niagara frontier while sending another force to recapture Mackinac; the island was considered important because the British were sending supplies to the Indians in the Old Northwest from Montreal via Mackinac. By the middle of 1814, American generals, including Major Generals Jacob Brown and Winfield Scott, had drastically improved the fighting abilities and discipline of the army. The Americans ' renewed attack on the Niagara peninsula quickly captured Fort Erie on July 3, 1814, its garrison of 170 surrendering to the 5,000 Americans. General Phineas Riall rushed towards the frontier and, unaware of Fort Erie 's fall or the size of the American force, chose to give battle. Winfield Scott then gained a victory over an inferior British force at the Battle of Chippawa on July 5. The Americans brought overwhelming firepower to bear against the attacking British, who lost about 600 men to the Americans ' 350. An attempt to advance further ended with the hard - fought but inconclusive Battle of Lundy 's Lane on July 25. Both sides stood their ground, but after the battle the American commander, General Jacob Brown, pulled back to Fort Erie while the British did not pursue them.
The outnumbered Americans withdrew but withstood a prolonged Siege of Fort Erie. The British tried to storm the fort on August 14, 1814, but the failed assault cost them heavily: 950 killed, wounded, or captured, compared to only 84 dead and wounded on the American side. The besiegers were further weakened by exposure and shortage of supplies in their siege lines, and eventually the British raised the siege, but American Major General George Izard took over command on the Niagara front and followed up only halfheartedly. An American raid along the Grand River destroyed many farms, weakening British logistics. In October 1814 the Americans advanced into Upper Canada and engaged in skirmishes at Cook 's Mill, but pulled back when they heard that HMS St. Lawrence, the new 112 - gun British warship launched in Kingston that September, was on its way. The Americans lacked provisions, and eventually destroyed Fort Erie and retreated across the Niagara.
Meanwhile, following the abdication of Napoleon, 15,000 British troops were sent to North America under four of Wellington 's ablest brigade commanders. Fewer than half were veterans of the Peninsula and the rest came from garrisons. Prévost was ordered to neutralize American power on the lakes by burning Sackets Harbor, gain naval control of Lake Erie, Lake Ontario and the Upper Lakes, and defend Lower Canada from attack. He did defend Lower Canada but otherwise failed to achieve his objectives. Given the late season he decided to invade New York State. His army outnumbered the American defenders of Plattsburgh, but he was worried about his flanks so he decided he needed naval control of Lake Champlain. On the lake, the British squadron under Captain George Downie and the Americans under Master Commandant Thomas Macdonough were more evenly matched.
On reaching Plattsburgh, Prévost delayed the assault until the arrival of Downie in the hastily completed 36 - gun frigate HMS Confiance. Prévost forced Downie into a premature attack, but then unaccountably failed to provide the promised military backing. Downie was killed and his naval force defeated at the naval Battle of Plattsburgh in Plattsburgh Bay on September 11, 1814. The Americans now had control of Lake Champlain; Theodore Roosevelt later termed it "the greatest naval battle of the war ''. The successful land defence was led by Alexander Macomb. To the astonishment of his senior officers, Prévost then turned back, saying it would be too hazardous to remain on enemy territory after the loss of naval supremacy. Prévost was recalled, and in London a naval court - martial decided that the defeat had been caused principally by Prévost 's urging the squadron into premature action and then failing to afford the promised support from the land forces. Prévost died suddenly, just before his own court - martial was to convene. His reputation sank to a new low, as Canadians claimed that their militia under Brock had done the job and he had failed. More recently, however, historians have been kinder, measuring him not against Wellington but against his American foes. They judge Prévost 's preparations for defending the Canadas with limited means to be energetic, well - conceived, and comprehensive; against the odds, he had achieved the primary objective of preventing an American conquest.
To the east, the northern part of Massachusetts, soon to be Maine, was invaded. Fort Sullivan at Eastport was taken by Sir Thomas Hardy on July 11. Castine, Hampden, Bangor, and Machias were taken, and Castine became the main British base until April 15, 1815, when the British left, taking £ 10,750 in tariff duties, the "Castine Fund '', which was used to found Dalhousie University. Eastport was not returned to the United States until 1818.
The Mississippi River valley was the western frontier of the United States in 1812. The territory acquired in the Louisiana Purchase of 1803 contained almost no U.S. settlements west of the Mississippi except around Saint Louis and a few forts and trading posts. Fort Bellefontaine, an old trading post converted to a U.S. Army post in 1804, served as regional headquarters. Fort Osage, built in 1808 along the Missouri, was the westernmost U.S. outpost; it was abandoned at the start of the war. Fort Madison, built along the Mississippi in what is now Iowa, also dated from 1808 and had been repeatedly attacked by British - allied Sauk since its construction. In September 1813, Fort Madison was abandoned after it was attacked and besieged by natives who had support from the British. This was one of the few battles fought west of the Mississippi, and Black Hawk played a leadership role in it.
Little of note took place on Lake Huron in 1813, but the American victory on Lake Erie and the recapture of Detroit isolated the British there. During the ensuing winter, a Canadian party under Lieutenant Colonel Robert McDouall established a new supply line from York to Nottawasaga Bay on Georgian Bay. When he arrived at Fort Mackinac with supplies and reinforcements, he sent an expedition to recapture the trading post of Prairie du Chien in the far west. The Siege of Prairie du Chien ended in a British victory on July 20, 1814.
Earlier in July, the Americans sent a force of five vessels from Detroit to recapture Mackinac. A mixed force of regulars and volunteers from the militia landed on the island on August 4. They did not attempt to achieve surprise, and at the brief Battle of Mackinac Island, they were ambushed by natives and forced to re-embark. The Americans discovered the new base at Nottawasaga Bay, and on August 13, they destroyed its fortifications and the schooner Nancy that they found there. They then returned to Detroit, leaving two gunboats to blockade Mackinac. On September 4, these gunboats were taken unawares and captured by British boarding parties from canoes and small boats. These Engagements on Lake Huron left Mackinac under British control.
The British garrison at Prairie du Chien also fought off another attack by Major Zachary Taylor. In this distant theatre, the British retained the upper hand until the end of the war, through the allegiance of several indigenous tribes that received British gifts and arms, enabling them to take control of parts of what is now Michigan and Illinois, as well as the whole of modern Wisconsin. In 1814 U.S. troops retreating from the Battle of Credit Island on the upper Mississippi attempted to make a stand at Fort Johnson, but the fort was soon abandoned, along with most of the upper Mississippi valley.
After the U.S. was pushed out of the Upper Mississippi region, the Americans held on to eastern Missouri and the St. Louis area. Two notable battles fought against the Sauk were the Battle of Cote Sans Dessein, in April 1815, at the mouth of the Osage River in the Missouri Territory, and the Battle of the Sink Hole, in May 1815, near Fort Cap au Gris.
At the conclusion of peace, Mackinac and other captured territory was returned to the United States. At the end of the war, some British officers and Canadians objected to handing back Prairie du Chien and especially Mackinac under the terms of the Treaty of Ghent. However, the Americans retained the captured post at Fort Malden, near Amherstburg, until the British complied with the treaty.
Fighting between Americans, the Sauk, and other indigenous tribes continued through 1817, well after the war ended in the east.
In 1812, Britain 's Royal Navy was the world 's largest, with over 600 cruisers in commission and some smaller vessels. Although most of these were involved in blockading the French navy and protecting British trade against (usually French) privateers, the Royal Navy still had 85 vessels in American waters, counting all British Navy vessels in North American and Caribbean waters. However, the Royal Navy 's North American squadron based in Halifax, Nova Scotia (which bore the brunt of the war), numbered one small ship of the line, seven frigates, nine smaller sloops and brigs, and five schooners. By contrast, the United States Navy comprised 8 frigates, 14 smaller sloops and brigs, and no ships of the line. The U.S. had embarked on a major shipbuilding program before the war at Sackets Harbor, New York, and continued to produce new ships. Three of the existing American frigates were exceptionally large and powerful for their class, larger than any British frigate in North America. Whereas the standard British frigate of the time was rated as a 38 - gun ship, usually carrying up to 50 guns, with its main battery consisting of 18 - pounder guns, USS Constitution, President, and United States were rated as 44 - gun ships, carrying 56 -- 60 guns with a main battery of 24 - pounders.
The British strategy was to protect their own merchant shipping to and from Halifax, Nova Scotia, and the West Indies, and to enforce a blockade of major American ports to restrict American trade. Because of their numerical inferiority, the American strategy was to cause disruption through hit - and - run tactics, such as the capture of prizes and engaging Royal Navy vessels only under favourable circumstances. Days after the formal declaration of war, however, the U.S. Navy put out two small squadrons, including the frigate President and the sloop Hornet under Commodore John Rodgers, and the frigates United States and Congress, with the brig Argus under Captain Stephen Decatur. These were initially concentrated as one unit under Rodgers, who intended to force the Royal Navy to concentrate its own ships to prevent isolated units being captured by his powerful force.
Large numbers of American merchant ships were returning to the United States with the outbreak of war, and if the Royal Navy was concentrated, it could not watch all the ports on the American seaboard. Rodgers ' strategy worked, in that the Royal Navy concentrated most of its frigates off New York Harbor under Captain Philip Broke, allowing many American ships to reach home. But, Rodgers ' own cruise captured only five small merchant ships, and the Americans never subsequently concentrated more than two or three ships together as a unit.
Meanwhile, Constitution, commanded by Captain Isaac Hull, sailed from Chesapeake Bay on July 12. On July 17, Broke 's British squadron gave chase off New York, but Constitution evaded her pursuers after two days. After briefly calling at Boston to replenish water, on August 19, Constitution engaged the British frigate HMS Guerriere. After a 35 - minute battle, Guerriere had been dismasted and captured and was later burned. Constitution earned the nickname "Old Ironsides '' following this battle, as many of the British cannonballs were seen to bounce off her hull. Hull returned to Boston with news of this significant victory. On October 25, United States, commanded by Captain Decatur, captured the British frigate HMS Macedonian, which he then carried back to port. At the close of the month, Constitution sailed south, now under the command of Captain William Bainbridge. On December 29, off Bahia, Brazil, she met the British frigate HMS Java. After a battle lasting three hours, Java struck her colors and was burned after being judged unsalvageable. Constitution, however, was relatively undamaged in the battle.
The successes gained by the three big American frigates forced Britain to construct five 40 - gun, 24 - pounder heavy frigates and two "spar - decked '' frigates (the 60 - gun HMS Leander and HMS Newcastle) and to razee three old 74 - gun ships of the line to convert them to heavy frigates. The Royal Navy acknowledged that there were factors other than greater size and heavier guns. The United States Navy 's sloops and brigs had also won several victories over Royal Navy vessels of approximately equal strength. While the American ships had experienced and well - drilled volunteer crews, the enormous size of the overstretched Royal Navy meant that many ships were shorthanded and the average quality of crews suffered. The constant sea duties of those serving in North America interfered with their training and exercises.
The capture of the three British frigates stimulated the British to greater exertions. More vessels were deployed on the American seaboard and the blockade tightened. On June 1, 1813, off Boston Harbor, the frigate Chesapeake, commanded by Captain James Lawrence, was captured by the British frigate HMS Shannon under Captain Philip Broke. Lawrence was mortally wounded and famously cried out, "Do n't give up the ship! Hold on, men! '' The two frigates were of near - identical size, and Chesapeake 's crew was larger, but most of her men had not served or trained together. British citizens reacted with celebration and relief that the run of American victories had ended. Notably, this action was, in proportion to the numbers engaged, one of the bloodiest contests recorded during the age of sail, with more dead and wounded than HMS Victory suffered in four hours of combat at Trafalgar. Captain Broke was so badly wounded that he never again held a sea command.
In January 1813, the American frigate Essex, under the command of Captain David Porter, sailed into the Pacific to harass British shipping. Many British whaling ships carried letters of marque allowing them to prey on American whalers, and they nearly destroyed the industry. Essex challenged this practice. She inflicted considerable damage on British interests before she and her tender, USS Essex Junior (armed with twenty guns), were captured off Valparaíso, Chile, by the British frigate HMS Phoebe and the sloop HMS Cherub on March 28, 1814. In the summer of 1813, the brig USS Argus raided the waters off the British Isles, taking 19 British merchant ships until she was captured after a battle with HMS Pelican on August 14, 1813.
The British Cruizer - class brig - sloops did not fare well against the American ship - rigged sloops of war. Hornet and Wasp constructed before the war were notably powerful vessels, and the Frolic class built during the war even more so (although Frolic was trapped and captured by a British frigate and a schooner). The British brig - rigged sloops tended to suffer fire to their rigging more frequently than the American ship - rigged sloops. In addition, the ship - rigged sloops could back their sails in action, giving them another advantage in manoeuvring.
Following their earlier losses, the British Admiralty instituted a new policy that the three American heavy frigates should not be engaged except by a ship of the line or by smaller vessels in squadron strength. The capture of President by a squadron of four British frigates in January 1815 is an example of this, although the vast majority of the damage to President was done by a single ship, and President surrendered to that ship first, only to attempt an escape before the rest of the squadron caught up. A month later, however, Constitution engaged and captured two smaller British warships, HMS Cyane and HMS Levant, sailing in company, although the combined tonnage and number of men aboard Cyane and Levant was only two thirds of that of Constitution.
Success in single - ship battles raised American morale after the repeated failed invasion attempts in Upper and Lower Canada. However, these single - ship victories had no military effect on the war at sea, as they did not alter the balance of naval power, impede British supplies and reinforcements, or even raise insurance rates for British trade. During the war, the United States Navy captured 165 British merchantmen while the Royal Navy captured 1,400 American merchantmen.
After the Little Belt Affair, USS President became one of the most prized targets of the Royal Navy. She was eventually captured on January 15, 1815. Commodore Stephen Decatur surrendered President to HMS Endymion while a squadron of British ships was close behind, then tried to escape, only to be caught by the rest of the squadron. Decatur afterwards gave conflicting accounts: the Americans would claim that President was overwhelmed by a squadron of superior force, while the British would claim that President was taken by a smaller British warship in a frigate duel. In reality it was something in between. President 's only option was to escape the larger squadron, and she had succeeded in doing so against every ship except Endymion; the engagement was thus fought not as a duel but as a chase, which ended with President unable to escape Endymion and in turn surrendering to the smaller ship. The extent of the controversy the engagement created suggests it was a significant battle at the time. Indeed, the United States had lost its flagship and its finest warship. To the British, the 1,576 - ton, 24 - pounder President was evidence that the American 44 - gun frigate was clearly far more powerful than the 1,067 - ton, 18 - pounder British frigate. This restored honour to the British, undercutting American claims that the frigate actions of the first year of the war had been fought between ships of equal force, as this now was clearly not the case. While the Americans celebrated their victory at New Orleans, the British celebrated the taking of USS President.
The operations of American privateers proved a more significant threat to British trade than the U.S. Navy. They operated throughout the Atlantic and continued until the close of the war, most notably from ports such as Baltimore. American privateers reported taking 1,300 British merchant vessels, compared to 254 taken by the U.S. Navy, although the insurer Lloyd 's of London reported that only 1,175 British ships were taken, 373 of which were recaptured, for a total loss of 802. The Canadian historian Carl Benn wrote that American privateers took 1,344 British ships, of which 750 were retaken by the British. However, the British were able to limit privateering losses by the strict enforcement of convoy by the Royal Navy and by capturing 278 American privateers. Due to the massive size of the British merchant fleet, American captures affected only 7.5 % of the fleet, resulting in no supply shortages or lack of reinforcements for British forces in North America. Of 526 American privateers, 148 were captured by the Royal Navy and only 207 ever took a prize.
Due to the large size of their navy, the British did not rely as much on privateering. The majority of the 1,407 captured American merchant ships were taken by the Royal Navy. The war was the last time the British allowed privateering, since the practice was coming to be seen as politically inexpedient and of diminishing value in maintaining naval supremacy. However, privateering remained popular in British colonies, and it was the last hurrah for privateers in Bermuda, who vigorously returned to the practice after their experience in previous wars. The nimble Bermuda sloops captured 298 American ships. Privateer schooners based in British North America, especially from Nova Scotia, took 250 American ships and proved especially effective in crippling American coastal trade and capturing American ships closer to shore than the Royal Navy cruisers.
The naval blockade of the United States began informally in 1812 and expanded to cut off more ports as the war progressed. Twenty ships were on station in 1812 and 135 were in place by the end of the conflict. In March 1813, the Royal Navy punished the Southern states, which had been most vocal about annexing British North America, by blockading Charleston, Port Royal, Savannah, and New York City as well. As additional ships were sent to North America in 1813, the Royal Navy was able to tighten the blockade and extend it, first to the coast south of Narragansett by November 1813 and then to the entire American coast on May 31, 1814. In May 1814, following the abdication of Napoleon and the end of the supply problems with Wellington 's army, New England was blockaded.
The British government, having need of American foodstuffs for its army in Spain, benefited from the willingness of the New Englanders to trade with them, so no blockade of New England was at first attempted. The Delaware River and Chesapeake Bay were declared in a state of blockade on December 26, 1812. Illicit trade was carried on by collusive captures arranged between American traders and British officers. American ships were fraudulently transferred to neutral flags. Eventually, the U.S. government was driven to issue orders to stop illicit trading; this put only a further strain on the commerce of the country. The overpowering strength of the British fleet enabled it to occupy the Chesapeake and to attack and destroy numerous docks and harbours.
The blockade of American ports later tightened to the extent that most American merchant ships and naval vessels were confined to port. The American frigates USS United States and USS Macedonian attempted to set sail to raid British shipping in the Caribbean, but were forced to turn back when confronted with a British squadron, and ended the war blockaded and hulked in New London, Connecticut; by the end of the war, the United States had six frigates and four ships - of - the - line sitting in port. Some merchant ships were based in Europe or Asia and continued operations. Others, mainly from New England, were issued licences to trade by Admiral Sir John Borlase Warren, commander in chief on the American station in 1813. This allowed Wellington 's army in Spain to receive American goods and helped maintain the New Englanders ' opposition to the war. The blockade nevertheless resulted in American exports decreasing from $130 million in 1807 to $7 million in 1814. Most of these were food exports that ironically went to supply their enemies in Britain or British colonies. The blockade had a devastating effect on the American economy, with the value of American exports and imports falling from $114 million in 1811 to $20 million by 1814, while customs revenue fell from $13 million in 1811 to $6 million in 1814, even though Congress had voted to double the rates. The British blockade further damaged the American economy by forcing merchants to abandon the cheap and fast coastal trade for the slow and more expensive inland roads. In 1814, only 1 out of 14 American merchantmen risked leaving port, as there was a high probability that any ship leaving port would be seized.
As the Royal Navy base that supervised the blockade, Halifax profited greatly during the war. From that base British privateers seized many French and American ships and sold their prizes in Halifax.
The British Royal Navy 's blockades and raids allowed about 4,000 African Americans to escape slavery by fleeing American plantations to find freedom aboard British ships; those who settled in Canada became known as the Black Refugees. The blockading British fleet in Chesapeake Bay received increasing numbers of enslaved black Americans during 1813. By British government order, they were treated as free persons when reaching British hands. Alexander Cochrane 's proclamation of April 2, 1814, invited Americans who wished to emigrate to join the British, and though it did not explicitly mention slaves, it was taken by all as addressed to them. About 2,400 of the escaped slaves and their families who were carried on ships of the Royal Navy following their escape settled in Nova Scotia and New Brunswick during and after the war. From May 1814, younger men among the volunteers were recruited into a new Corps of Colonial Marines. They fought for Britain throughout the Atlantic campaign, including the Battle of Bladensburg, the attacks on Washington, D.C., and the Battle of Baltimore, later settling in Trinidad after rejecting British government orders for transfer to the West India Regiments and forming the community of the Merikins. The slaves who escaped to the British represented the largest emancipation of African Americans before the American Civil War.
Maine, then part of Massachusetts, was a base for smuggling and illegal trade between the U.S. and the British. Until 1813, the region was generally quiet except for privateer actions near the coast. In September 1813, there was a notable naval action when the U.S. Navy 's brig Enterprise fought and captured the Royal Navy brig Boxer off Pemaquid Point. The first British assault came in July 1814, when Sir Thomas Masterman Hardy took Moose Island (Eastport, Maine) without a shot, the entire American garrison of Fort Sullivan -- which became the British Fort Sherbrooke -- surrendering. Next, from his base in Halifax, Nova Scotia, in September 1814, Sir John Coape Sherbrooke led 3,000 British troops in the "Penobscot Expedition ''. In 26 days, he raided and looted Hampden, Bangor, and Machias, destroying or capturing 17 American ships. He won the Battle of Hampden, losing two killed while the Americans lost one killed. Retreating American forces were forced to destroy the frigate Adams. The British occupied the town of Castine and most of eastern Maine for the rest of the war, re-establishing the colony of New Ireland. The Treaty of Ghent returned this territory to the United States, though Machias Seal Island has remained in dispute. The British left in April 1815, at which time they took £ 10,750 obtained from tariff duties at Castine. This money, called the "Castine Fund '', was used to establish Dalhousie University, in Halifax, Nova Scotia.
The strategic location of the Chesapeake Bay near America 's new national capital, Washington, D.C., on the major tributary of the Potomac River, made it a prime target for the Royal Navy and the British Army. Beginning in March 1813, a squadron under Rear Admiral George Cockburn blockaded the mouth of the Bay at Hampton Roads harbour and raided towns along the Bay from Norfolk, Virginia, to Havre de Grace, Maryland.
On July 4, 1813, Commodore Joshua Barney, a Revolutionary War naval hero, convinced the U.S. Navy Department to build the Chesapeake Bay Flotilla, a squadron of twenty barges powered by small sails or oars (sweeps) to defend the Chesapeake Bay. Launched in April 1814, the squadron was quickly cornered in the Patuxent River, and while successful in harassing the Royal Navy, it was powerless to stop the British campaign that ultimately led to the "Burning of Washington ''. This expedition, led by Cockburn and General Robert Ross, was carried out between August 19 and 29, 1814, as the result of the hardened British policy of 1814 (although British and American commissioners had convened peace negotiations at Ghent in June of that year). As part of this policy, Admiral Warren had been replaced as commander in chief by Admiral Alexander Cochrane, with reinforcements and orders to coerce the Americans into a favourable peace.
A force of 2,500 soldiers under General Ross had just arrived in Bermuda aboard HMS Royal Oak, three frigates, three sloops, and ten other vessels. Released from the Peninsular War in Spain and Portugal by the British victory there, these troops were intended for diversionary raids along the coasts of Maryland and Virginia. In response to Prévost 's request, the British decided to employ this force, together with the naval and military units already on the station, to strike at the "Federal City '' of Washington, D.C.
On August 24, U.S. Secretary of War John Armstrong Jr. insisted that the British would attack Baltimore rather than Washington, even when units of the British Army, accompanied by major ships of the Royal Navy, were obviously on their way to the capital. The inexperienced American militia, which had congregated nearby at Bladensburg, Maryland, to protect the capital, was defeated in the Battle of Bladensburg, opening the route to Washington. While First Lady Dolley Madison saved valuables from the President 's House, then also called the "President 's Palace '' (the executive mansion, now the White House), President James Madison and members of his Cabinet fled to Virginia. Seeing that the Battle of Bladensburg, northeast of the town in rural Prince George 's County, was not going well, Secretary of the Navy William Jones ordered Captain Thomas Tingey, commandant of the Washington Navy Yard on the Eastern Branch of the Potomac River (now the Anacostia River), to set the facility ablaze to prevent the capture of American naval ships, buildings, shops, and supplies. Tingey, who had overseen the Navy Yard 's planning and development since the national capital had been moved from Philadelphia to Washington in 1800, waited until the last possible minute, nearly four hours after the order was given, to execute it. The destruction included most of the facility as well as the nearly completed frigate Columbia and the sloop Argus.
The British commanders ate the supper that had been prepared for the President and his departmental secretaries, who had expected to return from a glorious U.S. victory, before they burned the Executive Mansion; American morale was reduced to an all - time low. The British viewed their actions as retaliation for the destructive American invasions and raids into Canada, most notably the Americans ' burning of York earlier in 1813. Later that same evening, a furious storm, described by some later weather experts as a thunderstorm verging on a hurricane, swept into Washington, D.C., sending one or more tornadoes into the rough, unfinished town. The storm caused more damage but finally extinguished the fires with torrential rains, leaving fire - blackened walls and partial ruins of the President 's House, the Capitol, and the Treasury Department, which had been set alight the first night. In addition, the combustibles used to finish off the destruction of the Navy Yard that the Americans had started exploded, killing or maiming a large number of "Red - Coats ''. The British left Washington, D.C. the day after the storm subsided.
Having destroyed Washington 's public buildings, including the President 's Mansion and the Treasury, the British army and navy moved several weeks later to capture Baltimore, forty miles northeast, a busy port and a key base for American privateers. However, instead of immediately going overland to the port city they sneeringly called a "nest of pirates '', they returned to their ships anchored in the Patuxent River and proceeded later up the Upper Bay, giving the Baltimoreans plenty of time to reinforce their fortifications and gather regular U.S. Army and state militia troops from surrounding counties and states. The subsequent "Battle for Baltimore '' began with the British landing on Sunday, September 12, 1814, at North Point, where Baltimore harbour 's Patapsco River met the Chesapeake Bay, and they were met by American militia further up the "Patapsco Neck '' peninsula. An exchange of fire began, with casualties on both sides. Major Gen. Robert Ross was killed by American snipers as he attempted to rally his troops in the first skirmish; the snipers were killed moments later. The British paused, then continued to march northwestward to the stationed Maryland and Baltimore City militia units deployed further up Long Log Lane on the peninsula at "Godly Wood '', where the Battle of North Point was fought for several afternoon hours in a musketry and artillery duel between British Col. Arthur Brooke and the American commander of the Maryland state militia 's Third Brigade (the "Baltimore City Brigade ''), Brig. Gen. John Stricker. The British also planned to attack Baltimore by water on the following day, September 13, to support their army, which now faced massed, heavily dug - in and fortified American units of approximately 15,000 men with about a hundred cannon gathered along the eastern heights of the city at "Loudenschlager 's Hill '' (later "Hampstead Hill '', now part of Patterson Park). These Baltimore defences had been planned in advance by the state militia commander, Maj. Gen. Samuel Smith, earlier a Revolutionary War officer, then a wealthy city merchant, U.S. Representative, Senator, and later Mayor of Baltimore, who had been placed in charge of the city 's defences instead of William H. Winder, the U.S. Army commander for the Mid-Atlantic 's 10th Military District, discredited by the debacle the previous month at Bladensburg. The "Red Coats '' were unable to immediately reduce Fort McHenry, at the entrance to Baltimore Harbor, to allow their ships to provide heavier naval gunfire in support of their troops to the northeast.
At the bombardment of Fort McHenry, the British naval guns, mortars, and revolutionary new "Congreve rockets '' had a longer range than the American cannon onshore, and the ships mostly stood off out of the Americans ' range, bombarding the fort, which returned very little fire. Despite the 25 - hour bombardment, casualties in the fort were slight and the damage limited, the worst being a burst over a rear brick wall that knocked out some fieldpieces and caused a few casualties; the British ships eventually realized that they could not force the passage to attack Baltimore in coordination with the land force. In a last - ditch effort during the night 's heavy rain storm, a feint and barge attack led by Capt. Charles Napier went around the fort up the Middle Branch of the river to the west; partly split and misdirected in the storm, it was turned back with heavy casualties by the alert gunners of the supporting western batteries, Fort Covington and Battery Babcock. The British then called off the attack and sailed downriver to pick up their army, which had retreated from the east side of Baltimore. All the lights were extinguished in Baltimore the night of the attack; the only light was given off by the exploding shells over Fort McHenry, illuminating the flag that was still flying over the fort. The defence of the fort inspired the American lawyer Francis Scott Key to write "Defence of Fort M'Henry '', a poem that was set to music as "The Star - Spangled Banner ''.
Before 1813, the conflict among the Creeks (or Muscogee) had been largely an internal affair sparked by the ideas of Tecumseh farther north in the Mississippi Valley. A faction known as the Red Sticks, so named for the color of their war paint, had broken away from the rest of the Creek Confederacy, which wanted peace with the United States. The Red Sticks were allied with Tecumseh, who had visited the Creeks about a year before 1813 and encouraged greater resistance to the Americans. The Creek Nation was a trading partner of the United States but was actively involved with Spanish and British trade as well. The Red Sticks, as well as many southern Muscogeean people like the Seminole, had a long history of alliance with the Spanish and British Empires, an alliance that helped the North American and European powers protect each other 's claims to territory in the south.
The Battle of Burnt Corn, between Red Sticks and U.S. troops, occurred in the southern parts of Alabama on July 27, 1813. It prompted the state of Georgia and the Mississippi territorial militia to take immediate, major action against Creek offensives. The Red Stick chiefs gained power in the east along the Alabama, Coosa, and Tallapoosa Rivers -- Upper Creek territory. The Lower Creek lived along the Chattahoochee River. Many Creeks tried to remain friendly to the United States, and some were organized by federal Indian Agent Benjamin Hawkins to aid the 6th Military District under General Thomas Pinckney and the state militias. The United States ' combined forces were large; at its peak, the Red Stick faction had 4,000 warriors, only a quarter of whom had muskets.
On August 30, 1813, Red Sticks led by chiefs Red Eagle and Peter McQueen attacked Fort Mims, north of Mobile, the only American - held port in the territory of West Florida. The attack on Fort Mims resulted in the death of 400 settlers and became an ideological rallying point for the Americans.
The Indian frontier of western Georgia was the most vulnerable but was already partially fortified. From November 1813 to January 1814, Georgia 's militia and auxiliary Federal troops -- from the Creek and Cherokee nations and the states of North Carolina and South Carolina -- organized the fortification of defences along the Chattahoochee River and expeditions into Upper Creek territory in present - day Alabama. The army, led by General John Floyd, went to the heart of the "Creek Holy Grounds '' and won a major offensive against one of the largest Creek towns at the Battle of Autosee, killing an estimated two hundred people. In November, the militia of Mississippi, with a combined 1,200 troops, attacked the "Econachca '' encampment at the Battle of Holy Ground on the Alabama River. Tennessee raised a militia of 5,000 under Major General Andrew Jackson and Brigadier General John Coffee, which won the battles of Tallushatchee and Talladega in November 1813.
Jackson suffered enlistment problems in the winter and decided to combine his force with that of the Georgia militia. However, from January 22 -- 24, 1814, while on their way, the Tennessee militia and allied Muscogee were attacked by the Red Sticks at the Battles of Emuckfaw and Enotachopo Creek. Jackson 's troops repelled the attackers but, outnumbered, were forced to withdraw to his base at Fort Strother.
In January, Floyd 's force of 1,300 state militia and 400 Creek Indians moved to join the U.S. forces in Tennessee, but it was attacked in camp on Calibee Creek by Tuckaubatchee Indians on January 27.
Jackson 's force increased in numbers with the arrival of U.S. Army soldiers, and a second draft of Tennessee state militia and Cherokee and Creek allies swelled his army to around 5,000. In March 1814, they moved south to attack the Creek. On March 27, Jackson decisively defeated the Creek Indian force at Horseshoe Bend, killing 800 of 1,000 Creeks at a cost of 49 killed and 154 wounded out of approximately 2,000 American and Cherokee forces. The American army then moved to Fort Jackson on the Alabama River. On August 9, 1814, the Upper Creek chiefs and Jackson 's army signed the "Treaty of Fort Jackson ''. Most of western Georgia and part of Alabama were taken from the Creeks to pay for expenses borne by the United States. The treaty also "demanded '' that the "Red Stick '' insurgents cease communicating with the Spanish or British and trade only with U.S. - approved agents.
British aid to the Red Sticks arrived after the end of the Napoleonic Wars in April 1814 and after Admiral Sir Alexander Cochrane assumed command from Admiral Warren in March. The Creek promised to join "any body of troops that should aid them in regaining their lands '' and suggested an attack on the tower off Mobile. In April 1814, the British established an outpost on the Apalachicola River at Prospect Bluff (Fort Gadsden). Cochrane sent a company of Royal Marines and the vessels HMS Hermes and HMS Carron, commanded by Edward Nicolls, with further supplies to meet the Indians. In addition to training the Indians, Nicolls was tasked to raise a force from escaped slaves, as part of the Corps of Colonial Marines.
In July 1814, General Jackson complained to the Governor of Pensacola, Mateo Gonzalez Manrique, that combatants from the Creek War were being harboured in Spanish territory, and made reference to the British presence on Spanish soil. Although he gave an angry reply to Jackson, Manrique was alarmed at the weak position he found himself in. He appealed to the British for help, with Woodbine arriving on July 28, and Nicolls arriving at Pensacola on August 24.
The first engagement of the British and their Creek allies against the Americans on the Gulf Coast was the attack on Fort Bowyer on September 14, 1814. Captain William Percy tried to take the U.S. fort, hoping that success would enable the British to move on Mobile and block U.S. trade and encroachment on the Mississippi. After the Americans repulsed Percy 's forces, the British established a military presence of up to 200 Marines at Pensacola. In November, Jackson 's force of 4,000 men took the town, underlining the superior numbers of Jackson 's force in the region. The U.S. force moved to New Orleans in late 1814. Jackson 's army of 1,000 regulars and 3,000 to 4,000 militia, pirates, and other fighters, as well as civilians and slaves, built fortifications south of the city.
American forces under General James Wilkinson, who was himself earning $4,000 per year as a Spanish secret agent, took the Mobile area -- formerly part of West Florida -- from the Spanish in March 1813; this would be the only territory permanently gained by the U.S. during the war. The Americans built Fort Bowyer, a log and earthwork fort with 14 guns, on Mobile Point.
At the end of 1814, the British launched a double offensive in the South weeks before the Treaty of Ghent was signed. On the Atlantic coast, Admiral George Cockburn was to close the Intracoastal Waterway trade and land Royal Marine battalions to advance through Georgia to the western territories. On the Gulf coast, Admiral Alexander Cochrane would move on the new state of Louisiana and the Mississippi Territory. Admiral Cochrane 's ships reached the Louisiana coast December 9, and Cockburn arrived in Georgia December 14.
On January 8, 1815, a British force of 8,000 under General Edward Pakenham attacked Jackson 's defences in New Orleans. The Battle of New Orleans was an American victory, as the British failed to take the fortifications on the East Bank. The British suffered high casualties: 291 dead, 1,262 wounded, and 484 captured or missing, whereas American casualties were 13 dead, 39 wounded, and 19 missing. It was hailed as a great victory across the U.S., making Jackson a national hero and eventually propelling him to the presidency. In a final attempt to invade Louisiana, Royal Navy guns bombarded the American garrison at Fort St. Philip for ten days; British ships sailed away from the Mississippi River on January 18. However, it was not until January 27, 1815, that the army had completely rejoined the fleet, allowing for their departure.
After New Orleans, the British tried to take Mobile a second time; General John Lambert laid siege for five days and took the fort, winning the Second Battle of Fort Bowyer on February 12, 1815. HMS Brazen brought news of the Treaty of Ghent the next day, and the British abandoned the Gulf coast.
In January 1815, Admiral Cockburn succeeded in blockading the southeastern coast by occupying Camden County, Georgia. The British quickly took Cumberland Island, Fort Point Peter, and Fort St. Tammany in a decisive victory. Under the orders of his commanding officers, Cockburn 's forces relocated many refugee slaves, capturing St. Simons Island in the process. During the invasion of the Georgia coast, an estimated 1,485 people chose to relocate to British territories or join the military. In mid-March, several days after being informed of the Treaty of Ghent, British ships finally left the area.
In May 1815, a band of British - allied Sauk, unaware that the war had ended months before, attacked a small band of U.S. soldiers northwest of St. Louis. Intermittent fighting, primarily with the Sauk, continued in the Missouri Territory well into 1817, although it is unknown whether the Sauk were acting on their own or on behalf of British agents. Several isolated American warships, which had not yet received word of the peace, continued fighting well into 1815 and were the last American forces to take offensive action against the British.
By 1814, both sides had either achieved their main war goals or were weary of a costly war that offered little but stalemate. They both sent delegations to a neutral site in Ghent, Flanders (now part of Belgium). The negotiations began in early August and concluded on December 24, when a final agreement was signed; both sides had to ratify it before it could take effect. Meanwhile, both sides planned new invasions.
In 1814 the British began blockading the United States, bringing the American economy to near bankruptcy and forcing it to rely on loans for the rest of the war. American foreign trade was reduced to a trickle. The parlous American economy was thrown into chaos, with prices soaring and unexpected shortages causing hardship, notably in New England. The Hartford Convention raised widespread fears that the New England states might attempt to leave the Union; these fears were exaggerated, since most New Englanders did not wish to secede but merely wanted an end to a war that was bringing much economic hardship. Nevertheless, the convention suggested that continuation of the war might threaten the union. British interests in the West Indies and Canada that had depended on trade with the United States were also hurt, though to a lesser extent. Although American privateers found their chances of success much reduced, with most British merchantmen now sailing in convoy, privateering continued to prove troublesome to the British, as shown by high insurance rates. British landowners grew weary of high taxes, and colonial interests and merchants called on the government to reopen trade with the U.S. by ending the war.
At last, in August 1814, peace discussions began in the neutral city of Ghent. Both sides began negotiations warily. The British diplomats stated their case first, demanding the creation of an Indian barrier state in the American Northwest Territory (the area from Ohio to Wisconsin). It was understood that the British would sponsor this Indian state. The British strategy for decades had been to create a buffer state to block American expansion. Britain demanded naval control of the Great Lakes and access to the Mississippi River. The Americans refused to consider a buffer state, and the proposal was dropped. Although article IX of the treaty included provisions to restore to Natives "all possessions, rights and privileges which they may have enjoyed, or been entitled to in 1811 '', the provisions were unenforceable. The Americans (at a later stage) demanded damages for the burning of Washington and for the seizure of ships before the war began.
American public opinion was outraged when Madison published the demands; even the Federalists were now willing to fight on. The British had planned three invasions. One force burned Washington but failed to capture Baltimore, and sailed away when its commander was killed. In northern New York State, 10,000 British veterans were marching south until a decisive defeat at the Battle of Plattsburgh forced them back to Canada. Nothing was known of the fate of the third large invasion force, aimed at capturing New Orleans and the southwest. The Prime Minister wanted the Duke of Wellington to command in Canada and take control of the Great Lakes. Wellington said that he would go to America, but he believed he was needed in Europe. Wellington emphasized that the war was a draw and that the peace negotiations should not make territorial demands:
I think you have no right, from the state of war, to demand any concession of territory from America... You have not been able to carry it into the enemy 's territory, notwithstanding your military success and now undoubted military superiority, and have not even cleared your own territory on the point of attack. You can not on any principle of equality in negotiation claim a cession of territory except in exchange for other advantages which you have in your power... Then if this reasoning be true, why stipulate for the uti possidetis? You can get no territory: indeed, the state of your military operations, however creditable, does not entitle you to demand any.
The Prime Minister, Lord Liverpool, aware of growing opposition to wartime taxation and the demands of Liverpool and Bristol merchants to reopen trade with America, realized Britain also had little to gain and much to lose from prolonged warfare, especially given the growing concern about the situation in Europe. After months of negotiations, against the background of changing military victories, defeats and losses, the parties finally realized that their nations wanted peace and there was no real reason to continue the war. The main focus of British foreign policy was the Congress of Vienna, during which British diplomats had clashed with Russian and Prussian diplomats over the terms of the peace with France, and there were fears that Britain might have to go to war with Russia and Prussia. Each side was now tired of the war. Export trade was all but paralyzed, and after Napoleon fell in 1814 France was no longer an enemy of Britain, so the Royal Navy no longer needed to stop American shipments to France and no longer needed to impress seamen; it had thus ended the practices that so angered the Americans in 1812. The British were preoccupied with rebuilding Europe after the apparent final defeat of Napoleon.
British negotiators were urged by Lord Liverpool to offer the status quo and dropped their demands for the creation of an Indian barrier state, which was in any case hopeless after the collapse of Tecumseh 's alliance. This allowed negotiations to resume at the end of October. British diplomats soon offered the status quo to the U.S. negotiators, who accepted it. Prisoners would be exchanged, and captured slaves returned to the United States or paid for by Britain.
On December 24, 1814, the diplomats finished and signed the Treaty of Ghent. The treaty was ratified by the British three days later, on December 27, and arrived in Washington on February 17, where it was quickly ratified and went into effect, finally ending the war. The terms called for all occupied territory to be returned, the prewar boundary between Canada and the United States to be restored, and the Americans to gain fishing rights in the Gulf of Saint Lawrence.
The Treaty of Ghent failed to secure official British acknowledgement of American maritime rights or an end to impressment. However, in the century of peace until World War I, these rights were not seriously violated. The defeat of Napoleon made irrelevant all of the naval issues over which the United States had fought. The Americans had achieved their goal of ending the Indian threat; furthermore, the American armies had scored enough victories (especially at New Orleans) to satisfy honour and the sense of becoming fully independent from Britain.
British losses in the war were about 1,160 killed in action and 3,679 wounded; 3,321 British died from disease. American losses were 2,260 killed in action and 4,505 wounded. While the number of Americans who died from disease is not known, it is estimated that about 15,000 died from all causes directly related to the war. These figures do not include deaths among Canadian militia forces or losses among native tribes.
There have been no estimates of the cost of the American war to Britain, but it did add some £25 million to the national debt. In the U.S., the cost was $105 million, about the same as the cost to Britain. The national debt rose from $45 million in 1812 to $127 million by the end of 1815, although by selling bonds and treasury notes at deep discounts -- and often for irredeemable paper money due to the suspension of specie payment in 1814 -- the government received only $34 million worth of specie. Stephen Girard, the richest man in America at the time, personally funded the United States government 's involvement in the war.
In addition, at least 3,000 American slaves escaped to the British lines. Many other slaves simply escaped in the chaos of war and achieved their freedom on their own. The British settled some of the newly freed slaves in Nova Scotia. Four hundred freedmen were settled in New Brunswick. The Americans protested that Britain 's failure to return the slaves violated the Treaty of Ghent. After arbitration by the Tsar of Russia, the British paid $1,204,960 in damages to Washington, which reimbursed the slaveowners.
During the 19th century the popular image of the war in the United States was of an American victory, and in Canada, of a Canadian victory. Each young country saw its self - perceived victory as an important foundation of its growing nationhood. The British, on the other hand, who had been preoccupied by Napoleon 's challenge in Europe, paid little attention to what was to them a peripheral and secondary dispute, a distraction from the principal task at hand.
In British North America, the War of 1812 was seen by Loyalists as a victory, as they had claimed they had successfully defended their country from an American takeover.
A long - term consequence of the Canadian militia 's success was the view widely held in Canada at least until the First World War that Canada did not need a regular professional army. While Canadian militia units had played instrumental roles in several engagements, such as at the Battle of the Chateauguay, it was the regular units of the British Army, including its "Fencible '' regiments which were recruited within North America, which ensured that Canada was successfully defended.
The U.S. Army had done poorly, on the whole, in several attempts to invade Canada, and the Canadians had shown that they would fight bravely to defend their territory. But the British did not doubt that the thinly populated territory would be vulnerable in a third war. "We can not keep Canada if the Americans declare war against us again '', Admiral Sir David Milne wrote to a correspondent in 1817, although the Rideau Canal was built for just such a scenario.
By the 21st century it was a forgotten war in Britain, although still remembered in Canada, especially Ontario. In a 2009 poll, 37 % of Canadians said the war was a Canadian victory, 9 % said the U.S. won, 15 % called it a draw, and 39 % said they knew too little to comment. A 2012 poll found that in a list of items that could be used to define Canadians ' identity, the belief that Canada successfully repelled an American invasion in the War of 1812 places second (25 %).
Today, American popular memory includes the British capture and burning of Washington in August 1814, which necessitated its extensive renovation. The fact that before the war many Americans wanted to annex British North America was swiftly forgotten; instead, American popular memory focused on the victories at Baltimore, Plattsburgh and New Orleans to present the war as a successful effort to assert American national honour, a "second war of independence '' in which the mighty British empire was humbled and humiliated. In a speech before Congress on February 18, 1815, President Madison proclaimed the war a complete American victory. This interpretation was and remains the dominant American view of the war. The American newspaper the Niles Register, in an editorial on September 14, 1816, announced that the Americans had crushed the British, declaring "... we did virtually dictate the treaty of Ghent to the British ''. A minority of Americans, mostly associated with the Federalists, saw the war as a defeat and an act of folly on Madison 's part, caustically asking that, if the Americans had "dictated '' the terms of the treaty of Ghent, why had the British Crown not ceded British North America to the United States? However, the Federalist view of the war is not the mainstream American memory of it. The view of Congressman George Troup, who stated in a speech in 1815 that the Treaty of Ghent was "the glorious termination of the most glorious war ever waged by any people '', is the way most Americans remembered the war. Another memory is the successful American defence of Fort McHenry in September 1814, which inspired the lyrics of the U.S. national anthem, "The Star - Spangled Banner ''. The successful captains of the U.S. Navy became popular heroes, with plates bearing the likenesses of Decatur, Stewart, Hull, and others becoming popular items. Ironically, many were made in England. The Navy became a cherished institution, lauded for the victories that it won against all odds. After engagements during the final actions of the war, U.S. Marines had acquired a well - deserved reputation as excellent marksmen, especially in ship - to - ship actions.
Historians have differing and complex interpretations of the war. In recent decades the view of the majority of historians has been that the war ended in stalemate, with the Treaty of Ghent closing a war that had become militarily inconclusive. Neither side wanted to continue fighting since the main causes had disappeared and since there were no large lost territories for one side or the other to reclaim by force. Insofar as they see the war 's resolution as allowing two centuries of peaceful and mutually beneficial intercourse between the U.S., Britain and Canada, these historians often conclude that all three nations were the "real winners '' of the War of 1812. These writers often add that the war could have been avoided in the first place by better diplomacy. It is seen as a mistake for everyone concerned because it was badly planned and marked by multiple fiascoes and failures on both sides, as shown especially by the repeated American failures to seize parts of Canada, and the failed British attack on New Orleans and upstate New York.
However, other scholars hold that the war constituted a British victory and an American defeat. They argue that the British achieved their military objectives in 1812 by stopping the repeated American invasions of Canada and retaining their Canadian colonies. By contrast, they say, the Americans suffered a defeat when their armies failed to achieve their war goal of seizing part or all of Canada. Additionally, they argue that the U.S. lost because it failed to stop impressment, which the British refused to repeal until the end of the Napoleonic Wars, and because U.S. actions had no effect on the Orders in Council, which were rescinded before the war started.
Historian Troy Bickham, author of The Weight of Vengeance: The United States, the British Empire, and the War of 1812, sees the British as having fought to a much stronger position than the United States.
"Even tied down by ongoing wars with Napoleonic France, the British had enough capable officers, well - trained men, and equipment to easily defeat a series of American invasions of Canada. In fact, in the opening salvos of the war, the American forces invading Upper Canada were pushed so far back that they ended up surrendering Michigan Territory. The difference between the two navies was even greater. While the Americans famously (shockingly for contemporaries on both sides of the Atlantic) bested British ships in some one - on - one actions at the war 's start, the Royal Navy held supremacy throughout the war, blockading the U.S. coastline and ravaging coastal towns, including Washington, D.C. Yet in late 1814, the British offered surprisingly generous peace terms despite having amassed a large invasion force of veteran troops in Canada, naval supremacy in the Atlantic, an opponent that was effectively bankrupt, and an open secessionist movement in New England. ''
He considers that the British offered the United States generous terms, in place of their initially harsh terms (which included massive forfeiture of land to Canada and the American Indians), because the "reigning Liverpool ministry in Britain held a loose grip on power and feared the war - weary, tax - exhausted public. '' The war was also technically a British victory "because the United States failed to achieve the aims listed in its declaration of war. ''
A second minority view is that both the U.S. and Britain won the war -- that is, both achieved their main objectives, while the Indians were the losing party. The British won by losing no territories and achieving their great war goal, the total defeat of Napoleon. The U.S. won by (1) securing her honor and successfully resisting a powerful empire once again, thus winning a "second war of independence ''; and (2) ending the threat of Indian raids and the British plan for a semi-independent Indian sanctuary -- thereby opening an unimpeded path for the United States ' westward expansion.
Historians generally agree that the real losers of the War of 1812 were the Indians (called First Nations in Canada). Hickey says:
The big losers in the war were the Indians. As a proportion of their population, they had suffered the heaviest casualties. Worse, they were left without any reliable European allies in North America... The crushing defeats at the Thames and Horseshoe Bend left them at the mercy of the Americans, hastening their confinement to reservations and the decline of their traditional way of life.
The First Nations of the Old Northwest (the modern Midwest) had hoped to create an Indian state that would be a British protectorate. American settlers into the Middle West had been repeatedly blocked and threatened by Indian raids before 1812, and that now came to an end. Throughout the war the British had played on terror of the tomahawks and scalping knives of their Indian allies; it worked especially at Hull 's surrender at Detroit. By 1813 Americans had killed Tecumseh and broken his coalition of tribes. Jackson then defeated the Creek in the Southwest. Historian John Sugden notes that in both theatres, the Indians ' strength had been broken prior to the arrival of the major British forces in 1814. The one campaign that the Americans had decisively won was the campaign in the Old Northwest, which left the British in a weak position to insist upon an Indian state there.
Notwithstanding the sympathy and support from commanders (such as Brock, Cochrane and Nicolls), the policymakers in London reneged on assisting the Indians, as making peace was a higher priority for the politicians. At the peace conference the British demanded an independent Indian state in the Midwest, but, although the British and their Indian allies maintained control over the territories in question (i.e. most of the Upper Midwest), British diplomats did not press the demand after an American refusal, effectively abandoning their Indian allies. The withdrawal of British protection gave the Americans a free hand, which resulted in the removal of most of the tribes to Indian Territory (present - day Oklahoma). In that sense, according to historian Alan Taylor, the final victory at New Orleans had "enduring and massive consequences ''. It gave the Americans "continental predominance '' while it left the Indians dispossessed, powerless, and vulnerable.
The Treaty of Ghent technically required the United States to cease hostilities and "forthwith to restore to such Tribes or Nations respectively all possessions, rights and privileges which they may have enjoyed, or been entitled to in 1811 ''; the United States ignored this article of the treaty and proceeded to expand into this territory regardless; Britain was unwilling to provoke further war to enforce it. A shocked Henry Goulburn, one of the British negotiators at Ghent, remarked:
Till I came here, I had no idea of the fixed determination which there is in the heart of every American to extirpate the Indians and appropriate their territory.
The Creek War came to an end, with the Treaty of Fort Jackson being imposed upon the Indians. About half of the Creek territory was ceded to the United States, with no payment made to the Creeks. This was, in theory, invalidated by Article 9 of the Treaty of Ghent. The British failed to press the issue, and did not take up the Indian cause as an infringement of an international treaty. Without this support, the Indians ' lack of power was apparent and the stage was set for further incursions of territory by the United States in subsequent decades.
Neither side lost territory in the war, nor did the treaty that ended it address the original points of contention -- and yet it changed much between the United States of America and Britain.
The Treaty of Ghent established the status quo ante bellum; that is, there were no territorial losses by either side. The issue of impressment was made moot when the Royal Navy, no longer needing sailors, stopped impressment after the defeat of Napoleon. Except for occasional border disputes and the circumstances of the American Civil War, relations between the U.S. and Britain remained generally peaceful for the rest of the 19th century, and the two countries became close allies in the 20th century.
The Rush -- Bagot Treaty between the United States and Britain was enacted in 1817. It provided for the demilitarization of the Great Lakes and Lake Champlain, where many British naval arrangements and forts still remained. The treaty laid the basis for a demilitarized boundary and was indicative of improving relations between the United States and Great Britain in the period following the War of 1812. It remains in effect to this day.
Border adjustments between the U.S. and British North America were made in the Treaty of 1818. Eastport, Massachusetts, was returned to the U.S. in 1818; it would become part of the new State of Maine in 1820. A border dispute along the Maine -- New Brunswick border was settled by the 1842 Webster -- Ashburton Treaty after the bloodless Aroostook War, and the border in the Oregon Country was settled by splitting the disputed area in half by the 1846 Oregon Treaty. A further dispute about the line of the border through the islands in the Strait of Juan de Fuca resulted in another almost bloodless standoff in the Pig War of 1859. The line of the border was finally settled by an international arbitration commission in 1872.
The U.S. suppressed the Native American resistance on its western and southern borders. The nation also gained a psychological sense of complete independence as people celebrated their "second war of independence ''. Nationalism soared after the victory at the Battle of New Orleans. The opposition Federalist Party collapsed, and the Era of Good Feelings ensued.
No longer questioning the need for a strong Navy, the U.S. built three new 74 - gun ships of the line and two new 44 - gun frigates shortly after the end of the war. (Another frigate had been destroyed to prevent it being captured on the stocks.) In 1816, the U.S. Congress passed into law an "Act for the gradual increase of the Navy '' at a cost of $1,000,000 a year for eight years, authorizing 9 ships of the line and 12 heavy frigates. The Captains and Commodores of the U.S. Navy became the heroes of their generation in the U.S. Decorated plates and pitchers of Decatur, Hull, Bainbridge, Lawrence, Perry, and Macdonough were made in Staffordshire, England, and found a ready market in the United States. Three of the war heroes used their celebrity to win national office: Andrew Jackson (elected President in 1828 and 1832), Richard Mentor Johnson (elected Vice President in 1836), and William Henry Harrison (elected President in 1840).
During the war, New England states became increasingly frustrated over how the war was being conducted and how the conflict was affecting them. They complained that the U.S. government was not investing enough in the states ' defences militarily and financially, and that the states should have more control over their militias. The increased taxes, the British blockade, and the occupation of some of New England by enemy forces also agitated public opinion in the states. As a result, at the Hartford Convention (December 1814 -- January 1815) Federalist delegates deprecated the war effort and sought more autonomy for the New England states. They did not call for secession, but word of their angry anti-war resolutions appeared at the same time that peace was announced and the victory at New Orleans became known. The upshot was that the Federalists were permanently discredited and quickly disappeared as a major political force.
This war enabled thousands of slaves to escape to British lines or ships for freedom, despite the difficulties. Planters, complacent about slave contentment, were shocked to see slaves risk so much to be free.
After the decisive defeat of the Creek Indians at the Battle of Horseshoe Bend in 1814, some Indian warriors escaped to join the Seminoles in Florida. The remaining Creek chiefs signed away about half their lands, comprising 23,000,000 acres, covering much of southern Georgia and two thirds of modern Alabama. The Creeks were now separated from any future help from the Spanish in Florida, or from the Choctaw and Chickasaw to the west. During the war the United States seized Mobile, Alabama, a strategic location providing an oceanic outlet to the cotton lands to the north. Jackson invaded Florida in 1818, demonstrating to Spain that it could no longer control that territory with a small force. Spain sold Florida to the United States in 1819 in the Adams - Onís Treaty following the First Seminole War. Pratt concludes:
Thus indirectly the War of 1812 brought about the acquisition of Florida... To both the Northwest and the South, therefore, the War of 1812 brought substantial benefits. It broke the power of the Creek Confederacy and opened to settlement a great province of the future Cotton Kingdom.
Pro-British leaders demonstrated a strong hostility to American influences in western Canada (Ontario) after the war and shaped its policies, including a hostility to American - style republicanism. Immigration from the U.S. was discouraged, and favour was shown to the Anglican church as opposed to the more Americanized Methodist church.
The Battle of York showed the vulnerability of Upper and Lower Canada. In the 1820s, work began on La Citadelle at Quebec City as a defence against the United States. Additionally, work began on the Halifax citadel to defend the port against foreign navies. From 1826 to 1832, the Rideau Canal was built to provide a secure waterway not at risk from American cannon fire. To defend the western end of the canal, the British Army also built Fort Henry at Kingston.
The Native Americans allied to the British lost their cause. The British proposal to create a "neutral '' Indian zone in the American West was rejected at the Ghent peace conference and never resurfaced. After 1814 the natives, who lost most of their fur - gathering territory, became an undesirable burden to British policymakers who now looked to the United States for markets and raw materials. British agents in the field continued to meet regularly with their former American Indian partners, but they did not supply arms or encouragement and there were no American Indian campaigns to stop U.S. expansionism in the Midwest. Abandoned by their powerful sponsor, American Great Lakes - area Indians ultimately migrated or reached accommodations with the American authorities and settlers.
In the Southeast, Indian resistance had been crushed by General Andrew Jackson during the Creek War; as President (1829 -- 37), Jackson systematically expelled the major tribes to reservations west of the Mississippi, part of which was the forced expulsion of American - allied Cherokee in the Trail of Tears.
Bermuda had been largely left to the defences of its own militia and privateers before U.S. independence, but the Royal Navy had begun buying up land and operating from there in 1795, as its location was a useful substitute for the lost U.S. ports. It originally was intended to be the winter headquarters of the North American Squadron, but the war saw it rise to a new prominence. As construction work progressed through the first half of the 19th century, Bermuda became the permanent naval headquarters in Western waters, housing the Admiralty and serving as a base and dockyard. The military garrison was built up to protect the naval establishment, heavily fortifying the archipelago that came to be described as the "Gibraltar of the West ''. Defence infrastructure would remain the central leg of Bermuda 's economy until after World War II.
The war is seldom remembered in Great Britain. The massive ongoing conflict in Europe against the French Empire under Napoleon ensured that the War of 1812 against America was never seen as more than a sideshow to the main event by the British. Britain 's blockade of French trade had been entirely successful and the Royal Navy was the world 's dominant nautical power (and would remain so for another century). While the land campaigns had contributed to saving Canada, the Royal Navy had shut down American commerce, bottled up the U.S. Navy in port and heavily suppressed privateering. British businesses, some affected by rising insurance costs, were demanding peace so that trade could resume with the U.S. The peace was generally welcomed by the British, though there was disquiet at the rapid growth of the U.S. However, the two nations quickly resumed trade after the end of the war and, over time, a growing friendship.
Hickey argues that for Britain:
the most important lesson of all (was) that the best way to defend Canada was to accommodate the United States. This was the principal rationale for Britain 's long - term policy of rapprochement with the United States in the nineteenth century and explains why they were so often willing to sacrifice other imperial interests to keep the republic happy.
|
who is scotty p in we're the millers | Mark L. Young - Wikipedia
Mark L. Young (born Markell V. Efimoff; January 1, 1991) is an American actor. He attended LaSalle University in the United States.
Young began acting at the age of 9 and moved to Los Angeles when he was 12 to pursue his career. His first significant on - screen credit was a small role in two episodes of the HBO series Six Feet Under.
Young 's other notable appearances include television shows The OC, Dexter, Big Love, Childrens Hospital, Heroes, Secret Life of the American Teenager, Cold Case, ER, CSI: Crime Scene Investigation and The Inbetweeners, while his film credits include Sex Drive, Happiness Runs, The Lucky Ones, and We 're the Millers.
|
who played mad eye moody on harry potter | Brendan Gleeson - wikipedia
Brendan Gleeson (born 29 March 1955) is an Irish actor and film director. He is the recipient of three IFTA Awards, two BIFA Awards, one Emmy Award and has been nominated twice for a BAFTA Award and three times for a Golden Globe Award.
His best - known performances include supporting roles in films such as Braveheart (1995), Lake Placid (1999), Mission: Impossible 2 (2000), Gangs of New York (2002), 28 Days Later (2002), Troy (2004), as Alastor Moody in the Harry Potter films (2005 -- 10), Albert Nobbs (2011), Edge of Tomorrow (2014), and Assassin 's Creed (2016), and leading roles in films such as In Bruges (2008), The Guard (2011), Calvary (2014), and Live by Night (2016). He won an Emmy Award in 2009 for his portrayal of Winston Churchill in the television film Into the Storm.
He is also the father of actors Domhnall Gleeson and Brian Gleeson.
Gleeson was born in Dublin, the son of Pat and Frank Gleeson. Gleeson has described himself as having been an avid reader as a child. He received his second - level education at St Joseph 's CBS in Fairview, Dublin, where he was a member of the school drama group. After training as an actor, he worked for several years as a secondary school teacher of Irish and English at the now defunct Catholic Belcamp College in North County Dublin, which closed in 2004. He worked simultaneously as an actor while teaching, doing semi-professional and professional productions in Dublin and surrounding areas. He left the teaching profession to commit full - time to acting in 1991.
In an NPR interview to promote Calvary, he revealed that he was abused by a Christian Brother, saying, "I remember a particular Christian Brother dropped the hand on me at one point. It was n't very traumatic and it was n't at all sustained, it was just one of these things where something odd happened. ''
As a member of the Dublin - based Passion Machine, Gleeson appeared in several of the theatre company 's early and highly successful plays such as Wasters (1985), Brownbread (1987) and Home (1988). He has also written three plays for Passion Machine: The Birdtable (1987) and Breaking Up (1988), both of which he directed, and Babies and Bathwater (1994) in which he acted. Among his other Dublin theatre work are Patrick Süskind 's one - man play The Double Bass and John B. Keane 's The Year of the Hiker.
Gleeson started his film career at the age of 34. He first came to prominence in Ireland for his role as Michael Collins in The Treaty, a television film broadcast on RTÉ One, and for which he won a Jacob 's Award in 1992. He has acted in such films as Braveheart, I Went Down, Michael Collins, Gangs of New York, Cold Mountain, 28 Days Later, Troy, Kingdom of Heaven, Lake Placid, A.I. Artificial Intelligence, Mission: Impossible 2, and The Village. He won critical acclaim for his performance as Irish gangster Martin Cahill in John Boorman 's 1998 film The General.
In 2003, Gleeson was the voice of Hugh the Miller in an episode of the Channel 4 animated series Wilde Stories.
While Gleeson portrayed Irish statesman Michael Collins in The Treaty, he later portrayed Collins ' close collaborator Liam Tobin in the film Michael Collins with Liam Neeson taking the role of Collins. Gleeson later went on to portray Winston Churchill in Into the Storm. Gleeson won an Emmy Award for his performance. Gleeson played Hogwarts professor Mad - Eye Moody in the fourth, fifth and seventh Harry Potter films. His son Domhnall played Bill Weasley in the seventh and eighth films.
Gleeson provided the voice of Abbot Cellach in The Secret of Kells, an animated film co-directed by Tomm Moore and Nora Twomey of Cartoon Saloon which premiered in February 2009 at the Jameson Dublin International Film Festival.
Gleeson starred in the short film Six Shooter, which won the Academy Award for Best Live Action Short Film in 2006. This film was written and directed by Martin McDonagh, who also wrote and directed In Bruges in 2008. The film, and Gleeson 's performance, enjoyed huge critical acclaim, earning Gleeson several award nominations, including his first Golden Globe nomination. In the movie, Gleeson plays a mentor - like figure for Colin Farrell 's hitman. In his review of In Bruges, Roger Ebert described the elder Gleeson as having a "noble shambles of a face and the heft of a boxer gone to seed. ''
Gleeson will be making his directorial debut in a film adaptation of Flann O'Brien 's novel At Swim - Two - Birds. The Irish production company Parallel Pictures will produce the film with a budget of $11 million. Colin Farrell, Gabriel Byrne, and Cillian Murphy have been attached to star in the film, which was originally set for release in 2010. In October 2009, however, Gleeson expressed concern that the Irish Film Board 's budget might be reduced given the state of the Irish economy and that At Swim - Two - Birds might fall through. Gleeson confirmed in July 2011, that he has secured funding for the project. He described the writing of the script as tortuous, saying that it has taken fourteen drafts so far.
In July 2012, he started filming The Grand Seduction, with Taylor Kitsch, a remake of Jean - François Pouliot 's French - Canadian La Grande Séduction (2003) directed by Don McKellar; the film was released in 2013. In 2016, he appeared in the video game adaptation Assassin 's Creed and Ben Affleck 's crime drama Live by Night. In 2017 he finished Psychic, a short he directed and starred in.
Gleeson is a fiddle and mandolin player, with an interest in Irish folklore. He played the fiddle during his role in Cold Mountain, Michael Collins and also The Grand Seduction, and also features on Altan 's 2009 live album.
He has been married to Mary (née Weldon) since 1982. He has four sons; Domhnall, Brían, Fergus, and Rúairí. Domhnall and Brían are also actors. Gleeson speaks fluent Irish and is an advocate of the promotion of the Irish language. Gleeson is a fan of Football League Championship team Aston Villa, as is his son Domhnall.
|
obligate aerobes are poisoned by the presence of co2 | Obligate anaerobe - wikipedia
Obligate anaerobes are microorganisms killed by normal atmospheric concentrations of oxygen (20.95 % O2). Oxygen tolerance varies between species: some are capable of surviving in up to 8 % oxygen, while others lose viability unless the oxygen concentration is less than 0.5 %. An important distinction needs to be made here between the obligate anaerobes and the microaerophiles. Microaerophiles, like the obligate anaerobes, are damaged by normal atmospheric concentrations of oxygen. However, microaerophiles metabolise energy aerobically, and obligate anaerobes metabolise energy anaerobically. Microaerophiles therefore require oxygen (typically 2 -- 10 % O2) for growth. Obligate anaerobes do not.
The oxygen sensitivity of obligate anaerobes has been attributed to a combination of factors, including oxidative damage to cellular components by reactive oxygen species (such as superoxide and hydrogen peroxide) formed in the presence of oxygen, and the low activity or complete absence of the enzymes, such as superoxide dismutase and catalase, that detoxify these species in aerobic organisms.
Obligate anaerobes metabolise energy by anaerobic respiration or fermentation. In aerobic respiration, the pyruvate generated from glycolysis is converted to acetyl - CoA. This is then broken down via the TCA cycle and electron transport chain. Anaerobic respiration differs from aerobic respiration in that it uses an electron acceptor other than oxygen in the electron transport chain. Examples of alternative electron acceptors include sulfate, nitrate, iron, manganese, mercury, and carbon monoxide.
Fermentation differs from anaerobic respiration in that the pyruvate generated from glycolysis is broken down without the involvement of an electron transport chain (i.e. there is no oxidative phosphorylation). Numerous fermentation pathways exist, e.g. lactic acid fermentation, mixed acid fermentation, and 2,3 - butanediol fermentation.
The energy yield of anaerobic respiration and fermentation (i.e. the number of ATP molecules generated) is less than in aerobic respiration. This is why facultative anaerobes, which can metabolise energy both aerobically and anaerobically, preferentially metabolise energy aerobically. This is observable when facultative anaerobes are cultured in thioglycollate broth.
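As a rough illustration of that yield gap (standard textbook figures rather than numbers from this article; exact ATP counts vary by organism and by accounting method), the net ATP produced per molecule of glucose compares approximately as follows:

$$\mathrm{C_6H_{12}O_6} \xrightarrow{\text{lactic acid fermentation}} 2\ \text{lactate} \qquad (\approx 2\ \mathrm{ATP})$$

$$\mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2} \xrightarrow{\text{aerobic respiration}} 6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} \qquad (\approx 30\text{--}38\ \mathrm{ATP})$$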
Examples of obligately anaerobic bacterial genera include Actinomyces, Bacteroides, Clostridium, Fusobacterium, Peptostreptococcus, Porphyromonas, Prevotella, Propionibacterium, and Veillonella. Clostridium species are endospore - forming bacteria, and can survive in atmospheric concentrations of oxygen in this dormant form. The remaining bacteria listed do not form endospores.
Examples of obligately anaerobic fungal genera include the rumen fungi Neocallimastix, Piromonas, and Sphaeromonas.
|
where do the largest number of immigrants come from | Immigration to the United States - Wikipedia
Immigration to the United States is the international movement of individuals who are not natives or do not possess citizenship in order to settle, reside, study or take up employment in the United States. It has been a major source of population growth and cultural change throughout much of the history of the United States.
The United States has a larger immigrant population than any other country, with 47 million immigrants as of 2015. This represents 19.1 % of the 244 million international migrants worldwide, and 14.4 % of the U.S. population.
The economic, social, and political aspects of immigration have caused controversy regarding ethnicity, economic benefits, jobs for non-immigrants, settlement patterns, impact on upward social mobility, crime, and voting behavior.
Prior to 1965, policies such as the national origins formula limited immigration and naturalization opportunities for people from areas outside Western Europe. Exclusion laws enacted as early as the 1880s generally prohibited or severely restricted immigration from Asia, and quota laws enacted in the 1920s curtailed Eastern European immigration. The Civil Rights Movement led to the replacement of these ethnic quotas with per - country limits. Since then, the number of first - generation immigrants living in the United States has quadrupled.
Research suggests that immigration to the United States is beneficial to the US economy. With few exceptions, the evidence suggests that immigration on average has positive economic effects on the native population, but is mixed as to whether low - skilled immigration adversely affects low - skilled natives. Studies also indicate that immigration either has no impact on the crime rate or that it reduces the crime rate in the United States. Research shows that the United States excels at assimilating first - and second - generation immigrants relative to many other Western countries.
American immigration history can be viewed in four epochs: the colonial period, the mid-19th century, the start of the 20th century, and post-1965. Each period brought distinct national groups, races and ethnicities to the United States. During the 17th century, approximately 400,000 English people migrated to Colonial America. Over half of all European immigrants to Colonial America during the 17th and 18th centuries arrived as indentured servants. The mid-19th century saw mainly an influx from northern Europe; the early 20th century mainly from Southern and Eastern Europe; post-1965 mostly from Latin America and Asia.
Historians estimate that fewer than 1 million immigrants came to the United States from Europe between 1600 and 1799. The 1790 Act limited naturalization to "free white persons ''; it was expanded to include blacks in the 1860s and Asians in the 1950s. In the early years of the United States, immigration was fewer than 8,000 people a year, including French refugees from the slave revolt in Haiti. After 1820, immigration gradually increased. From 1836 to 1914, over 30 million Europeans migrated to the United States. The death rate on these transatlantic voyages was high: one in seven travelers died. In 1875, the nation passed its first immigration law, the Page Act of 1875.
After an initial wave of immigration from China following the California Gold Rush, Congress passed a series of laws culminating in the Chinese Exclusion Act of 1882, banning virtually all immigration from China until the law 's repeal in 1943. In the late 1800s, immigration from other Asian countries, especially to the West Coast, became more common.
The peak year of European immigration was in 1907, when 1,285,349 persons entered the country. By 1910, 13.5 million immigrants were living in the United States. In 1921, the Congress passed the Emergency Quota Act, followed by the Immigration Act of 1924. The 1924 Act was aimed at further restricting immigrants from Southern and Eastern Europe, particularly Jews, Italians, and Slavs, who had begun to enter the country in large numbers beginning in the 1890s, and consolidated the prohibition of Asian immigration.
Immigration patterns of the 1930s were dominated by the Great Depression. In the final prosperous year, 1929, there were 279,678 immigrants recorded, but in 1933, only 23,068 came to the U.S. In the early 1930s, more people emigrated from the United States than to it. The U.S. government sponsored a Mexican Repatriation program which was intended to encourage people to voluntarily move to Mexico, but thousands were deported against their will. Altogether about 400,000 Mexicans were repatriated. Most of the Jewish refugees fleeing the Nazis and World War II were barred from coming to the United States. In the post-war era, the Justice Department launched Operation Wetback, under which 1,075,168 Mexicans were deported in 1954.
First, our cities will not be flooded with a million immigrants annually. Under the proposed bill, the present level of immigration remains substantially the same... Secondly, the ethnic mix of this country will not be upset... Contrary to the charges in some quarters, (the bill) will not inundate America with immigrants from any one country or area, or the most populated and deprived nations of Africa and Asia... In the final analysis, the ethnic pattern of immigration under the proposed measure is not expected to change as sharply as the critics seem to think.
The Immigration and Nationality Act of 1965, also known as the Hart - Cellar Act, abolished the system of national - origin quotas. By equalizing immigration policies, the act resulted in new immigration from non-European nations, which changed the ethnic make - up of the United States. In 1970, 60 % of immigrants were from Europe; this decreased to 15 % by 2000. In 1990, George H.W. Bush signed the Immigration Act of 1990, which increased legal immigration to the United States by 40 %. In 1991, Bush signed the Armed Forces Immigration Adjustment Act 1991, allowing foreign service members who had served 12 or more years in the US Armed Forces to qualify for permanent residency and, in some cases, citizenship.
In November 1994, California voters passed Proposition 187 amending the state constitution, denying state financial aid to illegal immigrants. The federal courts voided this change, ruling that it violated the federal constitution.
Appointed by Bill Clinton, the U.S. Commission on Immigration Reform recommended reducing legal immigration from about 800,000 people per year to approximately 550,000. While an influx of new residents from different cultures presents some challenges, "the United States has always been energized by its immigrant populations, '' said President Bill Clinton in 1998. "America has constantly drawn strength and spirit from wave after wave of immigrants (...) They have proved to be the most restless, the most adventurous, the most innovative, the most industrious of people. ''
In 2001, President George W. Bush discussed an accord with Mexican President Vicente Fox. The possible accord was derailed by the September 11 attacks. From 2005 to 2013, the US Congress discussed various ways of controlling immigration, but the Senate and House were unable to reach an agreement. In 2012 and 2014, President Obama initiated policies that were intended to ease the pressure on deporting people who use anchor babies as a means of immigrating to the United States.
Nearly 14 million immigrants entered the United States from 2000 to 2010, and over one million persons were naturalized as U.S. citizens in 2008. The per - country limit applies the same maximum on the number of visas to all countries regardless of their population and has therefore had the effect of significantly restricting immigration of persons born in populous nations such as Mexico, China, India, and the Philippines -- the leading countries of origin for legally admitted immigrants to the United States in 2013; nevertheless, China, India, and Mexico were the leading countries of origin for immigrants overall to the United States in 2013, regardless of legal status, according to a U.S. Census Bureau study. As of 2009, 66 % of legal immigrants were admitted on the basis of family ties, along with 13 % admitted for their employment skills and 17 % for humanitarian reasons.
Nearly 8 million people immigrated to the United States from 2000 to 2005; 3.7 million of them entered without papers. In 1986, President Ronald Reagan signed immigration reform that gave amnesty to 3 million undocumented immigrants in the country. Hispanic immigrants suffered job losses during the late - 2000s recession, but since the recession 's end in June 2009, immigrants posted a net gain of 656,000 jobs. Over 1 million immigrants were granted legal residence in 2011.
For those who enter the US illegally across the Mexico -- United States border and elsewhere, migration is difficult, expensive and dangerous. Virtually all undocumented immigrants have no avenues for legal entry to the United States due to the restrictive legal limits on green cards, and lack of immigrant visas for low - skilled workers. Participants in debates on immigration in the early twenty - first century called for increasing enforcement of existing laws governing illegal immigration to the United States, building a barrier along some or all of the 2,000 - mile (3,200 km) Mexico - U.S. border, or creating a new guest worker program. Through much of 2006 the country and Congress were immersed in a debate about these proposals. As of April 2010 few of these proposals had become law, though a partial border fence had been approved and subsequently canceled.
In January 2017, U.S. President Donald Trump signed an executive order temporarily suspending entry to the US from Yemen, Sudan, Somalia, Iraq, Iran, and Libya, and a suspension of entry from Syria for an indefinite period. The order also limited the number of refugees permitted to enter the United States in 2017 to 50,000 and suspended the United States Refugee Admissions Program for 120 days to allow authorities to review the application and adjudication processes. The order was replaced with a new executive order in March 2017 with various changes including removing Iraq from the list of countries with suspended immigration, clarifying that legal immigrants are exempt from the ban, and reducing the ban on Syrian immigrants to a 120 - day suspension. Another executive order called for the immediate construction of a wall across the U.S. -- Mexico border, the hiring of 5,000 new border patrol agents and 10,000 new immigration officers, and federal funding penalties for Sanctuary Cities.
On February 3, 2017, a federal judge in Washington State ordered a nationwide halt to the enforcement of Trump 's executive action, and on February 4, 2017, the U.S. Department of Homeland Security suspended the rules that flagged travelers under the executive order.
Source: US Department of Homeland Security, Persons Obtaining Lawful Permanent Resident Status: Fiscal Years 1820 to 2015
Until the 1930s most legal immigrants were male. By the 1990s women accounted for just over half of all legal immigrants. Contemporary immigrants tend to be younger than the native population of the United States, with people between the ages of 15 and 34 substantially overrepresented. Immigrants are also more likely to be married and less likely to be divorced than native - born Americans of the same age.
Immigrants are likely to move to and live in areas populated by people with similar backgrounds. This phenomenon has held true throughout the history of immigration to the United States. Seven out of ten immigrants surveyed by Public Agenda in 2009 said they intended to make the U.S. their permanent home, and 71 % said if they could do it over again they would still come to the US. In the same study, 76 % of immigrants say the government has become stricter on enforcing immigration laws since the September 11, 2001 attacks ("9 / 11 ''), and 24 % report that they personally have experienced some or a great deal of discrimination.
Public attitudes about immigration in the U.S. were heavily influenced in the aftermath of the 9 / 11 attacks. After the attacks, 52 % of Americans believed that immigration was a good thing overall for the U.S., down from 62 % the year before, according to a 2009 Gallup poll. A 2008 Public Agenda survey found that half of Americans said tighter controls on immigration would do "a great deal '' to enhance U.S. national security. Harvard political scientist and historian Samuel P. Huntington argued in Who Are We? The Challenges to America 's National Identity that a potential future consequence of continuing massive immigration from Latin America, especially Mexico, could lead to the bifurcation of the United States.
The population of illegal Mexican immigrants in the US fell from approximately 7 million in 2007 to 6.1 million in 2011. Commentators link the reversal of the immigration trend to the economic downturn that started in 2008, which meant fewer available jobs, and to the introduction of tough immigration laws in many states. According to the Pew Hispanic Center, the net immigration of Mexican - born persons had stagnated in 2010 and tended toward negative figures.
More than 80 cities in the United States, including Washington D.C., New York City, Los Angeles, Chicago, San Francisco, San Diego, San Jose, Salt Lake City, Phoenix, Dallas, Fort Worth, Houston, Detroit, Jersey City, Minneapolis, Miami, Denver, Baltimore, Seattle, Portland, Oregon and Portland, Maine, have sanctuary policies, which vary locally.
Inflow of New Legal Permanent Residents by continent in 2015:
Source: US Department of Homeland Security, Office of Immigration Statistics
Top 10 sending countries in 2014 and 2015
The United States admitted more legal immigrants from 1991 to 2000, between ten and eleven million, than in any previous decade. In the most recent decade, the ten million legal immigrants that settled in the U.S. represent an annual growth of only about 0.3 % as the U.S. population grew from 249 million to 281 million. By comparison, the highest previous decade was the 1900s, when 8.8 million people arrived, increasing the total U.S. population by one percent every year. Specifically, "nearly 15 % of Americans were foreign - born in 1910, while in 1999, only about 10 % were foreign - born. ''
By 1970, immigrants accounted for 4.7 percent of the US population, rising to 6.2 percent in 1980 and an estimated 12.5 percent in 2009. As of 2010, 25 % of US residents under age 18 were first - or second - generation immigrants. Eight percent of all babies born in the U.S. in 2008 belonged to illegal immigrant parents, according to a recent analysis of U.S. Census Bureau data by the Pew Hispanic Center.
Legal immigration to the U.S. increased from 250,000 in the 1930s, to 2.5 million in the 1950s, to 4.5 million in the 1970s, and to 7.3 million in the 1980s, before resting at about 10 million in the 1990s. Since 2000, legal immigrants to the United States have numbered approximately 1,000,000 per year, of whom about 600,000 are Change of Status applicants who are already in the U.S. Legal immigrants to the United States are now at their highest level ever, at just over 37,000,000. Illegal immigration may be as high as 1,500,000 per year, with a net of at least 700,000 illegal immigrants arriving every year. Immigration led to a 57.4 % increase in the foreign - born population from 1990 to 2000.
While immigration has increased drastically over the last century, the foreign - born share of the population is, at 13.4 %, only somewhat below its peak of 14.7 % in 1910. A number of factors contributed to the earlier decline in the share of foreign - born residents in the United States. Most significant was the change in the composition of immigrants; prior to 1890, 82 % of immigrants came from North and Western Europe. From 1891 to 1920, that number dropped to 25 %, with a rise in immigrants from East, Central, and South Europe, who together made up 64 %. Animosity towards these different and foreign immigrants rose in the United States, resulting in much legislation to limit immigration.
Contemporary immigrants settle predominantly in seven states (California, New York, Florida, Texas, Pennsylvania, New Jersey and Illinois), which together comprise about 44 % of the U.S. population as a whole. The combined immigrant population of these seven states was 70 % of the total foreign - born population in 2000. If current birth and immigration rates were to remain unchanged for another 70 to 80 years, the U.S. population would double to nearly 600 million.
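As a rough check of that doubling claim (a back - of - the - envelope calculation for illustration, not a figure from the cited sources), a population that doubles in about 75 years under constant exponential growth implies an annual growth rate of

\[ r = \frac{\ln 2}{T} \approx \frac{0.693}{75} \approx 0.009, \]

that is, just under 1 % per year; at that rate a population of roughly 300 million would indeed pass 600 million after about 75 years.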
In 1900, when the U.S. population was 76 million, there were an estimated 500,000 Hispanics. The Census Bureau projects that by 2050, one - quarter of the population will be of Hispanic descent. This demographic shift is largely fueled by immigration from Latin America.
A country is included in the table if it exceeded 50,000 in either category.
Note: Counts of immigrants since 1986 for Russia includes "Soviet Union (former) '', and for Czech Republic includes "Czechoslovakia (former) ''.
The Census Bureau estimates the US population will grow from 317 million in 2014 to 417 million in 2060 with immigration, when nearly 20 % will be foreign born. A 2015 report from the Pew Research Center projects that by 2065, non-Hispanic whites will account for 46 % of the population, down from the 2005 figure of 67 %. Non-Hispanic whites made up 85 % of the population in 1960. It also foresees the Hispanic population rising from 17 % in 2014 to 29 % by 2060. The Asian population is expected to nearly double by 2060. Overall, the Pew report predicts the population of the United States will rise from 296 million in 2005 to 441 million in 2065, but only to 338 million with no immigration.
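To put the two Pew scenarios in perspective (a comparison derived here from the figures above, not one stated in the report), the share of the projected 2005 - 2065 growth attributable to future immigrants and their descendants works out to

\[ \frac{441 - 338}{441 - 296} = \frac{103}{145} \approx 0.71, \]

or roughly 71 % of the projected growth, under the report 's stated assumptions.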
In 35 of the country 's 50 largest cities, non-Hispanic whites were in the minority at the last census or are predicted to become so. In California, non-Hispanic whites slipped from 80 % of the state 's population in 1970 to 42 % in 2001 and 39 % in 2013.
Immigrant segregation declined in the first half of the 20th century, but has been rising over the past few decades. This has led some to question whether the United States is accurately described as a melting pot. One explanation is that groups with lower socioeconomic status concentrate in more densely populated areas that have access to public transit, while groups with higher socioeconomic status move to suburban areas. Another is that some recent immigrant groups are more culturally and linguistically different from earlier groups and prefer to live together due to factors such as communication costs. Another explanation for increased segregation is white flight.
Source: 1990, 2000 and 2010 decennial Census and 2015 American Community Survey
A survey of leading economists shows a consensus behind the view that high - skilled immigration makes the average American better off. A survey of the same economists also shows strong support for the notion that low - skilled immigration makes the average American better off. According to David Card, Christian Dustmann, and Ian Preston, "most existing studies of the economic impacts of immigration suggest these impacts are small, and on average benefit the native population ''. In a survey of the existing literature, Örn B. Bodvarsson and Hendrik Van den Berg write, "a comparison of the evidence from all the studies... makes it clear that, with very few exceptions, there is no strong statistical support for the view held by many members of the public, namely that immigration has an adverse effect on native - born workers in the destination country. ''
Whereas the impact on the average native tends to be small and positive, studies show more mixed results for low - skilled natives; whether positive or negative, however, those effects also tend to be small.
Immigrants often do types of work that natives are largely unwilling to do, contributing to greater economic prosperity for the economy as a whole: for instance, the entry of Mexican migrant workers into manual farm work in the United States has close to zero effect on native employment in that occupation, which means that the effect of Mexican workers on U.S. employment outside farm work was most likely positive, since they raised overall economic productivity. Research indicates that immigrants are more likely to work in risky jobs than U.S. - born workers, partly due to differences in average characteristics, such as immigrants ' lower English language ability and educational attainment. Further, some studies indicate that higher ethnic concentration in metropolitan areas is positively related to the probability that immigrants will be self - employed.
Research also suggests that diversity has a net positive effect on productivity and economic prosperity. A study by Harvard economist Nathan Nunn, Yale economist Nancy Qian and LSE economist Sandra Sequeira found that the Age of Mass Migration (1850 -- 1920) has had substantially beneficial long - term effects on U.S. economic prosperity: "locations with more historical immigration today have higher incomes, less poverty, less unemployment, higher rates of urbanization, and greater educational attainment. The long - run effects appear to arise from the persistence of sizeable short - run benefits, including earlier and more intensive industrialization, increased agricultural productivity, and more innovation. '' The authors also find "no evidence that these long - run benefits come at short - run costs. In fact, immigration immediately led to economic benefits that took the form of higher incomes, higher productivity, more innovation, and more industrialization. ''
Research also finds that migration leads to greater trade in goods and services. Using 130 years of data on historical migrations to the United States, one study finds "that a doubling of the number of residents with ancestry from a given foreign country relative to the mean increases by 4.2 percentage points the probability that at least one local firm invests in that country, and increases by 31 % the number of employees at domestic recipients of FDI from that country. The size of these effects increases with the ethnic diversity of the local population, the geographic distance to the origin country, and the ethno - linguistic fractionalization of the origin country. ''
A 2011 literature review of the economic impacts of immigration found that the net fiscal impact of migrants varies across studies but that the most credible analyses typically find small and positive fiscal effects on average. According to the authors, "the net social impact of an immigrant over his or her lifetime depends substantially and in predictable ways on the immigrant 's age at arrival, education, reason for migration, and similar ''.
A 2016 report by the National Academies of Sciences, Engineering, and Medicine concluded that over a 75 - year time horizon, "the fiscal impacts of immigrants are generally positive at the federal level and generally negative at the state and local level. '' The reason for the costs to state and local governments is that the cost of educating the immigrants ' children falls on state and local governments. According to a 2007 literature review by the Congressional Budget Office, "Over the past two decades, most efforts to estimate the fiscal impact of immigration in the United States have concluded that, in aggregate and over the long term, tax revenues of all types generated by immigrants -- both legal and unauthorized -- exceed the cost of the services they use. ''
According to James Smith, a senior economist at Santa Monica - based RAND Corporation and lead author of the United States National Research Council 's study "The New Americans: Economic, Demographic, and Fiscal Effects of Immigration '', immigrants contribute as much as $10 billion to the U.S. economy each year. The NRC report found that although immigrants, especially those from Latin America, caused a net loss in terms of taxes paid versus social services received, immigration can provide an overall gain to the domestic economy due to an increase in pay for higher - skilled workers, lower prices for goods and services produced by immigrant labor, and more efficiency and lower wages for some owners of capital. The report also notes that although immigrant workers compete with domestic workers for low - skilled jobs, some immigrants specialize in activities that otherwise would not exist in an area, and thus can be beneficial for all domestic residents.
Immigration and foreign labor documentation fees increased over 80 % in 2007. With over 90 % of USCIS funding derived from immigration application fees, these fees support many USCIS jobs involved in processing immigration to the US, such as immigration interview officers and fingerprint processors, along with related roles at the Department of Homeland Security.
Overall immigration has not had much effect on native wage inequality but low - skill immigration has been linked to greater income inequality in the native population.
Research on the economic effects of undocumented immigrants is scant, but existing peer - reviewed studies suggest that the effects are positive for the native population and public coffers. A 2015 study shows that "increasing deportation rates and tightening border control weakens low - skilled labor markets, increasing unemployment of native low - skilled workers. Legalization, instead, decreases the unemployment rate of low - skilled natives and increases income per native. '' Studies show that legalization of undocumented immigrants would boost the U.S. economy; a 2013 study found that granting legal status to undocumented immigrants would raise their incomes by a quarter (increasing U.S. GDP by approximately $1.4 trillion over a ten - year period), and a 2016 study found that "legalization would increase the economic contribution of the unauthorized population by about 20 %, to 3.6 % of private - sector GDP. ''
A 2007 literature review by the Congressional Budget Office found that estimating the fiscal effects of undocumented immigrants has proven difficult: "currently available estimates have significant limitations; therefore, using them to determine an aggregate effect across all states would be difficult and prone to considerable error ''. The impact of undocumented immigrants differs at the federal level from the state and local levels, with research suggesting modest fiscal costs at the state and local levels but substantial fiscal gains at the federal level.
In 2009, a study by the Cato Institute, a free market think tank, found that legalization of low - skilled illegal resident workers in the US would result in a net increase in US GDP of $180 billion over ten years. The Cato Institute study did not examine the impact on per capita income for most Americans. Jason Riley notes that because of progressive income taxation, in which the top 1 % of earners pay 37 % of federal income taxes (even though they pay a lower percentage of their income in taxes), 60 % of Americans collect more in government services than they pay in taxes, which also holds for immigrants. In any event, the typical immigrant and his children will pay a net $80,000 more in their lifetimes than they collect in government services, according to the NAS. Legal immigration policy is set to maximize net taxation. Illegal immigrants, even after an amnesty, tend to receive more in services than they pay in taxes. In 2010, an econometrics study by a Rutgers economist found that immigration helped increase bilateral trade when the incoming people were connected via networks to their country of origin, particularly boosting trade of final goods as opposed to intermediate goods, but that the trade benefit weakened when the immigrants became assimilated into American culture.
According to NPR in 2005, about 3 % of illegal immigrants were working in agriculture. The H - 2A visa allows U.S. employers to bring foreign nationals to the United States to fill temporary agricultural jobs. The passing of tough immigration laws in several states from around 2009 provides a number of practical case studies. The state of Georgia passed immigration law HB 87 in 2011; this led, according to the coalition of top Kansas businesses, to 50 % of its agricultural produce being left to rot in the fields, at a cost to the state of more than $400 million. Overall losses caused by the act were $1 billion; it was estimated that the figure would become over $20 billion if all the estimated 325,000 undocumented workers left Georgia. The cost to Alabama of its crackdown in June 2011 has been estimated at almost $11 billion, with up to 80,000 unauthorized immigrant workers leaving the state.
Studies of refugees ' impact on native welfare are scant but the existing literature shows a positive fiscal impact and mixed results (negative, positive and no significant effects) on native welfare. A 2017 National Bureau of Economic Research paper found that refugees to the United States pay "$21,000 more in taxes than they receive in benefits over their first 20 years in the U.S. '' An internal study by the Department of Health and Human Services under the Trump administration, which was suppressed and not shown to the public, found that refugees to the United States brought in $63 billion more in government revenues than they cost the government. According to labor economist Giovanni Peri, the existing literature suggests that there are no economic reasons why the American labor market could not easily absorb 100,000 Syrian refugees in a year. Refugees integrate more slowly into host countries ' labor markets than labor migrants, in part due to the loss and depreciation of human capital and credentials during the asylum procedure.
According to one survey of the existing economic literature, "much of the existing research points towards positive net contributions by immigrant entrepreneurs. '' Areas where immigrants are more prevalent in the United States have substantially more innovation (as measured by patenting and citations). Immigrants to the United States create businesses at higher rates than natives. Mass migration can also boost innovation and growth, as shown by the examples of German Jewish émigrés to the US and the Mariel boatlift. Immigrants have been linked to greater invention and innovation in the US. According to one report, "immigrants have started more than half (44 of 87) of America 's startup companies valued at $1 billion or more and are key members of management or product development teams in over 70 percent (62 of 87) of these companies. '' Foreign doctoral students are a major source of innovation in the American economy. In the United States, immigrant workers hold a disproportionate share of jobs in science, technology, engineering, and math (STEM): "In 2013, foreign - born workers accounted for 19.2 percent of STEM workers with a bachelor 's degree, 40.7 percent of those with a master 's degree, and more than half -- 54.5 percent -- of those with a Ph. D. ''
The Kauffman Foundation 's index of entrepreneurial activity is nearly 40 % higher for immigrants than for natives. Immigrants were involved in the founding of many prominent American high - tech companies, such as Google, Yahoo, YouTube, Sun Microsystems, and eBay.
Irish immigration was opposed in the 1850s by the nativist Know Nothing movement, originating in New York in 1843. It was engendered by popular fears that the country was being overwhelmed by Irish Catholic immigrants. On March 14, 1891, a lynch mob stormed a local jail and lynched several Italians following the acquittal of several Sicilian immigrants alleged to be involved in the murder of New Orleans police chief David Hennessy. The Congress passed the Emergency Quota Act in 1921, followed by the Immigration Act of 1924. The Immigration Act of 1924 was aimed at limiting immigration overall, and making sure that the nationalities of new arrivals matched the overall national profile.
A 2014 meta - analysis of racial discrimination in product markets found extensive evidence of minority applicants being quoted higher prices for products. A 1995 study found that car dealers "quoted significantly lower prices to white males than to black or female test buyers using identical, scripted bargaining strategies. '' A 2013 study found that eBay sellers of iPods received 21 percent more offers if a white hand held the iPod in the photo than a black hand.
Research suggests that police practices, such as racial profiling, over-policing in areas populated by minorities, and in - group bias may result in disproportionately high numbers of racial minorities among crime suspects. Research also suggests that there may be possible discrimination by the judicial system, which contributes to a higher number of convictions for racial minorities. A 2012 study found that "(i) juries formed from all - white jury pools convict black defendants significantly (16 percentage points) more often than white defendants, and (ii) this gap in conviction rates is entirely eliminated when the jury pool includes at least one black member. '' Research has found evidence of in - group bias, where "black (white) juveniles who are randomly assigned to black (white) judges are more likely to get incarcerated (as opposed to being placed on probation), and they receive longer sentences. '' In - group bias has also been observed when it comes to traffic citations, as black and white officers are more likely to cite out - groups.
A 2015 study using correspondence tests "found that when considering requests from prospective students seeking mentoring in the future, faculty were significantly more responsive to White males than to all other categories of students, collectively, particularly in higher - paying disciplines and private institutions. '' Through affirmative action, there is reason to believe that elite colleges favor minority applicants.
A 2014 meta - analysis found extensive evidence of racial discrimination in the American housing market. Minority applicants for housing needed to make many more enquiries to view properties. Geographical steering of African - Americans in US housing remained significant. A 2003 study finds "evidence that agents interpret an initial housing request as an indication of a customer 's preferences, but also are more likely to withhold a house from all customers when it is in an integrated suburban neighborhood (redlining). Moreover, agents ' marketing efforts increase with asking price for white, but not for black, customers; blacks are more likely than whites to see houses in suburban, integrated areas (steering); and the houses agents show are more likely to deviate from the initial request when the customer is black than when the customer is white. These three findings are consistent with the possibility that agents act upon the belief that some types of transactions are relatively unlikely for black customers (statistical discrimination). ''
A report by the federal Department of Housing and Urban Development, in which the department sent African - Americans and whites to look at apartments, found that African - Americans were shown fewer apartments for rent and houses for sale.
Several meta - analyses find extensive evidence of ethnic and racial discrimination in hiring in the American labor market. A 2016 meta - analysis of 738 correspondence tests -- tests where identical CVs for stereotypically black and white names were sent to employers -- in 43 separate studies conducted in OECD countries between 1990 and 2015 finds that there is extensive racial discrimination in hiring decisions in Europe and North America. These correspondence tests showed that equivalent minority candidates need to send around 50 % more applications to be invited for an interview than majority candidates. A study that examined the job applications of actual people provided with identical résumés and similar interview training showed that African - American applicants with no criminal record were offered jobs at a rate as low as white applicants who had criminal records.
Racist thinking among and between minority groups does occur; examples of this are conflicts between blacks and Korean immigrants, notably in the 1992 Los Angeles Riots, and between African Americans and non-white Latino immigrants. There has been long - running racial tension between African American and Mexican prison gangs, as well as significant riots in California prisons where they have targeted each other for ethnic reasons. There have been reports of racially motivated attacks against African Americans who have moved into neighborhoods occupied mostly by people of Mexican origin, and vice versa. There has also been an increase in violence between non-Hispanic Anglo Americans and Latino immigrants, and between African immigrants and African Americans.
Measuring assimilation can be difficult due to "ethnic attrition '', which refers to when descendants of migrants cease to self - identify with the nationality or ethnicity of their ancestors. This means that successful cases of assimilation will be underestimated. Research shows that ethnic attrition is sizable in Hispanic and Asian immigrant groups in the United States. By taking account of ethnic attrition, the assimilation rate of Hispanics in the United States improves significantly. A 2016 paper challenges the view that cultural differences are necessarily an obstacle to the long - run economic performance of migrants. It finds that "first generation migrants seem to be less likely to succeed the more culturally distant they are, but this effect vanishes as time spent in the USA increases. ''
Immigration from South Asia and elsewhere has contributed to broadening the religious composition of the United States. Islam in the United States is growing mainly due to immigration. Hinduism in the United States, Buddhism in the United States, and Sikhism in the United States are other examples.
Since 1992, an estimated 1.7 million Muslims, approximately 1 million Hindus, and approximately 1 million Buddhists have immigrated legally to the United States.
The American Federation of Labor (AFL), a coalition of labor unions formed in the 1880s, vigorously opposed unrestricted immigration from Europe for moral, cultural, and racial reasons. The issue unified the workers who feared that an influx of new workers would flood the labor market and lower wages. Nativism was not a factor because upwards of half the union members were themselves immigrants or the sons of immigrants from Ireland, Germany and Britain. However, nativism was a factor when the AFL even more strenuously opposed all immigration from Asia because it represented (to its Euro - American members) an alien culture that could not be assimilated into American society. The AFL intensified its opposition after 1906 and was instrumental in passing immigration restriction bills from the 1890s to the 1920s, such as the 1921 Emergency Quota Act and the Immigration Act of 1924, and seeing that they were strictly enforced.
Mink (1986) concludes that the link between the AFL and the Democratic Party rested in part on immigration issues, noting that the large corporations, which supported the Republicans, wanted more immigration to augment their labor force.
The United Farm Workers, during Cesar Chavez 's tenure, was committed to restricting immigration. Chavez and Dolores Huerta, cofounder and president of the UFW, fought the Bracero Program that existed from 1942 to 1964. Their opposition stemmed from their belief that the program undermined U.S. workers and exploited the migrant workers. Since the Bracero Program ensured a constant supply of cheap immigrant labor for growers, immigrants could not protest any infringement of their rights, lest they be fired and replaced. Their efforts contributed to Congress ending the Bracero Program in 1964. In 1973, the UFW was one of the first labor unions to oppose proposed employer sanctions that would have prohibited hiring illegal immigrants.
On a few occasions, concerns that illegal immigrant labor would undermine UFW strike campaigns led to a number of controversial events, which the UFW describes as anti-strikebreaking events, but which have also been interpreted as being anti-immigrant. In 1969, Chavez and members of the UFW marched through the Imperial and Coachella Valleys to the border of Mexico to protest growers ' use of illegal immigrants as strikebreakers. Joining him on the march were Reverend Ralph Abernathy and U.S. Senator Walter Mondale. In its early years, the UFW and Chavez went so far as to report illegal immigrants who served as strikebreaking replacement workers (as well as those who refused to unionize) to the Immigration and Naturalization Service.
In 1973, the United Farm Workers set up a "wet line '' along the United States - Mexico border to prevent Mexican immigrants from entering the United States illegally and potentially undermining the UFW 's unionization efforts. During one such event, in which Chavez was not involved, some UFW members, under the guidance of Chavez 's cousin Manuel, physically attacked the strikebreakers after peaceful attempts to persuade them not to cross the border failed.
A Boston Globe article attributed Barack Obama 's win in the 2008 U.S. Presidential election to a marked reduction over the preceding decades in the percentage of whites in the American electorate, attributing this demographic change to the Immigration Act of 1965. The article quoted Simon Rosenberg, president and founder of the New Democrat Network, as having said that the Act is "the most important piece of legislation that no one 's ever heard of, '' and that it "set America on a very different demographic course than the previous 300 years. ''
Immigrants differ in their political views; however, the Democratic Party is considered to be in a far stronger position among immigrants overall. Research shows that religious affiliation can significantly impact both the social values and voting patterns of immigrants, as well as those of the broader American population. Hispanic evangelicals, for example, are more strongly conservative than non-Hispanic evangelicals. This trend is often similar for Hispanics or others strongly identifying with the Catholic Church, a religion that strongly opposes abortion and gay marriage.
The key interest groups that lobby on immigration are religious, ethnic and business groups, together with some liberal and some conservative public policy organizations. Both the pro - and anti - immigration groups affect policy.
Studies have suggested that some special interest groups lobby for less immigration for their own group and more immigration for other groups, since they see the effects of immigration, such as increased labor competition, as detrimental when affecting their own group but beneficial when affecting other groups.
A 2007 paper found that both pro - and anti-immigration special interest groups play a role in migration policy. "Barriers to migration are lower in sectors in which business lobbies incur larger lobbying expenditures and higher in sectors where labor unions are more important. '' A 2011 study examining the voting of US representatives on migration policy suggests that "representatives from more skilled labor abundant districts are more likely to support an open immigration policy towards the unskilled, whereas the opposite is true for representatives from more unskilled labor abundant districts. ''
After the 2010 election, Gary Segura of Latino Decisions stated that Hispanic voters influenced the outcome and "may have saved the Senate for Democrats ''. Several ethnic lobbies support immigration reforms that would allow illegal immigrants who have succeeded in entering to gain citizenship. They may also lobby for special arrangements for their own group. The Chairman for the Irish Lobby for Immigration Reform has stated that "the Irish Lobby will push for any special arrangement it can get -- ' as will every other ethnic group in the country. ' '' The irredentist and ethnic separatist movements for Reconquista and Aztlán see immigration from Mexico as strengthening their cause.
The book Ethnic Lobbies and US Foreign Policy (2009) states that several ethnic special interest groups are involved in pro-immigration lobbying. Ethnic lobbies also influence foreign policy. The authors write that "Increasingly, ethnic tensions surface in electoral races, with House, Senate, and gubernatorial contests serving as proxy battlegrounds for antagonistic ethnoracial groups and communities. In addition, ethnic politics affect party politics as well, as groups compete for relative political power within a party ''. However, the authors argue that currently ethnic interest groups, in general, do not have too much power in foreign policy and can balance other special interest groups.
In a 2012 news story, Reuters reported, "Strong support from Hispanics, the fastest - growing demographic in the United States, helped tip President Barack Obama 's fortunes as he secured a second term in the White House, according to Election Day polling. ''
Lately, there is talk among several Republican leaders, such as governors Bobby Jindal and Susana Martinez, of taking a new, friendlier approach to immigration. Former US Secretary of Commerce Carlos Gutierrez is promoting the creation of Republicans for Immigration Reform.
Bernie Sanders opposes guest worker programs and is also skeptical about skilled immigrant (H - 1B) visas, saying, "Last year, the top 10 employers of H - 1B guest workers were all offshore outsourcing companies. These firms are responsible for shipping large numbers of American information technology jobs to India and other countries. '' In an interview with Vox he stated his opposition to an open borders immigration policy, describing it as:
... a right - wing proposal, which says essentially there is no United States... you 're doing away with the concept of a nation - state. What right - wing people in this country would love is an open - border policy. Bring in all kinds of people, work for $2 or $3 an hour, that would be great for them. I do n't believe in that. I think we have to raise wages in this country, I think we have to do everything we can to create millions of jobs.
The issue of the health of immigrants and the associated cost to the public has been widely discussed. On average, per capita health care spending is lower for immigrants than it is for native - born Americans. The non-emergency use of emergency rooms ostensibly indicates an incapacity to pay, yet some studies allege disproportionately lower access to unpaid health care by immigrants. For this and other reasons, there have been various disputes about how much immigration is costing the United States public health system. University of Maryland economist and Cato Institute scholar Julian Lincoln Simon concluded in 1995 that while immigrants probably pay more into the health system than they take out, this is not the case for elderly immigrants and refugees, who are more dependent on public services for survival.
Immigration from areas of high disease incidence is thought to have fueled the resurgence of tuberculosis (TB), Chagas disease, and hepatitis in areas of low incidence. According to the Centers for Disease Control and Prevention (CDC), TB cases among foreign - born individuals remain disproportionately high, at nearly nine times the rate of U.S. - born persons. To reduce the risk of diseases in low - incidence areas, the main countermeasure has been the screening of immigrants on arrival. HIV / AIDS entered the United States in around 1969, likely through a single infected immigrant from Haiti. Conversely, many new HIV infections in Mexico can be traced back to the United States. People infected with HIV were banned from entering the United States in 1987 by executive order, but the 1993 statute supporting the ban was lifted in 2009. The executive branch is expected to administratively remove HIV from the list of infectious diseases barring immigration, but immigrants generally would need to show that they would not be a burden on public welfare. Researchers have also found what is known as the "healthy immigrant effect '', in which immigrants in general tend to be healthier than individuals born in the U.S.
There is no empirical evidence that immigration increases crime in the United States. In fact, a majority of studies in the U.S. have found lower crime rates among immigrants than among non-immigrants, and that higher concentrations of immigrants are associated with lower crime rates.
Some research even suggests that increases in immigration may partly explain the reduction in the U.S. crime rate. A 2005 study showed that immigration to large U.S. metropolitan areas does not increase, and in some cases decreases, crime rates there. A 2009 study found that recent immigration was not associated with homicide in Austin, Texas. The low crime rates of immigrants to the United States despite having lower levels of education, lower levels of income and residing in urban areas (factors that should lead to higher crime rates) may be due to lower rates of antisocial behavior among immigrants. A 2015 study found that Mexican immigration to the United States was associated with an increase in aggravated assaults and a decrease in property crimes. A 2016 study finds no link between immigrant populations and violent crime, although there is a small but significant association between undocumented immigrants and drug - related crime.
Research finds that Secure Communities, an immigration enforcement program which had led to a quarter of a million detentions when the study was published in November 2014, had no observable impact on the crime rate. A 2015 study found that the 1986 Immigration Reform and Control Act, which legalized almost 3 million immigrants, led to "decreases in crime of 3 -- 5 percent, primarily due to decline in property crimes, equivalent to 120,000 - 180,000 fewer violent and property crimes committed each year due to legalization ''. According to one study, sanctuary cities -- which adopt policies designed to not prosecute people solely for being an illegal immigrant -- have no statistically meaningful effect on crime.
One of the first political analyses in the U.S. of the relationship between immigration and crime was performed in the beginning of the 20th century by the Dillingham Commission, which found a relationship especially for immigrants from non-Northern European countries, resulting in the sweeping 1920s immigration reduction acts, including the Emergency Quota Act of 1921, which favored immigration from northern and western Europe. Recent research is skeptical of the conclusion drawn by the Dillingham Commission. One study finds that "major government commissions on immigration and crime in the early twentieth century relied on evidence that suffered from aggregation bias and the absence of accurate population data, which led them to present partial and sometimes misleading views of the immigrant - native criminality comparison. With improved data and methods, we find that in 1904, prison commitment rates for more serious crimes were quite similar by nativity for all ages except ages 18 and 19, for which the commitment rate for immigrants was higher than for the native - born. By 1930, immigrants were less likely than natives to be committed to prisons at all ages 20 and older, but this advantage disappears when one looks at commitments for violent offenses. ''
For the early twentieth century, one study found that immigrants had "quite similar '' imprisonment rates to natives for major crimes in 1904, but lower rates in 1930 (except for violent offenses, where the rates were similar). The commissions of that era relied on dubious data and interpreted it in questionable ways.
Research suggests that police practices, such as racial profiling, over-policing in areas populated by minorities and in - group bias may result in disproportionately high numbers of immigrants among crime suspects. Research also suggests that there may be possible discrimination by the judicial system, which contributes to a higher number of convictions for immigrants.
Scientific laboratories and startup internet opportunities have been a powerful American magnet. By 2000, 23 % of scientists with a PhD in the U.S. were immigrants, including 40 % of those in engineering and computers. Roughly a third of graduate students in STEM fields at United States colleges and universities are foreign nationals; in some states it is well over half. On Ash Wednesday, March 5, 2014, the presidents of 28 Catholic and Jesuit colleges and universities joined the "Fast for Families '' movement. The "Fast for Families '' movement reignited the immigration debate in the fall of 2013 when the movement 's leaders, supported by many members of Congress and the President, fasted for twenty - two days on the National Mall in Washington, D.C.
A study of public schools in California found that white enrollment declined in response to increases in the number of Spanish - speaking Limited English Proficient and Hispanic students. This white flight was greater for schools with relatively larger proportions of Spanish - speaking Limited English Proficient students.
A North Carolina study found that the presence of Latin American children in schools had no significant negative effects on peers, but that students with limited English skills had slight negative effects on peers.
The ambivalent feeling of Americans toward immigrants is shown by a positive attitude toward groups that have been visible for a century or more, and a much more negative attitude toward recent arrivals. For example, a 1982 national poll by the Roper Center at the University of Connecticut showed respondents a card listing a number of groups and asked, "Thinking both of what they have contributed to this country and have gotten from this country, for each one tell me whether you think, on balance, they 've been a good or a bad thing for this country, '' which produced the results shown in the table. "By high margins, Americans are telling pollsters it was a very good thing that Poles, Italians, and Jews immigrated to America. Once again, it 's the newcomers who are viewed with suspicion. This time, it 's the Mexicans, the Filipinos, and the people from the Caribbean who make Americans nervous. ''
In a 2002 study, which took place soon after the September 11 attacks, 55 % of Americans favored decreasing legal immigration, 27 % favored keeping it at the same level, and 15 % favored increasing it.
In 2006, the immigration - reduction advocacy think tank the Center for Immigration Studies released a poll that found that 68 % of Americans think U.S. immigration levels are too high, and just 2 % said they are too low. They also found that 70 % said they are less likely to vote for candidates that favor increasing legal immigration. In 2004, 55 % of Americans believed legal immigration should remain at the current level or be increased, and 41 % said it should be decreased. The less contact a native - born American has with immigrants, the more likely he or she is to hold a negative view of immigrants.
One of the most important factors shaping public opinion about immigration is the level of unemployment; anti-immigrant sentiment is highest where unemployment is highest, and vice versa.
Surveys indicate that the U.S. public consistently makes a sharp distinction between legal and illegal immigrants, and generally views those perceived as "playing by the rules '' with more sympathy than immigrants that have entered the country illegally.
Laws concerning immigration and naturalization include:
The Antiterrorism and Effective Death Penalty Act (AEDPA) and the Illegal Immigration Reform and Immigrant Responsibility Act (IIRIRA) set out many categories of criminal activity for which immigrants, including green card holders, can be deported, and they imposed mandatory detention for certain types of cases.
In contrast to economic migrants, who generally do not gain legal admission, refugees, as defined by international law, can gain legal status through a process of seeking and receiving asylum, either by being designated a refugee while abroad, or by physically entering the United States and requesting asylum status thereafter. A specified number of legally defined refugees, who either apply for asylum overseas or after arriving in the U.S., are admitted annually. Refugees compose about one - tenth of the total annual immigration to the United States, though some large refugee populations are very prominent. In the year 2014, the number of asylum seekers accepted into the U.S. was about 120,000. This compared with about 31,000 in the UK and 13,500 in Canada. Japan accepted just 41 refugees for resettlement in 2007.
Since 1975, more than 1.3 million refugees from Asia have been resettled in the United States. Since 2000 the main refugee - sending regions have been Somalia, Liberia, Sudan, and Ethiopia. The ceiling for refugee resettlement for fiscal year 2008 was 80,000 refugees. The United States expected to admit a minimum of 17,000 Iraqi refugees during fiscal year 2009. The U.S. has resettled more than 42,000 Bhutanese refugees from Nepal since 2008.
In fiscal year 2008, the Office of Refugee Resettlement (ORR) appropriated over $655 million for long - term services provided to refugees after their arrival in the US. The Obama administration has kept to about the same level.
In removal proceedings in front of an immigration judge, cancellation of removal is a form of relief that is available to certain long - time residents of the United States. It allows a person faced with the threat of removal to obtain permanent residence if that person has been physically present in the U.S. for at least ten years, has had good moral character during that period, has not been convicted of certain crimes, and can show that removal would result in exceptional and extremely unusual hardship to his or her U.S. citizen or permanent resident spouse, children, or parent. This form of relief is only available once a person has been served with a Notice to Appear in immigration court proceedings.
Members of Congress may submit private bills granting residency to specific named individuals. A special committee vets the requests, which require extensive documentation. The Central Intelligence Agency has the statutory authority to admit up to one hundred people a year outside of normal immigration procedures, and to provide for their settlement and support. The program is called "PL110 '', named after the legislation that created the agency, Public Law 110, the Central Intelligence Agency Act.
The illegal immigrant population of the United States is estimated to be between 11 and 12 million. The population of unauthorized immigrants peaked in 2007 and has declined since that time. The majority of the U.S. unauthorized immigrants are from Mexico, but "their numbers (and share of the total) have been declining '' and as of 2016 Mexicans no longer make up a clear majority of unauthorized immigrants, as they did in the past. Unauthorized immigrants made up about 5 % of the total U.S. civilian labor force in 2014. By the 2010s, an increasing share of U.S. unauthorized immigrants were long - term residents; in 2015, 66 % of adult unauthorized residents had lived in the country for at least ten years, while only 14 % had lived in the U.S. for less than five years.
In June 2012, President Obama issued a memorandum instructing officers of the federal government to defer deporting young undocumented immigrants who were brought to the U.S. as children as part of the Deferred Action for Childhood Arrivals (DACA) program. Under the program, eligible recipients who applied and were granted DACA status were granted a two - year deferral from deportation and temporary eligibility to work legally in the country. Among other criteria, in order to be eligible a youth applicant must (1) be between age 15 and 31; (2) have come to the United States before the age of 16; (3) have lived in the U.S. continuously for at least five years; (4) be a current student, or have earned a high school diploma or equivalent, or have received an honorable discharge from the U.S. armed services; and (5) "have not been convicted of a felony, significant misdemeanor, or three or more misdemeanors, and do not otherwise pose a threat to public safety or national security. '' The Migration Policy Institute estimated that as of 2016, about 1.3 million unauthorized young adults ages 15 and older were "immediately eligible for DACA ''; of this eligible population, 63 % had applied as of March 2016.
In 2014, President Obama announced a set of executive actions, the Deferred Action for Parents of Americans and Lawful Permanent Residents (DAPA). Under this program, "unauthorized immigrants who are parents of U.S. citizens or lawful permanent residents (LPRs) would qualify for deferred action for three years if they meet certain other requirements. '' A February 2016 Migration Policy Institute / Urban Institute report found that about 3.6 million people were potentially eligible for DAPA and that "more than 10 million people live in households with at least one potentially DAPA - eligible adult, including some 4.3 million children under age 18 - an estimated 85 percent of whom are U.S. citizens. '' The report also found that "the potentially DAPA eligible are well settled with strong U.S. roots, with 69 percent having lived in the United States ten years or more, and 25 percent at least 20 years. ''
Although not without precedent under prior presidents, President Obama 's authority to create DAPA and expand DACA was challenged in the federal courts by Texas and 25 other states. In November 2015, the U.S. Court of Appeals for the Fifth Circuit, in a 2 - 1 decision in United States v. Texas, upheld a preliminary injunction blocking the programs from going forward. The case was heard by the U.S. Supreme Court, which in June 2016 deadlocked 4 - 4, thus affirming the ruling of the Fifth Circuit but setting no nationally binding precedent.
On November 15, 2013, the United States Citizenship and Immigration Services announced that it would be issuing a new policy memorandum called "parole in place. '' Parole in place would offer green cards to immigrant parents, spouses and children of active duty military personnel. Prior to this policy, relatives of military personnel -- excluding husbands and wives -- were forced to leave the United States and apply for green cards in their home countries. The policy allows family members to avoid the possible ten - year bar from the United States and remain in the United States while applying for lawful permanent residence. The parole status, given in one - year terms, is subject to the family member being "absent a criminal conviction or other serious adverse factors. ''
The history of immigration to the United States is the history of the country itself, and the journey from beyond the sea is an element found in American folklore, appearing over and over again in everything from The Godfather to Gangs of New York to "The Song of Myself '' to Neil Diamond 's "America '' to the animated feature An American Tail.
From the 1880s to the 1910s, vaudeville dominated the popular image of immigrants, with very popular caricature portrayals of ethnic groups. The specific features of these caricatures became widely accepted as accurate portrayals.
In The Melting Pot (1908), playwright Israel Zangwill (1864 -- 1926) explored issues that dominated Progressive Era debates about immigration policies. Zangwill 's theme of the positive benefits of the American melting pot resonated widely in popular culture and literary and academic circles in the 20th century; his cultural symbolism -- in which he situated immigration issues -- likewise informed American cultural imagining of immigrants for decades, as exemplified by Hollywood films. The popular culture 's image of ethnic celebrities often includes stereotypes about immigrant groups. For example, Frank Sinatra 's public image as a superstar contained important elements of the American Dream while simultaneously incorporating stereotypes about Italian Americans that were based in nativist and Progressive responses to immigration.
The process of assimilation has been a common theme of popular culture. For example, "lace - curtain Irish '' refers to middle - class Irish Americans desiring assimilation into mainstream society in counterpoint to the older, more raffish "shanty Irish ''. The occasional malapropisms and left - footed social blunders of these upward mobiles were gleefully lampooned in vaudeville, popular song, and the comic strips of the day such as Bringing Up Father, starring Maggie and Jiggs, which ran in daily newspapers for 87 years (1913 to 2000). In The Departed (2006), Staff Sergeant Dignam regularly points out the dichotomy between the lace curtain Irish lifestyle Billy Costigan enjoyed with his mother, and the shanty Irish lifestyle of Costigan 's father. In recent years the popular culture has paid special attention to Mexican immigration and the film Spanglish (2004) tells of a friendship of a Mexican housemaid (Paz Vega) and her boss played by Adam Sandler.
Novelists and writers have captured much of the color and challenge of immigrant lives through their writings.
Regarding Irish women in the 19th century, there were numerous novels and short stories by Harvey O'Higgins, Peter McCorry, Bernard O'Reilly and Sarah Orne Jewett that emphasized emancipation from Old World controls, new opportunities and the expansiveness of the immigrant experience.
On the other hand, Hladnik studies three popular novels of the late 19th century that warned Slovenes not to immigrate to the dangerous new world of the United States.
Jewish American writer Anzia Yezierska wrote her novel Bread Givers (1925) to explore such themes as Russian - Jewish immigration in the early 20th century, the tension between Old and New World Yiddish culture, and women 's experience of immigration. A well - established author, Yezierska focused on the Jewish struggle to escape the ghetto and enter middle - and upper - class America. In the novel, the heroine, Sara Smolinsky, escapes from New York City 's "down - town ghetto '' by breaking with tradition. She quits her job at the family store and soon becomes engaged to a rich real - estate magnate. She graduates from college and takes a high - prestige job teaching public school. Finally, Sara restores her broken links to family and religion.
The Swedish author Vilhelm Moberg in the mid-20th century wrote a series of four novels describing one Swedish family 's migration from Småland to Minnesota in the late 19th century, a destiny shared by almost one million people. The author emphasizes the authenticity of the experiences as depicted (although he did change names). These novels have been translated into English (The Emigrants, 1951, Unto a Good Land, 1954, The Settlers, 1961, The Last Letter Home, 1961). The musical Kristina från Duvemåla by ex-ABBA members Björn Ulvaeus and Benny Andersson is based on this story.
The Immigrant is a musical by Steven Alper, Sarah Knapp, and Mark Harelik. The show is based on the story of Harelik 's grandparents, Matleh and Haskell Harelik, who traveled to Galveston, Texas in 1909.
In their documentary How Democracy Works Now: Twelve Stories, filmmakers Shari Robertson and Michael Camerini examine the American political system through the lens of immigration reform from 2001 to 2007. Since the debut of the first five films, the series has become an important resource for advocates, policy - makers and educators.
That film series premiered nearly a decade after the filmmakers ' landmark documentary film Well - Founded Fear which provided a behind - the - scenes look at the process for seeking asylum in the United States. That film still marks the only time that a film - crew was privy to the private proceedings at the U.S. Immigration and Naturalization Service (INS), where individual asylum officers ponder the often life - or - death fate of immigrants seeking asylum.
University of North Carolina law professor Hiroshi Motomura has identified three approaches the United States has taken to the legal status of immigrants in his book Americans in Waiting: The Lost Story of Immigration and Citizenship in the United States. The first, dominant in the 19th century, treated immigrants as in transition; in other words, as prospective citizens. As soon as people declared their intention to become citizens, they received multiple low - cost benefits, including eligibility for free homesteads under the Homestead Act of 1862 and, in many states, the right to vote. The goal was to make the country more attractive, so that large numbers of farmers and skilled craftsmen would settle new lands. By the 1880s, a second approach took over, treating newcomers as "immigrants by contract ''. An implicit deal existed whereby immigrants who were literate and could earn their own living were permitted in restricted numbers. Once in the United States, they had limited legal rights: they were not allowed to vote until they became citizens, and they were not eligible for the New Deal government benefits available in the 1930s. The third and most recent policy is "immigration by affiliation '', under which, Motomura argues, treatment depends on how deeply rooted people have become in the country. An immigrant who applies for citizenship as soon as permitted, has a long history of working in the United States, and has significant family ties is more deeply affiliated and can expect better treatment.
It has been suggested that the US should adopt policies similar to those of Canada and Australia and select for desired qualities such as education and work experience. Another suggestion is to limit family - based legal immigration to nuclear family members, since immigration of extended relatives, who in turn bring in their own extended relatives, may cause a perpetual cycle of "chain immigration ''.
The American Dream is the belief that through hard work and determination, any United States immigrant can achieve a better life, usually in terms of financial prosperity and enhanced personal freedom of choice. According to historians, the rapid economic and industrial expansion of the U.S. is not simply a function of being a resource rich, hard working, and inventive country, but the belief that anybody could get a share of the country 's wealth if he or she was willing to work hard. This dream has been a major factor in attracting immigrants to the United States.
|
who is the lead singer of the group disturbed | David Draiman - wikipedia
David Michael Draiman (born March 13, 1973) is an American songwriter and the vocalist for the band Disturbed as well as for the band Device. Draiman is known for his distorted voice and percussive singing style. In November 2006, Draiman was voted number 42 on the Hit Parader 's "Top 100 Metal Vocalists of All Time ''. Draiman has written some of Disturbed 's most successful singles, such as "Stupify '', "Down with the Sickness '', "Indestructible '', and "Inside the Fire ''.
In October 2011, Disturbed entered a hiatus. Draiman announced the following year that he was working on an industrial rock / metal project with Geno Lenardo, formerly of Filter, which was later named Device. In June 2015, Disturbed released their first single since their hiatus, titled "The Vengeful One ''. They had produced it over a year before, and along with it announced a new album, Immortalized.
Draiman was born in Brooklyn, New York on March 13, 1973, the son of Miriam and YJ Draiman. His father, a candidate in the 2017 race for mayor of Los Angeles, is a former real estate developer, small - business owner, and elected member of the Northridge East Neighborhood Council, among other roles. Draiman 's brother Benjamin is an ambient / folk rock musician who lives in Israel and performs in Jerusalem. Draiman 's grandmother also lives in Israel.
His parents were observant, religious Jews (dati). They intended for Draiman to receive semikhah, and Draiman frequently spent time in Israel during his early life. Draiman attended five Jewish day schools, including Wisconsin Institute for Torah Study in Milwaukee, Wisconsin; Valley Torah High School in Los Angeles, California, where he formed his first band; and Fasman Yeshiva High School in Chicago, Illinois. During his freshman year at Wisconsin Institute for Torah Study he was asked to leave, as he "rebelled against the conformity '' and "just wanted to be a normal teenage kid '', adding that he "could n't really stomach the rigorous religious requirements of the life (there) ''. Of his study at Jewish day schools, Draiman stated that he "was a bit resentful ''; but he later encouraged his family to observe Shabbat, and was trained as a hazzan.
Draiman later enrolled at Ida Crown Jewish Academy in Chicago, Illinois, where he graduated from high school in 1991. From there, in 1991 -- 1992, he spent a year after high school studying at the Yeshivas Neveh Zion in Kiryat Ye'arim, on the outskirts of Jerusalem, Israel.
After returning to the US in 1992, Draiman commenced pre-law studies at Loyola University Chicago. In 1996, he graduated from the University with a Bachelor of Arts in Political Science and Government, Philosophy, and Business Administration. Initially considering offers to matriculate and study at law school, Draiman realized that although criminal defense law was the only area of law that interested him, he could not "really look at myself in the mirror and say ' I 'm going to lie for a living and protect criminals ' ''. During his university studies, Draiman also worked as a bank teller and in phone sales.
After graduating from college, Draiman worked as an administrative assistant in a healthcare facility. After his first year, he earned an administrator 's license and commenced running his own healthcare facility. For five years before joining Disturbed and the band 's signing with Giant Records, Draiman was a healthcare administrator.
Draiman said, "the first record I ever bought was Kiss ' Destroyer. And those classic bands like Black Sabbath were my first loves... I focused on the seminal metal bands like Metallica, Iron Maiden, Pantera and Queensrÿche ''.
Draiman continues, "But I could also appreciate the hair metal bands -- When you hear Whitesnake, you ca n't deny their greatness. Then I went in the direction of punk and new wave, groups like the Sex Pistols, The Ramones, The Misfits and later The Smiths and The Cure -- that was my ' 80s ''.
"And then when the grunge revolution happened, it was like a wakeup call. I 'll never forget getting my first Nirvana, Soundgarden and Alice in Chains records ''.
Draiman has cited James Hetfield of Metallica, Rob Halford of Judas Priest, and Bruce Dickinson of Iron Maiden as the three biggest influences on his singing.
Draiman is married to former WWE Diva Lena Yada; they have a son, Samuel Bear Isamu Draiman, born in 2013. In politics he said "I 'm liberal about everything that is issue - based as far as ideology, but I 'm also of the opinion of a very small government. I do n't agree with the fiscal policies of the Democrats, but I certainly do n't agree with the right - wing craziness of the Republicans. ''
For a more comprehensive list, see Disturbed discography.
|
when was the name nigeria given to the country | Flora Shaw, Lady Lugard - wikipedia
Flora Louise Shaw, DBE (19 December 1852 -- 25 January 1929), was a British journalist and writer. She is credited with having coined the name "Nigeria ''.
She was born at 2 Dundas Terrace, Woolwich, South London, the fourth of fourteen children, the daughter of an English father, Captain (later Major General) George Shaw, and a French mother, Marie Adrienne Josephine (née Desfontaines; 1826 -- 1871), a native of Mauritius. She had nine sisters, the first and the last dying in infancy, and four brothers.
Her paternal grandfather was Sir Frederick Shaw, third baronet (1799 -- 1876), of Bushy Park, Dublin, and a member of parliament from 1830 to 1848, regarded as the leader of the Irish Conservatives. Her paternal grandmother, Thomasine Emily, was the sixth daughter of the Hon. George Jocelyn, and granddaughter of Robert, first earl of Roden.
From 1878 to 1886, Shaw wrote five novels, four for children and one for young adults. In her books, young girls are encouraged to be resourceful and brave but in a traditional framework, acting in support of "gentlemanly '' fathers and prospective husbands, rather than on their own behalf. Shaw 's ideology is both sexually conservative and imperialist.
Her first children 's novel, Castle Blair, was translated into several languages and continued to be extremely popular in the UK and the US well into the 20th century. It was based on her own Anglo - Irish childhood experiences. Charlotte Yonge recommended it along with works of "some of the most respected and loved authors available in late Victorian England '' as "wild... attractive and exciting ''. The critic John Ruskin called Castle Blair "good, and lovely, and true ''.
Shaw also wrote a history of Australia for children, The story of Australia (London: Horace Marshall, 1897), as part of the Story of the Empire series.
She began her career in journalism in 1886, writing for the Pall Mall Gazette and the Manchester Guardian. Sent by the Manchester Guardian, she was the only woman reporter to cover the Anti-Slavery Conference in Brussels. She became Colonial Editor for The Times, which made her the highest paid woman journalist of the time. In that connection, she was sent as a special correspondent to Southern Africa in 1892 and in 1901 and to Australia and New Zealand in 1892, partly to study the question of Kanaka labour in the sugar plantations of Queensland. Penneshaw, South Australia, is partly named after her.
She also made two journeys to Canada, in 1893 and 1898, the second including a journey to the gold diggings of Klondike.
Her belief in the positive benefits of the British Empire infused her writing. As a correspondent for The Times, Shaw sent back "Letters '' in 1892 and 1893 from her travels in South Africa and Australia, later published in book form as Letters from South Africa (1893). Writing for the educated governing circles, she focused on the prospects of economic growth and the political consolidation of self - governing colonies within an increasingly - united empire, with a vision largely blinkered to the force of colonial nationalisms and local self - identities.
The lengthy articles in a leading daily newspaper reveal a late - Victorian metropolitan imagery of colonial space and time. Shaw projected vast empty spaces awaiting energetic English settlers and economic enterprise. Observing new landscapes from a rail carriage, for example, she selected images which served as powerful metaphors of time and motion in the construction of racial identities.
When she first started writing for The Times, she wrote under the name of "F. Shaw '' to try to disguise that she was a woman. Later, she was so highly regarded that she wrote openly as Flora Shaw. Her pseudonym is now little - known, and she was regarded as one of the greatest journalists of her time, specialising in politics and economics.
Shaw first took advantage of a journalistic opportunity while she was staying with family friends, the Younghusbands, in Gibraltar in 1886. There, over four months, she visited Zebehr Pasha, a slaver and former Sudanese governor, who was incarcerated there. Her reports purportedly led to his release.
Flora was required to testify before the House of Commons Select Committee on British South Africa during the controversy over the Jameson Raid of 29 December 1895 into the Transvaal. The prominent journalist had corresponded frequently with those involved or suspected of involvement, including Cecil Rhodes, Leander Starr Jameson, Colonel Francis Rhodes, and Colonial Secretary Joseph Chamberlain. She was exonerated from all charges.
In an essay that first appeared in The Times on 8 January 1897, by "Miss Shaw '', she suggested the name "Nigeria '' for the British Protectorate on the Niger River. In her essay, Shaw made the case for a shorter term that would be used for the "agglomeration of pagan and Mahomedan States '' to replace the official title, "Royal Niger Company Territories ''. She thought that the term "Royal Niger Company Territories '' was too long to be used as the name of a territory held as the property of a trading company in that part of Africa. She was in search of a new name, and she coined "Nigeria '', in preference to terms such as "Central Sudan '', which were associated with the area by some geographers and travellers.
She thought that the term "Sudan '' was associated with a territory in the Nile basin, the current Sudan. In The Times of 8 January 1897, she wrote: "The name Nigeria applying to no other part of Africa may without offence to any neighbours be accepted as co-extensive with the territories over which the Royal Niger Company has extended British influence, and may serve to differentiate them equally from the colonies of Lagos and the Niger Protectorate on the coast and from the French territories of the Upper Niger. ''
Shaw was close to the three men who most epitomised empire in Africa: Rhodes, George Taubman Goldie and Sir Frederick Lugard.
She married, on 10 June 1902, Lugard, who, in 1928, was created Baron Lugard. She accompanied him when he served as Governor of Hong Kong (1907 -- 1912) and Governor - General of Nigeria (1914 -- 1919). They had no children.
In 1905, Shaw wrote what remains the definitive history of Western Sudan and the modern settlement of Northern Nigeria, A Tropical Dependency: An Outline of the Ancient History of the Western Soudan, With an Account of the Modern Settlement of Northern Nigeria (London: Nisbet, 1905).
While they lived in Hong Kong she helped her husband establish the University of Hong Kong. During the First World War, she was prominent in the founding of the War Refugees Committee, which dealt with the problem of the refugees from Belgium, and she founded the Lady Lugard Hospitality Committee. In the 1918 New Year Honours, she was appointed as a Dame Commander of the Order of the British Empire.
She died of pneumonia on 25 January 1929, aged 76, in Surrey.
|
why is it important for a writer to have some basic knowledge of science | Writing - wikipedia
Writing is a medium of human communication that represents language and emotion with signs and symbols. In most languages, writing is a complement to speech or spoken language. Writing is not a language, but a tool developed by human society. Within a language system, writing relies on many of the same structures as speech, such as vocabulary, grammar, and semantics, with the added dependency of a system of signs or symbols. The result of writing is called text, and the recipient of text is called a reader. Motivations for writing include publication, storytelling, correspondence and diary. Writing has been instrumental in keeping history, maintaining culture, dissemination of knowledge through the media and the formation of legal systems.
As human societies emerged, the development of writing was driven by pragmatic exigencies such as exchanging information, maintaining financial accounts, codifying laws and recording history. Around the 4th millennium BCE, the complexity of trade and administration in Mesopotamia outgrew human memory, and writing became a more dependable method of recording and presenting transactions in a permanent form. In both ancient Egypt and Mesoamerica, writing may have evolved through calendric and a political necessity for recording historical and environmental events.
H.G. Wells argued that writing has the ability to "put agreements, laws, commandments on record. It made the growth of states larger than the old city states possible. It made a continuous historical consciousness possible. The command of the priest or king and his seal could go far beyond his sight and voice and could survive his death ''.
The major writing systems -- methods of inscription -- broadly fall into five categories: logographic, syllabic, alphabetic, featural, and ideographic (symbols for ideas). A sixth category, pictographic, is insufficient to represent language on its own, but often forms the core of logographies.
A logogram is a written character which represents a word or morpheme. A vast number of logograms are needed to write Chinese characters, cuneiform, and Mayan, where a glyph may stand for a morpheme, a syllable, or both -- ("logoconsonantal '' in the case of hieroglyphs). Many logograms have an ideographic component (Chinese "radicals '', hieroglyphic "determiners ''). For example, in Mayan, the glyph for "fin '', pronounced "ka ' '', was also used to represent the syllable "ka '' whenever the pronunciation of a logogram needed to be indicated, or when there was no logogram. In Chinese, about 90 % of characters are compounds of a semantic (meaning) element called a radical with an existing character to indicate the pronunciation, called a phonetic. However, such phonetic elements complement the logographic elements, rather than vice versa.
The main logographic system in use today is Chinese characters, used with some modification for the various languages or dialects of China and Japan, and sometimes for Korean, despite the fact that in both South and North Korea the phonetic Hangul system is mainly used.
A syllabary is a set of written symbols that represent (or approximate) syllables. A glyph in a syllabary typically represents a consonant followed by a vowel, or just a vowel alone, though in some scripts more complex syllables (such as consonant - vowel - consonant, or consonant - consonant - vowel) may have dedicated glyphs. Phonetically related syllables are not so indicated in the script. For instance, the syllable "ka '' may look nothing like the syllable "ki '', nor will syllables with the same vowels be similar.
Syllabaries are best suited to languages with a relatively simple syllable structure, such as Japanese. Other languages that use syllabic writing include the Linear B script for Mycenaean Greek; Cherokee; Ndjuka, an English - based creole language of Surinam; and the Vai script of Liberia. Most logographic systems have a strong syllabic component. Ethiopic, though technically an abugida, has fused consonants and vowels together to the point where it is learned as if it were a syllabary.
An alphabet is a set of symbols, each of which represents or historically represented a phoneme of the language. In a perfectly phonological alphabet, the phonemes and letters would correspond perfectly in two directions: a writer could predict the spelling of a word given its pronunciation, and a speaker could predict the pronunciation of a word given its spelling.
As languages often evolve independently of their writing systems, and writing systems have been borrowed for languages they were not designed for, the degree to which letters of an alphabet correspond to phonemes of a language varies greatly from one language to another and even within a single language.
In most of the writing systems of the Middle East, it is usually only the consonants of a word that are written, although vowels may be indicated by the addition of various diacritical marks. Writing systems based primarily on marking the consonant phonemes alone date back to the hieroglyphics of ancient Egypt. Such systems are called abjads, derived from the Arabic word for "alphabet ''.
In most of the alphabets of India and Southeast Asia, vowels are indicated through diacritics or modification of the shape of the consonant. These are called abugidas. Some abugidas, such as Ethiopic and Cree, are learned by children as syllabaries, and so are often called "syllabics ''. However, unlike true syllabaries, there is not an independent glyph for each syllable.
Sometimes the term "alphabet '' is restricted to systems with separate letters for consonants and vowels, such as the Latin alphabet, although abugidas and abjads may also be accepted as alphabets. Because of this use, Greek is often considered to be the first alphabet.
A featural script notates the building blocks of the phonemes that make up a language. For instance, all sounds pronounced with the lips ("labial '' sounds) may have some element in common. In the Latin alphabet, this is accidentally the case with the letters "b '' and "p ''; however, labial "m '' is completely dissimilar, and the similar - looking "q '' and "d '' are not labial. In Korean hangul, however, all four labial consonants are based on the same basic element, but in practice, Korean is learned by children as an ordinary alphabet, and the featural elements tend to pass unnoticed.
Another featural script is SignWriting, the most popular writing system for many sign languages, where the shapes and movements of the hands and face are represented iconically. Featural scripts are also common in fictional or invented systems, such as J.R.R. Tolkien 's Tengwar.
Historians draw a sharp distinction between prehistory and history, with history defined by the advent of writing. The cave paintings and petroglyphs of prehistoric peoples can be considered precursors of writing, but they are not considered true writing because they did not represent language directly.
Writing systems develop and change based on the needs of the people who use them. Sometimes the shape, orientation, and meaning of individual signs changes over time. By tracing the development of a script, it is possible to learn about the needs of the people who used the script as well as how the script changed over time.
The many tools and writing materials used throughout history include stone tablets, clay tablets, bamboo slats, wax tablets, vellum, parchment, paper, copperplate, styluses, quills, ink brushes, pencils, pens, and many styles of lithography. It is speculated that the Incas might have employed knotted cords known as quipu (or khipu) as a writing system.
The typewriter and various forms of word processors have subsequently become widespread writing tools, and various studies have compared the ways in which writers have framed the experience of writing with such tools as compared with the pen or pencil.
By definition, the modern practice of history begins with written records. Evidence of human culture without writing is the realm of prehistory. The Dispilio Tablet (Greece) and Tărtăria tablets (Romania), which have been carbon dated to the 6th millennium BC, are recent discoveries of the earliest known neolithic writings.
While neolithic writing is a current research topic, conventional history assumes that the writing process first evolved from economic necessity in the ancient Near East. Writing most likely began as a consequence of political expansion in ancient cultures, which needed reliable means for transmitting information, maintaining financial accounts, keeping historical records, and similar activities. Around the 4th millennium BC, the complexity of trade and administration outgrew the power of memory, and writing became a more dependable method of recording and presenting transactions in a permanent form.
Archaeologist Denise Schmandt - Besserat determined the link between previously uncategorized clay "tokens '', the oldest of which have been found in the Zagros region of Iran, and the first known writing, Mesopotamian cuneiform. In approximately 8000 BC, the Mesopotamians began using clay tokens to count their agricultural and manufactured goods. Later they began placing these tokens inside large, hollow clay containers (bulla, or globular envelopes) which were then sealed. The quantity of tokens in each container came to be expressed by impressing, on the container 's surface, one picture for each instance of the token inside. They next dispensed with the tokens, relying solely on symbols for the tokens, drawn on clay surfaces. To avoid making a picture for each instance of the same object (for example: 100 pictures of a hat to represent 100 hats), they ' counted ' the objects by using various small marks. In this way the Sumerians added "a system for enumerating objects to their incipient system of symbols ''.
The original Mesopotamian writing system (believed to be the world 's oldest) was derived around 3600 BC from this method of keeping accounts. By the end of the 4th millennium BC, the Mesopotamians were using a triangular - shaped stylus pressed into soft clay to record numbers. This system was gradually augmented with using a sharp stylus to indicate what was being counted by means of pictographs. Round - stylus and sharp - stylus writing was gradually replaced by writing using a wedge - shaped stylus (hence the term cuneiform), at first only for logograms, but by the 29th century BC also for phonetic elements. Around 2700 BC, cuneiform began to represent syllables of spoken Sumerian. About that time, Mesopotamian cuneiform became a general purpose writing system for logograms, syllables, and numbers. This script was adapted to another Mesopotamian language, the East Semitic Akkadian (Assyrian and Babylonian) around 2600 BC, and then to others such as Elamite, Hattian, Hurrian and Hittite. Scripts similar in appearance to this writing system include those for Ugaritic and Old Persian. With the adoption of Aramaic as the ' lingua franca ' of the Neo-Assyrian Empire (911 -- 609 BC), Old Aramaic was also adapted to Mesopotamian cuneiform. The last cuneiform scripts in Akkadian discovered thus far date from the 1st century AD.
Over the centuries, three distinct Elamite scripts developed. Proto - Elamite is the oldest known writing system from Iran. In use only for a brief time (c. 3200 -- 2900 BC), clay tablets with Proto - Elamite writing have been found at different sites across Iran. The Proto - Elamite script is thought to have developed from early cuneiform (proto - cuneiform). The Proto - Elamite script consists of more than 1,000 signs and is thought to be partly logographic.
Linear Elamite is a writing system attested in a few monumental inscriptions in Iran. It was used for a very brief period during the last quarter of the 3rd millennium BC. It is often claimed that Linear Elamite is a syllabic writing system derived from Proto - Elamite, although this can not be proven since Linear - Elamite has not been deciphered. Several scholars have attempted to decipher the script, most notably Walther Hinz and Piero Meriggi.
The Elamite cuneiform script was used from about 2500 to 331 BC, and was adapted from the Akkadian cuneiform. The Elamite cuneiform script consisted of about 130 symbols, far fewer than most other cuneiform scripts.
Cretan hieroglyphs are found on artifacts of Crete (early - to - mid-2nd millennium BC, MM I to MM III, overlapping with Linear A from MM IIA at the earliest). Linear B, the writing system of the Mycenaean Greeks, has been deciphered while Linear A has yet to be deciphered. The sequence and the geographical spread of the three overlapping, but distinct writing systems can be summarized as follows: Cretan hieroglyphs were used in Crete from c. 1625 to 1500 BC; Linear A was used in the Aegean Islands (Kea, Kythera, Melos, Thera), and the Greek mainland (Laconia) from c. 18th century to 1450 BC; and Linear B was used in Crete (Knossos), and mainland (Pylos, Mycenae, Thebes, Tiryns) from c. 1375 to 1200 BC.
The earliest surviving examples of writing in China -- inscriptions on so - called "oracle bones '', tortoise plastrons and ox scapulae used for divination -- date from around 1200 BC in the late Shang dynasty. A small number of bronze inscriptions from the same period have also survived. Historians have found that the type of media used had an effect on what the writing was documenting and how it was used.
In 2003 archaeologists reported discoveries of isolated tortoise - shell carvings dating back to the 7th millennium BC, but whether or not these symbols are related to the characters of the later oracle - bone script is disputed.
The earliest known hieroglyphic inscriptions are the Narmer Palette, dating to c. 3200 BC, and several recent discoveries that may be slightly older, though these glyphs were based on a much older artistic rather than written tradition. The hieroglyphic script was logographic with phonetic adjuncts that included an effective alphabet.
Writing was very important in maintaining the Egyptian empire, and literacy was concentrated among an educated elite of scribes. Only people from certain backgrounds were allowed to train to become scribes, in the service of temple, pharaonic, and military authorities. The hieroglyph system was always difficult to learn, but in later centuries was purposely made even more so, as this preserved the scribes ' status.
The world 's oldest known alphabet appears to have been developed by Canaanite turquoise miners in the Sinai desert around the mid-19th century BC. Around 30 crude inscriptions have been found at a mountainous Egyptian mining site known as Serabit el - Khadem. This site was also home to a temple of Hathor, the "Mistress of turquoise ''. A later, two - line inscription has also been found at Wadi el - Hol in Central Egypt. Based on hieroglyphic prototypes, but also including entirely new symbols, each sign apparently stood for a consonant rather than a word: the basis of an alphabetic system. It was not until the 12th to 9th centuries, however, that the alphabet took hold and became widely used.
Indus script refers to short strings of symbols associated with the Indus Valley Civilization (which spanned modern - day Pakistan and North India) used between 2600 and 1900 BC. In spite of many attempts at decipherments and claims, it is as yet undeciphered. The term ' Indus script ' is mainly applied to that used in the mature Harappan phase, which perhaps evolved from a few signs found in early Harappa after 3500 BC, and was followed by the mature Harappan script. The script is written from right to left, and sometimes follows a boustrophedonic style. Since the number of principal signs is about 400 -- 600, midway between typical logographic and syllabic scripts, many scholars accept the script to be logo - syllabic (typically syllabic scripts have about 50 -- 100 signs whereas logographic scripts have a very large number of principal signs). Several scholars maintain that structural analysis indicates that an agglutinative language underlies the script.
Archaeologists have recently discovered that there was a civilization in Central Asia using writing c. 2000 BC. An excavation near Ashgabat, the capital of Turkmenistan, revealed an inscription on a piece of stone that was used as a stamp seal.
The Proto - Sinaitic script in which Proto - Canaanite is believed to have been first written, is attested as far back as the 19th century BC. The Phoenician writing system was adapted from the Proto - Canaanite script sometime before the 14th century BC, which in turn borrowed principles of representing phonetic information from Hieratic, Cuneiform and Egyptian hieroglyphics. This writing system was an odd sort of syllabary in which only consonants are represented. This script was adapted by the Greeks, who adapted certain consonantal signs to represent their vowels. The Cumae alphabet, a variant of the early Greek alphabet, gave rise to the Etruscan alphabet, and its own descendants, such as the Latin alphabet and Runes. Other descendants from the Greek alphabet include Cyrillic, used to write Bulgarian, Russian and Serbian among others. The Phoenician system was also adapted into the Aramaic script, from which the Hebrew script and also that of Arabic are descended.
The Tifinagh script (Berber languages) is descended from the Libyco - Berber script which is assumed to be of Phoenician origin.
A stone slab with 3,000 - year - old writing, known as the Cascajal Block, was discovered in the Mexican state of Veracruz and is an example of the oldest script in the Western Hemisphere, preceding the oldest Zapotec writing by approximately 500 years. It is thought to be Olmec.
Of several pre-Columbian scripts in Mesoamerica, the one that appears to have been best developed, and the only one to be deciphered, is the Maya script. The earliest inscriptions which are identifiably Maya date to the 3rd century BC. Maya writing used logograms complemented by a set of syllabic glyphs, somewhat similar in function to modern Japanese writing.
The Incas had no known script. Their quipu system of recording information -- based on knots tied along one or many linked cords -- was apparently used for inventory and accountancy purposes and could not encode textual information.
Three stone slabs were found by Romanian archaeologist Nicolae Vlassa in the mid-20th century (1961) in Tărtăria (present - day Alba county, Transylvania), Romania, ancient land of Dacia, which was inhabited by the Dacians, a population who may have been related to the Getae and Thracians. One of the slabs contains 4 groups of pictographs divided by lines. Some of the characters are also found in Ancient Greek, as well as in Phoenician, Etruscan, Old Italic and Iberian. The origin and the timing of the writings are disputed, because there is no precise evidence in situ and the slabs can not be carbon dated, owing to their poor treatment at the Cluj museum. Indirect carbon dates from a skeleton discovered near the slabs point to the 5300 -- 5500 BC period.
In the 21st century, writing has become an important part of daily life as technology has connected individuals from across the globe through systems such as e-mail and social media. Literacy has grown in importance as a factor for success in the modern world. In the United States, the ability to read and write are necessary for most jobs, and multiple programs are in place to aid both children and adults in improving their literacy skills. For example, the emergence of the writing center and community - wide literacy councils aim to help students and community members sharpen their writing skills. These resources, and many more, span across different age groups in order to offer each individual a better understanding of their language and how to express themselves via writing in order to perhaps improve their socioeconomic status.
Other parts of the world have seen an increase in writing abilities as a result of programs such as the World Literacy Foundation and International Literacy Foundation, as well as a general push for increased global communication.
|
what is the ballistic coefficient of a bullet | Ballistic coefficient - wikipedia
In ballistics, the ballistic coefficient (BC) of a body is a measure of its ability to overcome air resistance in flight. It is inversely proportional to the negative acceleration: a high number indicates a low negative acceleration -- the drag on the projectile is small in proportion to its mass.
For any body, this is expressed as:

BC = M / (C_d ⋅ A)

Where:

- BC = ballistic coefficient, typically given in kg / m² or lb / in²
- M = mass of the body
- C_d = drag coefficient
- A = cross - sectional area
The formula for calculating the ballistic coefficient for small and large arms projectiles only is as follows:
BC (projectiles) = m / (i ⋅ d²) = SD / i

Where:

- m = mass of the projectile (in pounds or kilograms)
- d = diameter of the projectile (in inches or meters)
- i = coefficient of form (form factor)
- SD = sectional density, m / d²
The coefficient of form (i) can be derived by 6 methods and applied differently depending on the trajectory models used: G Model; Beugless / Coxe; 3 Sky Screen; 4 Sky Screen; Target Zeroing; Doppler radar.
Here are several methods to compute i or C:
i = SD / BC

Where:

- SD = sectional density of the projectile
- BC = ballistic coefficient established for the projectile by firing tests

or
A drag coefficient can also be calculated mathematically:
C_d = 2 F_d / (ρ ⋅ v² ⋅ A)

Where:

- F_d = drag force
- ρ = air density
- v = velocity of the projectile
- A = cross - sectional area

or
From standard physics as applied to "G '' models:
i = C_B / C_G

Where:

- C_B = drag coefficient of the actual projectile
- C_G = drag coefficient of the standard (G model) reference projectile
This formula is for calculating the ballistic coefficient within the small arms shooting community, but is redundant with BC:

BC = SD / i

Where:

- SD = sectional density, the mass of the projectile in pounds divided by the square of its diameter in inches
- i = coefficient of form
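To make the arithmetic above concrete, here is a minimal Python sketch of the two routes to BC just given: the general - body form M / (C_d ⋅ A) and the small - arms form SD / i. The bullet mass, diameter, and form factor in the example are hypothetical values chosen only to illustrate the calculation.

```python
def bc_general(mass_kg: float, cd: float, area_m2: float) -> float:
    """General ballistic coefficient for any body: BC = M / (Cd * A), in kg/m^2."""
    return mass_kg / (cd * area_m2)

def bc_small_arms(mass_lb: float, diameter_in: float, form_factor: float) -> float:
    """Small-arms convention: BC = SD / i, with SD = m / d^2 in lb/in^2."""
    sectional_density = mass_lb / diameter_in ** 2
    return sectional_density / form_factor

# Hypothetical .308-calibre bullet: 168 grains (7,000 grains to the pound),
# 0.308 in diameter, with an assumed form factor i = 0.52 against some G model.
mass_lb = 168 / 7000.0
print(f"SD = {mass_lb / 0.308 ** 2:.3f} lb/in^2")
print(f"BC = {bc_small_arms(mass_lb, 0.308, 0.52):.3f}")
```

With these invented numbers the sketch prints an SD of about 0.253 lb / in² and a BC of about 0.487, which sits within the range quoted for sporting bullets later in this article.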
In 1537, Niccolò Tartaglia did some test firing to determine the maximum angle and range for a shot. His conclusion was near 45 degrees. He noted that the shot trajectory was continuously curved.
In 1636, Galileo Galilei published results in "Dialogues Concerning Two New Sciences ''. He found that a falling body had a constant acceleration. This allowed Galileo to show that a bullet 's trajectory was a curve.
Circa 1665, Sir Isaac Newton derived the law of air resistance. Newton 's experiments on drag were through air and fluids. He showed that drag on shot increases proportionately with the density of the air (or the fluid), the cross - sectional area, and the square of the speed. Newton 's experiments were only at low velocities, to about 260 m / s (853 ft / s).
In 1718, John Keill challenged the continental mathematicians: "To find the curve that a projectile may describe in the air, on behalf of the simplest assumption of gravity, and the density of the medium uniform, on the other hand, in the duplicate ratio of the velocity of the resistance ''. This challenge supposes that air resistance increases with the square of a projectile 's velocity. Keill gave no solution for his challenge. Johann Bernoulli took up the challenge and soon thereafter solved the problem for air resistance varying as "any power '' of velocity; his solution is known as the Bernoulli equation. This is the precursor to the concept of the "standard projectile ''.
In 1742, Benjamin Robins invented the ballistic pendulum. This was a simple mechanical device that could measure a projectile 's velocity. Robins reported muzzle velocities ranging from 1,400 ft / s (427 m / s) to 1,700 ft / s (518 m / s). In his book published that same year "New Principles of Gunnery '', he uses numerical integration from Euler 's method and found that air resistance varies as the square of the velocity, but insisted that it changes at the speed of sound.
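The ballistic pendulum Robins used can be reduced to a short calculation: momentum is conserved when the shot embeds in the bob, and the kinetic energy of the combined mass is then converted into height of swing. The sketch below is the textbook idealisation of such a device, with invented masses and swing height, not Robins 's own figures.

```python
import math

def pendulum_muzzle_velocity(bullet_mass_kg: float, bob_mass_kg: float,
                             rise_height_m: float, g: float = 9.81) -> float:
    """Idealised ballistic pendulum: the shot embeds in the bob (momentum
    conserved), then the combined mass swings up, trading kinetic energy
    for height: v = ((m + M) / m) * sqrt(2 * g * h)."""
    swing_speed = math.sqrt(2.0 * g * rise_height_m)
    return (bullet_mass_kg + bob_mass_kg) / bullet_mass_kg * swing_speed

# Invented example: a 30 g ball fired into a 25 kg bob that rises 1.5 cm.
print(f"{pendulum_muzzle_velocity(0.030, 25.0, 0.015):.0f} m/s")  # ~453 m/s
```

A result of roughly 453 m / s (about 1,486 ft / s) falls within the 1,400 -- 1,700 ft / s range of muzzle velocities Robins reported.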
In 1753, Leonhard Euler showed how theoretical trajectories might be calculated using his method as applied to the Bernoulli equation, but only for resistance varying as the square of the velocity.
In 1844, the electro - ballistic chronograph was invented, and by 1867 it was accurate to within one ten - millionth of a second.
Many countries and their militaries carried out test firings from the mid eighteenth century on using large ordnance to determine the drag characteristics of each individual projectile. These individual test firings were logged and reported in extensive ballistics tables.
Among these test firings, the most notable were: Francis Bashforth at Woolwich Marshes & Shoeburyness, England (1864 -- 1889), with velocities to 2,800 ft / s (853 m / s); M. Krupp (1865 -- 1880) of Friedrich Krupp AG at Meppen, Germany (Friedrich Krupp AG continued these test firings to 1930); to a lesser extent General Nikolai V. Mayevski, then a Colonel (1868 -- 1869), at St. Petersburg, Russia; the Commission d'Experience de Gâvre (1873 to 1889) at Le Gâvre, France, with velocities to 1,830 m / s (6,004 ft / s); and the British Royal Artillery (1904 -- 1906).
The test projectiles (shot) used varied from spherical, spheroidal and ogival forms, being hollow, solid and cored in design, with the elongated ogival - headed projectiles having 1, 11⁄2, 2 and 3 caliber radii. These projectiles varied in size from 75 mm (3.0 in) at 3 kg (6.6 lb) to 254 mm (10.0 in) at 187 kg (412.3 lb).
Many militaries up until the 1860s used calculus to compute projectile trajectory. The numerical computations necessary to calculate just a single trajectory was lengthy, tedious and done by hand. So, investigations to develop a theoretical drag model began. The investigations led to a major simplification in the experimental treatment of drag. This was the concept of a "standard projectile ''. The ballistic tables are made up for a factitious projectile being defined as: "a factitious weight and with a specific shape and specific dimensions in a ratio of calibers. '' This simplifies calculation for the ballistic coefficient of a standard model projectile, which could mathematically move through the standard atmosphere with the same ability as any actual projectile could move through the actual atmosphere.
In 1870, Bashforth published a report containing his ballistic tables. Bashforth found that the drag of his test projectiles varied with the square of velocity (v²) from 830 ft / s (253 m / s) down to 430 ft / s (131 m / s) and with the cube of velocity (v³) from 1,000 ft / s (305 m / s) down to 830 ft / s (253 m / s). As of his 1880 report, he found that drag varied by v⁶ from 1,100 ft / s (335 m / s) to 1,040 ft / s (317 m / s). Bashforth used rifled guns of 3 in (76 mm), 5 in (127 mm), 7 in (178 mm) and 9 in (229 mm); smooth - bore guns of similar caliber for firing spherical shot; and howitzers propelling elongated projectiles having an ogival head of 11⁄2 caliber radius.
Bashforth used b as the variable for ballistic coefficient. When b is equal to or less than v, then b is equal to P for the drag of a projectile. It would be found that air does not deflect off the front of projectiles in the same direction when they are of differing shapes. This prompted the introduction of a second factor to b, the coefficient of form (i). This is particularly true at high velocities, greater than 830 ft / s (253 m / s). Hence, Bashforth introduced the "undetermined multiplier '' of any power, called the k factor, that compensates for these unknown effects of drag above 830 ft / s (253 m / s), where k > i. Bashforth then integrated k and i as K_v.
Although Bashforth did not conceive the "restricted zone '', he showed mathematically there were 5 restricted zones. Bashforth did not propose a standard projectile, but was well aware of the concept.
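The zone idea implicit in Bashforth 's data, namely that retardation follows K ⋅ vⁿ with a different K and exponent n in each velocity band, can be sketched as follows. The band boundaries follow the velocities quoted above; the K values are invented placeholders chosen only to keep the curve roughly continuous, not Bashforth 's coefficients.

```python
def retardation_fps2(v_fps: float) -> float:
    """Piecewise drag model in the spirit of Bashforth's restricted zones:
    within each velocity band the retardation is K * v**n, with K and n
    changing at the band boundaries. K values are illustrative only."""
    zones = [  # (lower ft/s, upper ft/s, K, n)
        (430.0, 830.0, 1.0e-4, 2),     # drag ~ v^2 in this band
        (830.0, 1000.0, 1.2e-7, 3),    # drag ~ v^3 in this band
        (1040.0, 1100.0, 9.5e-17, 6),  # drag ~ v^6 near the speed of sound
    ]
    for lo, hi, k, n in zones:
        if lo <= v_fps < hi:
            return k * v_fps ** n
    raise ValueError("velocity outside the bands tabulated in this sketch")

print(f"{retardation_fps2(900.0):.1f} ft/s^2")  # cube-law band
```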
In 1872, General Mayevski published his report Traité de Balistique Extérieure, which included the Mayevski model. Using his ballistic tables along with Bashforth 's tables from the 1870 report, Mayevski created an analytical math formula that calculated the air resistance of a projectile in terms of log A and the value n. Although Mayevski 's math used a differing approach than Bashforth 's, the resulting calculations of air resistance were the same. Mayevski proposed the restricted zone concept and found there to be 6 restricted zones for projectiles.
Circa 1886, General Mayevski published the results from a discussion of experiments made by M. Krupp (1880). Though the ogival - headed projectiles used varied greatly in caliber, they had essentially the same proportions as the standard projectile, being mostly 3 calibers in length, with an ogive of 2 calibers radius. This gave the standard projectile dimensions of 10 cm (3.9 in) and 1 kg (2.2 lb).
In 1880, Colonel Francesco Siacci published his work "Balistica ''. Siacci found, as did those who came before him, that the resistance and density of the air become greater and greater as a projectile displaces the air at higher and higher velocities.
Siacci 's method was for flat - fire trajectories with angles of departure of less than 20 degrees. He found that the angle of departure is sufficiently small to allow for air density to remain the same and was able to reduce the ballistics tables to easily tabulated quadrants giving distance, time, inclination and altitude of the projectile. Using Bashforth 's k and Mayevski 's tables, Siacci created a 4 zone model. Siacci used Mayevski 's standard projectile. From this method and standard projectile, Siacci formulated a shortcut.
Siacci found that within a low - velocity restricted zone, projectiles of similar shape and velocity in the same air density behave similarly; as δw / d², or δ / C. Siacci used the variable C for ballistic coefficient. This means that air density is generally the same for flat - fire trajectories, so sectional density is equal to the ballistic coefficient and air density can be dropped. Then, as the velocity rises to Bashforth 's k for high velocity, C requires the introduction of i. In today 's currently used ballistic trajectory tables for an average ballistic coefficient, (m / d²) ⋅ (p₀ / p) equals m / (d² ⋅ i), which equals SD / i, that is, BC.
Siacci wrote that within any restricted zone, C being the same for two or more projectiles, the differences in their trajectories will be minor. Therefore, C agrees with an average curve, and this average curve applies for all projectiles. Therefore, a single trajectory can be computed for the standard projectile without having to resort to tedious calculus methods, and then a trajectory for any actual bullet with known C can be computed from the standard trajectory with just simple algebra.
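The practical content of Siacci 's shortcut can be sketched numerically: if the retardation of the standard projectile is known as a function of velocity, the actual bullet decelerates at that value divided by its C, and a flat - fire trajectory can then be stepped out with elementary arithmetic. The standard - projectile drag function below is a made - up stand - in, not a real G table, and the bullet 's C value is likewise invented.

```python
def standard_retardation(v: float) -> float:
    """Retardation (ft/s^2) of the standard projectile at speed v (ft/s).
    Placeholder curve; a real implementation would interpolate a G table."""
    return 4.0e-5 * v ** 2

def flat_fire(v0: float, c: float, range_ft: float, dt: float = 1e-3):
    """Step out a flat-fire trajectory. Per Siacci, the actual bullet
    decelerates at standard_retardation(v) / C; gravity supplies the drop."""
    g = 32.174  # ft/s^2
    x, drop, v, vy = 0.0, 0.0, v0, 0.0
    while x < range_ft:
        v -= standard_retardation(v) / c * dt  # scaled horizontal slowdown
        vy -= g * dt                           # gravity
        x += v * dt
        drop += vy * dt
    return v, drop

v, drop = flat_fire(v0=2800.0, c=0.45, range_ft=600.0)
print(f"remaining velocity ~{v:.0f} ft/s, drop ~{drop:.2f} ft")
```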
The aforementioned ballistic tables generally tabulate: functions, air density, projectile time at range, range, degree of projectile departure, weight and diameter, to facilitate the calculation of ballistic formulae. These formulae produce the projectile velocity at range, drag and trajectories. The modern day commercially published ballistic tables, or software - computed ballistic tables, for small arms sporting ammunition are exterior ballistic trajectory tables.
The 1870 Bashforth tables went to 2,800 ft / s (853 m / s). Mayevski used his own tables, supplemented by the Bashforth tables (to 6 restricted zones) and the Krupp tables. Mayevski conceived a 7th restricted zone and extended the Bashforth tables to 1,100 m / s (3,609 ft / s). Mayevski converted Bashforth 's data from Imperial units of measure to metric units of measure (now in SI units of measure). In 1884, James Ingalls published his tables in the U.S. Army Artillery Circular M using the Mayevski tables. Ingalls extended Mayevski 's ballistics tables to 5,000 ft / s (1,524 m / s) within an 8th restricted zone, but still with the same n value (1.55) as Mayevski 's 7th restricted zone. Ingalls converted Mayevski 's results back to Imperial units. The British Royal Artillery results were very similar to those of Mayevski and extended their tables to 5,000 ft / s (1,524 m / s) within the 8th restricted zone, changing the n value from 1.55 to 1.67. These ballistic tables were published in 1909 and were almost identical to those of Ingalls. In 1971 the Sierra Bullet company calculated their ballistic tables to 9 restricted zones but only within 4,400 ft / s (1,341 m / s).
In 1881, the Commission d'Experience de Gâvre did a comprehensive survey of data available from their tests as well as those of other countries. After adopting a standard atmospheric condition for the drag data, the Gavre drag function was adopted. This drag function was known as the Gavre function, and the standard projectile adopted was the Type 1 projectile. Thereafter, the Type 1 standard projectile was renamed by the Ballistics Section of Aberdeen Proving Grounds in Maryland, USA, as G after the Commission d'Experience de Gâvre. For practical purposes the subscript 1 in G is generally written in normal font size as G1.
The general form for the calculations of trajectory adopted for the G model is the Siacci method. The standard model projectile is a "fictitious projectile '' used as the mathematical basis for the calculation of an actual projectile 's trajectory when an initial velocity is known. The G1 model projectile adopted is in dimensionless measures of a 2 caliber radius ogival head and 3.28 calibers in length. By calculation this leaves the body 1.96 calibers long and the head 1.32 calibers long.
Over the years there has been some confusion as to the adopted size, weight and ogival - head radius of the G1 standard projectile. This misconception may be explained by Colonel Ingalls in the 1886 publication Exterior Ballistics in the Plane Fire; page 15: In the following tables the first and second columns give the velocities and corresponding resistance, in pounds, to an elongated projectile one inch in diameter and having an ogival head of one and a half calibers. They were deduced from Bashforth 's experiments by Professor A.G. Greenhill, and are taken from his papers published in the Proceedings of the Royal Artillery Institution, No 2, Vol. XIII. Further, it is discussed that said projectile 's weight was one pound.
For the purposes of mathematical convenience, for any standard projectile (G) the BC is 1.00. The projectile 's sectional density (SD) is dimensionless: a mass of 1 divided by the square of a diameter of 1 caliber gives an SD of 1. The standard projectile is then assigned a coefficient of form of 1. It follows that BC = SD / i = 1 / 1 = 1.00. BC, as a general rule within flat - fire trajectory work, is carried out to 2 decimal points. BC is commonly found within commercial publications carried out to 3 decimal points, as few sporting small - arms projectiles rise to the level of 1.00 for a ballistic coefficient.
When using the Siacci method for different G models, the formula used to compute the trajectories is the same. What differs are the retardation factors found through testing of actual projectiles that are similar in shape to the standard projectile reference. This creates a slightly different set of retardation factors between differing G models. When the correct G model retardation factors are applied within the Siacci mathematical formula for the same G model BC, a corrected trajectory can be calculated for any G model.
Another method of determining trajectory and ballistic coefficient was developed and published by Wallace H. Coxe and Edgar Beugless of DuPont in 1936. This method works by shape comparison on a logarithmic scale, as drawn on 10 charts. The method estimates the ballistic coefficient related to the drag model of the Ingalls tables. When matching an actual projectile against the drawn caliber radii of Chart No. 1, it will provide i, and by using Chart No. 2, C can be quickly calculated. Coxe and Beugless used the variable C for ballistic coefficient.
The Siacci method was abandoned by the end of World War I for artillery fire. But the U.S. Army Ordnance Corps continued using the Siacci method into the middle of the 20th century for direct (flat - fire) tank gunnery. The development of the electromechanical analog computer contributed to the calculation of aerial bombing trajectories during World War II. After World War II, the advent of the silicon semiconductor - based digital computer made it possible to create trajectories for guided missiles / bombs, intercontinental ballistic missiles and space vehicles.
Between World Wars I and II, the U.S. Army ballistics research laboratories at Aberdeen Proving Grounds, Maryland, USA, developed the standard models for G2, G5 and G6. In 1965, Winchester Western published a set of ballistics tables for G1, G5, G6 and GL. In 1971 Sierra Bullet Company retested all their bullets and concluded that the G5 model was not the best model for their boat - tail bullets and started using the G1 model. This was fortunate, as the entire commercial sporting and firearms industry had based its calculations on the G1 model. The G1 model and the Mayevski / Siacci method continue to be the industry standard today. This benefit allows for comparison of all ballistic tables for trajectory within the commercial sporting and firearms industry.
In recent years there have been vast advancements in the calculation of flat - fire trajectories with the advent of Doppler radar, the personal computer and handheld computing devices. Also, newer methodology proposed by Dr. Arthur Pejsa, the use of the G7 model by Mr. Brian Litz, ballistic engineer for Berger Bullets, LLC, for calculating boat - tailed spitzer rifle bullet trajectories, and software based on 6 - degree - of - freedom (6 DoF) models have improved the prediction of flat - fire trajectories.
Most ballistic mathematical models and hence tables or software take for granted that one specific drag function correctly describes the drag and hence the flight characteristics of a bullet related to its ballistic coefficient. Those models do not differentiate between wadcutter, flat - based, spitzer, boat - tail, very - low - drag, etc. bullet types or shapes. They assume one invariable drag function as indicated by the published BC. Several different drag curve models optimized for several standard projectile shapes are available, however.
The resulting drag curve models for several standard projectile shapes or types are referred to as G1 (the model most commonly used in the sporting ammunition industry), G2, G5, G6, G7 (often applied to boat - tailed very - low - drag bullets), G8 and GL.
Since these standard projectile shapes differ significantly the Gx BC will also differ significantly from the Gy BC for an identical bullet. To illustrate this the bullet manufacturer Berger has published the G1 and G7 BCs for most of their target, tactical, varmint and hunting bullets. Other bullet manufacturers like Lapua and Nosler also published the G1 and G7 BCs for most of their target bullets. How much a projectile deviates from the applied reference projectile is mathematically expressed by the form factor (i). The applied reference projectile shape always has a form factor (i) of exactly 1. When a particular projectile has a sub 1 form factor (i) this indicates that the particular projectile exhibits lower drag than the applied reference projectile shape. A form factor (i) greater than 1 indicates the particular projectile exhibits more drag than the applied reference projectile shape. In general the G1 model yields comparatively high BC values and is often used by the sporting ammunition industry.
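A short sketch of the form - factor bookkeeping just described: i is the ratio of the bullet 's drag coefficient to that of the chosen reference shape, and the BC quoted against that reference is SD / i. All drag - coefficient numbers below are invented for illustration; real values come from firing tests or Doppler measurement.

```python
def form_factor(cd_bullet: float, cd_reference: float) -> float:
    """i = Cd(bullet) / Cd(reference shape) at the same speed.
    i < 1 means less drag than the reference; i > 1 means more."""
    return cd_bullet / cd_reference

def ballistic_coefficient(sectional_density: float, i: float) -> float:
    """BC referred to whichever reference shape produced the form factor i."""
    return sectional_density / i

sd = 0.253            # hypothetical sectional density, lb/in^2
cd_bullet = 0.27      # invented drag coefficient for the bullet
i_g1 = form_factor(cd_bullet, 0.50)  # invented reference Cd for the G1 shape
i_g7 = form_factor(cd_bullet, 0.27)  # invented reference Cd for the G7 shape
print(f"G1 BC ~ {ballistic_coefficient(sd, i_g1):.3f}")  # ~0.469
print(f"G7 BC ~ {ballistic_coefficient(sd, i_g7):.3f}")  # ~0.253
```

The same bullet thus carries a higher number against the draggier G1 reference than against the sleeker G7 reference, which is why the two published BCs for one bullet differ.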
Variations in BC claims for exactly the same projectiles can be explained by differences in the ambient air density used to compute specific values or differing range - speed measurements on which the stated G1 BC averages are based. Also, the BC changes during a projectile 's flight, and stated BCs are always averages for particular range - speed regimes. Further explanation about the variable nature of a projectile 's G1 BC during flight can be found at the external ballistics article. The external ballistics article implies that knowing how a BC was determined is almost as important as knowing the stated BC value itself.
For the precise establishment of BCs (or perhaps the scientifically better expressed drag coefficients), Doppler radar - measurements are required. The normal shooting or aerodynamics enthusiast, however, has no access to such expensive professional measurement devices. Weibel 1000e or Infinition BR - 1001 Doppler radars are used by governments, professional ballisticians, defense forces, and a few ammunition manufacturers to obtain exact real - world data on the flight behavior of projectiles of interest.
Doppler radar measurement results for a lathe - turned monolithic solid. 50 BMG very - low - drag bullet (Lost River J40 13.0 millimetres (0.510 in), 50.1 grams (773 gr) monolithic solid bullet / twist rate 1: 380 millimetres (15 in)) showed a BC that first rose and then settled as the bullet flew downrange.
The initial rise in the BC value is attributed to a projectile 's always present yaw and precession out of the bore. The test results were obtained from many shots, not just a single shot. The bullet was assigned 1.062 for its BC number by the bullet 's manufacturer, Lost River Ballistic Technologies.
Measurements on other bullets can give totally different results. How different speed regimes affect several 8.6 mm (. 338 in calibre) rifle bullets made by the Finnish ammunition manufacturer Lapua can be seen in the. 338 Lapua Magnum product brochure which states Doppler radar established BC data.
Sporting bullets, with a calibre d ranging from 4.4 to 12.7 millimetres (0.172 to 0.50 in), have BCs in the range 0.12 to slightly over 1.00. Those bullets with the higher BCs are the most aerodynamic, and those with low BCs are the least. Very - low - drag bullets with BCs ≥ 1.10 can be designed and produced on CNC precision lathes out of mono - metal rods, but they often have to be fired from custom made full bore rifles with special barrels.
Ammunition makers often offer several bullet weights and types for a given cartridge. Heavy - for - caliber pointed (spitzer) bullets with a boattail design have BCs at the higher end of the normal range, whereas lighter bullets with square tails and blunt noses have lower BCs. The 6 mm and 6.5 mm cartridges are probably the most well known for having high BCs and are often used in long range target matches of 300 m (328 yd) -- 1,000 m (1,094 yd). The 6 and 6.5 have relatively light recoil compared to high BC bullets of greater caliber and tend to be shot by the winner in matches where accuracy is key. Examples include the 6mm PPC, 6mm Norma BR, 6x47mm SM, 6.5 × 55mm Swedish Mauser, 6.5 × 47mm Lapua, 6.5 Creedmoor, 6.5 Grendel,. 260 Remington, and the 6.5 - 284. The 6.5 mm is also a popular hunting caliber in Europe.
In the United States, hunting cartridges such as the. 25 - 06 Remington (a 6.35 mm caliber), the. 270 Winchester (a 6.8 mm caliber), and the. 284 Winchester (a 7 mm caliber) are used when high BCs and moderate recoil are desired. The. 30 - 06 Springfield and. 308 Winchester cartridges also offer several high - BC loads, although the bullet weights are on the heavy side.
In the larger caliber category, the. 338 Lapua Magnum and the. 50 BMG are popular with very high BC bullets for shooting beyond 1,000 meters. Newer chamberings in the larger caliber category are the. 375 and. 408 Cheyenne Tactical and the. 416 Barrett.
For many years, bullet manufacturers were the main source of ballistic coefficients for use in trajectory calculations. However, in the past decade or so, it has been shown that ballistic coefficient measurements by independent parties can often be more accurate than manufacturer specifications. Since ballistic coefficients depend on the specific firearm and other conditions that vary, it is notable that methods have been developed for individual users to measure their own ballistic coefficients.
Satellites in Low Earth Orbit (LEO) with high ballistic coefficients experience smaller perturbations to their orbits due to atmospheric drag.
The ballistic coefficient of an atmospheric reentry vehicle has a significant effect on its behavior. A vehicle with a very high ballistic coefficient would lose velocity very slowly and would impact the Earth 's surface at higher speeds. In contrast, a vehicle with a low ballistic coefficient would reach subsonic speeds before reaching the ground.
In general, reentry vehicles that carry human beings back to Earth from space have high drag and a correspondingly low ballistic coefficient. Vehicles that carry nuclear weapons launched by an intercontinental ballistic missile (ICBM), by contrast, have a high ballistic coefficient, which enables them to travel rapidly from space to a target on land. That makes the weapon less affected by crosswinds or other weather phenomena, and harder to track, intercept, or otherwise defend against.
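The contrast drawn in the two paragraphs above follows from drag deceleration scaling inversely with ballistic coefficient: a = ρ ⋅ v² / (2 β), with β = m / (C_d ⋅ A). The sketch below compares a blunt crewed capsule against a slender reentry vehicle; every vehicle number is a rough invented guess, used only to show the scaling.

```python
def drag_deceleration(beta: float, rho: float, v: float) -> float:
    """Deceleration from drag: a = rho * v^2 / (2 * beta),
    where beta = m / (Cd * A) is the ballistic coefficient in kg/m^2."""
    return rho * v ** 2 / (2.0 * beta)

rho = 1.0e-4  # kg/m^3, thin upper atmosphere (illustrative)
v = 7500.0    # m/s, a typical reentry speed

# Invented vehicles: a blunt 5 t capsule (Cd 1.3, 12 m^2 frontal area)
# versus a slender 500 kg reentry vehicle (Cd 0.1, 0.2 m^2 frontal area).
capsule_beta = 5000.0 / (1.3 * 12.0)
rv_beta = 500.0 / (0.1 * 0.2)
for name, beta in [("capsule", capsule_beta), ("reentry vehicle", rv_beta)]:
    a = drag_deceleration(beta, rho, v)
    print(f"{name}: beta ~ {beta:.0f} kg/m^2, drag deceleration ~ {a:.2f} m/s^2")
```

At the same speed and altitude, the low - β capsule decelerates almost two orders of magnitude harder than the high - β vehicle, which is the behavior described above.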
|
what brand of coffee does turkey hill use | Turkey Hill (company) - wikipedia
Turkey Hill Dairy, simply known as Turkey Hill, is an American brand of iced tea, ice cream and other beverages and frozen desserts distributed throughout the United States and internationally. It is based in Lancaster County, Pennsylvania and owned by Kroger.
The company is operated independently from Turkey Hill Minit Markets, a chain of more than 260 gas station convenience stores that carry Turkey Hill products in Pennsylvania, Ohio and Indiana.
In 2011, Turkey Hill opened The Turkey Hill Experience, a 17,000 - square - foot (1,600 - square - meter) attraction based in Columbia, Pennsylvania that pays homage to Turkey Hill 's history while highlighting its ice cream and iced tea - making processes.
Turkey Hill Dairy began in 1931 during the Great Depression, when farmer Armor Frey began selling bottled milk to neighbors from his sedan. Frey 's family obtained the farm directly from Thomas and Richard Penn, sons of William Penn, and the sheepskin deed to the farm refers to "turkeyhill ''. Turkey Hill Ridge had been given its name by the Conestoga Indians for the wild turkeys found there, so the family decided to name their dairy after the name on the deed and the nearby geographical feature.
Armor sold the dairy to sons Glen, Emerson and Charles Frey in 1947. Milking the cows and delivering milk to customers provided these three families with a satisfactory income.
In 1954, the dairy began making ice cream, which sold well in Lancaster County, and in 1981, they started selling the ice cream through a few independent stores in Philadelphia. Turkey Hill quickly began to expand into New Jersey and up the East Coast. In the early 2000s, Turkey Hill 's products were distributed in places further west, such as Buffalo, Pittsburgh, and Cleveland. Over the next few years, Turkey Hill rapidly expanded its distribution area, and its teas are now sold in 45 states and the ice cream is now sold in 43 states.
The dairy and the stores were sold in 1985 to Dillons, a subsidiary of Kroger. Despite the new ownership, the Frey family was heavily involved in the day - to - day operations of the company, as Charles Frey, the youngest of company founder Armor Frey 's sons, remained as president. Charles was succeeded as president by Quintin Frey (son of Emerson Frey) in 1991. On May 28, 2013, The Kroger Company announced Quintin Frey 's retirement.
In 2011, Turkey Hill partnered with PPL Renewable Energy and the Lancaster County Solid Waste Management Authority to purchase the electricity generated by two General Electric wind turbines on the Frey Farm landfill adjacent to the company 's manufacturing facility. The wind turbines are capable of producing enough power to supply 25 percent of the company 's annual electricity demand.
In 2012, Turkey Hill Dairy produced 29 million U.S. gallons (110,000,000 liters) of ice cream, 56.6 million US gallons (214,000,000 L) of iced tea and other drinks and 7.1 million US gallons (27,000,000 L) of milk. Since 2000, Turkey Hill has been among the nation 's top - selling brands of refrigerated iced tea and, in 2011, was the fourth - largest producer of ice cream, after Nestlé (Häagen - Dazs, Dreyer 's / Edy Grand, Mövenpick), Unilever (Ben & Jerry 's and Breyers Ice Cream), and Wells ' Dairy (Blue Bunny).
Turkey Hill produces more than 60 full - time and Limited Edition flavors of ice cream, frozen yogurt and sherbet, available in 48 - US - fluid - ounce (1,400 mL) and 1 - US - pint (470 mL) sizes, as well as 3 - US - gallon (11 L) sizes for use by ice cream shops. Specific product lines include Premium Ice Cream, All Natural Ice Cream, Stuff 'd, Light Recipe, and No Sugar Added, in addition to frozen yogurt and sherbet.
The Turkey Hill Iced Tea lineup includes more than 20 seasonal and full - time flavors. Traditional flavors are made with a manufacturing process that includes cold bottling, cold shipping and cold storage in stores.
Other Turkey Hill beverages include fruit drinks (lemonade and fruit punch, among others), milk (fat free, 1 % lowfat, 2 % reduced fat, whole milk, and 1 % low fat chocolate milk), drinking water, and egg nog distributed during the Christmas and Easter seasons.
Turkey Hill Dairy has partnerships with the Snyder 's of Hanover, Gertrude Hawk Chocolates, and Tootsie Roll Industries ' Junior Mints allowing them to produce and distribute theme - flavored ice cream based on the products of these companies.
Turkey Hill also maintains partnerships with several professional sports teams, including the Lancaster Barnstormers, Camden Riversharks, and the York Revolution, all of the Atlantic League of Professional Baseball. Other sports partnerships include Major League Soccer 's Philadelphia Union, Major League Baseball 's Philadelphia Phillies and the New York Yankees, and the National Football League 's Pittsburgh Steelers. The company 's relationships with the Philadelphia Phillies, New York Yankees, and Pittsburgh Steelers include the manufacture of official team ice cream flavors.
The Turkey Hill Experience is a 17,000 sq ft (1,600 m) attraction based in Lancaster County, Pennsylvania that pays homage to Turkey Hill Dairy 's history while highlighting its ice cream and iced tea - making processes. It opened in June 2011 and is located in Columbia, Pennsylvania, just six miles from the company 's main production facility in Conestoga, Pennsylvania.
The Turkey Hill Experience includes a variety of interactive displays aimed at entertaining both children and adults. In addition to learning about Turkey Hill Dairy 's history and the iced tea and ice cream manufacturing processes, visitors can also make their own virtual ice cream flavor, an experience that uses green - screen technology to allow visitors to star in a Turkey Hill TV commercial. An exhibit called the Turkey Hill Taste Lab, opened in June 2013, allows visitors to bring their virtual ice cream creations to life in a hands - on, educational experience where they develop and taste the flavors they created.
The Turkey Hill Experience is located in a former Ashley and Bailey silk mill, part of which was originally built in 1899. The location had several owners and uses in the decades that followed, before being abandoned after the Tidy Products company stopped using it as a sewing factory in the late 1970s. The building was vacant for more than 25 years before Turkey Hill began to work with a developer to repurpose the building. This former silk mill should not be confused with the nearby Ashley and Bailey Company Silk Mill in West York or the Ashley and Bailey Silk Mill in Marietta, both of which are listed on the National Register of Historic Places.
In 1967, Charles and Emerson Frey opened the first Turkey Hill Minit Markets store in Lancaster, Pennsylvania, as a way to better market their dairy products.
The stores operated as a separate business - Farmland Industries - with the headquarters in the original store basement. In 1978, they built their current headquarters at 257 Centerville Road in East Hempfield Township, Lancaster County, Pennsylvania.
On December 19, 1974, the stores won a legal battle overturning the so - called blue laws that prohibited retailers from opening on Sunday, and in 1976, they became the first company to offer self - service gasoline in Pennsylvania.
In 1979, Turkey Hill Minit Market purchased 36 Louden Hill stores. In July 1985, they purchased a number of 7 - Eleven stores and six Ideal Markets. That same year the Turkey Hill Minit Markets chain was purchased by Kroger. In Lancaster County, Turkey Hill Minit Markets were the overwhelming convenience store choice; in some cases, stores were located as close as three blocks apart. During the 1990s, Turkey Hill and competitors Sheetz and Wawa began overlapping their regions of service. John Hofmeister, president of Shell Oil commented on the new situation in sworn testimony before the U.S. Senate Committee on the Judiciary in March 2006: "We are seeing healthy new retail competition emerging with brands such as Wawa, Sheetz, and Turkey Hill. ''
In 1998, Turkey Hill opened its 249th store in Hazleton, Pennsylvania. This store was the first of many stores to open with Food Service. Food Service offers fresh hoagies, sandwiches, pizza, and many other hot foods. Many new stores are built with food service and car washes. Beginning in 1999, new larger stores were opened with more of an emphasis on selling gasoline. About 200 of the 240 stores in Central Pennsylvania have gas pumps.
In 2004, the stores adopted the current signage, featuring a stylized map of the contiguous U.S. Kroger also owns the Kwik Shop chain in Illinois, Iowa, Kansas, and Nebraska; the Loaf ' N Jug Mini Marts in Colorado, New Mexico, Nebraska, Montana, North Dakota, Oklahoma, South Dakota, and Wyoming; Quik Stop Markets in California and Nevada; and the Tom Thumb in Florida and Alabama - all of which use the same logo and font. The older stores will be getting face lifts with the new logo and store front. Kroger has not indicated any plans yet to consolidate the marketing for their 800 convenience stores, but a new vice-president, Van Tarver, was named in June 2006 to oversee their convenience store and petroleum division. Van Tarver left the company in 2014 to start his own business, Van Tarver Group.
Turkey Hill 's logo at Clipper Magazine Stadium, Lancaster
Turkey Hill featured on Santander Stadium 's Arch Nemesis, York
|
why cant you swim in the east river nyc | East River - wikipedia
The East River is a salt water tidal estuary in New York City. The waterway, which is actually not a river despite its name, connects Upper New York Bay on its south end to Long Island Sound on its north end. It separates the borough of Queens on Long Island from the Bronx on the North American mainland, and also divides Manhattan from Queens and Brooklyn, which are also on Long Island. Because of its connection to Long Island Sound, it was once also known as the Sound River. The tidal strait changes its direction of flow frequently, and is subject to strong fluctuations in its current, which are accentuated by its narrowness and variety of depths. The waterway is navigable for its entire length of 16 miles (26 km), and was historically the center of maritime activities in the city, although that is no longer the case.
Technically a drowned valley, like the other waterways around New York City, the strait was formed approximately 11,000 years ago at the end of the Wisconsin glaciation. The distinct change in the shape of the strait between the lower and upper portions is evidence of this glacial activity. The upper portion (from Long Island Sound to Hell Gate), running largely perpendicular to the glacial motion, is wide, meandering, and has deep narrow bays on both banks, scoured out by the glacier 's movement. The lower portion (from Hell Gate to New York Bay) runs north - south, parallel to the glacial motion. It is much narrower, with straight banks. The bays that exist, as well as those that used to exist before being filled in by human activity, are largely wide and shallow.
The section known as "Hell Gate '' -- from the Dutch name Hellegat or "passage to hell '' given to the entire river in 1614 by explorer Adriaen Block when he passed through it in his ship Tyger -- is a narrow, turbulent, and particularly treacherous stretch of the river. Tides from the Long Island Sound, New York Harbor and the Harlem River meet there, making it difficult to navigate, especially because of the number of rocky islets which once dotted it, with names such as "Frying Pan '', "Pot, Bread and Cheese '', "Hen and Chicken '', "Nigger Head '', "Heel Top ''; "Flood ''; and "Gridiron '', roughly 12 islets and reefs in all, all of which led to a number of shipwrecks, including the British frigate Hussar which sank in 1780 while carrying gold and silver intended to pay British troops. The stretch has since been cleared of rocks and widened. Washington Irving wrote of Hell Gate that the current sounded "like a bull bellowing for more drink '' at half tide, whilte at full tide it slept "as soundly as an alderman after dinner. '' He said it was like "a peaceable fellow enough when he has no liquor at all, or when he has a skinful, but who, when half - seas over, plays the very devil. '' The tidal regime is complex, with the two major tides -- from the Long Island Sound and from the Atlantic Ocean -- separated by about two hours; and this is without consideration of the tidal influence of the Harlem River, all of which creates a "dangerous cataract '', as one ship 's captain put it.
The river is navigable for its entire length of 16 miles (26 km). In 1939 it was reported that the stretch from The Battery to the former Brooklyn Navy Yard near Wallabout Bay, a run of about 1,000 yards (910 m), was 40 feet (12 m) deep, the long section from there, running to the west of Roosevelt Island, through Hell Gate and to Throg 's Neck was at least 35 feet (11 m) deep, and then eastward from there the river was, at mean low tide, 168 feet (51 m) deep.
The broadness of the river 's channel south of Roosevelt Island is caused by the dipping of the hard Fordham gneiss, which underlies the island, beneath the weaker Inwood marble which lies under the river bed. Why the river turns to the east as it approaches the three lower Manhattan bridges currently has no accepted geological explanation.
In the stretch of the river between Manhattan Island and the borough of Queens, lies Roosevelt Island, a narrow (maximum width 800 feet (240 m)) 2 - mile (3.2 km) long island consisting of 147 acres (0.59 km²). Politically part of Manhattan, it begins at around the level of East 46th Street of that borough and runs up to around East 86th Street. Formerly called Blackwell 's Island and Welfare Island, and now named after President Franklin Delano Roosevelt, it was the site of a penitentiary, and a number of hospitals, but now consists primarily of apartment buildings, park land, and the ruins of older buildings. It is connected to Queens by the Roosevelt Island Bridge, to Manhattan by the Roosevelt Island Tramway, and to both by a subway station. The Queensboro Bridge runs across Roosevelt Island, but no longer has a passenger elevator connection to it, as it did in the past. The abrupt termination of the island on its north end is due to an extension of the 125th Street Fault.
Other islands in the river are U Thant Island -- formerly Belmont Island -- south of Roosevelt Island, which was named after U Thant, the former Secretary - General of the United Nations; Mill Rock; Wards and Randalls Islands, which have been joined together by landfill and are used as park land, for a stadium, and to support the Triborough Bridge and the Hell Gate Bridge; Rikers Island, a small island bought by the city in 1884 to be a prison farm and expanded with landfill from under 100 acres (40 ha) to over 400 acres (160 ha), which is currently the site of the city 's primary jail; and North and South Brother Islands. All of these except U Thant Island lie north of Roosevelt Island.
The Bronx River drains into the East River in the northern section of the strait, and the Flushing River, historically known as "Flushing Creek '' empties into it near LaGuardia Airport via Flushing Bay.
North of Randalls Island, it is joined by the Bronx Kill. Along the east of Wards Island, at approximately the strait 's midpoint, it narrows into a channel called Hell Gate, which is spanned by both the Robert F. Kennedy Bridge (formerly the Triborough), and the Hell Gate Bridge. On the south side of Wards Island, it is joined by the Harlem River.
Newtown Creek on Long Island drains into the East River, and forms part of the boundary between Queens and Brooklyn. The Gowanus Canal was built from Gowanus Creek, which emptied into the river. Historically, there were other small streams which emptied into the river -- including the Harlem Creek, one of the most significant tributaries originating in Manhattan -- but these and their associated wetlands have been filled in and built over.
Prior to the arrival of Europeans, the land north of the East River was occupied by the Siwanoys, one of many groups of Algonquin - speaking Lenapes in the area. Those of the Lenapes who lived in the northern part of Manhattan Island in a campsite known as Konaande Kongh used a landing at around the current location of East 119th Street to paddle into the river in canoes fashioned from tree trunks in order to fish.
Dutch settlement of what became New Amsterdam began in 1623. Some of the earliest of the small settlements in the area were along the west bank of the East River on sites that had previously been Native American settlements. As with the Native Americans, the river was central to their lives for transportation for trading and for fishing. They gathered marsh grass to feed their cattle, and the East River 's tides helped to power mills which ground grain to flour. By 1642 there was a ferry running on the river between Manhattan island and what is now Brooklyn, and the first pier on the river was built in 1647 at Pearl and Broad Streets. After the British took over the colony in 1664 and renamed it "New York '', the development of the waterfront continued, and a shipbuilding industry grew up once New York started exporting flour. By the end of the 17th century, the Great Dock, located at Corlear 's Hook on the East River, had been built.
Historically, the lower portion of the strait, which separates Manhattan from Brooklyn, was one of the busiest and most important channels in the world, particularly during the first three centuries of New York City 's history. Because the water along the lower Manhattan shoreline was too shallow for large boats to tie up and unload their goods, from 1686 on -- after the signing of the Dongan Charter, which allowed intertidal land to be owned and sold -- the shoreline was "wharfed out '' to the high - water mark by building retaining walls that were filled in with every conceivable kind of landfill: excrement, dead animals, ships deliberately sunk in place, ship ballast, and muck dredged from the bottom of the river. On the new land were built warehouses and other structures necessary for the burgeoning sea trade. Many of the "water - lot '' grants went to the rich and powerful families of the merchant class, although some went to tradesmen. By 1700, the Manhattan bank of the river had been "wharfed - out '' up to around Whitehall Street, narrowing the strait of the river.
After the signing of the Montgomerie Charter in the late 1720s, another 127 acres of land along the Manhattan shore of the East River was authorized to be filled - in, this time to a point 400 feet beyond the low - water mark; the parts that had already been expanded to the low water mark -- much of which had been devastated by a coastal storm in the early 1720s and a nor'easter in 1723 -- were also expanded, narrowing the channel even further. What had been quiet beach land was to become new streets and buildings, and the core of the city 's sea - borne trade. This infilling went as far north as Corlear 's Hook. In addition, the city was given control of the western shore of the river from Wallabout Bay south.
Expansion of the waterfront halted during the American Revolution, in which the East River played an important role early in the conflict. On August 28, 1776, while British and Hessian troops rested after besting the Americans at the Battle of Long Island, General George Washington was rounding up all the boats on the east shore of the river, in what is now Brooklyn, and used them to successfully move his troops across the river -- under cover of night, rain, and fog -- to Manhattan island, before the British could press their advantage. Thus, though the battle was a victory for the British, the failure of Sir William Howe to destroy the Continental Army when he had the opportunity allowed the Americans to continue fighting. Without the stealthy withdrawal across the East River, the American Revolution might have ended much earlier.
Wallabout Bay on the River was the site of most of the British prison ships -- most notoriously the HMS Jersey -- where thousands of American prisoners of war were held in terrible conditions. These prisoners had come into the hands of the British after the fall of New York City on September 15, 1776, after the American loss at the Battle of Long Island and the loss of Fort Washington on November 16. Prisoners began to be housed on the broken - down warships and transports in December; about 24 ships were used in total, but generally only 5 or 6 at a time. Almost twice as many Americans died from neglect aboard these ships as died in all the battles of the war: as many as 12,000 soldiers, sailors and civilians. The bodies were thrown overboard or were buried in shallow graves on the riverbanks, but their bones -- some of which were collected when they washed ashore -- were later relocated and are now inside the Prison Ship Martyrs ' Monument in nearby Fort Greene Park. The existence of the ships and the conditions the men were held in were widely known at the time through letters, diaries and memoirs, and were a factor not only in the attitude of Americans toward the British, but in the negotiations to formally end the war.
After the war, East River waterfront development continued once more. New York State legislation which in 1807 authorized what would become the Commissioners Plan of 1811 also authorized the creation of new land out to 400 feet from the low water mark into the river, and with the advent of gridded streets along the new waterline -- Joseph Mangin had laid out such a grid in 1803 in his A Plan and Regulation of the City of New York, which was rejected by the city, but established the concept -- the coastline became regularized at the same time that the strait became even narrower.
One result of the narrowing of the East River along the shoreline of Manhattan and, later, Brooklyn -- which continued until the mid-19th century when the state put a stop to it -- was an increase in the speed of its current. Buttermilk Channel, the strait that divides Governors Island from Red Hook in Brooklyn, and which is located directly south of the "mouth '' of the East River, was in the early 17th century a fordable waterway across which cattle could be driven. Further investigation by Colonel Jonathan Williams determined that the channel was by 1776 three fathoms deep (18 feet (5.5 m)), five fathoms deep (30 feet (9.1 m)) in the same spot by 1798, and when surveyed by Williams in 1807 had deepened to 7 fathoms (42 feet (13 m)) at low tide. What had been almost a bridge between two landforms which were once connected had become a fully navigable channel, thanks to the constriction of the East River and the increased flow it caused. Soon, the current in the East River had become so strong that larger ships had to use auxiliary steam power in order to turn. The continued narrowing of the channel on both sides may have been the reasoning behind the suggestion of one New York State Senator, who wanted to fill in the East River and annex Brooklyn, with the cost of doing so being covered by selling the newly made land. Others proposed a dam at Roosevelt Island (then Blackwell 's Island) to create a wet basin for shipping.
Filling in part of the river was also proposed in 1867 by engineer James E. Serrell, later a city surveyor, but with emphasis on solving the problem of Hell Gate. Serrell proposed filling in Hell Gate and building a "New East River '' through Queens with an extension to Westchester County. Serrell 's plan -- which he publicized with maps, essay and lectures as well as presentations to the city, state and federal governments -- would have filled in the river from 14th Street to 125th Street. The New East River through Queens would be about three times the average width of the existing one at an even 3,600 feet (1,100 m) throughout, and would run as straight as an arrow for five miles. The new land, and the portions of Queens which would become part of Manhattan, adding 2,500 acres (1,000 ha), would be covered with an extension of the existing street grid of Manhattan.
Variations on Serrell 's plan would be floated over the years. A pseudonymous "Terra Firma '' brought up filling in the East River again in the Evening Post and Scientific American in 1904, and Thomas Alva Edison took it up in 1906. Then Thomas Kennard Thompson, a bridge and railway engineer, proposed in 1913 to fill in the river from Hell Gate to the tip of Manhattan and, as Serrell had suggested, make a new canalized East River, only this time from Flushing Bay to Jamaica Bay. He would also expand Brooklyn into the Upper Harbor, put up a dam from Brooklyn to Staten Island, and make extensive landfill in the Lower Bay. At around the same time, in the 1920s, Dr. John A. Harriss, New York City 's chief traffic engineer, who had developed the first traffic signals in the city, also had plans for the river. Harriss wanted to dam the East River at Hell Gate and the Williamsburg Bridge, then remove the water, put a roof over it on stilts, and build boulevards and pedestrian lanes on the roof along with "majestic structures '', with transportation services below. The East River 's course would, once again, be shifted to run through Queens, and this time Brooklyn as well, to channel it to the Harbor.
Periodically, merchants and other interested parties would try to get something done about the difficulty of navigating through Hell Gate. In 1832, the New York State legislature was presented with a petition for a canal to be built through nearby Hallet 's Point, thus avoiding Hell Gate altogether. Instead, the legislature responded by providing ships with pilots trained to navigate the shoals for the next 15 years.
In 1849, a French engineer whose specialty was underwater blasting, Benjamin Maillefert, had cleared some of the rocks which, along with the mix of tides, made the Hell Gate stretch of the river so dangerous to navigate. Ebenezer Meriam had organized a subscription to pay Maillefert $6,000 to, for instance, reduce "Pot Rock '' to provide 24 feet (7.3 m) of depth at low - mean water. While ships continued to run aground (in the 1850s about 2 % of ships did so) and petitions continued to call for action, the federal government undertook surveys of the area which ended in 1851 with a detailed and accurate map. By then Maillefert had cleared the rock "Baldheaded Billy '', and it was reported that Pot Rock had been reduced to 20.5 feet (6.2 m), which encouraged the United States Congress to appropriate $20,000 for further clearing of the strait. However, a more accurate survey showed that the depth of Pot Rock was actually a little more than 18 feet (5.5 m), and eventually Congress withdrew its funding.
With the main shipping channels through The Narrows into the harbor silting up with sand due to littoral drift, thus providing ships with less depth, and a new generation of larger ships coming online -- epitomized by Isambard Kingdom Brunel 's SS Great Eastern, popularly known as "Leviathan '' -- New York began to be concerned that it would start to lose its status as a great port if a "back door '' entrance into the harbor was not created. In the 1850s the depth continued to lessen -- the harbor commission said in 1850 that the mean water low was 24 feet (7.3 m) and the extreme water low was 23 feet (7.0 m) -- while the draft required by the new ships continued to increase, meaning it was only safe for them to enter the harbor at high tide.
The U.S. Congress, realizing that the problem needed to be addressed, appropriated $20,000 for the Army Corps of Engineers to continue Maillefert 's work, but the money was soon spent without appreciable change in the hazards of navigating the strait. An advisory council recommended in 1856 that the strait be cleared of all obstacles, but nothing was done, and the Civil War soon broke out.
In the late 1860s, after the Civil War, Congress realized the military importance of having easily navigable waterways, and charged the Army Corps of Engineers with clearing Hell Gate of the rocks there that caused a danger to navigation. The Corps ' Colonel John Newton estimated that the project would cost $1 million, as compared to the approximate annual loss in shipping of $2 million. Initial forays floundered, and Newton, by that time a general, took over direct control of the project. In 1868 Newton decided, with the support of both New York 's mercantile class and local real estate interests, to focus on the 3 - acre (1.2 ha) Hallett 's Point Reef off of Queens. The project would involve 7,000 feet (2,100 m) of tunnels equipped with trains to haul debris out as the reef was eviscerated, creating a reef structured like "Swiss cheese '' which Newton would then blow up. After seven years of digging seven thousand holes, and filling four thousand of them with 30,000 pounds (14,000 kg) of dynamite, on September 24, 1876, in front of an audience of people including the inhabitants of the insane asylum on Wards Island, but not the prisoners of Roosevelt Island -- then called Blackwell 's Island -- who remained in their cells, Newton 's daughter set off the explosion. The effect was immediate in decreased turbulence through the strait, and fewer accidents and shipwrecks. The city 's Chamber of Commerce commented that "The Centennial year will be for ever known in the annals of commerce for this destruction of one of the terrors of navigation. '' Clearing out the debris from the explosion took until 1891.
Then, in 1885, Flood Rock, a 9 - acre (3.6 ha) reef that Newton had begun to undermine even before starting on Hallett 's Point, removing 8,000 cubic yards (6,100 m³) of rock from the reef, was blown up as well, with Civil War General Philip Sheridan and abolitionist Henry Ward Beecher among those in attendance, and Newton 's daughter once more setting off the blast, the biggest ever to that date, and reportedly the largest man - made explosion until the advent of the atomic bomb, although the detonation at the Battle of Messines in 1917 was several times larger. Two years later, plans were in place to dredge Hell Gate to a consistent depth of 26 feet (7.9 m).
At the same time that Hell Gate was being cleared, the Harlem River Ship Canal was being planned. When it was completed in 1895, the "back door '' to New York 's center of ship - borne trade in the docks and warehouses of the East River was open from two directions, through the cleared East River, and from the Hudson River through the Harlem River to the East River. Ironically, though, while both forks of the northern shipping entrance to the city were now open, modern dredging techniques had cut through the sandbars of the Atlantic Ocean entrance, allowing new, even larger ships to use that traditional passage into New York 's docks.
At the beginning of the 19th century, the East River was the center of New York 's shipping industry, but by the end of the century, much of it had moved to the Hudson River, leaving the East River wharves and slips to begin a long process of decay, until the area was finally rehabilitated in the mid-1960s, and the South Street Seaport Museum was opened in 1967.
By 1870, the condition of the Port of New York along both the East and Hudson Rivers had so deteriorated that the New York State legislature created the Department of Docks to renovate the port and keep New York competitive with other ports on the American East Coast. The Department of Docks was given the task of creating the master plan for the waterfront, and General George B. McClellan was engaged to head the project. McClellan held public hearings and invited plans to be submitted, ultimately receiving 70 of them, although in the end he and his successors put his own plan into effect. That plan called for the building of a seawall around Manhattan island from West 61st Street on the Hudson, around The Battery, and up to East 51st Street on the East River. The area behind the masonry wall (mostly concrete but in some parts granite blocks) would be filled in with landfill, and wide streets would be laid down on the new land. In this way, a new edge for the island (or at least the part of it used as a commercial port) would be created.
The Department had surveyed 13,700 feet (4,200 m) of shoreline by 1878, as well as documenting the currents and tides. By 1900, 75 miles (121 km) had been surveyed and core samples had been taken to inform the builders of how deep the bedrock was. The work was completed just as World War I began, allowing the Port of New York to be a major point of embarkation for troops and materiel.
The new seawall helps protect Manhattan island from storm surges, although it is only 5 feet (1.5 m) above the mean sea level, so that particularly dangerous storms, such as the nor'easter of 1992 and Hurricane Sandy in 2012, which hit the city in a way to create surges which are much higher, can still do significant damage. (The Hurricane of September 3, 1831 created the biggest storm surge on record in New York City: a rise of 13 feet (4.0 m) in one hour at the Battery, flooding all of lower Manhattan up to Canal Street.) Still, the new seawall begun in 1871 gave the island a firmer edge, improved the quality of the port, and continues to protect Manhattan from normal storm surges.
The Brooklyn Bridge, completed in 1883, was the first bridge to span the East River, connecting the cities of New York and Brooklyn, and all but replacing the frequent ferry service between them, which did not return until the late 20th century. The bridge offered cable car service across the span. The Brooklyn Bridge was followed by the Williamsburg Bridge (1903), the Queensboro Bridge (1909), the Manhattan Bridge (1912) and the Hell Gate Railroad Bridge (1916). Later would come the Triborough Bridge (1936), the Bronx - Whitestone Bridge (1939), the Throgs Neck Bridge (1961) and the Rikers Island Bridge (1966). In addition, numerous rail tunnels pass under the East River -- most of them part of the New York City Subway system -- as does the Brooklyn - Battery Tunnel and the Queens - Midtown Tunnel. (See Crossings below for details.) Also under the river is Water Tunnel # 1 of the New York City water supply system, built in 1917 to extend the Manhattan portion of the tunnel to Brooklyn, and via City Tunnel # 2 (1936) to Queens; these boroughs became part of New York City after the city 's consolidation in 1898. City Tunnel # 3 will also run under the river, under the northern tip of Roosevelt Island, and is expected to be completed by 2018; the Manhattan portion of the tunnel went into service in 2013.
Philanthropist John D. Rockefeller founded what is now Rockefeller University in 1901, between 63rd and 64th Streets on the river side of York Avenue, overlooking the river. The university is a research university for doctoral and post-doctoral scholars, primarily in the fields of medicine and biological science. North of it is one of the major medical centers in the city, NewYork Presbyterian / Weill Cornell Medical Center, which is associated with the medical schools of both Columbia University and Cornell University. Although it can trace its history back to 1771, the center on York Avenue, much of which overlooks the river, was built in 1932.
The East River was the site of one of the greatest disasters in the history of New York City when, in June 1904, the PS General Slocum sank near North Brother Island due to a fire. It was carrying 1,400 German - Americans to a picnic site on Long Island for an annual outing. There were only 321 survivors of the disaster, one of the worst losses of life in the city 's long history, and a devastating blow to the Little Germany neighborhood on the Lower East Side. The captain of the ship and the managers of the company that owned it were indicted, but only the captain was convicted; he spent 3 and a half years of his 10 - year sentence at Sing Sing Prison before being released by a Federal parole board, and then pardoned by President William Howard Taft.
Beginning in 1934, and then again from 1948 - 1966, the Manhattan shore of the river became the location for the limited - access East River Drive, which was later renamed after Franklin Delano Roosevelt, and is universally known by New Yorkers as the "FDR Drive ''. The road is sometimes at grade, sometimes runs under locations such as the site of the Headquarters of the United Nations, Carl Schurz Park, and Gracie Mansion -- the mayor 's official residence -- and is at times double - decked, because Hell Gate provides no room for more landfill. It begins at Battery Park, runs past the Brooklyn, Manhattan, Williamsburg and Queensboro Bridges, and the Ward 's Island Footbridge, and terminates just before the Robert F. Kennedy Triboro Bridge when it connects to the Harlem River Drive. Between most of the FDR Drive and the River is the East River Greenway, part of the Manhattan Waterfront Greenway. The East River Greenway was primarily built in connection with the building of the FDR Drive, although some portions were built as recently as 2002, and other sections are still incomplete.
In 1963, Con Edison built the Ravenswood Generating Station on the Long Island City shore of the river, on land some of which was once stone quarries which provided granite and marble slabs for Manhattan 's buildings. The plant has since been owned by KeySpan, National Grid, and TransCanada, the result of deregulation of the electrical power industry. The station, which can generate about 20 % of the electrical needs of New York City -- approximately 2,500 megawatts -- receives some of its fuel by oil barge.
North of the power plant can be found Socrates Sculpture Park, an illegal dumpsite and abandoned landfill that in 1986 was turned into an outdoor museum, exhibition space for artists, and public park by sculptor Mark di Suvero and local activists. The area also contains Rainey Park, which honors Thomas C. Rainey, who attempted for 40 years to get a bridge built in that location from Manhattan to Queens. The Queensboro Bridge was eventually built south of this location.
In 2011, NY Waterway started operating its East River Ferry line. The route was a 7 - stop East River service that runs in a loop between East 34th Street and Hunters Point, making two intermediate stops in Brooklyn and three in Queens. The ferry, an alternative to the New York City Subway, cost $4 per one - way ticket. It was instantly popular: from June to November 2011, the ferry saw 350,000 riders, over 250 % of the initial ridership forecast of 134,000 riders. In December 2016, in preparation for the start of NYC Ferry service the next year, Hornblower Cruises purchased the rights to operate the East River Ferry. NYC Ferry started service on May 1, 2017, with the East River Ferry as part of the system.
In February 2012 the federal government announced an agreement with Verdant Power to install 30 tidal turbines in the channel of the East River. The turbines were projected to begin operations in 2015 and to produce 1.05 megawatts of power. The strength of the current foiled an earlier effort in 2007 to tap the river for tidal power.
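As a rough plausibility check on those figures (the specific numbers below are illustrative assumptions, not Verdant Power specifications), the power a turbine of capture area A and efficiency C_p extracts from a current of speed v is

```latex
P = \tfrac{1}{2}\, C_p\, \rho\, A\, v^{3}
```

With seawater density rho of about 1025 kg/m³, the roughly four - knot (v of about 2 m/s) current mentioned below, and an assumed C_p of about 0.35, each turbine 's share of the 1.05 MW target (about 35 kW) requires A of roughly 24 m², i.e. a rotor only 5 to 6 meters across. That modest size is why a strong, steady tidal channel is an attractive site, even though the same current complicates installation.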
On May 7, 2017, the catastrophic failure of a Con Edison substation in Brooklyn caused a spill into the river of over 5,000 US gallons (18,927 l; 4,163 imp gal) of dielectric fluid, a synthetic mineral oil used to cool electrical equipment and prevent electrical discharges. (See below.)
Throughout most of the history of New York City, and New Amsterdam before it, the East River has been the receptacle for the city 's garbage and sewage. "Night men '' who collected "night soil '' from outdoor privies would dump their loads into the river, and even after the construction of the Croton Aqueduct (1842) and then the New Croton Aqueduct (1890) gave rise to indoor plumbing, the waste that was flushed away into the sewers, where it mixed with ground run off, ran directly into the river, untreated. The sewers terminated at the slips where ships docked, until the waste began to build up, preventing dockage, after which the outfalls were moved to the end of the piers. The "landfill '' which created new land along the shoreline when the river was "wharfed out '' by the sale of "water lots '' was largely garbage such as bones, offal, and even whole dead animals, along with excrement -- human and animal. The result was that by the 1850s, if not before, the East River, like the other waterways around the city, was undergoing the process of eutrophication where the increase in nitrogen from excrement and other sources led to a decrease in free oxygen, which in turn led to an increase in phytoplankton such as algae and a decrease in other life forms, breaking the area 's established food chain. The East River became very polluted, and its animal life decreased drastically.
In an earlier time, one person had described the transparency of the water: "I remember the time, gentlemen, when you could go in twelve feet of water and you could see the pebbles on the bottom of this river. '' As the water got more polluted, it darkened, underwater vegetation (such as photosynthesizing seagrass) began dying, and as the seagrass beds declined, the many associated species of their ecosystems declined as well, contributing to the decline of the river. Also harmful was the general destruction of the once plentiful oyster beds in the waters around the city, and the over-fishing of menhaden, or mossbunker, a small silvery fish which had been used since the time of the Native Americans for fertilizing crops; however, it took 8,000 of these schooling fish to fertilize a single acre, so mechanized fishing using the purse seine was developed, and eventually the menhaden population collapsed. Menhaden feed on phytoplankton, helping to keep them in check, and are also a vital step in the food chain, as bluefish, striped bass and other fish species which do not eat phytoplankton feed on the menhaden. The oyster is another filter feeder: oysters purify 10 to 100 gallons a day, while each menhaden filters four gallons in a minute, and their schools were immense: one report had a farmer collecting 20 oxcarts ' worth of menhaden using simple fishing nets deployed from the shore. The combination of more sewage, due to the availability of more potable water -- New York 's water consumption per capita was twice that of Europe -- indoor plumbing, the destruction of filter feeders, and the collapse of the food chain, damaged the ecosystem of the waters around New York, including the East River, almost beyond repair.
Because of these changes to the ecosystem, by 1909, the level of dissolved oxygen in the lower part of the river had declined to less than 65 % of saturation, where 55 % of saturation is the point at which the amount of fish and the number of their species begins to be affected. Only 17 years later, by 1926, the level of dissolved oxygen in the river had fallen to 13 %, below the point at which most fish species can survive.
Due to heavy pollution, the East River is dangerous to people who fall in or attempt to swim in it, although as of mid-2007 the water was cleaner than it had been in decades. As of 2010, the New York City Department of Environmental Protection (DEP) categorizes the East River as Use Classification I, meaning it is safe for secondary contact activities such as boating and fishing. According to the marine sciences section of the DEP, the channel is swift, with water moving as fast as four knots, just as it does in the Hudson River on the other side of Manhattan. That speed can push casual swimmers out to sea. A few people drown in the waters around New York City each year.
As of 2013, it was reported that the level of bacteria in the river was below Federal guidelines for swimming on most days, although the readings may vary significantly, so that the outflow from Newtown Creek or the Gowanus Canal can be tens or hundreds of times higher than recommended, according to Riverkeeper, a non-profit environmentalist advocacy group. The counts are also higher along the shores of the strait than they are in the middle of its flow. Nevertheless, the "Brooklyn Bridge Swim '' is an annual event where swimmers cross the channel from Brooklyn Bridge Park to Manhattan.
Still, thanks to reductions in pollution, cleanups, the restriction of development, and other environmental controls, the East River along Manhattan is one of the areas of New York 's waterways -- including the Hudson - Raritan Estuary and both shores of Long Island -- which have shown signs of the return of biodiversity. On the other hand, the river is also under attack from hardy, competitive, alien species, such as the European green crab, which is considered to be one of the world 's ten worst invasive species, and is present in the river.
On May 7, 2017, the catastrophic failure of Con Edison 's Farragut Substation at 89 John Street in Dumbo, Brooklyn, caused a spill of dielectric fluid -- an insoluble synthetic mineral oil, considered non-toxic by New York state, used to cool electrical equipment and prevent electrical discharges -- into the East River from a 37,000 - US - gallon (140,060 l; 30,809 imp gal) tank. The National Response Center received a report of the spill at 1: 30pm that day, although the public did not learn of the spill for two days, and then only from tweets from NYC Ferry. A "safety zone '' was established, extending from a line drawn between Dumont Street in Greenpoint, Brooklyn, to East 25th Street in Kips Bay, Manhattan, south to Buttermilk Channel. Recreational and human - powered vehicles such as kayaks and paddleboards were banned from the zone while the oil was being cleaned up, and the speed of commercial vehicles restricted so as not to spread the oil in their wakes, causing delays in NYC Ferry service. The clean - up efforts were being undertaken by Con Edison personnel and private environmental contractors, the U.S. Coast Guard, and the New York State Department of Environmental Conservation, with the assistance of NYC Emergency Management.
The loss of the sub-station caused a voltage dip in the power provided by Con Ed to the Metropolitan Transportation Authority 's New York City Subway system, which disrupted its signals.
The Coast Guard estimated that 5,200 US gallons (19,684 l; 4,330 imp gal) of oil spilled into the water, with the remainder soaking into the soil at the substation. In the past the Coast Guard has on average been able to recover about 10 % of oil spilled; however, the complex tides in the river make the recovery much more difficult, with the turbulent water caused by the river 's change of tides pushing contaminated water over the containment booms, where it is then carried out to sea and can not be recovered. By Friday May 12, officials from Con Edison reported that almost 600 US gallons (2,271 l; 500 imp gal) had been taken out of the water.
Environmental damage to wildlife is expected to be less than if the spill was of petroleum - based oil, but the oil can still block the sunlight necessary for the river 's fish and other organisms to live. Nesting birds are also in possible danger from the oil contaminating their nests and potentially poisoning the birds or their eggs. Water from the East River was reported to have tested positive for low levels of PCB, a known carcinogen.
Putting the spill into perspective, John Lipscomb, the vice president of advocacy for Riverkeeper, said that the chronic release after heavy rains of overflow from the city 's wastewater treatment system was "a bigger problem for the harbor than this accident. '' The state Department of Environmental Conservation is investigating the spill. It was later reported that according to DEC data which dates back to 1978, the substation involved had spilled 179 times previously, more than any other Con Ed facility. The spills have included 8,400 gallons of dielectric oil, hydraulic oil, and anti-freeze which leaked at various times into the soil around the substation, the sewers, and the East River.
On June 22, Con Edison used non-toxic green dye and divers in the river to find the source of the leak. As a result, a 4 - inch (10 cm) hole was plugged. The utility continued to believe that the bulk of the spill went into the ground around the substation, and excavated and removed several hundred cubic yards of soil from the area. They estimated that about 5,200 US gallons (19,684 l; 4,330 imp gal) went into the river, of which 520 US gallons (1,968 l; 433 imp gal) were recovered. Con Edison said that it installed a new transformer, and intended to add a new barrier around the facility to help guard against future spills propagating into the river.
A "shot tower '' at 53rd Street in Manhattan on the East River (1831)
Blackwells Island from Eighty Sixth Street, Currier & Ives (1862); Blackwell 's Island is now known as Roosevelt Island
Manhattan Bridge (top) and Brooklyn Bridge (bottom); Manhattan is on the left, Brooklyn on the right (1981)
The East River passes children playing football in East River Park (2008)
Powell 's Cove, in Whitestone, Queens (2009)
The East River flows past the Upper East Side (2009)
The East River with Brooklyn Heights in the background, Topsail Schooner Clipper City (2013)
The East River and Lower Manhattan (2013)
|
california’s san andreas fault is which type of plate boundary | San Andreas fault - wikipedia
The San Andreas Fault is a continental transform fault that extends roughly 1,200 kilometers (750 mi) through California. It forms the tectonic boundary between the Pacific Plate and the North American Plate, and its motion is right - lateral strike - slip (horizontal). The fault divides into three segments, each with different characteristics and a different degree of earthquake risk. The slip rate along the fault ranges from 20 to 35 mm (0.79 to 1.38 in) / yr.
The fault was identified in 1895 by Professor Andrew Lawson of UC Berkeley, who discovered the northern zone. It is often described as having been named after San Andreas Lake, a small body of water that was formed in a valley between the two plates. However, according to some of his reports from 1895 and 1908, Lawson actually named it after the surrounding San Andreas Valley. Following the 1906 San Francisco earthquake, Lawson concluded that the fault extended all the way into southern California.
In 1953, geologist Thomas Dibblee concluded that hundreds of miles of lateral movement could occur along the fault. A project called the San Andreas Fault Observatory at Depth (SAFOD) near Parkfield, Monterey County, was drilled through the fault during 2004 - 2007 to collect material and make physical and chemical observations to better understand fault behavior.
The northern segment of the fault runs from Hollister, through the Santa Cruz Mountains, epicenter of the 1989 Loma Prieta earthquake, then up the San Francisco Peninsula, where it was first identified by Professor Lawson in 1895, then offshore at Daly City near Mussel Rock. This is the approximate location of the epicenter of the 1906 San Francisco earthquake. The fault returns onshore at Bolinas Lagoon just north of Stinson Beach in Marin County. It returns underwater through the linear trough of Tomales Bay which separates the Point Reyes Peninsula from the mainland, runs just east of Bodega Head through Bodega Bay and back underwater, returning onshore at Fort Ross. (In this region around the San Francisco Bay Area several significant "sister faults '' run more - or-less parallel, and each of these can create significantly destructive earthquakes.) From Fort Ross, the northern segment continues overland, forming in part a linear valley through which the Gualala River flows. It goes back offshore at Point Arena. After that, it runs underwater along the coast until it nears Cape Mendocino, where it begins to bend to the west, terminating at the Mendocino Triple Junction.
The central segment of the San Andreas Fault runs in a northwestern direction from Parkfield to Hollister. While the southern section of the fault and the parts through Parkfield experience earthquakes, the rest of the central section of the fault exhibits a phenomenon called aseismic creep, where the fault slips continuously without causing earthquakes.
The southern segment (also known as the Mojave segment) begins near Bombay Beach, California. Box Canyon, near the Salton Sea, contains upturned strata associated with that section of the fault. The fault then runs along the southern base of the San Bernardino Mountains, crosses through the Cajon Pass and continues northwest along the northern base of the San Gabriel Mountains. These mountains are a result of movement along the San Andreas Fault and are commonly called the Transverse Ranges. In Palmdale, a portion of the fault is easily examined at a roadcut for the Antelope Valley Freeway. The fault continues northwest alongside the Elizabeth Lake Road to the town of Elizabeth Lake. As it passes the towns of Gorman, Tejon Pass and Frazier Park, the fault begins to bend northward, forming the "Big Bend ''. This restraining bend is thought to be where the fault locks up in Southern California, with an earthquake - recurrence interval of roughly 140 -- 160 years. Northwest of Frazier Park, the fault runs through the Carrizo Plain, a long, treeless plain where much of the fault is plainly visible. The Elkhorn Scarp defines the fault trace along much of its length within the plain.
The southern segment, which stretches from Parkfield in Monterey County all the way to the Salton Sea, is capable of an 8.1 - magnitude earthquake. At its closest, this fault passes about 35 miles (56 km) to the northeast of Los Angeles. Such a large earthquake on this southern segment would kill thousands of people in Los Angeles, San Bernardino, Riverside, and surrounding areas, and cause hundreds of billions of dollars in damage.
The Pacific Plate, to the west of the fault, is moving in a northwest direction while the North American Plate to the east is moving toward the southwest, but relatively southeast under the influence of plate tectonics. The rate of slippage averages about 33 to 37 millimeters (1.3 to 1.5 in) a year across California.
The southwestward motion of the North American Plate towards the Pacific is creating compressional forces along the eastern side of the fault. The effect is expressed as the Coast Ranges. The northwest movement of the Pacific Plate is also creating significant compressional forces which are especially pronounced where the North American Plate has forced the San Andreas to jog westward. This has led to the formation of the Transverse Ranges in Southern California, and to a lesser but still significant extent, the Santa Cruz Mountains (the location of the Loma Prieta earthquake in 1989).
Studies of the relative motions of the Pacific and North American plates have shown that only about 75 percent of the motion can be accounted for in the movements of the San Andreas and its various branch faults. The rest of the motion has been found in an area east of the Sierra Nevada mountains called the Walker Lane or Eastern California Shear Zone. The reason for this is not clear. Several hypotheses have been offered and research is ongoing. One hypothesis -- which gained interest following the Landers earthquake in 1992 -- suggests the plate boundary may be shifting eastward away from the San Andreas towards Walker Lane.
Assuming the plate boundary does not change as hypothesized, projected motion indicates that the landmass west of the San Andreas Fault, including Los Angeles, will eventually slide past San Francisco, then continue northwestward toward the Aleutian Trench, over a period of perhaps twenty million years.
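The "perhaps twenty million years '' figure is consistent with simple arithmetic on the slip rate quoted above, taking roughly 600 km as the distance between Los Angeles and San Francisco (an approximate value used here only for illustration):

```latex
t \approx \frac{6.0 \times 10^{8}\ \mathrm{mm}}{35\ \mathrm{mm/yr}} \approx 1.7 \times 10^{7}\ \mathrm{yr}
```

That is about 17 million years at the upper end of the quoted slip rate, and somewhat longer at the lower end of the range.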
The San Andreas began to form in the mid Cenozoic about 30 Mya (million years ago). At this time, a spreading center between the Pacific Plate and the Farallon Plate (which is now mostly subducted, with remnants including the Juan de Fuca Plate, Rivera Plate, Cocos Plate, and the Nazca Plate) was beginning to reach the subduction zone off the western coast of North America. As the relative motion between the Pacific and North American Plates was different from the relative motion between the Farallon and North American Plates, the spreading ridge began to be "subducted '', creating a new relative motion and a new style of deformation along the plate boundaries. These geological features are what are chiefly seen along the San Andreas Fault. This change in relative motion is also a possible driver for the deformation of the Basin and Range, the separation of the Baja California Peninsula, and the rotation of the Transverse Ranges.
The main southern section of the San Andreas Fault proper has only existed for about 5 million years. The first known incarnation of the southern part of the fault was the Clemens Well - Fenner - San Francisquito fault zone around 22 -- 13 Ma. This system added the San Gabriel Fault as a primary focus of movement between 10 -- 5 Ma. Currently, it is believed that the modern San Andreas will eventually transfer its motion toward a fault within the Eastern California Shear Zone. This complicated evolution, especially along the southern segment, is mostly caused by the "Big Bend '' and / or a difference in the motion vector between the plates and the trend of the fault and its surrounding branches.
The fault was first identified in Northern California by UC Berkeley geology professor Andrew Lawson in 1895 and named by him after the Laguna de San Andreas, a small lake which lies in a linear valley formed by the fault just south of San Francisco. Eleven years later, Lawson discovered that the San Andreas Fault stretched southward into southern California after reviewing the effects of the 1906 San Francisco earthquake. Large - scale (hundreds of miles) lateral movement along the fault was first proposed in a 1953 paper by geologists Mason Hill and Thomas Dibblee. This idea, which was considered radical at the time, has since been vindicated by modern plate tectonics.
Seismologists discovered that the San Andreas Fault near Parkfield in central California consistently produces a magnitude 6.0 earthquake approximately once every 22 years. Following recorded seismic events in 1857, 1881, 1901, 1922, 1934, and 1966, scientists predicted that another earthquake should occur in Parkfield in 1993. It eventually occurred in 2004. Due to the frequency of predictable activity, Parkfield has become one of the most important areas in the world for large earthquake research.
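The roughly 22 - year figure is simply the mean interval between the six recorded events:

```latex
\bar{T} = \frac{1966 - 1857}{6 - 1} = \frac{109}{5} \approx 21.8\ \mathrm{years}
```

Adding one mean interval to the 1966 event points to the late 1980s; allowing for scatter in the individual intervals led to the 1993 prediction, and the fact that the quake did not arrive until 2004 shows how wide that scatter can be.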
In 2004, work began just north of Parkfield on the San Andreas Fault Observatory at Depth (SAFOD). The goal of SAFOD is to drill a hole nearly 3 kilometres (1.9 mi) into the Earth 's crust and into the San Andreas Fault. An array of sensors will be installed to record earthquakes that happen near this area.
The San Andreas Fault System has been the subject of a flood of studies. In particular, scientific research performed during the last 23 years has given rise to about 3,400 publications.
A study published in 2006 in the journal Nature found that the San Andreas fault has reached a sufficient stress level for an earthquake of magnitude greater than 7.0 on the moment magnitude scale to occur. This study also found that the risk of a large earthquake may be increasing more rapidly than scientists had previously believed. Moreover, the risk is currently concentrated on the southern section of the fault, i.e. the region around Los Angeles, because massive earthquakes have occurred relatively recently on the central (1857) and northern (1906) segments of the fault, while the southern section has not seen any similar rupture for at least 300 years. According to this study, a massive earthquake on that southern section of the San Andreas fault would result in major damage to the Palm Springs - Indio metropolitan area and other cities in San Bernardino, Riverside and Imperial counties in California, and Mexicali Municipality in Baja California. It would be strongly felt (and potentially cause significant damage) throughout much of Southern California, including densely populated areas of Los Angeles County, Ventura County, Orange County, San Diego County, Ensenada Municipality and Tijuana Municipality, Baja California, San Luis Rio Colorado in Sonora and Yuma, Arizona. Older buildings would be especially prone to damage or collapse, as would buildings built on unconsolidated gravel or in coastal areas where water tables are high (and thus subject to soil liquefaction). The paper concluded:
The information available suggests that the fault is ready for the next big earthquake but exactly when the triggering will happen and when the earthquake will occur we can not tell (...) It could be tomorrow or it could be 10 years or more from now.
Nevertheless, in the 12 years since that publication there has not been a substantial quake in the Los Angeles area, and two major reports issued by the U.S. Geological Survey (USGS) have made variable predictions as to the risk of future seismic events. The ability to predict major earthquakes with sufficient precision to warrant increased precautions has remained elusive.
The U.S. Geological Survey's most recent forecast, known as UCERF3 (Uniform California Earthquake Rupture Forecast 3), released in November 2013, estimated that an earthquake of magnitude 6.7 or greater (i.e. equal to or greater than the 1994 Northridge earthquake) occurs about once every 6.7 years statewide. The same report also estimated there is a 7% probability that an earthquake of magnitude 8.0 or greater will occur somewhere along the San Andreas Fault in the next 30 years. A different USGS study in 2008 tried to assess the physical, social and economic consequences of a major earthquake in southern California. That study predicted that a magnitude 7.8 earthquake along the southern San Andreas Fault could cause about 1,800 deaths and $213 billion in damage.
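To put those rates in perspective (an illustrative calculation, not from the report): at one magnitude 6.7+ earthquake per 6.7 years, roughly 30 / 6.7 ≈ 4.5 such events would be expected statewide over a 30-year window, whereas the much larger magnitude 8.0+ rupture is assigned only a 7% chance of occurring somewhere along the San Andreas Fault in that same period.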
A 2008 paper, studying past earthquakes along the Pacific coastal zone, found a correlation in time between seismic events on the northern San Andreas Fault and the southern part of the Cascadia subduction zone (which stretches from Vancouver Island to northern California). Scientists believe quakes on the Cascadia subduction zone may have triggered most of the major quakes on the northern San Andreas within the past 3,000 years. The evidence also shows the rupture direction going from north to south in each of these time-correlated events. However, the 1906 San Francisco earthquake seems to have been the exception to this correlation, because its rupture propagated mostly from south to north and it was not preceded by a major quake in the Cascadia zone.
The San Andreas Fault has had some notable earthquakes in historic times, among them the 1857 earthquake on the central segment and the 1906 San Francisco earthquake on the northern segment.
when did green revolution took place in india | Green Revolution in India - Wikipedia
The Green Revolution in India refers to a period when agriculture in India changed to an industrial system due to the adoption of modern methods and technology such as high-yielding variety (HYV) seeds, tractors, and pump sets. The Green Revolution was started by Dr. M.S. Swaminathan; the key leadership role he played, together with many others including G.S. Kalkat and Prof. M.M. Sharan, earned him the popularly used title "Father of the Green Revolution of India". The Green Revolution allowed developing countries, like India, to try to overcome poor agricultural productivity. Within India, it started in the early 1960s and led to an increase in food grain production, especially in Punjab, Haryana and Uttar Pradesh during the early phase.
The main development was the introduction of higher-yielding, rust-resistant varieties of wheat. The use of high-yielding variety (HYV) seeds with modern farming methods, together with improved fertilizers and irrigation techniques, increased production enough to make the country self-sufficient in food grains, thus improving agriculture in India.
The production of wheat has produced the best results in fueling India's self-sufficiency. Along with high-yielding seeds and irrigation facilities, the enthusiasm of farmers mobilised the idea of an agricultural revolution. However, the increased use of chemical pesticides and fertilizers had a negative effect on the soil and the land (e.g., land degradation).
Famines in India were very frequent from the 1940s to the 1970s. Due to the faulty distribution of food, and because farmers did not receive the true value of their labour, the majority of the population did not get enough food. Malnutrition and starvation were huge problems.
Marginal farmers found it very difficult to get finance and credit at economical rates from the government and banks, and hence fell easy prey to moneylenders. They took loans from zamindars, who charged high rates of interest and also exploited the farmers, later making them work in their fields to repay the loans (as farm labourers). Proper financing was not provided during the Green Revolution period, which created many problems and much suffering for the farmers of India. The government did, however, provide some help to farmers under loans.
Due to traditional agricultural practices, low productivity, and a growing population, food grains often had to be imported, draining scarce foreign reserves. It was thought that with the increased production due to the Green Revolution, the government could maintain buffer stock and India could achieve self-sufficiency and self-reliance.
Agriculture was basically for subsistence and, therefore, less agricultural product was offered for sale in the market. Hence, the need was felt to encourage the farmers to increase their production and offer a greater portion of their products for sale in the market. The new methods in agriculture increased the yield of rice and wheat, which reduced India 's dependence on food imports.
Indian environmentalist Vandana Shiva notes that this is the "second Green Revolution ''. The first Green Revolution, she suggests, was mostly publicly funded (by the Indian Government). This new Green Revolution, she says, is driven by private (and foreign) interest -- notably MNCs like Monsanto. Ultimately, this is leading to foreign ownership over most of India 's farmland.
Excessive and inappropriate use of fertilizers and pesticides has polluted waterways and killed beneficial insects and wildlife. It has caused over-use of the soil and rapidly depleted its nutrients. Rampant irrigation practices have eventually led to soil degradation, and groundwater levels have fallen dramatically. Further, heavy dependence on a few major crops has led to a loss of biodiversity on farms. These problems were aggravated by the absence of training in the use of modern technology and by widespread illiteracy, leading to excessive use of chemicals.
The Green Revolution spread only to irrigated and high-potential rain-fed areas. Villages or regions without access to sufficient water were left out, which widened the regional disparities between adopters and non-adopters, since HYV seeds can technically be used only on land with an assured water supply and the availability of other inputs such as chemicals and fertilizers. The application of the new technology in dry-land areas was simply ruled out.
States like Punjab, Haryana and western Uttar Pradesh, which had good irrigation and other infrastructure facilities, were able to derive the benefits of the Green Revolution and achieve faster economic development, while other states recorded slow growth in agricultural production.
The new agricultural strategy involving the use of HYV seeds was initially limited to wheat, maize and bajra. The other major crop, rice, responded much later. Progress in developing and applying HYV seeds to other crops, especially commercial crops like oilseeds and jute, has been very slow. In fact, in certain periods a decline in the output of commercial crops was witnessed because of the diversion of area from commercial crops to food crop production. The basic reason the Green Revolution did not spread to many crops was that in the early 1960s there was a severe shortage of food grains, and imports were used to overcome it. The government initiated the Green Revolution to increase food grain productivity, and non-food-grain crops were not covered. A substantial rise in one or two food grain crops cannot make a big difference in total agricultural production; thus the new technology contributed insignificantly to raising overall agricultural production, due to its limited crop coverage. It is therefore important that revolutionary efforts be made in all major crops.
who is jim gordon married to in batman | James Gordon (Comics) - wikipedia
James "Jim '' Gordon is a fictional character appearing in American comic books published by DC Comics, most commonly in association with the superhero Batman. The character debuted in the first panel of Detective Comics # 27 (May 1939), Batman 's first appearance, where he is referred to simply as Commissioner Gordon. The character was created by Bill Finger, but credited to Bob Kane. Commissioner Gordon made his debut as an ally of Batman, making him the first Batman supporting character ever to be introduced.
As the police commissioner of Gotham City, Gordon shares Batman 's deep commitment to ridding the city of crime. The character is typically portrayed as having full trust in Batman and is even somewhat dependent on him. In many modern stories, he is somewhat skeptical of Batman 's vigilante methods, but nevertheless believes that Gotham needs him. The two have a mutual respect and tacit friendship. Gordon is the father or adoptive father (depending on the continuity) of Barbara Gordon, the first modern Batgirl and the information broker Oracle. Jim Gordon also has a son, James Gordon Jr., who first appeared in Batman: Year One.
Created by Bill Finger, but credited to Bob Kane, Gordon debuted in the first panel of Detective Comics # 27 (May 1939), in which he is referred to simply as Commissioner Gordon. The character 's name was taken from the earlier pulp character commissioner James W. "Wildcat '' Gordon, also known as "The Whisperer '', created in 1936 by Henry Ralston, John Nanovic, and Lawrence Donovan for Street & Smith.
Gordon had served in the United States Marine Corps prior to becoming a police officer, which gave him a set of skills that would prove useful in the future. In most versions of the Batman mythos, Jim Gordon is at one point or another depicted as commissioner of the Gotham City Police Department. Gordon frequently contacts Batman for help in solving various crimes, particularly those committed by supervillains. Generally it is Gordon who uses the Bat-Signal to summon Batman, and it has become a running joke of sorts that the Dark Knight will often disappear in the middle of a discussion when Gordon's back is turned. Gordon is usually depicted with silver or red hair, eyeglasses, and a mustache. In most incarnations, he is seen wearing a trenchcoat, necktie, and on occasion a fedora. He is also sometimes pictured with a cane, although it is not revealed why he uses it. Because DC Comics retconned its characters' history in the 1985 miniseries Crisis on Infinite Earths, and because of different interpretations in television and film, the details of Gordon's history vary from story to story.
He has been married twice; first to Barbara Eileen Gordon (née Barbara Eileen Kean) and then to Sarah Essen - Gordon.
In the original pre-Crisis version of his history, Gordon is a police detective who initially resents the mysterious vigilante 's interference in police business. He first appears in Detective Comics # 27, in the very first Batman story, in which they both investigate the murder of a chemical industrialist. Although Batman fights on the side of justice, his methods and phenomenal track record for stopping crimes and capturing criminals embarrasses the police by comparison. Eventually, Batman meets up with Gordon and persuades the detective that they need each other 's help. Batman is deputized and works with Gordon as an agent of the law.
In Batman Special # 1, it is revealed that Gordon, as a young cop, shot and killed two robbers in self - defense in front of their son. The results of this event would lead the boy to become the first Wrath, a cop killer with a costume and motif inspired by Batman, who would come after Gordon for revenge years later.
The post-Crisis version of the character was introduced in the 1987 storyline Batman: Year One, written by Frank Miller. In this version, James W. Gordon is transferred back to Gotham City after spending more than 15 years in Chicago. A man of integrity, Gordon finds that Batman is his only ally against the mob - controlled administration. One of the most significant differences in this version is that Batman is never deputized and Gordon 's relationship with him is kept out of the public eye whenever possible. It is also added that he is a special forces veteran who is capable in hand - to - hand combat; he retaliates against an intimidation attempt by corrupt fellow officers with equal violence. He is depicted as having an extra-marital affair with a fellow detective, Sarah Essen. Essen and Gordon correctly deduce Batman 's identity at one point, but never investigate their guess more fully in order to confirm it. Gordon breaks off their affair after being blackmailed by the corrupt police commissioner, Gillian B. Loeb. Mob boss Carmine Falcone sends his nephew, Johnny Viti, to abduct Gordon 's family; Batman saves them, however, and helps Gordon expose Loeb 's corruption. After Loeb resigns, Gordon is promoted to Captain.
The 1998 miniseries Gordon of Gotham takes place nearly 20 years prior to the current events of the DC Universe and two months before his arrival in Gotham in Batman: Year One. It reveals that Gordon, during his tenure in Chicago, struggled with his wife over conceiving a child while taking university night classes in criminology. He becomes a minor celebrity after foiling a late-night robbery attempt. However, after deciding to investigate a corrupt fellow officer, he is assaulted and discredited. Gordon then uncovers evidence of rigging in the city council election and brings down two of his fellow officers, which leads to his commander recommending that he take a detective position opening in Gotham.
The story Wrath Child, published in Batman Confidential issues #13-16, retcons that Gordon started his career in Gotham but transferred to Chicago after shooting a corrupt cop and his wife (the parents of the original Wrath). The transfer was arranged by Loeb, then a captain, in an attempt to keep himself and his fellow corrupt cops from being exposed. Loeb threatens the future Wrath's life in order to force Gordon to comply with the transfer. Gordon later transfers back to Gotham around the same time Batman starts his career.
While still a Lieutenant in the force, Gordon convinces Loeb 's successor to implement the Bat - Signal as a means to contact Batman and also to frighten criminals. It is around this time that the first Robin, Dick Grayson, becomes Batman 's sidekick. Gordon initially disapproves of a child joining in Batman 's adventures, but soon grows to not only accept the boy but trust him as much as he does Batman.
In the following years, Gordon quickly rises to the rank of commissioner after he and Batman weed out corruption within the department. After the death of his brother and sister-in-law, he adopts his niece, Barbara. Soon after he adopts Barbara, he divorces his wife, who returns to Chicago with their son James, while he retains custody of Barbara, who eventually becomes Batgirl. Gordon quickly deduces the heroine's true identity and attempts to confront her about it, going so far as to search her bedroom for proof. However, he is semi-tricked out of this belief when Batman (after officially sanctioning Batgirl) has Robin dress up as Batgirl while Barbara is on the roof with her father. Gordon continues to suspect that his daughter is indeed Batgirl, but does not confront her about it again until years later.
In the 1988 graphic novel The Killing Joke, the Joker kidnaps Gordon after shooting and paralyzing Barbara. He then cages Gordon in the freak show of an abandoned amusement park and forces him to look at enlarged photos of his wounded daughter in an effort to drive him insane, thus proving to Batman that even seemingly normal people can lose their minds after having "one bad day ''. Batman eventually apprehends the Joker and rescues Gordon. Despite the intense trauma he has endured, Gordon 's sanity and ethical code are intact; he insists that Batman apprehend the Joker without harming him in order to "show him that our way works ''.
Soon after Sarah Essen returns to Gordon's life, they rekindle their romance and get engaged. However, Essen cannot comprehend why Gordon needs Batman so much, which occasionally puts a strain on their relationship.
In Batman: Legends of the Dark Knight Annual # 2, shortly before their planned wedding, former Lieutenant Flass (Gordon 's former partner) beats Gordon and kidnaps James Jr. for ransom in exchange for letting a corrupt judge go free. Batman saves James Jr., while Gordon, Essen, Flass and the judge are trapped and must work together to escape.
For a brief period following the Knightfall and Prodigal storylines, Gordon is removed from his post as commissioner and replaced by his own wife, due partly to his own disinclination to trust Batman after two substitutes -- Jean - Paul Valley and Dick Grayson -- assume the role and do not bother to tell him about the switch.
The No Man 's Land storyline takes place after Gotham is destroyed by an earthquake and isolated from outside assistance. Inside Gotham, Gordon struggles to maintain order in the midst of a crime wave. Batman is mysteriously absent for the initial three months, and Gordon feels betrayed. He forges an uneasy alliance with Two - Face, but the partnership does not last; Two - Face kidnaps Gordon, putting him on trial for breaking their "legally binding '' alliance. Gordon escapes, however, and later meets with Batman once again. In this confrontation, Gordon berates Batman for letting Gotham "fall into ruin ''. Batman offers to prove his trust by revealing his secret identity, but Gordon refuses to look when Batman removes his mask. Eventually, the two repair their friendship.
At the end of the No Man 's Land storyline, the Joker kills Sarah Essen - Gordon. An enraged Gordon barely restrains himself from killing Joker, shooting the Joker 's knee instead. Not long afterward, Gordon is shot by a criminal seeking revenge for a previous arrest. Though seriously injured, he survives, and eventually makes a full recovery.
Gordon retires from the police force after having served for more than 20 years. He remains in Gotham, and occasionally enjoys nighttime visits from Batman. Despite being retired, Gordon often finds himself drawn into life-and-death circumstances, such as the Joker sending him flowers during Last Laugh, being contacted by the temporarily reformed Harvey Dent to stop Batman from killing the Joker, and being kidnapped by Francis Sullivan, grandson of one of Gotham's notorious serial killers, during the Made of Wood storyline. After the attack by Sullivan, Batman gives Gordon an encrypted cellphone, the so-called Batphone, in case he needs to contact him, which also carries a transmitter in case of trouble. He also still has contacts with the country's law enforcement agencies; at one point a sheriff's department requests that Gordon contact Batman to help investigate a series of unusual murders in a suburb outside the city limits, which turns out to be a paranormal case involving black magic, occult rituals, and the supernatural. Commissioner Michael Akins has taken his position, with many officers expressing reluctance to follow him. Even Harvey Bullock at one point attempts to humiliate Akins in front of other officers.
After Barbara requires surgery to counter the life - threatening effects of the Brainiac virus, Gordon visits her in Metropolis. She reveals to him her current role as Oracle, as well as her past as Batgirl. Gordon admits that he knew of her life as Batgirl, but is pleasantly surprised to know of her second career as Oracle.
As part of DC 's "One Year Later '', Gordon has once again become Gotham 's police commissioner. He rebuilds the Bat - Signal, but still carries the mobile Batphone that Batman gave him. The circumstances behind this are currently unknown, though there have been allusions to extreme corruption within the GCPD. These allusions are supported by events within Gotham Central, especially involving Detective Jim Corrigan. Gordon survives an attempt on his life by the Joker (Batman # 655), who had drugged him with Joker Venom in an attack on the GCPD. He is taken to the hospital in time.
During the Blackest Night crossover, while mourning the passing of the original Batman, who was apparently killed in action during Final Crisis, Gordon and his daughter witness Green Lantern crash into the Bat - Signal, after being assaulted by a reanimated version of the deceased Martian Manhunter. After offering the hero a spare car, the Gordons then find themselves fighting for their lives against the reanimated versions of the original Batman 's rogues gallery at Gotham Central, where Gordon makes short work of serial killer Abattoir (in Black Lantern form) with a shotgun. They are rescued by the current Dark Knight, Robin, Red Robin, and Deadman, but are later attacked by Batman and Red Robin 's parents, the reanimated Graysons and the Drakes. While Batman and Red Robin battle the Black Lanterns, Robin takes the Gordons to their underground base. It is later shown that Alfred Pennyworth tends his wounds (Gordon is unconscious, thus protecting the team 's secret identities) along with Barbara 's at the bunker 's infirmary.
In The New 52, Gordon is still the commissioner of the GCPD and a former Marine but is younger than his traditional portrayal; he still has the red hair and mustache from Batman: Year One. He is still married to his wife Barbara, and he and Barbara are the biological parents of Barbara "Babs '' Gordon (aka Batgirl).
During the Forever Evil storyline, commissioner Gordon enters Blackgate Penitentiary in order to save the warden. When a turf war erupts between the Arkham inmates, Gordon helps to evacuate the citizens from Gotham City.
In Batman Eternal, the storyline begins when Gordon is tricked into shooting at an unarmed suspect in an underground train station, resulting in a train derailing and Gordon being arrested. While incarcerated, Gordon is visited by his son, who makes arrangements to leave his father's cell open and provide him with an opportunity to escape Blackgate, believing that his father's actions are the result of him at least subconsciously acknowledging the "truth" that Gotham is beyond saving and that his attempts to be a hero are pointless. Despite his doubts, Gordon decides to remain in prison, concluding that Gotham is still worth saving and musing that he may simply be getting old and have made a mistake. Although villains such as the Penguin attempt to attack Gordon in prison, Gordon uses Batman's example to inspire fear in his "fellow inmates" with minimal effort. When he is released as the final assault on Gotham begins, he rallies all of Gotham to stand up, take back their city, and aid Batman in return for everything he has ever done for them.
Following Bruce Wayne's apparent death in battle with the Joker during the events of Batman #40, Gordon took up the mantle of Batman, using a mecha-style suit to fight crime in Gotham City. Gordon first appears as Batman in Divergence #1, a DC Comics 2015 Free Comic Book Day issue, in which he is shown to be sponsored by the mega-corp Powers International. As he suits up, he notes that this is "the worst idea in the history of Gotham", but he agreed to the offer when various sources argued that there was nobody else capable of understanding Gotham the way Batman had done over the years, Gordon contemplating the merits of a Batman who works with the system rather than outside it. However, he begins to recognize the problems of this approach when he discovers that some of his past arrests have been murdered while out on parole and he is forbidden from investigating the crime himself.

Gordon later meets the currently depowered Superman when Clark comes to Gotham to investigate evidence that the weapons currently being used against him were created in Gotham, but their initial meeting results in a fight, as Superman doesn't believe that Gordon is the new Batman and Gordon doubts Superman because of his current work with Luthor. Although Gordon doubts Superman's abilities as a hero due to his powerless state, he eventually works with Superman to stop Vandal Savage from stealing an artificial sun created in Gotham for his latest plan; their alliance helps Gordon recognize Superman's continued merits as a hero, while Superman in turn acknowledges that the new Batman gets the job done. Gordon later works with the Justice League to investigate the death of a large monster, the heroes noting after the case has concluded that Batman's high opinion of his abilities was well founded. Despite Gordon's best efforts, political issues in the department allow the new villain Mr. Bloom to destroy his armor and mount a massive assault on Gotham, prompting the amnesiac Bruce Wayne, ironically inspired by a conversation with the equally amnesiac Joker, to try to reclaim his role as Batman. The crisis concludes with Bloom defeated by the returned Batman, using some of Gordon's equipment while working with his old ally; the return of the true Batman prompts the GCPD to shut down the program and restore Gordon to his role as commissioner, Gordon musing that the world needs Batman to face its nightmares so that normal human beings can learn to cope with more ordinary problems.
In most versions of the mythos, Gordon is ignorant of Batman 's identity. There is usually the implication Gordon is smart enough to solve the mystery, but chooses not to in order to preserve Batman 's effectiveness and maintain his own plausible deniability. In the 1966 Batman film, Gordon explicitly states his desire not to know for just such a reason.
In the pre-Crisis era, a 1952 story (Batman #71) shows Gordon trying to uncover Batman's identity merely for his own satisfaction, but Batman discovers Gordon's scheme and skillfully outwits him. A later story in the 1960s shows Gordon giving a bedridden Bruce Wayne (who had contracted a nearly fatal fever as Batman) "Chinese oranges", a natural treatment for the fever. Later, Bruce wonders aloud to Dick Grayson whether Gordon is beginning to suspect Batman's identity.
In Batman: Year One, Gordon claims not to see the unmasked Batman (whom his wife at that time, Barbara, also sees) well because he doesn't have his glasses on. Gordon suspects early on that Bruce Wayne may be Batman, though he never follows up on his suspicions, while Sarah Essen is correct in her suspicions, even guessing Bruce's motivation. In Batman: The Animated Series, Gordon has implied he deliberately avoids deep investigation into the subject of Batman's or Batgirl's identity.
Likewise, in the 1980s Detective Comics storyline Blind Justice, the world at large incorrectly supposes Batman is dead and Gordon comments to Bruce Wayne that Batman has earned the right to retirement if he so desires. He then rather pointedly asks Bruce 's advice on whether or not he should reveal that Batman still lives.
When Hugo Strange attempted to determine Batman's identity early in his career, he began his research by focusing on muggings and murders committed in the last few years, based on the idea that Batman was prompted into his current role by a traumatic loss resulting from criminal activity. Upon learning of Strange's research, Gordon reflects that Strange has already made a mistake: by looking only at crimes committed such a short time ago, he is underestimating the physical demands that would be required for Batman to reach his current level of skill. This suggests that Gordon had already considered such an avenue of investigation, whether or not he followed it up.
During No Man 's Land, Batman attempts to regain Gordon 's trust by revealing his identity. Gordon refuses to look at him after he removes the cowl, however, stating that if he wanted to know Batman 's identity, he could have figured it out years ago, and even cryptically saying, "And for all you know, maybe I did. ''
During the Hush story arc, while working with Superman, Batman discusses whether or not Perry White has figured out Superman's secret identity. Theorizing that White is too good a reporter not to have figured it out, he draws the same comparison between himself and Gordon, stating that Gordon is too good a cop not to have figured it out. In that same story arc, Gordon, in an attempt to stop Batman from killing the Joker, tells Batman to remember who his role models are (his parents) and the beliefs they instilled in him. He also asks Batman to remember who and what made him who he is, a rather obvious reference to the criminal who gunned down his parents in front of him, suggesting that Gordon knows that Bruce Wayne is Batman.
Barbara reveals her identities to her father in Birds of Prey # 89. Gordon then reveals that he was well aware of her status as the first Batgirl all along, though he purposefully avoided looking into what she was doing after she was paralyzed. Batman chides her for revealing herself, saying it was a mistake, but she counters that, while he taught her to fight criminals, it was her father who taught her to be human.
In Blackest Night: Batman, Gordon is present when Deadman refers to the current Batman as "Grayson '' and after the current Robin took Gordon and his daughter to the new Batman 's underground base. It is implied that Gordon is unconscious when they meet Alfred Pennyworth.
At the conclusion of Batman: The Black Mirror, Gordon strongly implies to Dick Grayson that he is aware of the secret identities of Grayson and the Waynes, when he thanks Grayson for everything he had done for him over the course of the story. Grayson attempts to brush this off, thinking Gordon meant only the forensic assistance he had given, from which Gordon cuts him off, saying "I mean, thank you... for everything. '' A long moment of silence follows, and Grayson accepts his thanks.
During Gordon's brief career as Batman, while Bruce was suffering total amnesia after his temporary death in his last fight with the Joker, Gordon meets with Bruce Wayne and introduces himself as Batman, noting how strange it is to be saying that to Bruce; his phrasing could suggest he considers it strange only because of the public perception that Bruce Wayne was Batman's financial backer, rather than making it clear that he knows who Bruce was. After Bruce is forced to sacrifice his new persona and download his old memories as Batman into his mind to save Gotham from the new villain Mr. Bloom, Gordon apologizes for making Batman come back, noting that his friend was at peace while he was away, and starts to call him "B..." before stopping himself, but Batman ignores the near-name in favor of assuring Gordon that the man he might have been without Batman died long ago.
In Frank Miller 's The Dark Knight Returns, Gordon and Bruce Wayne are portrayed as close friends in their civilian identities, with Gordon having discovered his identity years before around the time of Bruce 's retirement in his mid-forties.
In the Batman: Year 100 storyline, which takes place in 2039, Captain Jim Gordon, grandson of commissioner Gordon, finds an old laptop in the attic of a country home owned by Gordon and discovers a secret file which he assumes contains long - lost information on Batman. After unsuccessfully trying numerous passwords with relevance to the Batman universe he inputs "Bruce Wayne '' and is granted access to the file contents.
In the Flashpoint universe, Gordon knows about Thomas Wayne 's identity as Batman and works with him in both his identities.
In the Batman - Vampire trilogy in the Elseworlds series, Gordon is shown to be aware of Batman 's connection to Alfred Pennyworth by the second novel in the trilogy, working with Alfred as Batman succumbs to his new, darker nature, but his knowledge of Batman 's identity as Bruce Wayne is virtually irrelevant as Batman had abandoned his life as Bruce Wayne after he was transformed into a full vampire while fighting Dracula.
As in most continuities, Gordon decides in the Christopher Nolan trilogy that he prefers to remain ignorant of Batman 's identity and agrees that his anonymity - even in death - has sociological value. Immediately prior to Batman 's apparent self - sacrifice near the end of The Dark Knight Rises, Gordon learns the truth when Batman makes a reference to Gordon 's kindness to him as a child. Following Batman 's apparent death in a nuclear detonation, Gordon attends Wayne 's empty - casket burial with Blake and Wayne 's / Batman 's confidants, Alfred Pennyworth and Lucius Fox.
In Pre-Crisis continuity, James Gordon is the biological father of Anthony "Tony '' Gordon. Originally referred to as a college student, Tony later disappears while hiding from Communist spies. He is later reunited with his sister, Barbara, and dies in a battle with the Sino - Supermen (Batman Family # 12, Detective Comics # 482). In Post-Crisis continuity, there has been no mention of Tony Gordon.
Barbara "Barb '' Gordon is the biological daughter of James Gordon in Pre-Crisis continuity. She also leads a double life as a librarian and as costumed crimefighter Batgirl. Barbara is also the link of the DC Universe Oracle. Her father is aware of her crime - fighting career and is proud of her for it.
Barbara Eileen Gordon (born Barbara Eileen Kean) is Gordon 's ex-wife and mother of Barbara Gordon in Post-Crisis continuity. Her history and existence has been repeatedly retconned over the years, sometimes implying that she died in a car crash, other times that she left Gotham with James for Chicago.
In one story, Gordon and his daughter, Barbara, visit the grave of his late wife, Barbara Eileen Gordon. This story is later retconned and it is revealed that she is not dead, but instead they are divorced and she is living in Chicago with their son, James Gordon Jr.
In Batman: Year One, Bruce Wayne returns home to Gotham City after 12 years abroad, training for his eventual one - man war on crime; Lieutenant James Gordon moves to Gotham with his pregnant wife, Barbara, after a transfer from Chicago. Both are swiftly acquainted with the corruption and violence of Gotham City, with Gordon witnessing his partner Detective Arnold Flass assaulting a teen for fun.
Bruce goes in disguise on a surveillance mission in the seedy East End, where teenage prostitute Holly Robinson propositions him. He is drawn into a brawl with her pimp and several prostitutes, including dominatrix Selina Kyle. One of the two responding police officers shoots him and takes him in their squad car, but a dazed and bleeding Wayne maneuvers his handcuffed hands in front of himself and orders the police out. The cops try to subdue him, but the ensuing struggle causes the police car to careen out of control and flip. Wayne flees, but not before dragging the officers to a safe distance. He reaches Wayne Manor, barely alive, and sits before his father's bust, requesting guidance in his war on crime. A bat crashes through a window and settles on the bust, giving him inspiration.
Gordon works to rid corruption from the force, but on orders from commissioner Gillian Loeb, several masked officers attack him, including Flass, who threatens Gordon 's pregnant wife. Gordon tracks Flass down, beats him up, and leaves him naked and handcuffed in the snow.
As Gordon becomes a minor celebrity for his bravery on the job, Batman strikes for the first time, attacking a group of thieves and gaining experience. Batman soon works his way up the ladder, even attacking Flass as he accepts a bribe. He gains a reputation as a supernatural, inhuman being, due to his use of speed and darkness to conceal himself. Two months after Batman's arrival, crime and corruption have declined. After Batman interrupts a dinner party attended by many of Gotham's corrupt politicians and crime bosses, including Carmine "The Roman" Falcone, to threaten their criminal organization, Loeb orders Gordon to bring him in by any means necessary.
As Gordon tries in vain to catch him, Batman attacks Falcone, stripping him naked and tying him up in his bed after dumping his car in the river. Assistant district attorney Harvey Dent becomes Batman's first ally, and conceals this alliance from Gordon.
Detective Sarah Essen suggests Wayne as a Batman suspect, and she and Gordon witness Batman save an old woman from a runaway truck. Essen holds Batman at gunpoint, but Batman disarms her and flees to an abandoned building. Loeb fraudulently orders a bomb dropped on it, forcing Batman into the fortified basement. A SWAT team is sent in, led by trigger-happy Lieutenant Branden, whom Batman attempts to trap in the basement. Branden manages to climb out of the trap through a collapsed chimney and joins in the gun battle. Enraged as the team's careless gunfire injures several people outside, Batman beats the team into submission, but is wounded in the fighting. Using a signal device to attract the bats of his cave to distract the police and conceal himself, Batman escapes amid the chaos. Selina Kyle, after witnessing him in action, dons a costume of her own to begin life as Catwoman.
Gordon has a brief affair with Essen, while Batman intimidates a drug dealer for information. The dealer goes to Gordon to testify against Flass, who is brought up on charges. Loeb blackmails Gordon with proof of his affair to keep him from pressing charges. After taking Barbara with him to investigate Wayne's connection to Batman, Gordon confesses the affair to her. Bruce deflects Gordon's suspicions by appearing with a woman and drinking heavily, though he is actually faking all of it.
Batman sneaks into Falcone's manor and overhears a plan against Gordon, but is interrupted when Catwoman, hoping to build a reputation of her own after her robberies were pinned on Batman, attacks Falcone and his bodyguards, aided by Batman. As morning comes, Bruce identifies Falcone's plan and, without his costume, leaves to help Gordon.
Gordon tries to rebuild his relationship with his family after Essen leaves Gotham. While leaving home, Gordon spots a motorcyclist enter his garage. Suspicious, Gordon enters to see Falcone 's nephew Johnny Vitti and his thugs holding his family hostage. Gordon realizes if he lets them go, they will most likely kill his wife and son. Therefore, Gordon shoots the thugs and chases Vitti, who has fled with his baby son James Gordon, Jr. Bruce Wayne, on a motorcycle, also rushes to chase Vitti. Gordon blows out Vitti 's car tire on a bridge and the two fight, with Gordon losing his glasses, before Vitti and James Gordon Jr. fall over the side. Bruce leaps over the railing and saves the baby. Gordon realizes that he is standing before an unmasked Batman, but says that he is "practically blind without (his) glasses '' and lets Bruce go.
Gordon and his wife start attending marriage counseling. Loeb is forced into early retirement, arrested, and put on trial. Falcone is in the hospital and will head to prison once he heals, while Flass makes a deal with prosecutors to testify against him. Gordon, meanwhile, is promoted to captain. When a criminal who "calls himself the Joker" threatens to poison the city's reservoir, Gordon summons Batman with the Bat-Signal and waits on a rooftop for the Dark Knight to arrive. During the One Year Later storyline, Gordon makes a reference to his ex-wife "doing well".
Melinda McGraw portrayed Barbara Gordon in The Dark Knight. Grey DeLisle voiced her in Batman Year One. Erin Richards portrays Barbara Kean in Gotham.
Gordon and his wife, Barbara Kean-Gordon, are the parents of a son named James Gordon Jr. (Batman #404-407). James Jr. and his mother moved to Chicago after she divorced the elder Gordon. After his introduction in Batman: Year One, the character appeared almost exclusively in comics set during the Year One era, and went virtually unmentioned in the present day. Scott Snyder's story Batman: The Black Mirror reintroduced James Jr. as an adult, establishing that he is a sociopath who kills and tortures for pleasure. He is institutionalized as a teenager after he disfigures a school bus driver who insulted him. After he is released years later, he commits a series of brutal murders while trying to frame the Joker for his crimes. After nearly killing his mother and capturing his step-sister, James Jr. is apprehended by his father and Batman (Dick Grayson), and institutionalized in Arkham.
In The New 52, James Jr. appears in the Batgirl series. He escapes from Arkham and begins stalking his sister, whom he views as a rival for his father's affection. The series reveals that he deliberately caused the divorce of his parents: he killed a cat his mother had bought for Barbara, threatened to kill his sister if his mother did not leave the family, and threatened to kill Barbara if his mother ever tried to contact them again.
A different version of James Gordon, Jr. appears briefly in the films Batman Begins and The Dark Knight, in which he is portrayed as the young son of James and Barbara Gordon. In the latter film, Two - Face tries to kill the boy in order to get back at Gordon, whom he blames for the death of his fiancée, Rachel Dawes. Batman saves James Jr. by tackling Two - Face off of a roof, killing him.
Sarah Essen-Gordon (born Sarah Essen) (Batman Annual #13, Batman: Legends of the Dark Knight Annual #2) was first referenced as Gordon's wife during the future tale The Dark Knight Returns. She first appeared fully in Batman: Year One as a co-worker with whom Gordon has an extra-marital affair. After realizing they could not be together, she transferred out of state. Years after Gordon divorces his wife, Sarah returns to Gotham, and the two continue their relationship. After marrying Gordon, Sarah is murdered by the Joker at the end of the No Man's Land storyline. Following the events of Flashpoint, The New 52 retcons the timeline: Sarah's marriage to Gordon never happened, and Barbara Eileen Gordon is the only woman James Gordon ever married. Sarah's status in this new continuity is unknown.
James Gordon appears in the limited series Batman: The Dark Knight Returns, which presents a future where a retiring Gordon not only knows Batman 's identity, but is good friends with Bruce Wayne. He then makes a cameo on Batman: The Dark Knight Strikes Again. Now retired, he has written a book about Batman, who is believed to be dead.
Gordon is also referred to in the first issue of the series All Star Batman and Robin the Boy Wonder, set in the same universe as, and prior to, The Dark Knight Returns. He makes a full appearance in issue #6, as a police captain, having a conversation with his ex-partner, Sarah Essen, about Batman. He is still married to Barbara Kean Gordon, who is now an alcoholic, and has a son, James Jr. As in other continuities, his daughter Barbara, who is 15, becomes Batgirl. Frank Miller has commented that the series is set in his Dark Knight Universe, which includes all of the Batman works by Frank Miller; Barbara's inclusion therefore confirms that Gordon had two children during Batman: Year One, at least in Miller's version of the continuity. At the end of the series, it is implied that, despite being married to Barbara Kean Gordon, he is still in love with Sarah: when his wife gets into a car accident and ends up in a hospital while his daughter Barbara has been arrested for masquerading as Batgirl, he calls Sarah and tells her to talk about her day, since he only wants to hear her voice.
On the Anti-Matter Earth, where the evil Crime Syndicate of America live, James Gordon 's counterpart is a crime boss named Boss Gordon, an ally to Owlman. Boss Gordon is the city 's leading crime boss until his empire is toppled by Batman and commissioner Thomas Wayne.
In a world where Superman was never found by the Kents, reference is made to Gordon having been murdered shortly before the events of the story, resulting in Gotham 's police department being granted extra powers of authority in his absence, although these are never fully explained.
In the Elseworlds title Batman: Gotham Noir, Jim Gordon is an alcoholic hard - boiled private detective who had left the police force following a failure to solve the disappearance of a judge. He is Selina Kyle 's former lover and Bruce Wayne 's wartime partner.
In the Elseworlds story Batman: In Darkest Knight, Jim Gordon is an honest cop who distrusts the Green Lantern (who in this reality is Bruce Wayne) because of his near - limitless power. Green Lantern comes to Gordon in order to find the identity of the man who killed his parents, but Gordon rebukes him. Later on, he changes his mind and starts investigating, but he is then interrupted and killed by Sinestro, who ruptures his heart.
In the Vampire Batman Elseworlds trilogy that began with Batman & Dracula: Red Rain, Gordon learns that a coven of vampires, led by Count Dracula himself, is behind a series of murders. Dracula captures him, but he defies the vampire even as he is bled from a cut on his neck, with Batman arriving in time to save Gordon from bleeding to death before confronting Dracula, the Dark Knight now a vampire himself thanks to the aid of renegade vampires opposing Dracula. In the sequel Batman: Bloodstorm, he and Alfred collaborate to form a team to eliminate a new family of vampires in daylight while they sleep, culminating in him and Alfred being forced to stake Batman after he succumbs to vampirism and drains the Joker 's blood. The third part of the trilogy -- Batman: Crimson Mist -- sees Gordon and Alfred forced to work with Two - Face and Killer Croc to stop the vampire Batman, returned from the staking and having already targeted and killed Penguin, Riddler, Scarecrow and Poison Ivy, Gordon grimly stating that, even if he is only killing criminals, the man they knew would never have killed. The story concludes with Gordon being crushed by debris from the Batcave roof after explosives are planted to destroy it, thus exposing Batman to the sunlight and ending his reign of terror.
In Lord Havok and the Extremists #3, an alternate version of Gordon, known as Zombie Gordon, is featured as part of Monarch's army. A flesh-hungry beast, Zombie Gordon is kept in line by Bat-Soldier via a large chain.
In the alternate timeline of the Flashpoint event, James Gordon is the chief of police, rather than commissioner, and works with Thomas Wayne, the Flashpoint version of Batman. Later, Gordon tries to convince Batman that he does not have to fight villains by himself, but Batman refuses. When Gordon locates Martha Wayne (this continuity's version of the Joker) in old Wayne Manor, he goes in without backup. Gordon is tricked into shooting Harvey Dent's daughter, who has been disguised as the Joker, taped to a chair with her mouth taped shut and a smile painted on the tape. Martha then appears and slashes Gordon's throat, and Gordon dies from Joker venom.
In the graphic novel Batman: Earth One by Geoff Johns and Gary Frank, Jim Gordon is featured as a central character. In the story, he is a broken man who has given up on fighting corruption until the emergence of Batman. He is partnered with a young Harvey Bullock. On the trail of the "Birthday Boy" killings, Gordon and Batman put aside their differences and stop the killer while saving Gordon's daughter Barbara. In the sequel, Gordon begins his alliance with Batman to combat the Riddler, who plots to take over the remnants of Oswald Cobblepot's criminal empire.
In the prequel to the video game Injustice: Gods Among Us, Gordon learns via Superman's x-ray vision that he has terminal lung cancer. Later on, he, Bullock and Montoya join forces with Batman's Insurgency to fight the Regime, and together they attack the Hall of Justice. Batman's inside man, Lex Luthor, notes that Gordon's cancer is worsening due to his taking "super pills" that give people superhuman abilities. Gordon takes two of the super pills to save Barbara from Cyborg on the Watchtower, as Cyborg is scanning to find her location, accelerating the cancer to the point that he has only minutes to live. After the battle, Gordon thanks Batman and says goodbye to Barbara as he dies, looking down on the Earth.
Jim Gordon has appeared in many media adaptations of Batman which includes video games, animation, and the live - action films. Gordon has been played by Lyle Talbot in the serial film Batman and Robin, Neil Hamilton in the television series Batman, Pat Hingle in the Tim Burton / Joel Schumacher film series, Gary Oldman in Christopher Nolan 's The Dark Knight film series, Ben McKenzie in the television series Gotham, and J.K. Simmons in Zack Snyder 's Justice League. In 2011, Jim Gordon placed 19th on IGN 's Top 100 Comic Book Heroes.
In the Tim Burton / Joel Schumacher film adaptations of Batman, Commissioner Gordon is portrayed by Pat Hingle.
Gordon was planned for the aborted reboot Batman: Year One, written by Darren Aronofsky and Frank Miller. In this script, Gordon has lived in Gotham for years and is trying to leave for the sake of his pregnant wife, who is renamed Ann instead of Barbara; Gordon's character would have been suicidal.
In the rebooted Christopher Nolan Dark Knight Trilogy, Gordon is played by Gary Oldman.
J.K. Simmons has been cast as James Gordon in the upcoming Justice League film, which is a part of the larger DC Extended Universe. Bryan Cranston revealed to Geeking Out that he was up for the part but turned it down.
Commissioner Gordon is a supporting character in the Batman: Arkham franchise where he is voiced by Tom Kane in Arkham Asylum, David Kaye in Arkham City, Michael Gough in Arkham Origins and Jonathan Banks in Arkham Knight.
James Gordon appears in Batman: The Telltale Series and Batman: The Enemy Within, voiced by Murphy Guyer.
not boil a kid in its mother's milk | Milk and meat in Jewish law - wikipedia
Mixtures of milk and meat (Hebrew: בשר בחלב , basar bechalav, literally "meat in milk '') are prohibited according to Jewish law. This dietary law, basic to kashrut, is based on two verses in the Book of Exodus, which forbid "boiling a (kid) goat in its mother 's milk '' and a third repetition of this prohibition in Deuteronomy.
According to the Talmud, these three almost identical references are the basis for three distinct dietary laws: the prohibition against cooking a mixture of milk and meat, the prohibition against eating such a cooked mixture, and the prohibition against deriving any benefit from such a mixture.
There are three categories of Kosher food -- Meat, Dairy and Parve.
The rabbis of the Talmud gave no reason for the prohibition, but later authorities, such as Maimonides, opined that the law was connected to a prohibition of idolatry in Judaism. Obadiah Sforno and Solomon Luntschitz, rabbinic commentators living in the late Middle Ages, both suggested that the law referred to a specific foreign (Canaanite) religious practice, in which young goats were cooked in their own mothers' milk, aiming to obtain supernatural assistance to increase the yield of the flocks. More recently, a theogonous text named The Birth of the Gracious Gods, found during the rediscovery of Ugarit, has been interpreted as saying that a Levantine ritual to ensure agricultural fertility involved cooking a young goat in its mother's milk, followed by sprinkling the mixture upon the fields, though still more recent sources argue that this translation is incorrect. Another explanation is that the separation accommodates the significant percentage of the population, particularly aging people, who are lactose-intolerant.
The biblical suppression of these practices was seen by some rabbinic commentators as having an ethical aspect. Rashbam argued that using the milk of an animal to cook its offspring was inhumane, based on a principle similar to that of Shiluach haken, the injunction against gathering eggs from a nest while the mother bird watches. Chaim ibn Attar compared the practice of cooking of animals in their mother 's milk to the barbaric slaying of nursing infants.
The Book of Genesis refers to young goats by the Hebrew phrase g'di izim, but the prohibition against boiling a kid uses only the term g'di (גדי). Rashi, one of the most prominent talmudic commentators, argued that the term g'di must therefore have a more general meaning, including calves and lambs in addition to young goats. Rashi also argued that the meaning of g'di is still narrow enough to exclude birds, all the undomesticated kosher animals (for example, chevrotains and antelope), and all of the non-kosher animals. The Talmudic writers had a similar analysis, but believed that since domesticated kosher animals (sheep, goats, and cattle) have meat similar to that of birds and the non-domesticated kosher land animals, these latter meats should be prohibited too, creating a general prohibition against mixing milk and meat from any kosher animal, excepting fish.
The term non-kosher means something is not allowed as food -- non-kosher animals (e.g., pigs, camels, and turtles) were already generally prohibited, and questions about the status of mixtures involving their meat and milk would be somewhat academic. Nevertheless, the lack of a classical decision about milk and meat of non-kosher animals gave rise to argument in the late Middle Ages. Some, such as Yoel Sirkis and Joshua Falk, argued that mixing milk and meat from non-kosher animals should be prohibited, but others, like Shabbatai ben Meir and David HaLevi Segal, argued that, excluding the general ban on non-kosher animals, such mixtures should not be prohibited.
Rashi expressed the opinion that the reference to mother 's milk must exclude fowl from the regulation, since only mammals produce milk. According to Shabbethai Bass, Rashi was expressing the opinion that the reference to a mother was only present to ensure that birds were clearly excluded from the prohibition; Bass argued that Rashi regarded the ban on boiling meat in its mother 's milk to really be a more general ban on boiling meat in milk, regardless of the relationship between the source of the meat and that of the milk.
Substances derived from milk, such as cheese and whey, have traditionally been considered to fall under the prohibition, but milk substitutes, created from non-dairy sources, do not. However, the classical rabbis were worried that Jews using artificial milk might be misinterpreted, so they insisted that the milk be clearly marked to indicate its source. In the classical era, the main form of artificial milk was almond milk, so the classical rabbis imposed the rule that almonds must be placed around such milk; in the Middle Ages, there was some debate about whether this had to be done during cooking as well as eating, or whether it was sufficient to merely do this during the meal.
Currently, "Leben Immo '' (Leben: yogurt in Arabic): is a famous dish in Lebanon and Syria made of meat cooked in yogurt, and Mansaf, the national dish of Jordan, is also made of meat cooked in yogurt.
Although the biblical regulation literally only mentions boiling (Hebrew: bishul, בישול), there were questions raised in the late Middle Ages about whether this should instead be translated as cooking, and hence be interpreted as a reference to activities like broiling, baking, roasting, and frying. Lenient figures like Jacob of Lissa and Chaim ibn Attar argued that such a prohibition would only be a rabbinic addition, and not the biblical intent, but others like Abraham Danzig and Hezekiah da Silva argued that the biblical term itself had this wider meaning.
The Talmudic rabbis believed that the biblical text only forbade eating a mixture of milk and meat, but because the biblical regulation is triplicated they imposed three distinct regulations to represent it: a prohibition against eating such a mixture, a prohibition against cooking it, and a prohibition against deriving any benefit from it.
Jacob ben Asher, an influential medieval rabbi, remarked that the gematria of do not boil a kid (Hebrew: lo t'vasheil g'di, לא תבשל גדי) is identical to that of it is the prohibition of eating, cooking and deriving benefit (Hebrew: he issur achilah u'bishul v'hana'ah, היא איסור אכילה ובישול והנאה), a detail that he considered highly significant. Though deriving benefit is a superficially vague term, its scope was later clarified by writers of the Middle Ages.
The classical rabbis only considered milk and meat cooked together biblically forbidden, but Jewish writers of the Middle Ages also forbade consumption of anything merely containing the mixed tastes of milk and meat. This included, for example, meat that had been soaked in milk for an extended period. The prohibition against deriving benefit, on the other hand, was seen as being more nuanced, with several writers of the late Middle Ages, such as Moses Isserles and David Segal, arguing that this restriction only applied to the milk and meat of g'di, not to the much wider range of milks and meats prohibited by the rabbis; other prominent medieval rabbis, like Solomon Luria, disagreed, believing that the prohibition of deriving benefit referred to mixtures of all meats and milks.
The classical rabbis interpreted the biblical phrase heed my ordinance (Hebrew: ushmartem et mishmarti), which appears in the holiness code, to mean that they should (metaphorically) create a protective fence around the biblical laws, an attitude particularly expressed in The Ethics of the Fathers (a Mishnaic tractate discussing the ethics surrounding adherence to biblical rules). Nevertheless, the rabbis of the classical era and the Middle Ages also introduced a number of leniencies. (Another prohibition on mixtures in Jewish law is kil'ayim.)
To prevent the consumption of forbidden mixtures, foods are divided into three categories: meat, dairy, and parve (neutral foods that are neither meat nor dairy).
Food in the parve category includes fish, fruit, vegetables, salt, etc.; among the Karaites, Ethiopian Jews, and some Persian Jews it also includes poultry, but other Jewish groups consider poultry to count as "meat. '' However, classical Jewish authorities argue that foods lose parve status if treated in such a way that they absorb the taste of milk or meat during cooking, soaking, or salting.
The classical rabbis expressed the opinion that each of the food rules could be waived if the portion of food violating the regulations was less than a certain size, known as a shiur (Hebrew: size, שיעור), unless it was still possible to taste or smell it; for the milk and meat regulations, this minimal size was a ke'zayit (כזית), literally meaning anything "similar to an olive '' in size. The shiur, however, is merely the minimum amount that led to formal punishment in the classical era; even half a shiur is prohibited by the Torah (Hebrew: Hatzi shiur assur min haTorah, חצי שיעור אסור מן התורה).
Many rabbis followed the premise that taste is principle (Hebrew: ta'am k'ikar, טעם כעיקר): in the event of an accidental mixing of milk and meat, the food could be eaten if there was no detectable change in taste. Others argued that forbidden ingredients could constitute up to half of the mixture before being disallowed. Today the rabbis apply the principle of batel b'shishim (nullified in sixty): a mixture is permissible so long as the forbidden ingredient constitutes no more than 1 / 60 of the whole. For example, a drop of milk that accidentally falls into a pot of meat stew is nullified, and the stew remains permitted, provided the stew is at least sixty times the volume of the drop.
Due to the premise that taste is principle, parve (i.e. neutral) foods are considered to take on the same meat / dairy produce classification as anything they are cooked with.
Since some cooking vessels and utensils (such as ceramic dishes and wooden spoons) are porous, it is possible for them to become infused with the taste of certain foods and transfer this taste to other foods. For example, if a frying pan is used to fry beef sausage, and is then used a few hours later to fry an omelette with cheese, a slight taste of the sausage might linger.
Samuel ben Meir, brother of Jacob ben Meir, argued that infused tastes could endure in a cooking vessel or utensil for up to 24 hours; his suggestion led to the principle, known as ben yomo (Hebrew: son of the day, בן יומו), that vessels and utensils should not be used to cook milk within 24 hours of being used to cook meat (and vice versa). Although some residual flavour may still reside in porous cooking vessels and utensils after 24 hours, some rabbis hold the opinion that such residue would become stale and fetid, only infusing taste for the worse (Hebrew: nosen taam lifgam, נותן טעם לפגם), which they do not regard as violating the ban against mixing the tastes of milk and meat.
Since parve food is reclassified if it takes on the flavour of meat or dairy produce, Ashkenazi Jews traditionally forbid eating parve contents of a pot that has been used within 24 hours to cook meat, if the parve contents would be eaten with dairy produce. Their tradition similarly forbids eating parve foods with meat if the cooking vessel was used to cook dairy produce within the previous 24 hours. According to Joseph Caro, the Sephardic tradition was more lenient about such things, but Moses Isserles argued that such leniency was unreliable.
In light of these issues, Orthodox Jews take the precaution of maintaining two distinct sets of crockery and cutlery; one set (known in Yiddish as milchig and in Hebrew as halavi) is for food containing dairy produce, while the other (known in Yiddish as fleishig / fleishedik and in Hebrew as basari) is for food containing meat.
Prominent rabbis of the Middle Ages insisted that milk should not be placed on a table where people are eating meat, to avoid accidentally consuming milk while eating meat, and vice versa. Tzvi Hirsch Spira, an early 20th - century rabbi and anti-Zionist commentator, argued that when this rule was created, the tables commonly in use were only large enough for one individual; Spira concluded that the rule would not apply if the table being used was large and the milk was out of reach of the person eating meat (and vice versa).
The rabbis of the Middle Ages discussed the issue of people eating milk and meat at the same table. Jacob ben Asher suggested that each individual should eat from different tablecloths, while Moses Isserles argued that a large and obviously unusual item should be placed between the individuals, as a reminder to avoid sharing the foods. Later rabbinic writers pointed out exceptions to the rule. Chaim ibn Attar, an 18th - century kabbalist, ruled that sitting at the same table as a non-Jew eating non-kosher food was permissible; Yechiel Michel Epstein, a 19th - century rabbi, argued that the risk was sufficiently reduced if individuals sat far enough apart that the only way to share food was to leave the table.
Rashi stated that meat leaves a fatty residue in the throat and on the palate; Maimonides noted that meat stuck between the teeth might not degrade for several hours; Jonathan Eybeschutz pointed out that meat and dairy produce mix during digestion; and Feivel Cohen maintained that hard cheese leaves a lingering taste in the mouth. Generally, rabbinic literature considers the collective impact of these issues.
The Talmud reports that Mar Ukva, a respected rabbi, would not eat dairy after eating meat at the same meal, and had a father who would wait an entire day after eating meat before eating dairy produce. Jacob ben Meir speculated that Mar Ukva 's behaviour was merely a personal choice, rather than an example he expected others to follow, but prominent rabbis of the Middle Ages argued that Mar Ukva 's practice must be treated as a minimum standard of behaviour.
Maimonides argued that time was required between meat and dairy produce because meat can become stuck in the teeth, a problem he suggested would last for about six hours after eating it; this interpretation was shared by Solomon ben Aderet and by Asher ben Jehiel, who gained entry to the rabbinate by Solomon ben Aderet 's approval, as well as by the later Shulchan Aruch. By contrast, tosafists argued that the key detail was just the avoidance of dairy produce appearing at the same meal as meat, so it was sufficient to wait until a new meal -- which to them simply meant clearing the table, reciting a particular blessing, and cleaning their mouths. Some later rabbinic writers, like Moses Isserles, and significant texts, like the Zohar (as noted by the Vilna Gaon and Daniel Josiah Pinto), argued that a meal still did not qualify as new unless at least an hour had passed since the previous meal.
Since most Orthodox Sephardi Jews consider the Shulchan Aruch authoritative, they regard its suggestion of waiting six hours as mandatory. Ashkenazi Jews, however, have various customs. Orthodox Jews of Eastern European background usually wait for six hours, although those of German ancestry traditionally wait for only three hours, and those of Dutch ancestry have a tradition of waiting only one hour. The medieval tosafists stated that the practice does not apply to infants, but 18th and 19th - century rabbis, such as Abraham Danzig and Yechiel Michel Epstein, criticised those who followed lenient practices that were not traditional in their region. In the 20th century, many rabbis were in favor of leniency: Moses Stern ruled that all young children were excluded from these strictures, Obadiah Joseph made an exception for the ill, and Joseph Chaim Sonnenfeld exempted nursing women.
It has traditionally been considered less problematic to eat dairy produce before meat, on the assumption that dairy products leave neither fatty residue in the throat nor fragments between the teeth. Many 20th - century Orthodox rabbis say that washing the mouth out between eating dairy and meat is sufficient. Some argue that there should also be recitation of a closing blessing before the meat is eaten, while others view this as unnecessary. Ashkenazi Jews following kabbalistic traditions, based on the Zohar, additionally ensure that about half an hour passes after consuming dairy produce before eating meat.
Some rabbis of the Middle Ages argued that after eating solid dairy products such as cheese, the hands should be washed. Shabbatai ben Meir even argued that this was necessary if utensils such as forks were used and the cheese was never touched by hand. Other rabbis of that time, like Joseph Caro, thought that if it was possible to visually verify that the hands were clean, then they need not be washed; Tzvi Hirsch Spira argued that washing the hands should also be practiced after consuming milk.
Jacob ben Asher thought that washing the mouth was not sufficient to remove all residue of cheese, and suggested that eating some additional solid food is required to clean the mouth. Hard and aged cheese has long been rabbinically considered to need extra precaution, on the basis that it might have a much stronger and longer lasting taste; the risk of it leaving a fattier residue has more recently been raised as a concern. According to these rabbinic opinions, the same precautions (including a pause of up to six hours) apply to eating hard cheese before meat as apply to eating meat before dairy produce. Judah ben Simeon, a 17th - century doctor in Frankfurt, argued that hard cheese is not problematic if melted. Binyomin Forst argues that such leniency is proper only for cooked cheese dishes, not for dishes merely topped with cheese.
The following guidelines apply when eating pareve food cooked on dairy or meat dishes:
Though radiative cooking of meat with dairy produce is not listed by the classical rabbis among the biblically prohibited forms of cooking such mixtures, a controversy remains about using a microwave oven to cook them. Rav Moshe Feinstein argues that microwave cooking is a form of cooking that counts as melacha on the Sabbath, but Rav Shlomo Zalman Auerbach disagrees.
In Exodus 23: 19, the Samaritan Pentateuch adds the following passage after the prohibition: כי עשה זאת כזבח שכח ועברה היא לאלהי יעקב, which roughly translates as "that one doing this as sacrifice forgets and enrages the God of Jacob ''.
The Karaites, who completely reject the Talmud, where the stringency of the law is strongest, have few qualms about the general mixing of meat and milk; only the cooking of an animal in the milk of its actual mother is banned. The Beta Israel community of Ethiopia traditionally banned the preparation of general mixtures of meat and milk, but did not include poultry in the prohibition. However, since the movement of almost the entire Beta Israel community to Israel in the 1990s, the community has generally abandoned its old traditions and adopted the broad meat and milk ban followed by Rabbinical Judaism.
|
when did teams start in tour de france | Tour de France - Wikipedia
Most wins: Jacques Anquetil (FRA), Eddy Merckx (BEL), Bernard Hinault (FRA), Miguel Indurain (ESP) (five wins each)
The Tour de France (French pronunciation: (tuʁ də fʁɑ̃s)) is an annual men 's multiple stage bicycle race primarily held in France, while also occasionally passing through nearby countries. Like the other Grand Tours (the Giro d'Italia and the Vuelta a España), it consists of 21 stages over a little more than three weeks.
The race was first organized in 1903 to increase sales for the newspaper L'Auto and is currently run by the Amaury Sport Organisation. The race has been held annually since its first edition in 1903 except when it was stopped for the two World Wars. As the Tour gained prominence and popularity, the race was lengthened and its reach began to extend around the globe. Participation expanded from a primarily French field, as riders from all over the world began to participate in the race each year. The Tour is a UCI World Tour event, which means that the teams that compete in the race are mostly UCI WorldTeams, with the exception of the teams that the organizers invite.
Traditionally, the race is held primarily in the month of July. While the route changes each year, the format of the race stays the same with the appearance of time trials, the passage through the mountain chains of the Pyrenees and the Alps, and the finish on the Champs - Élysées in Paris. The modern editions of the Tour de France consist of 21 day - long segments (stages) over a 23 - day period and cover around 3,500 kilometres (2,200 mi). The race alternates between clockwise and counterclockwise circuits of France.
There are usually between 20 and 22 teams, with eight riders in each. All of the stages are timed to the finish; each rider 's times are added to his previous stage times, and the rider with the lowest cumulative time is the leader of the race and wears the yellow jersey. While the general classification garners the most attention, there are other contests held within the Tour: the points classification for the sprinters, the mountains classification for the climbers, the young rider classification for riders under the age of 26, and the team classification for the fastest teams. Achieving a stage win also provides prestige, often accomplished by a team 's sprint specialist.
The Tour de France was created in 1903. The roots of the Tour de France trace back to the emergence of two rival sports newspapers in the country. On one hand was Le Vélo, the first and the largest daily sports newspaper in France, which sold 80,000 copies a day. On the other was L'Auto, which had been set up by journalists and business - people including Comte Jules - Albert de Dion, Adolphe Clément, and Édouard Michelin in 1899. The rival paper emerged following disagreements over the Dreyfus Affair, a cause célèbre (in which de Dion was implicated) that divided France at the end of the 19th century over the innocence of Alfred Dreyfus, a French army officer convicted -- though later exonerated -- of selling military secrets to the Germans. The new newspaper appointed Henri Desgrange as its editor. He was a prominent cyclist and, with Victor Goddet, owner of the velodrome at the Parc des Princes. De Dion knew him through his cycling reputation, through the books and cycling articles that he had written, and through the press articles he had written for the Clément tyre company.
L'Auto was not the success its backers wanted. Sales, stagnating below those of the rival it was intended to surpass, led to a crisis meeting on 20 November 1902 on the middle floor of L'Auto 's office at 10 Rue du Faubourg Montmartre, Paris. The last to speak was the most junior there, the chief cycling journalist, a 26 - year - old named Géo Lefèvre, whom Desgrange had poached from Pierre Giffard 's rival paper, Le Vélo. Lefèvre suggested a six - day race of the sort popular on the track, but all around France. Long - distance cycle races were a popular means to sell more newspapers, but nothing of the length that Lefèvre suggested had been attempted. If it succeeded, it would help L'Auto match its rival and perhaps put it out of business. It could, as Desgrange said, "nail Giffard 's beak shut. '' Desgrange and Lefèvre discussed it after lunch. Desgrange was doubtful but the paper 's financial director, Victor Goddet, was enthusiastic. He handed Desgrange the keys to the company safe and said: "Take whatever you need. '' L'Auto announced the race on 19 January 1903.
The first Tour de France was staged in 1903. The plan was a five - stage race from 31 May to 5 July, starting in Paris and stopping in Lyon, Marseille, Bordeaux, and Nantes before returning to Paris. Toulouse was added later to break the long haul across southern France from the Mediterranean to the Atlantic. Stages would go through the night and finish the next afternoon, with rest days before riders set off again. But this proved too daunting and the costs too great for most, and only 15 competitors entered. Desgrange had never been wholly convinced and he came close to dropping the idea. Instead, he cut the length to 19 days, changed the dates to 1 to 19 July, and offered a daily allowance to those who averaged at least 20 kilometres per hour (12 mph) on all the stages, equivalent to what a rider would have expected to earn each day had he worked in a factory. He also cut the entry fee from 20 to 10 francs and set the first prize at 12,000 francs and the prize for each day 's winner at 3,000 francs. The winner would thereby win six times what most workers earned in a year. That attracted between 60 and 80 entrants -- the higher number may have included serious inquiries and some who dropped out -- among them not just professionals but amateurs, some unemployed, and some simply adventurous.
Desgrange seems not to have forgotten the Dreyfus Affair that launched his race and raised the passions of his backers. He opened his new race on 1 July 1903 by citing the writer Émile Zola, whose open letter, in which every paragraph started "J'accuse... '', had helped bring about Dreyfus 's eventual exoneration; the piece established the florid style Desgrange used henceforth.
The first Tour de France started almost outside the Café Reveil - Matin at the junction of the Melun and Corbeil roads in the village of Montgeron. It was waved away by the starter, Georges Abran, at 3: 16 p.m. on 1 July 1903. L'Auto had not featured the race on its front page that morning.
Among the competitors were the eventual winner, Maurice Garin, his well - built rival Hippolyte Aucouturier, the German favourite Josef Fischer, and a collection of adventurers including one competing as "Samson ''.
Many riders dropped out of the race after completing the initial stages, as the physical effort the Tour required was simply too much. Only 24 entrants remained at the end of the fourth stage. The race finished on the edge of Paris at Ville d'Avray, outside the Restaurant du Père Auto, before a ceremonial ride into Paris and several laps of the Parc des Princes. Garin dominated the race, winning the first and last two stages, at 25.68 kilometres per hour (15.96 mph). The last rider, Millocheau, finished 64h 47m 22s behind him.
L'Auto 's mission was accomplished: circulation doubled during the race, making the event something much larger than Desgrange had ever hoped for.
Such was the passion that the first Tour created in spectators and riders that Desgrange said the 1904 Tour de France would be the last. Cheating was rife and riders were beaten up by rival fans as they neared the top of the col de la République, sometimes called the col du Grand Bois, outside St - Étienne. The leading riders, including the winner Maurice Garin, were disqualified, though it took the Union Vélocipédique de France until 30 November to make the decision. The cycling historian Bill McGann says the UVF waited so long because it was "... well aware of the passions aroused by the race. '' Desgrange 's opinion of the fighting and cheating showed in the headline of his reaction in L'Auto: THE END. Desgrange 's despair did not last. By the following spring he was planning another Tour, longer at 11 stages rather than 6 -- and this time all in daylight to make any cheating more obvious. Stages in 1905 began between 3 am and 7: 30 am. The race captured the imagination. L'Auto 's circulation rose from 25,000 to 65,000; by 1908 it was a quarter of a million. The Tour returned after its suspension during World War One and continued to grow, with circulation of L'Auto reaching 500,000 by 1923. The record claimed by Desgrange was 854,000 during the 1933 Tour. Le Vélo, meanwhile, went out of business in 1904.
Desgrange and his Tour invented bicycle stage racing. Desgrange experimented with different ways of judging the winner. Initially he used total accumulated time (as used in the modern Tour de France), but from 1905 to 1912 the winner was decided by points for placings each day. Desgrange saw problems with judging both by time and by points. By time, a rider coping with a mechanical problem -- which the rules insisted he repair alone -- could lose so much time that it cost him the race. Equally, riders could finish so separated that time gained or lost on one or two days could decide the whole race. Judging the race by points removed over-influential time differences but discouraged competitors from riding hard: it made no difference whether they finished fast or slow, or separated by seconds or hours, so they were inclined to ride together at a relaxed pace until close to the line, only then disputing the final placings that would give them points.
The format changed over time. The Tour originally ran around the perimeter of France. Cycling was an endurance sport and the organisers realised the sales they would achieve by creating supermen of the competitors. Night riding was dropped after the second Tour in 1904, when there had been persistent cheating while judges could not see the riders. That reduced the daily and overall distance, but the emphasis remained on endurance. Desgrange said his ideal race would be so hard that only one rider would make it to Paris. The first mountain stages (in the Pyrenees) appeared in 1910. Early Tours had long multi-day stages, with the format settling on 15 stages from 1910 until 1924. After this, stages were gradually shortened, such that by 1936 there were as many as three stages in a single day. Desgrange initially preferred to see the Tour as a race of individuals. The first Tours were open to whoever wanted to compete. Most riders were in teams that looked after them. The private entrants were called touriste - routiers -- tourists of the road -- from 1923 and were allowed to take part provided they made no demands on the organisers. Some of the Tour 's most colourful characters have been touriste - routiers. One finished each day 's race and then performed acrobatic tricks in the street to raise the price of a hotel room. Until 1925 Desgrange forbade team members from pacing each other. The 1927 and 1928 Tours, however, consisted mainly of team time - trials, an unsuccessful experiment which sought to avoid a proliferation of sprint finishes on flat stages. Desgrange was a traditionalist with equipment. Until 1930 he demanded that riders mend their bicycles without help and that they use the same bicycle from start to end. Exchanging a damaged bicycle for another was allowed only in 1923. Desgrange stood against the use of multiple gears and for many years insisted riders use wooden rims, fearing the heat of braking while coming down mountains would melt the glue that held the tires on metal rims (derailleur gears were finally allowed in 1937).
By the end of the 1920s, Desgrange believed he could not beat what he saw as the underhand tactics of the bike factories. When the Alcyon team contrived to get Maurice De Waele to win even though he was sick, he said "My race has been won by a corpse ''. In 1930 Desgrange again attempted to take control of the Tour from the teams, insisting competitors enter in national teams rather than trade teams and that competitors ride plain yellow bicycles that he would provide, without a maker 's name. There was no place for individuals in the post-1930s teams, and so Desgrange created regional teams, generally from France, to take in riders who would not otherwise have qualified. The original touriste - routiers mostly disappeared, but some were absorbed into regional teams. In 1936 Desgrange had a prostate operation. At the time, two operations were needed; the Tour de France was due to fall between them. Desgrange persuaded his surgeon to let him follow the race. The second day proved too much and, in a fever at Charleville, he retired to his château at Beauvallon. Desgrange died at home on the Mediterranean coast on 16 August 1940. The race was taken over by his deputy, Jacques Goddet. The Tour was again disrupted by war after 1939, and did not return until 1947.
In 1944, L'Auto was closed -- its doors nailed shut -- and its belongings, including the Tour, sequestrated by the state for publishing articles too close to the Germans. Rights to the Tour were therefore owned by the government. Jacques Goddet was allowed to publish another daily sports paper, L'Équipe, but there was a rival candidate to run the Tour: a consortium of Sports and Miroir Sprint. Each organised a candidate race: L'Équipe and Le Parisien Libéré had La Course du Tour de France, and Sports and Miroir Sprint had La Ronde de France. Both were five stages, the longest the government would allow because of shortages. L'Équipe 's race was better organised and appealed more to the public because it featured national teams that had been successful before the war, when French cycling was at a high. L'Équipe was given the right to organise the 1947 Tour de France. However, L'Équipe 's finances were never sound and Goddet accepted an advance from Émilien Amaury, who had supported his bid to run the post-war Tour. Amaury was a newspaper magnate whose condition was that his sports editor, Félix Lévitan, should join Goddet on the Tour. The two worked together, Goddet running the sporting side and Lévitan the financial.
On the Tour 's return, the format of the race settled on 20 -- 25 stages. Most stages would last one day, but the scheduling of ' split ' stages continued well into the 1980s. 1953 saw the introduction of the green jersey ' points ' competition. National teams contested the Tour until 1961. The teams were of different sizes: some nations had more than one team, and some riders were mixed in with others to make up the numbers. National teams caught the public imagination but had a snag: riders might normally have been in rival trade teams the rest of the season. The loyalty of riders was sometimes questionable, within and between teams. Sponsors were always unhappy about releasing their riders into anonymity for the biggest race of the year, as riders in national teams wore the colours of their country and a small cloth panel on their chest that named the team for which they normally rode. The situation became critical at the start of the 1960s. Sales of bicycles had fallen and bicycle factories were closing. There was a risk, the trade said, that the industry would die if factories were not allowed the publicity of the Tour de France. The Tour returned to trade teams in 1962. In the same year, Émilien Amaury, owner of Le Parisien Libéré, became financially involved in the Tour. He made Félix Lévitan co-organizer of the Tour, and it was decided that Lévitan would focus on the financial issues and Jacques Goddet on the sporting issues. The Tour de France was meant for professional cyclists, but in 1961 the organisation started the Tour de l'Avenir, an amateur version.
Doping had become a problem, culminating in the death of Tom Simpson in 1967, after which riders went on strike, though the organisers suspected that sponsors had provoked them. The Union Cycliste Internationale introduced limits on daily and overall distances and imposed rest days, and tests were introduced for riders. It then became impossible to follow the frontiers, and the Tour increasingly zig - zagged across the country, sometimes with unconnected days ' races linked by train, while still maintaining some sort of loop. The Tour returned to national teams for 1967 and 1968 as "an experiment ''. The Tour returned to trade teams in 1969, with a suggestion that national teams could come back every few years, but this has not happened since.
In the early 1970s the race was dominated by Eddy Merckx, who won the general classification five times, the mountains classification twice, the points classification three times, and a record 34 stages.
During this era race director Félix Lévitan brought in a new commercial era of the Tour, beginning to recruit sponsors and sometimes accepting prizes in kind if he could not get cash. He introduced the finish of the Tour on the Avenue des Champs - Élysées in 1975, the same year the polka - dot jersey was introduced for the winner of the mountains classification. He helped drive an internationalization of the Tour de France, and of cycling in general. In 1982 Sean Kelly of Ireland (points) and Phil Anderson of Australia (young rider) became the first winners of any Tour classifications from outside cycling 's Continental European heartlands, while Lévitan was influential in facilitating the participation in the 1983 Tour of amateur riders from the Eastern Bloc and Colombia. In 1984, for the first time, the Société du Tour de France organized the Tour de France Féminin, a version for women. It was run in the same weeks as the men 's version and won by Marianne Martin. Greg LeMond of the US became the first non-European winner in the 1986 race.
While the global awareness and popularity of the Tour grew during this time, its finances became stretched. Goddet and Lévitan continued to clash over the running of the race. Lévitan launched the Tour of America as a precursor to his plans to take the Tour de France to the US. The Tour of America lost a lot of money, and it appeared to have been cross-financed by the Tour de France. In the years before 1987, Lévitan 's position had always been protected by Émilien Amaury, the then owner of ASO, but Émilien Amaury had since retired and his son Philippe Amaury was now responsible. When Lévitan arrived at his office on 17 March 1987, he found his doors locked: he had been fired. The organisation of the 1987 Tour de France was taken over by Jean - François Naquet - Radiguet. He was not successful in acquiring more funds and was fired within a year.
Months before the start of the 1988 Tour, director Jean - François Naquet - Radiguet was replaced by Xavier Louy. In 1988 the Tour was organised by Jean - Pierre Courcol, the director of L'Équipe, then in 1989 by Jean - Pierre Carenso and then by Jean - Marie Leblanc, who in 1989 had been race director. The former television presenter Christian Prudhomme -- he commentated on the Tour among other events -- replaced Leblanc in 2007, having been assistant director for three years. In 1993 ownership of L'Équipe moved to the Amaury Group, which formed Amaury Sport Organisation (ASO) to oversee its sports operations, although the Tour itself is operated by its subsidiary the Société du Tour de France. ASO employs around 70 people full - time, in an office facing but not connected to L'Équipe in the Issy - les - Moulineaux area of outer western Paris. That number expands to about 220 during the race itself, not including 500 contractors employed to move barriers, erect stages, signpost the route and other work. ASO now also operate several other major bike races throughout the year.
The oldest and main competition in the Tour de France is known as the "general classification '', for which the yellow jersey is awarded: the winner of this is said to have won the race. A few riders from each team aim to win overall, but there are three further competitions to draw riders of all specialties: points, mountains, and a classification for young riders with general classification aspirations. The leader of each of these classifications wears a distinctive jersey; a rider leading more than one classification wears the jersey of the most prestigious one he leads. In addition to these four classifications, there are several minor and discontinued classifications that are competed for during the race.
The oldest and most sought after classification in the Tour de France is the general classification. All of the stages are timed to the finish. Each rider 's times are added to his previous stage times, so the rider with the lowest aggregate time is the leader of the race. The leader is determined after each stage 's conclusion: he gains the privilege to wear the yellow jersey, presented on a podium in the stage 's finishing town, for the next stage. If a rider is leading more than one classification that awards a jersey, he wears the yellow one, since the general classification is the most important in the race. Between 1905 and 1912 inclusive, in response to concerns about rider cheating in the 1904 race, the general classification was awarded according to a points - based system derived from riders ' placings in each stage, and the rider with the lowest total of points after the Tour 's conclusion was the winner.
The leader in the first Tour de France was awarded a green armband. The yellow jersey was added to the race in the 1919 edition (yellow was chosen because L'Auto, the newspaper that created the Tour, was printed on yellow paper), and it has since become a symbol of the Tour de France. The first rider to wear the yellow jersey was Eugène Christophe. Each team brings multiple yellow jerseys in advance of the Tour in case one of its riders becomes the overall leader of the race. Riders usually make the extra effort to keep the jersey for as long as possible in order to get more publicity for the team and its sponsors. Eddy Merckx has worn the yellow jersey for 96 stages, more than any other rider in the history of the Tour de France. Four riders have won the general classification five times in their careers: Jacques Anquetil, Eddy Merckx, Bernard Hinault, and Miguel Indurain.
The mountains classification is the second oldest jersey - awarding classification in the Tour de France. It was added to the Tour in the 1933 edition and was first won by Vicente Trueba. Prizes for the classification were first awarded in 1934. During stages of the race containing climbs, points are awarded to the first riders to reach the top of each categorized climb, with points available for up to the first 10 riders, depending on the classification of the climb. Climbs are classified according to the steepness and length of the particular hill, with more points available for harder climbs. The classification was preceded by the title of meilleur grimpeur (English: best climber), which the organising newspaper L'Auto awarded to the best climber of each race.
The classification awarded no jersey to the leader until the 1975 Tour de France, when the organizers decided to award a distinctive white jersey with red dots. The climbers ' jersey is worn by the rider who, at the start of each stage, has the largest number of climbing points. If a rider leads two or more classifications, the climbers ' jersey is worn by the rider in second (or, if necessary, third) place in that contest. At the end of the Tour, the rider holding the most climbing points wins the classification. Some riders may race with the aim of winning this particular competition, while others who gain points early on may shift their focus to the classification during the race. The Tour has five categories for ranking the mountains the race covers. The scale ranges from category 4, the easiest, to hors catégorie, the hardest. During his career Richard Virenque won the mountains classification a record seven times.
The point distribution for the mountains is as follows:
The points classification is the third oldest of the currently awarded jersey classifications. It was introduced in the 1953 Tour de France and was first won by Fritz Schär. The classification was added to draw the participation of the sprinters and to celebrate the 50th anniversary of the Tour. Points are given to the first 15 riders to finish a stage, with an additional set of points given to the first 15 riders to cross a pre-determined ' sprint ' point during the route of each stage. The green jersey of the points classification is worn by the rider who, at the start of each stage, has the greatest number of points.
In the first years, the cyclist received penalty points for not finishing with a high place, so the cyclist with the fewest points was awarded the green jersey. From 1959 on, the system was changed so the cyclists were awarded points for high place finishes (with first place getting the most points, and lower placings getting successively fewer points), so the cyclist with the most points was awarded the green jersey. The number of points awarded varies depending on the type of stage, with flat stages awarding the most points at the finish and time trials and high mountain stages awarding the fewest points at the finish. This increases the likelihood of a sprinter winning the points classification, though other riders can be competitive for the classification if they have a sufficient number of high - place finishes.
The winner of the classification is the rider with the most points at the end of the Tour. In case of a tie, the leader is determined by the number of stage wins, then the number of intermediate sprint victories, and finally, the rider 's standing in the general classification. The classification has been won a record six times by Erik Zabel and Peter Sagan.
The first year the points classification was used it was sponsored by La Belle Jardinière, a lawn mower producer, and the jersey was made green. In 1968 the jersey was changed to red to please the sponsor. However, the color was changed back the following year. For almost 25 years the classification was sponsored by Pari Mutuel Urbain, a state betting company. However they announced in November 2014 that they would not be continuing their sponsorship, and in March 2015 it was revealed that the green jersey would now be sponsored by Czech car manufacturer Škoda.
As of 2015, the points awarded stand as follows:
The young rider classification is restricted to riders under the age of 26. Its leader is determined in the same way as the general classification: the riders ' times are added up after each stage, and the eligible rider with the lowest aggregate time is the leader. Originally the classification was restricted to neo-professionals -- riders in their first three years of professional racing -- until 1983, when the organizers made only first - time riders eligible. In 1987, the organizers changed the rules of the classification to what they are today.
This classification was added to the Tour de France in the 1975 edition, with Francesco Moser being the first to win the classification after placing seventh overall. The Tour de France awards a white jersey to the leader of the classification, although this was not done between 1989 and 2000. Four riders have won both the young rider classification and the general classification in the same year: Laurent Fignon (1983), Jan Ullrich (1997), Alberto Contador (2007), and Andy Schleck (2010). Two riders have won the young rider classification three times in their respective careers: Jan Ullrich and Andy Schleck.
As of 2015, the jersey 's sponsor is the optician company Krys, which replaced Škoda when the latter moved to sponsoring the green jersey.
The prix de la combativité goes to the rider who most animates the day, usually by trying to break clear of the field. The most combative rider wears a number printed white - on - red instead of black - on - white the next day. An award also goes to the most aggressive rider throughout the Tour. As early as 1908 a sort of combativity award was offered, when Sports Populaires and L'Education Physique created Le Prix du Courage, 100 francs and a silver gilt medal for "the rider having finished the course, even if unplaced, who is particularly distinguished for the energy he has used. '' The modern competition started in 1958. In 1959, a Super Combativity award for the most combative cyclist of the Tour was introduced. It was initially not awarded every year, but since 1981 it has been given annually. Eddy Merckx has the most wins (4) of the overall award.
The team classification is assessed by adding the time of each team 's best three riders each day. The competition does not have its own jersey but since 2006 the leading team has worn numbers printed black - on - yellow. Until 1990, the leading team would wear yellow caps. As of 2012, the riders of the leading team wear yellow helmets. During the era of national teams, France and Belgium won 10 times each. From 1973 up to 1988, there was also a team classification based on points (stage classification); members of the leading team would wear green caps.
There has been an intermediate sprints classification, which from 1984 awarded a red jersey for points awarded to the first three to pass intermediate points during the stage. These sprints also scored points towards the points classification and bonuses towards the general classification. The intermediate sprints classification with its red jersey was abolished in 1989, but the intermediate sprints have remained, offering points for the points classification and, until 2007, time bonuses for the general classification.
From 1968 there was a combination classification, scored on a points system based on standings in the general, points and mountains classifications. The design was originally white, then a patchwork with areas resembling each individual jersey design. This was also abolished in 1989.
The rider who has taken the most time is called the lanterne rouge (red lantern, as in the red light at the back of a vehicle so that it can be seen in the dark) and in past years sometimes carried a small red light beneath his saddle. Such was the sympathy for the last - placed rider that he could command higher fees in the races that traditionally followed the Tour. In 1939 and 1948 the organisers excluded the last rider every day, to encourage more competitive racing.
Prize money has always been awarded. From 20,000 francs the first year, prize money has increased each year, although from 1976 to 1987 the first prize was an apartment offered by a race sponsor. The first prize in 1988 was a car, a studio - apartment, a work of art, and 500,000 francs in cash. Prizes only in cash returned in 1990.
Prizes and bonuses are awarded for daily placings and final placings at the end of the race. In 2009, the winner received 450,000 €, while each of the 21 stage winners won 8,000 € (10,000 € for the team time - trial stage). The winners of the points classification and mountains classification each win 25,000 €, the young rider competition and the combativity prize 20,000 €, and the winner of the team classification (calculated by adding the cumulative times of the best three riders in each team) receives 50,000 €.
The Souvenir Henri Desgrange, in memory of the founder of the Tour, is awarded to the first rider over the Col du Galibier where his monument stands, or to the first rider over the highest col in the Tour. A similar award, the Souvenir Jacques Goddet, is made at the summit of the Col du Tourmalet, at the memorial to Jacques Goddet, Desgrange 's successor.
The modern tour typically has 21 stages, one per day.
The Tour directors categorise mass - start stages as ' flat ', ' hilly ', or ' mountain '. This affects the points awarded in the sprint classification, whether the 3 - kilometre rule is operational, and the time limit within which riders must finish (the winner 's time plus a pre-determined percentage of that time). Time bonuses of 10, 6, and 4 seconds are awarded to the first three finishers, though this was not done from 2008 to 2014. Bonuses were previously also awarded to winners of intermediate sprints.
The first time trial in the Tour was between La Roche - sur - Yon and Nantes (80 km) in 1934. The first stage of modern Tours is often a short time trial, a prologue, to decide who wears yellow on the opening day. The first prologue was in 1967. The 1988 event, at La Baule, was called "la préface ''. There are usually two or three time trials. The final time trial has sometimes been the final stage, more recently often the penultimate stage.
Since 1975 the race has finished with laps of the Champs - Élysées. This stage rarely challenges the leader, because it is flat and the leader usually has too much time in hand to be denied. But in 1987, Pedro Delgado broke away on the Champs to challenge the 40 - second lead held by Stephen Roche; he and Roche both finished in the peloton, and Roche won the Tour. In modern times there tends to be a gentlemen 's agreement: while the points classification is still contended if possible, the overall classification is not fought over; because of this, it is not uncommon for the de facto winner of the overall classification to ride into Paris holding a glass of champagne.
In 1989 the last stage was a time trial. Greg LeMond overtook Laurent Fignon to win by eight seconds, the closest margin in the Tour 's history.
The climb of Alpe d'Huez has become one of the more noted mountain stages. During the 2004 Tour de France it was the scene of a 15.5 - kilometre (9.6 mi) mountain time trial on the 16th stage, during which riders complained of abusive spectators who threatened their progress up the climb. Mont Ventoux is often claimed to be the hardest climb in the Tour because of its harsh conditions. Another climb frequently featured is the Col du Tourmalet, the most visited mountain in the history of the Tour, while the Col du Galibier is the most visited mountain in the Alps. The 2011 Tour de France stage to the Galibier marked the 100th anniversary of the mountain 's first appearance in the Tour and also boasted the highest finish altitude ever: 2,645 metres (8,678 ft). Some mountain stages have become memorable because of the weather; an example is the stage of the 1996 Tour de France from Val - d'Isère to Sestriere, when a snowstorm at the start area led to the stage being shortened from 190 kilometres (120 mi) to just 46 kilometres (29 mi).
To host a stage start or finish brings prestige and business to a town. The prologue and first stage (Grand Départ) are particularly prestigious. The race may start with a prologue (too short to go between towns) in which case the start of the next day 's racing, which would be considered stage 1, would usually be in the same town. In 2007 director Christian Prudhomme said that "in general, for a period of five years we have the Tour start outside France three times and within France twice. ''
With the switch to the use of national teams in 1930, the costs of accommodating riders fell to the organizers instead of the sponsors and Henri Desgrange raised the money by allowing advertisers to precede the race. The procession of often colourfully decorated trucks and cars became known as the publicity caravan. It formalised an existing situation, companies having started to follow the race. The first to sign to precede the Tour was the chocolate company, Menier, one of those who had followed the race. Its head of publicity, Paul Thévenin, had first put the idea to Desgrange. It paid 50,000 francs. Preceding the race was more attractive to advertisers because spectators gathered by the road long before the race or could be attracted from their houses. Advertisers following the race found that many who had watched the race had already gone home. Menier handed out tons of chocolate in that first year of preceding the race, as well as 500,000 policemen 's hats printed with the company 's name. The success led to the caravan 's existence being formalised the following year.
The caravan was at its height between 1930 and the mid-1960s, before television and especially television advertising was established in France. Advertisers competed to attract public attention. Motorcycle acrobats performed for the Cinzano apéritif company and a toothpaste maker, and an accordionist, Yvette Horner, became one of the most popular sights as she performed on the roof of a Citroën Traction Avant. The modern Tour restricts the excesses to which advertisers are allowed to go but at first anything was allowed. The writer Pierre Bost lamented: "This caravan of 60 gaudy trucks singing across the countryside the virtues of an apéritif, a make of underpants or a dustbin is a shameful spectacle. It bellows, it plays ugly music, it 's sad, it 's ugly, it smells of vulgarity and money. ''
Advertisers pay the Société du Tour de France approximately € 150,000 to place three vehicles in the caravan. Some have more. On top of that come the more considerable costs of the commercial samples that are thrown to the crowd and the cost of accommodating the drivers and the staff -- frequently students -- who throw them. The number of items has been estimated at 11 million, each person in the procession giving out 3,000 to 5,000 items a day. A bank, GAN, gave out 170,000 caps, 80,000 badges, 60,000 plastic bags, and 535,000 copies of its race newspaper in 1994. Together, they weighed 32 tonnes (31 long tons; 35 short tons). The vehicles also have to be decorated on the morning of each stage and, because they must return to ordinary highway standards, disassembled after each stage. Numbers vary but there are normally around 250 vehicles each year. Their order on the road is established by contract, the leading vehicles belonging to the largest sponsors.
The procession sets off two hours before the start and then regroups to precede the riders by an hour and a half. It spreads 20 -- 25 kilometres (12 -- 16 mi) and takes 40 minutes to pass at between 20 kilometres per hour (12 mph) and 60 kilometres per hour (37 mph). Vehicles travel in groups of five. Their position is logged by GPS and from an aircraft and organised on the road by the caravan director -- Jean - Pierre Lachaud -- an assistant, three motorcyclists, two radio technicians, and a breakdown and medical crew. Six motorcyclists from the Garde Républicaine, the élite of the gendarmerie, ride with them.
The first three Tours from 1903 -- 1905 stayed within France. The 1906 race went into Alsace - Lorraine, territory annexed by the German Empire in 1871 after the Franco - Prussian War. Passage was secured through a meeting at Metz between Desgrange 's collaborator, Alphonse Steinès, and the German governor.
No teams from Italy, Germany, or Spain rode in 1939 because of tensions preceding the Second World War (after German assistance to the Nationalists in the Spanish Civil War it was widely expected Spain would join Germany in a European war, though this did not come to pass). Henri Desgrange planned a Tour for 1940, after war had started but before France had been invaded. The route, approved by military authorities, included a route along the Maginot Line. Teams would have been drawn from military units in France, including the British, who would have been organised by a journalist, Bill Mills. Then the Germans invaded and the race was not held again until 1947 (see Tour de France during the Second World War). The first German team after the war was in 1960, although individual Germans had ridden in mixed teams. The Tour has since started in Germany four times: in Cologne in 1965, in Frankfurt in 1980, in West Berlin on the city 's 750th anniversary in 1987, and in Düsseldorf in 2017. Plans to enter East Germany in 1987 were abandoned.
Prior to 2013, the Tour de France had visited every region of Metropolitan France except Corsica. Jean - Marie Leblanc, when he was organiser, said the island had never asked for a stage start there. It would be difficult to find accommodation for 4,000 people, he said. The spokesman of the Corsican nationalist party Party of the Corsican Nation, François Alfonsi, said: "The organisers must be afraid of terrorist attacks. If they are really thinking of a possible terrorist action, they are wrong. Our movement, which is nationalist and in favour of self - government, would be delighted if the Tour came to Corsica. '' The opening three stages of the 2013 Tour de France were held on Corsica as part of the celebrations for the 100th edition of the race.
Most stages are in mainland France, although since the mid-1950s it has become common to visit nearby countries: Andorra, Belgium, Germany (and the former West Germany), Ireland, Italy, Luxembourg, Monaco, the Netherlands, Spain, Switzerland, and the United Kingdom have all hosted stages or part of a stage. Since 1975 the finish has been on the Champs - Élysées in Paris; from 1903 to 1967 the race finished at the Parc des Princes stadium in western Paris and from 1968 to 1974 at the Piste Municipale south of the capital. Félix Lévitan, race organizer in the 1980s, was keen to host stages in the United States, but these proposals were never developed.
The following editions of the Tour started, or are planned to start, outside France:
The Tour was first followed only by journalists from L'Auto, the organisers. The race was founded to increase sales of a floundering newspaper and its editor, Desgrange, saw no reason to allow rival publications to profit. The first time papers other than L'Auto were allowed was 1921, when 15 press cars were allowed for regional and foreign reporters.
The Tour was shown first on cinema newsreels a day or more after the event. The first live radio broadcast was in 1929, when Jean Antoine and Alex Virot of the newspaper L'Intransigeant broadcast for Radio Cité. They used telephone lines. In 1932 they broadcast the sound of riders crossing the col d'Aubisque in the Pyrenees on 12 July, using a recording machine and transmitting the sound later.
The first television pictures were shown a day after a stage. The national TV channel used two 16mm cameras, a Jeep, and a motorbike. Film was flown or taken by train to Paris. It was edited there and shown the following day.
The first live broadcast, and the second of any sport in France, was the finish at the Parc des Princes in Paris on 25 July 1948. Rik Van Steenbergen of Belgium led in the bunch after a stage of 340 kilometres (210 mi) from Nancy. The first live coverage from the side of the road was from the Aubisque on 8 July 1958. Proposals to cover the whole race were abandoned in 1962 after objections from regional newspapers whose editors feared the competition. The dispute was settled, but not in time for the race, and the first complete coverage was the following year in 1963. In 1958 mountain climbs were broadcast live on television for the first time, and in 1959 helicopters were first used for the coverage.
The leading television commentator in France was a former rider, Robert Chapatte. At first he was the only commentator. He was joined in following seasons by an analyst for the mountain stages and by a commentator following the competitors by motorcycle.
Broadcasting in France was largely a state monopoly until 1982, when the socialist president François Mitterrand allowed private broadcasters and privatised the leading television channel. Competition between channels raised the broadcasting fees paid to the organisers from 1.5 per cent of the race budget in 1960 to more than a third by the end of the century. Broadcasting time also increased as channels competed to secure the rights. The two largest channels to stay in public ownership, Antenne 2 and FR3, combined to offer more coverage than their private rival, TF1. The two stations, renamed France 2 and France 3, still hold the domestic rights and provide pictures for broadcasters around the world.
The stations use a staff of 300 with four helicopters, two aircraft, two motorcycles, 35 other vehicles including trucks, and 20 podium cameras.
Domestic television covers the most important stages of the Tour, such as those in the mountains, from mid-morning until early evening. Coverage typically starts with a survey of the day 's route, interviews along the road, discussions of the difficulties and tactics ahead, and a 30 - minute archive feature. The biggest stages are shown live from start to end, followed by interviews with riders and others and features such as an edited version of the stage seen from beside a team manager following and advising riders from his car. Radio covers the race in updates throughout the day, particularly on the national news channel, France Info, and some stations provide continuous commentary on long wave. The 1979 Tour was the first to be broadcast in the United States.
The combination of unprecedentedly rigorous doping controls and almost no positive tests helped restore fans ' confidence in the 2009 Tour de France. This led directly to an increase in the global popularity of the event. The most watched stage of 2009 was stage 20, from Montélimar to Mont Ventoux in Provence, with a global total audience of 44 million, making it the 12th most watched sporting event in the world in 2009.
The Tour is an important cultural event for fans in Europe. Millions line the route, some having camped for a week to get the best view. Crowds flanking the course are reminiscent of the community festivals that are part of another form of cycle racing in a different country -- the Isle of Man TT.
The Tour de France appealed from the start not just for the distance and its demands but because it played to a wish for national unity, a call to what Maurice Barrès called the France "of earth and deaths '' or what Georges Vigarello called "the image of a France united by its earth. ''
The image had been started by the 1877 travel / school book Le Tour de la France par deux enfants. It told of two boys, André and Julien, who "in a thick September fog left the town of Phalsbourg in Lorraine to see France at a time when few people had gone far beyond their nearest town. ''
The book sold six million copies by the time of the first Tour de France, the biggest selling book of 19th - century France (other than the Bible). It stimulated a national interest in France, making it "visible and alive '', as its preface said. There had already been a car race called the Tour de France but it was the publicity behind the cycling race, and Desgrange 's drive to educate and improve the population, that inspired the French to know more of their country.
The academic historians Jean - Luc Boeuf and Yves Léonard say most people in France had little idea of the shape of their country until L'Auto began publishing maps of the race.
The Tour has inspired several popular songs in France, notably P'tit gars du Tour (1932), Les Tours de France (1936) and Faire le Tour de France (1950). Kraftwerk had a hit with "Tour de France '' in 1983 -- described as a minimalistic "melding of man and machine '' -- and produced an album, Tour de France Soundtracks in 2003, the centenary of the Tour.
The Tour and its first Italian winner, Ottavio Bottecchia, are mentioned at the end of Ernest Hemingway 's The Sun Also Rises.
In films, the Tour was the background for Five Red Tulips (1949) by Jean Stelli, in which five riders are murdered. A 1967 burlesque, Les Cracks by Alex Joffé, with Bourvil and Monique Tarbès, also featured it. Footage of the 1970 Tour de France is shown in Jorgen Leth 's experimental short Eddy Merckx in the Vicinity of a Cup of Coffee. Patrick Le Gall made Chacun son Tour (1996). The comedy Le Vélo de Ghislain Lambert (2001) featured the Tour of 1974.
In 2005, three films chronicled a team. The German Höllentour, translated as Hell on Wheels, recorded 2003 from the perspective of Team Telekom. The film was directed by Pepe Danquart, who won an Academy Award for live - action short film in 1993 for Black Rider (Schwarzfahrer). The Danish film Overcoming by Tómas Gislason recorded the 2004 Tour from the perspective of Team CSC.
Wired to Win chronicles Française des Jeux riders Baden Cooke and Jimmy Caspar in 2003. By following their quest for the points classification, won by Cooke, the film looks at the working of the brain. The film, made for IMAX theaters, appeared in December 2005. It was directed by Bayley Silleck, who was nominated for an Academy Award for documentary short subject in 1996 for Cosmic Voyage.
A fan, Scott Coady, followed the 2000 Tour with a handheld video camera to make The Tour Baby!, which raised $160,000 to benefit the Lance Armstrong Foundation, and made a 2005 sequel, Tour Baby Deux!.
Vive Le Tour by Louis Malle is an 18 - minute short of 1962. The 1965 Tour was filmed by Claude Lelouch in Pour un Maillot Jaune. This 30 - minute documentary has no narration and relies on sights and sounds of the Tour.
In fiction, the 2003 animated feature Les Triplettes de Belleville (The Triplets of Belleville) ties into the Tour de France.
After the Tour de France there are criteriums in the Netherlands and Belgium. These races are public spectacles where thousands of people can watch their Tour de France heroes race. The budget of a criterium is over 100,000 euros, with most of the money going to the riders. Jersey winners or big - name riders earn between 20,000 and 60,000 euros per race in start money.
Allegations of doping have plagued the Tour almost since 1903. Early riders consumed alcohol and used ether to dull the pain. Over the years riders turned to substances that increase performance, and the Union Cycliste Internationale and governments enacted policies to combat the practice.
In 1924, Henri Pélissier and his brother Charles told the journalist Albert Londres they used strychnine, cocaine, chloroform, aspirin, "horse ointment '' and other drugs. The story was published in Le Petit Parisien under the title Les Forçats de la Route (' The Convicts of the Road ').
On 13 July 1967, British cyclist Tom Simpson died climbing Mont Ventoux after taking amphetamine.
In 1998, in what became known as the "Tour of Shame '', Willy Voet, soigneur for the Festina team, was arrested with erythropoietin (EPO), growth hormones, testosterone and amphetamine. Police raided team hotels and found products in the possession of the cycling team TVM. Riders went on strike. After mediation by director Jean - Marie Leblanc, police limited their tactics and riders continued. Some riders had dropped out and only 96 finished the race. It became clear in a trial that management and health officials of the Festina team had organised the doping.
Further measures were introduced by race organisers and the UCI, including more frequent testing and tests for blood doping (transfusions and EPO use). This led the UCI to become a particularly interested party in an International Olympic Committee initiative, the World Anti-Doping Agency (WADA), created in 1999. In 2002, the wife of Raimondas Rumšas, third in the 2002 Tour de France, was arrested after EPO and anabolic steroids were found in her car. Rumšas, who had not failed a test, was not penalised. In 2004, Philippe Gaumont said doping was endemic to his Cofidis team. Fellow Cofidis rider David Millar confessed to EPO use after his home was raided. In the same year, Jesus Manzano, a rider with the Kelme team, alleged he had been forced by his team to use banned substances.
Doping controversy has surrounded Lance Armstrong. In August 2005, one month after Armstrong 's seventh consecutive victory, L'Équipe published documents it said showed Armstrong had used EPO in the 1999 race. At the same Tour, Armstrong 's urine showed traces of a glucocorticosteroid hormone, although below the positive threshold. He said he had used skin cream containing triamcinolone to treat saddle sores, and that he had received permission from the UCI to use this cream. Further allegations ultimately culminated in the United States Anti-Doping Agency (USADA) disqualifying him from all his victories since 1 August 1998, including his seven consecutive Tour de France victories, and imposing a lifetime ban from competing in professional sports. He chose not to appeal the decision, and in January 2013 he admitted doping in a television interview conducted by Oprah Winfrey, despite having made repeated denials throughout his career.
The 2006 Tour had been plagued by the Operación Puerto doping case before it began. Favourites such as Jan Ullrich and Ivan Basso were banned by their teams a day before the start. Seventeen riders were implicated. American rider Floyd Landis, who finished the Tour as holder of the overall lead, had tested positive for testosterone after he won stage 17, but this was not confirmed until some two weeks after the race finished. On 30 June 2008 Landis lost his appeal to the Court of Arbitration for Sport, and Óscar Pereiro was named as winner.
On 24 May 2007, Erik Zabel admitted using EPO during the first week of the 1996 Tour, when he won the points classification. Following his plea that other cyclists admit to drugs, former winner Bjarne Riis admitted in Copenhagen on 25 May 2007 that he had used EPO regularly from 1993 to 1998, including when he won the 1996 Tour. His admission meant the top three finishers of the 1996 Tour were all linked to doping, with two admitting to cheating. On 24 July 2007 Alexander Vinokourov tested positive for a blood transfusion (blood doping) after winning a time trial, prompting his Astana team to pull out and police to raid the team 's hotel. The next day Cristian Moreni tested positive for testosterone. His Cofidis team pulled out.
The same day, leader Michael Rasmussen was removed for "violating internal team rules '' by missing random tests on 9 May and 28 June. Rasmussen claimed to have been in Mexico. The Italian journalist Davide Cassani told Danish television he had seen Rasmussen in Italy. The alleged lying prompted Rasmussen 's firing by Rabobank.
On 11 July 2008 Manuel Beltrán tested positive for EPO after the first stage. On 17 July 2008, Riccardo Riccò tested positive for continuous erythropoiesis receptor activator, a variant of EPO, after the fourth stage. In October 2008, it was revealed that Riccò 's teammate and Stage 10 winner Leonardo Piepoli, as well as Stefan Schumacher -- who won both time trials -- and Bernhard Kohl -- third on general classification and King of the Mountains -- had tested positive.
After Alberto Contador won the 2010 Tour de France, it was announced that he had tested positive for low levels of clenbuterol on the 21 July rest day. On 26 January 2011, the Spanish Cycling Federation proposed a one - year ban but reversed its ruling on 15 February and cleared Contador to race. Despite a pending appeal by the UCI, Contador finished 5th overall in the 2011 Tour de France, but in February 2012 he was suspended and stripped of his 2010 victory.
During the 2012 Tour, the 3rd placed rider from 2011, Fränk Schleck tested positive for the banned diuretic Xipamide and was immediately disqualified from the Tour.
In October 2012, the United States Anti-Doping Agency released a report on doping by the U.S. Postal Service cycling team, implicating, amongst others, Armstrong. The report contained affidavits from riders including Frankie Andreu, Tyler Hamilton, George Hincapie, Floyd Landis, Levi Leipheimer, and others describing widespread use of Erythropoietin (EPO), blood transfusion, testosterone, and other banned practices in several Tours. In October 2012 the UCI acted upon this report, formally stripping Armstrong of all titles since 1 August 1998, including all seven Tour victories, and announced that his Tour wins would not be reallocated to other riders.
Four cyclists have died during the Tour de France: Adolphe Hélière (1910), Francisco Cepeda (1935), Tom Simpson (1967) and Fabio Casartelli (1995). Another seven fatal accidents have involved officials, spectators and others following the race.
One rider has been King of the Mountains, won the combination classification, combativity award, the points competition, and the Tour in the same year -- Eddy Merckx in 1969, which was also the first year he participated. Had the young rider 's jersey been available at the time, he would have won that too.
Twice the Tour was won by a racer who never wore the yellow jersey until the race was over. In 1947, Jean Robic overturned a three - minute deficit on the 257 kilometres (160 mi) final stage into Paris. In 1968, Jan Janssen of the Netherlands secured his win in the individual time trial on the last day.
The Tour has been won three times by racers who led the general classification on the first stage and held the lead all the way to Paris. Maurice Garin did it during the Tour 's very first edition, 1903; he repeated the feat the next year, but the results were nullified by the officials as a response to widespread cheating. Ottavio Bottecchia completed a GC start - to - finish sweep in 1924. And in 1928, Nicolas Frantz held the GC for the entire race, and at the end, the podium consisted solely of members of his racing team. While no one has equalled this feat since 1928, four times a racer has taken over the GC lead on the second stage and carried that lead all the way to Paris. Jacques Anquetil predicted he would wear the yellow jersey as leader of the general classification from start to finish in 1961, which he did. That year, the first day had two stages, the first part from Rouen to Versailles and the second part from Versailles to Versailles. No yellow jersey was awarded after the first part, and at the end of the day Anquetil was in yellow.
The most appearances have been by Sylvain Chavanel, who rode his 18th and final Tour in 2018. Before his final Tour, Chavanel shared the record of 17 appearances with George Hincapie. Hincapie had also held the record for most consecutive finishes, with sixteen, having completed every Tour he started except his first; in light of his suspension for use of performance - enhancing drugs, however, Joop Zoetemelk and Chavanel share the record for most finishes, at 16, with Zoetemelk having completed all 16 of the Tours that he started. Of these 16 Tours, Zoetemelk finished in the top five 11 times (a record), finished second six times (also a record), and won the 1980 Tour de France.
In the early years of the Tour, cyclists rode individually, and were sometimes forbidden to ride together. This led to large gaps between the winner and the number two. Since the cyclists now tend to stay together in a peloton, the margins of the winner have become smaller, as the difference usually originates from time trials, breakaways, mountain - top finishes, or from being left behind the peloton. The smallest margin between the winner and the second - placed cyclist at the end of the Tour is 8 seconds, between winner Greg LeMond and Laurent Fignon in 1989. The largest margin, by comparison, remains that of the first Tour in 1903: 2h 49m 45s between Maurice Garin and Lucien Pothier.
Three riders have won 8 stages in a single year: Charles Pélissier (1930), Eddy Merckx (1970, 1974), and Freddy Maertens (1976). Mark Cavendish has the most mass finish stage wins with 30 as of stage 14 in 2016, ahead of André Darrigade and André Leducq with 22, François Faber with 19, and Eddy Merckx with 18. The youngest Tour de France stage winner is Fabio Battesini, who was 19 when he won one stage in the 1931 Tour de France.
The fastest massed - start stage was in 1999 from Laval to Blois (194.5 kilometres (120.9 mi)), won by Mario Cipollini at 50.4 kilometres per hour (31.3 mph). The fastest time - trial was Rohan Dennis 's ride on stage 1 of the 2015 Tour de France in Utrecht, won at an average of 55.446 kilometres per hour (34.453 mph). The fastest stage win was by the 2013 Orica GreenEDGE team in a team time - trial: it completed the 25 kilometres (16 mi) in Nice (stage 5) at 57.8 kilometres per hour (35.9 mph).
The longest successful post-war breakaway by a single rider was by Albert Bourlon in the 1947 Tour de France. In the Carcassonne - Luchon stage, he stayed away for 253 kilometres (157 mi). It was one of seven breakaways longer than 200 kilometres (120 mi), the last being Thierry Marie 's 234 kilometres (145 mi) escape in 1991. Bourlon finished 16 m 30s ahead. This is one of the biggest time gaps but not the greatest. That record belongs to José - Luis Viejo, who beat the peloton by 22 m 50s in the Montgenèvre - Manosque stage in 1976. He was the fourth and most recent rider to win a stage by more than 20 minutes.
The only rider to win the Tour de France and an Olympic gold medal in the same year was Britain 's Bradley Wiggins in 2012. In 2018, Wiggins was joined by Geraint Thomas as the only Tour de France champions to have won an Olympic gold medal in a velodrome; they were both on the team which won the Team Pursuit Gold Medal at the 2008 Beijing Olympics.
Four riders have won five times: Jacques Anquetil (FRA), Eddy Merckx (BEL), Bernard Hinault (FRA), and Miguel Indurain (ESP). Indurain achieved the mark with a record five consecutive wins.
|
where was the movie the last american hero filmed | The Last American Hero - wikipedia
The Last American Hero (also known as Hard Driver) is a 1973 sports drama film based on the true story of American NASCAR driver Junior Johnson. Directed by Lamont Johnson, it stars Jeff Bridges as Junior Jackson, the character based on Johnson.
The film is based on Tom Wolfe 's essay "The Last American Hero Is Junior Johnson. Yes! '', which was first published in Esquire magazine in March 1965 and included in his debut collection of essays, The Kandy - Kolored Tangerine - Flake Streamline Baby, later that year. The film was favorably reviewed by Pauline Kael in The New Yorker, even though The New Yorker had a long - standing feud with Wolfe.
The film 's theme song, "I Got a Name '', sung by Jim Croce, became a best - selling single.
Junior Jackson (Jeff Bridges), a stock - car driver, stays one step ahead of reform school until his father (Art Lund) is thrown in prison for moonshining. Seeing the error of his ways, Jackson begins to concentrate on his driving skills, hoping to become a professional stock car racer to raise money to get his father released from jail.
|
who plays aunt bee on the andy griffith show | Frances Bavier - wikipedia
Frances Elizabeth Bavier (December 14, 1902 -- December 6, 1989) was an American stage and television actress. Originally from New York theatre, Bavier worked in film and television from the 1950s. She is best known for her role of Aunt Bee on The Andy Griffith Show and Mayberry R.F.D. from 1960 -- 70. Aunt Bee logged more Mayberry years (ten) than any other character. She won an Emmy Award for Outstanding Supporting Comedy Actress for the role in 1967.
Born in New York City in a brownstone on Gramercy Park to Charles S., a stationary engineer, and Mary S. (née Birmingham) Bavier, Frances originally planned to become a teacher after attending Columbia University. She first appeared in vaudeville, later moving to the Broadway stage.
After graduating from the American Academy of Dramatic Arts in 1925, she was cast in the stage comedy The Poor Nut. Bavier 's big break came in the original Broadway production of On Borrowed Time. She later appeared with Henry Fonda in the play Point of No Return.
Bavier had roles in more than a dozen films, as well as playing a range of supporting roles on television. Career highlights include her turn as Mrs. Barley in the classic 1951 film The Day the Earth Stood Still. In 1955, she played the rough and tough "Aunt Maggie '' Sawtelle, a frontier Ma Barker - type character, in the Lone Ranger episode "Sawtelle 's Saga End ''. In the episode, she fights with Tonto while the Lone Ranger battles with her nephew. At the conclusion, Tonto says that he would like to trade opponents next time. In 1957, she played Nora Martin, mother to Eve Arden 's character on The Eve Arden Show, despite the fact that Arden was only 6 or 7 years younger than Bavier. That same year, Bavier guest - starred in the eighth episode of Perry Mason as Louise Marlow in "The Case of the Crimson Kiss ''.
She was in an episode of Make Room for Daddy, which featured Andy Griffith as Andy Taylor and Ron Howard as Opie Taylor. She played a character named Henrietta Perkins. The episode led to The Andy Griffith Show, and Bavier was cast in the new role of Aunt Bee. Bavier had a love - hate relationship with her famous role during the run of the show. As a New York City actress, she felt her dramatic talents were being overlooked, yet after playing Bee for eight seasons, she was the only original cast member to remain with the series in the spin - off, Mayberry R.F.D., for two additional seasons.
In contrast to her character, Bavier was easily offended on the set, and the production staff took a very cautious approach when communicating with her. Series star Andy Griffith once admitted the two clashed sometimes during the series ' long run. In an April 24, 1998, appearance on Larry King Live, Griffith said Bavier phoned him four months before she died and apologized for being "difficult '' during the series ' run.
Bavier won the Primetime Emmy Award for Outstanding Performance by an Actress in a Supporting Role in a Comedy in 1967.
In 1972, Bavier retired from acting and bought a home in Siler City, North Carolina. On choosing to live in North Carolina instead of her native New York, Bavier said, "I fell in love with North Carolina, all the pretty roads and the trees. '' Bavier never married or had children. Somewhat awkward in one - on - one relationships, she was nonetheless altruistic at heart. According to a 1981 article by Chip Womick, a staff writer of The Courier Tribune, Bavier enthusiastically promoted Christmas and Easter Seal Societies from her Siler City home, and often wrote inspirational letters to fans who sought autographs.
On November 22, 1989, Bavier was admitted to Chatham Hospital, where she was kept in the coronary care unit for two weeks. She was discharged on December 4, 1989, and died at her home two days later, eight days before her 87th birthday. The immediate causes of death were listed as congestive heart failure, myocardial infarction, coronary artery disease, and atherosclerosis, with supporting factors being breast cancer, arthritis, and COPD. Upon her death, she was found to have been living with 14 cats amid worn furniture, fixtures, and carpet. She was described "... as living a sparse life in her latter years, a very quiet life ''. Bavier is interred at Oakwood Cemetery in Siler City. Her headstone includes the name of her most famous role, "Aunt Bee '', and reads, "To live in the hearts of those left behind is not to die. ''
|
khesari pulse increase the risk of catching which disease | Lathyrism - wikipedia
Lathyrism or neurolathyrism is a neurological disease of humans and domestic animals, caused by eating certain legumes of the genus Lathyrus. This problem is mainly associated with Lathyrus sativus (also known as grass pea, chickling pea, kesari dal, or almorta) and to a lesser degree with Lathyrus cicera, Lathyrus ochrus and Lathyrus clymenum containing the toxin ODAP.
The lathyrism resulting from the ingestion of Lathyrus odoratus seeds (sweet peas) is often referred to as odoratism or osteolathyrism, which is caused by a different toxin (beta - aminopropionitrile) that affects the cross - linking of collagen, a protein of connective tissues.
The consumption of large quantities of Lathyrus grain containing high concentrations of the glutamate analogue neurotoxin β - oxalyl - L - α, β - diaminopropionic acid (ODAP, also known as β - N - oxalyl - amino - L - alanine, or BOAA) causes paralysis, characterized by lack of strength in or inability to move the lower limbs, and may involve pyramidal tracts producing signs of upper motor neuron damage. The toxin may also cause aortic aneurysm. A unique symptom of lathyrism is the atrophy of gluteal muscles (buttocks). ODAP is a poison of mitochondria, leading to excess cell death, especially in motor neurons. Children can additionally develop bone deformity and reduced brain development.
This disease is prevalent in some areas of Bangladesh, Ethiopia, India and Nepal, and affects more men than women. Men between 25 and 40 are particularly vulnerable.
The toxicological cause of the disease has been attributed to the neurotoxin ODAP, which acts as a structural analogue of the neurotransmitter glutamate. Legumes containing the toxin are still eaten even where knowledge of how to detoxify Lathyrus exists, because drought conditions can lead to fuel and water shortages that prevent the necessary steps from being taken, particularly in impoverished countries. Lathyrism usually occurs where the despair of poverty and malnutrition leaves few other food options. Lathyrism can also be caused by food adulteration.
Recent research suggests that sulfur amino acids have a protective effect against the toxicity of ODAP.
Eating the chickling pea with grain having high concentrations of sulphur - based amino acids reduces the risk of lathyrism if grain is available. Food preparation is also an important factor. Toxic amino acids are readily soluble in water and can be leached. Bacterial (lactic acid) and fungal (tempeh) fermentation is useful to reduce ODAP content. Moist heat (boiling, steaming) denatures protease inhibitors which otherwise add to the toxic effect of raw grasspea through depletion of protective sulfur amino acids. During times of drought and famine, water for steeping and fuel for boiling is frequently also in short supply. Poor people sometimes know how to reduce the chance of developing lathyrism but face a choice between risking lathyrism and starvation.
The underlying cause for excessive consumption of grasspea is a lack of alternative food sources. This is a consequence of poverty and political conflict. The prevention of lathyrism is therefore a socio - economic challenge.
The first mention of the intoxication goes back to ancient India, and Hippocrates also described a neurological disorder in ancient Greece caused by Lathyrus seed. Outbreaks of lathyrism have occurred on a regular basis ever since.
During the Spanish War of Independence against Napoleon, grasspea served as a famine food. This was the subject of one of Francisco de Goya 's famous aquatint prints titled Gracias a la Almorta ("Thanks to the Grasspea ''), depicting poor people surviving on a porridge made from grasspea flour, one of them lying on the floor, already crippled by it.
During WWII, on the order of Colonel I. Murgescu, commandant of the Vapniarka concentration camp in Transnistria, the detainees -- most of them Jews -- were fed nearly exclusively with fodder pea. Consequently, they became ill with lathyrism.
In the film Ashes by Andrzej Wajda, based on the novel Popioly by Stefan Żeromski (translated into English as Lost Army) and spanning the period 1798 -- 1812, a horse is poisoned by grain from a Spanish village. The footage of the horse losing control of its hind legs suggests that it was fed almorta.
During the post - Civil War period in Spain, there were several outbreaks of lathyrism, caused by the shortage of food, which led people to consume excessive amounts of almorta flour.
In Spain, a seed mixture known as comuña, consisting of Lathyrus sativus, L. cicera, Vicia sativa and V. ervilia, provides a potent mixture of toxic amino acids capable of poisoning monogastric (single - stomached) animals. The toxin β - cyanoalanine from seeds of V. sativa in particular enhances the toxicity of such a mixture through its inhibition of sulfur amino acid metabolism (conversion of methionine to cysteine, leading to excretion of cystathionine in urine) and hence depletion of protective reduced thiols. Its use for sheep does not pose any lathyrism problems if doses do not exceed 50 percent of the ration.
Ronald Hamilton suggested in his paper The Silent Fire: ODAP and the death of Christopher McCandless that itinerant traveler Christopher McCandless may have died from starvation after being unable to hunt or gather food due to lathyrism - induced paralysis of his legs caused by eating the seeds of Hedysarum alpinum. In 2014, a preliminary lab analysis indicated that the seeds did contain ODAP. However, a more detailed mass spectrometric analysis conclusively ruled out ODAP and lathyrism.
A related disease has been identified and named osteolathyrism, because it affects the bones and connecting tissues, instead of the nervous system. It is a skeletal disorder, caused by the toxin beta - aminopropionitrile (BAPN), and characterized by hernias, aortic dissection, exostoses, and kyphoscoliosis and other skeletal deformities, apparently as the result of defective aging of collagen tissue. The cause of this disease is attributed to beta - aminopropionitrile, which inhibits the copper - containing enzyme lysyl oxidase, responsible for cross-linking procollagen and proelastin. BAPN is also a metabolic product of a compound present in sprouts of grasspea, pea and lentils. A disorder that is clinically similar is konzo.
|
how many games are in the american league playoff series | Playoffs - wikipedia
The playoffs, play - offs, postseason and / or finals of a sports league are a competition played after the regular season by the top competitors to determine the league champion or a similar accolade. Depending on the league, the playoffs may be either a single game, a series of games, or a tournament, and may use a single - elimination system or one of several other different playoff formats. Playoff, in regard to international fixtures, is to qualify or progress to the next round of a competition or tournament.
In team sports in the U.S. and Canada, the vast distances and consequent burden of cross-country travel have led to regional divisions of teams. Generally, during the regular season, teams play more games in their division than outside it, but the league 's best teams might not play against each other in the regular season. Therefore, in the postseason a playoff series is organized. Any group - winning team is eligible to participate, and as playoffs became more popular they were expanded to include second - or even lower - placed teams -- the term "wild card '' refers to these teams.
In England and Scotland playoffs are used in association football to decide promotion for lower finishing teams, rather than to decide a champion in the way they are used in North America. In the Championship (the second tier of English football) teams finishing 3rd to 6th after the regular season compete to decide the final promotion spot to the Premier League.
Evidence of playoffs in professional football dates to at least 1919, when the "New York Pro Championship '' was held in Western New York (it is possible one was held in 1917, but that is not known for sure). The Buffalo and Rochester metropolitan areas each played a championship game, the winners of which would advance to the "New York Pro Championship '' on Thanksgiving weekend. The top New York teams were eventually absorbed into the NFL upon its founding in 1920, but the league (mostly driven by an Ohio League that did not have true championship games, though they frequently scheduled de facto championship matchups) did not adopt the New York league 's playoff format, opting for a championship based on regular season record for its first twelve seasons; as a result, four of the first six "championships '' were disputed. Technically, a vote of league owners was all that was required to win a title, but the owners had a gentlemen 's agreement to pledge votes based on winning percentage (wins divided by the sum of wins and losses), with a few tiebreakers. When two teams tied at the top of the standings in 1932, an impromptu playoff game was scheduled to settle the tie.
The National Football League divided its teams into divisions in 1933 and began holding a single playoff championship game between division winners. In 1950 the NFL absorbed three teams from the rival All - America Football Conference, and the former "Divisions '' were now called "Conferences '', echoing the college use of that term. In 1967, the NFL expanded and created four divisions under the two conferences, which led to the institution of a larger playoff tournament. After the AFL - NFL merger brought the American Football League into the NFL, the NFL began to use three divisions and a single wild card team in each conference for its playoffs, in order to produce eight contenders out of six divisions; this was later expanded in 1978 & 1990 so that more wild card teams could participate.
In 2002 the NFL added its 32nd team, the Houston Texans, and significantly reshuffled its divisional alignment. The league went from 6 division winners and 6 wild card spots to 8 division winners and only 4 wild card qualifiers. The winners of each division automatically earn a playoff spot and a home game in their first round, and the two top non-division winners from each conference also make the playoffs as wild - card teams. The two division winners with the best regular - season records in each conference get a first - round bye, and each of the other two division winners plays one of the two wild - card teams. Each winner of a wild - card game then plays one of the two bye teams. The winners of these two games go to the conference championships, and the winners of those conference championship games then face each other in the Super Bowl.
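As a rough illustration of that seeding logic, the sketch below seeds a single conference under the 2002 - era format. It is a minimal sketch with invented team names and records; the NFL 's actual tiebreaker rules are far more detailed and are omitted here.

```python
# Minimal sketch of 2002-era seeding for one conference.
# (team, win percentage) pairs are invented, purely illustrative.

def seed_conference(division_winners, wild_cards):
    """Division winners take seeds 1-4 by record, wild cards take
    seeds 5-6; seeds 1 and 2 receive first-round byes."""
    seeds = sorted(division_winners, key=lambda t: t[1], reverse=True)
    seeds += sorted(wild_cards, key=lambda t: t[1], reverse=True)
    return seeds

winners = [("A", 0.875), ("B", 0.750), ("C", 0.688), ("D", 0.563)]
wilds = [("E", 0.625), ("F", 0.563)]

seeds = seed_conference(winners, wilds)
byes, rest = seeds[:2], seeds[2:]
print("First-round byes:", [t[0] for t in byes])
# Wild-card round: seed 3 hosts seed 6, seed 4 hosts seed 5.
print("Wild-card games:", (rest[0][0], rest[3][0]), (rest[1][0], rest[2][0]))
```

Note that wild card "E" has a better record than division winner "D" but still seeds below it, which is exactly the rule the paragraph above describes.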
The College Football Playoff National Championship is a post-season college football bowl game, used to determine a national champion of the NCAA Division I Football Bowl Subdivision (FBS), which began play in the 2014 college football season. The game serves as the final of the College Football Playoff, a bracket tournament between the top four teams in the country as determined by a selection committee, which was established as a successor to the Bowl Championship Series and its similar BCS National Championship Game. Unlike the BCS championship, the participating teams in the College Football Playoff National Championship are determined by two semi-final bowls -- hosted by two of the consortium 's six member bowls yearly -- and the top two teams as determined by the selection committee do not automatically advance to the game in lieu of other bowls.
The game is played at a neutral site, determined through bids by prospective host cities (similarly to the Super Bowl and NCAA Final Four). When announcing it was soliciting bids for the 2016 and 2017 title games, playoff organizers noted that the bids must propose host stadiums with a capacity of at least 65,000 spectators, and cities can not host both a semi-final game and the title game in the same year.
The winner of the game is awarded a new championship trophy instead of the "crystal football '', which has been given by the American Football Coaches Association (AFCA) since 1986; officials wanted a new trophy that was unconnected with the previous BCS championship system. The new College Football Playoff National Championship Trophy is sponsored by Dr Pepper, which paid an estimated $35 million for the sponsorship rights through 2020. The 26.5 - inch high, 35 - pound trophy was unveiled on July 14, 2014.
The NCAA Division I Football Championship is an American college football tournament played each year to determine the champion of the NCAA Division I Football Championship Subdivision (FCS). Prior to 2006, the game was known as the NCAA Division I - AA Football Championship. The FCS is the highest division in college football to hold a playoff tournament sanctioned by the NCAA to determine its champion. The four - team playoff system used by the Bowl Subdivision is not sanctioned by the NCAA.
The NCAA Division II Football Championship is an American college football tournament played annually to determine a champion at the NCAA Division II level. It was first held in 1973. Prior to 1973, four regional bowl games were played in order to provide postseason action for what was then called the "NCAA College Division '' and a poll determined the final champion.
The National Championship game was held at Sacramento, California from 1973 to 1975. It was in Wichita Falls, Texas in 1976 and 1977. The game was played in Longview, Texas in 1978. For 1979 and 1980, Albuquerque, New Mexico hosted the game. McAllen, Texas hosted the championship games from 1981 to 1985. From 1986 to 2013, the Division II championship game was played at Braly Municipal Stadium near the campus of the University of North Alabama in Florence, Alabama. Between 2014 and 2017, the championship game was played at Children 's Mercy Park in Kansas City, Kansas. Since 1994, the games have been broadcast on ESPN.
The NCAA Division III Football Championship began in 1973. Before 1973, most of the schools now in Division III competed either in the NCAA College Division or the National Association of Intercollegiate Athletics (NAIA). NCAA Divisions II and III were created by splitting the College Division in two, with schools that wished to continue awarding athletic scholarships placed in Division II and those that did not want to award them placed in Division III.
The Division III playoffs begin with 32 teams selected to participate in the playoffs. The Division III championship game, known as the Stagg Bowl, has been played annually in Salem, Virginia at Salem Football Stadium since 1993. It was previously played in Phenix City, Alabama at Garrett - Harrison Stadium (1973 -- 1982, 1985 -- 1989), at the College Football Hall of Fame, when the Hall was located in Kings Island, Ohio at Galbreath Field (1983 -- 1984), and Bradenton, Florida at Hawkins Stadium (1990 -- 1992).
As a rule, association football leagues have only had championship playoffs when a league is divided into several equal divisions / conferences / groups (as in Major League Soccer) and / or when the season is split into two periods (as in many leagues in Latin America, such as Mexico 's Liga MX). In leagues with a single table played once a year, as in most of Europe, playoff systems are not used to determine champions, although in some countries such systems are used to determine teams to be promoted to higher leagues (e.g., England) or qualifiers for European club competitions (such as Greece and the Netherlands), usually between teams that didn't perform well enough to earn an automatic spot.
A test match is a match played at the end of a season between a team that has done badly in a higher league and one that has done well in a lower league of the same football league system. The format of a test match series varies; for instance it can be a head - to - head between one of the worse finishers of the higher league and one of the better finishers of the lower league, or it can be a mini league where all participants play each other or teams only play those from the other league. The winner of the test match series play in the higher league the following season, and the loser in the lower league.
In international football, playoffs were a feature of the 1954 and 1958 FIFA World Cup final tournaments. They are still a feature of the qualification tournaments for the FIFA World Cup and the UEFA European Football Championship.
In the qualification playoffs for the 2006 FIFA World Cup, for example, the final berths were decided in two - legged home - and - away ties, including an intercontinental playoff in which Australia defeated Uruguay on penalties to reach the finals.
In addition to their league competitions, most European footballing nations also have knockout cup competitions -- English football, for example, has the FA Cup and the League Cup. These competitions are open to many teams -- 92 clubs compete for the League Cup, and hundreds compete for the FA Cup. These competitions run concurrently with the "regular season '' league competitions and are not regarded as playoffs.
In Argentine football, playoffs in the style of the English leagues occur in the Primera B Metropolitana, part of the third tier, and leagues below it (Primera C Metropolitana and Primera D Metropolitana). All Primera Metropolitana tourneys cover the area in and around Buenos Aires, the capital city. The Torneo Reducidos (reduced tournaments), however, involve 8 teams below the top two, as opposed to 4.
Before the top - flight Argentine Primera División abandoned its traditional Apertura and Clausura format in 2015 in favor of an expanded single season, there was no playoff between the Apertura and Clausura winners. As a result, the league crowned two champions each year. After each Clausura, the two teams with the lowest points - per - game total for the previous six tournaments (three years, counting only Primera División games) were relegated to Primera B Nacional to be replaced by that league 's champion and runner - up teams; the two teams immediately above contested promotion / relegation series with the third and fourth places in Primera B Nacional, counted by its aggregate table. In Primera B Nacional, the same procedure continues in use for relegation to either Primera B Metropolitana or Torneo Argentino A for non-Buenos Aires clubs. From 2015 onward, relegation from the Primera División will be based solely on league position at the end of the season (which, effective in 2016 -- 17, changed from a February -- December format to an August -- June format).
The Australian A-League, which also features a team in New Zealand, has determined its champions via a playoff system, officially known as the "Finals Series '' (reflecting standard Australian English usage), since its inception in the 2005 -- 06 season.
From the league 's inception through the 2008 -- 09 season, the top four teams advanced to the finals series, employed using a modified Page playoff system. The top two teams at the end of league play were matched in one semifinal, with the winner advancing directly to the Grand Final and the loser going into the Preliminary Final. The next two teams played a semifinal for a place in the Preliminary Final, whose winner took the other place in the Grand Final. Both semifinals were two - legged, while the Preliminary Final and Grand Final were one - off matches.
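The modified Page format described above is compact enough to express directly. The sketch below is illustrative only: the club names are placeholders, and `beats` is a hypothetical stand - in for playing a full tie (two - legged or one - off as described above); here it simply favours its first argument so the example runs.

```python
# Sketch of the modified Page playoff used by the A-League
# through 2008-09.

def beats(a, b):
    return a  # placeholder for an actual match result

def page_playoff(t1, t2, t3, t4):
    """Top four teams: the 1v2 winner goes straight to the Grand
    Final; the loser gets a second chance in the Preliminary Final
    against the 3v4 winner."""
    major_w = beats(t1, t2)
    major_l = t2 if major_w == t1 else t1
    minor_w = beats(t3, t4)             # the 3v4 loser is eliminated
    prelim_w = beats(major_l, minor_w)  # Preliminary Final
    return major_w, prelim_w            # Grand Final pairing

print(page_playoff("Melbourne", "Adelaide", "Sydney", "Newcastle"))
# -> ('Melbourne', 'Adelaide')
```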
When the league expanded to 10 teams beginning in 2009 -- 10, the finals series expanded to six teams.
Starting with the 2012 -- 13 season, the finals format has been a pure knockout tournament consisting entirely of one - off matches.
The concept of a finals series or playoff is standard in Australian sport.
The Belgian First Division A (previously known as the "First Division '' and "Pro League '') has a fairly complex playoff system, currently consisting of two levels and at one time three.
Since the 2009 -- 10 season, playoffs have been held to determine the champion and tickets for the Champions League and Europa League. The six highest ranked teams play home - and - away matches against each other; a total of 10 matches each. The 6 participating teams start with the points accumulated during the regular competition divided by two. The first 3 teams after the play - offs get a European ticket. The fourth ranked team (or fifth, when the cup holder is already qualified for European football) plays a knock - out match against the winner of play - off 2. From 2009 -- 10 through 2015 -- 16, teams ranked 7 -- 14 played in two groups; from 2016 -- 17 forward, this playoff will continue to be contested in two groups, but with a total of 12 teams (details below). All points gained from the regular competition are lost. The two group winners play a final match to determine the winner of play - off 2. The winning team plays a final match against the fourth - ranked team (or fifth) for the last European ticket.
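A short worked example of the carry - over rule described above: regular - season points are halved before the top six begin their ten play - off matches. Halving can produce half - points; rounding up is an assumed convention in this sketch, not something stated above, and the point totals are invented.

```python
import math

# Play-off 1 carry-over: each club starts the play-off with half of
# its regular-season points. Half-points are rounded up here (an
# assumption); club names and totals are invented.
regular_season = {"Club A": 61, "Club B": 54, "Club C": 47}

carryover = {club: math.ceil(pts / 2) for club, pts in regular_season.items()}
print(carryover)  # {'Club A': 31, 'Club B': 27, 'Club C': 24}
```

Because the carried - over totals are halved while the ten play - off matches still award full points, gaps built up over the regular season shrink, which is the basis of the criticism discussed below.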
The play - off system has been criticized because more points per match can be earned in the play - off stage than in the regular competition. This way the team that wins the most matches isn't automatically the national champion. The biggest advantage of the play - off system is the higher number of matches (40 instead of 34 compared to the previous season) and more top matches. The extra matches also generate higher revenues for the teams.
Nonetheless, the higher number of matches takes an extra toll on teams and players. Besides play - offs, the Royal Belgian Football Association (KBVB) also introduced Christmas football in order to complete the extra matches in time. This posed some problems because a few matches had to be cancelled due to snowy pitches. The delays threatened the tight schedule and risked postponing the end of the season.
Some structural changes were instituted in 2015 -- 16.
From 1974 through 2015, the 15th team out of 16 in the final standings was involved in a playoff pool with three teams from the Belgian Second Division after each season, to determine which of these teams played in the First Division / Pro League the oncoming season. The lowest ranked team of the First Division / Pro League was relegated and replaced by the Second Division champion.
Originally, these playoffs were introduced in 1974 and were part of the Second Division, to determine which team was promoted to the highest level together with the division champions. From the 2005 -- 06 season on, only one team was relegated directly from the First Division, with the 17th team taking part in the playoff. As a result, this playoff was still called the Belgian Second Division Final Round, although one team from the Pro League took part each year.
Starting in 2015 -- 16, this playoff was scrapped and replaced with direct relegation for the bottom Pro League / First Division A team only.
Further changes will be introduced to the Europa League playoffs from 2016 -- 17 forward. The playoff will involve a total of 12 teams -- nine from First Division A, and three from First Division B (the renamed Second Division). The First Division A qualifiers will be those finishing between 7th and 15th on the regular - season table. The First Division B qualifiers will be the top three teams from that league 's regular - season table, excluding the division champion, which instead earns promotion to First Division A. As in the previous format, the teams will be divided into two groups, each playing home - and - away within the group, and the two group winners will play a one - off final, with the winner of that match advancing to a one - off match against the fourth - or fifth - place team from the championship playoff (depending on available European slots) for the final Europa League place.
In Brazil, the Copa do Brasil, the second most prestigious country - wide competition, is contested in pure "knockout '' format since its inception in 1989. While the top two tiers in the Brazilian League -- Série A and Série B -- are contested in double round robin format, the lower tiers Série C and Série D include knockout rounds in their final stages.
Bulgaria instituted an elaborate playoff system in its top flight, the First League, in the 2016 -- 17 season.
After the league 's 14 teams play a full home - and - away season, the league splits into two playoffs -- a 6 - team "championship playoff '' and an 8 - team "qualifying playoff '', with the latter split into two 4 - team groups. Each playoff begins with teams carrying over all goals and statistics from the home - and - away season.
Each team in the championship playoff plays the others home and away one additional time; the final standings then determine the champion and the allocation of European places.
Each group within the qualifying playoff also plays home and away within its group; after this stage, teams enter another level of playoffs depending on their positions in the group.
The top two teams in each group enter a knockout playoff consisting entirely of two - legged matches (unless one of these teams is the winner of that season 's Bulgarian Cup, in which case it will not enter the playoff and the team that it would have played receives a bye into the playoff final). The winner of this playoff then contests a one - off match against the third - place (or fourth - place) team from the championship playoff, with the winner claiming the final Europa League place.
The bottom two teams from each group begin a series of relegation playoffs. The series starts with a knockout playoff that also consists entirely of two - legged matches. The winner of the playoff remains in the First League for the following season. The losing teams then enter a further series of two - legged promotion / relegation matches.
With the creation of the Liga Dominicana de Fútbol in 2014 to replace the Primera División de Republica Dominicana, it introduced a playoff system to determine the champion of the season.
When the Football League was first expanded to two divisions in 1892, test matches were employed to decide relegation and promotion between them, but the practice was scrapped in favour of automatic relegation and promotion in 1898.
The use of play - offs to decide promotion issues returned to the League in 1986 with the desire to reduce the number of mid-table clubs with nothing to play for at the end of the season. The Football Conference, now known as the National League, introduced play - offs in 2002 after the Football League agreed to a two - club exchange with the Conference.
The top two teams in the EFL Championship and in EFL League One are automatically promoted to the division above and thus do not compete in the play - offs. The top three teams in EFL League Two and the champion of the National League (formerly known as Conference Premier) are also automatically promoted. In each of these divisions the four clubs finishing below the automatic promotion places compete in two - legged semi-finals with the higher - placed club enjoying home advantage in the second leg. The away goals rule does not apply for the semi-finals, which has led to some games swinging the way of a team that otherwise would have been beaten by the rule. The Football League play - off finals were originally played in two legs, at both teams ' home grounds, but were later changed to one - off affairs, which are played at Wembley Stadium in London.
Teams are also promoted using a play - off tournament from levels six to eight of the football pyramid. At level six, the play - off semi finals are two leg ties with the final being a single match played at the home ground of the highest placed of the two teams. At levels seven and eight, all of the ties are single matches played at the home ground of the team with the highest league position.
In 2003, Gillingham proposed replacing the current play - off system with one involving six clubs from each division and replacing the two - legged ties with one - off matches. If adopted, the two higher - placed clubs in the play - offs would have enjoyed first - round byes and home advantage in the semi-finals. It was a controversial proposal -- some people did not believe a club finishing only in eighth position in the League could (or should) compete in the Premiership while others found the system too American for their liking. Although League chairmen initially voted in favour of the proposal, it was blocked by The FA and soon abandoned.
The championship of every division in English football is determined solely by the standings in the league. However, a championship play - off would be held if the top two teams were tied for points, goal difference and goals scored in both their overall league record; to date, this has never happened. A play - off would also be scheduled if two teams are tied as above for a position affecting promotion, relegation, or European qualification.
Starting in the 2007 -- 08 season, Superleague Greece instituted a playoff system to determine all of its places in European competition for the following season, except for those of the league champion and the cup winner. Currently, the league is entitled to two Champions League places and three in the Europa League, with one of the Europa League places reserved for the cup winner. The playoff takes the form of a home - and - away mini-league involving the second - through fifth - place teams.
In 2004 - 05, Italy 's professional league introduced a promotion playoff to its second tier of football, Serie B. It operates almost identically to the system currently used in England. The top two clubs in Serie B earn automatic promotion to Serie A with the next four clubs entering a playoff to determine who wins the third promotion place, as long as fewer than 10 points separate the third and fourth - placed teams (which often occurs).
Like the English playoffs, the Italian playoffs employ two - legged semi-finals, with the higher finisher in the league table earning home advantage in the second leg. If the teams are level on aggregate after full - time of the second leg, away goals are not used, but extra time is used. Unlike England, the Italian playoff final is two - legged, again with the higher finisher earning home advantage in the second leg. In both rounds, if the tie is level on aggregate after extra time in the second leg, the team that finished higher in the league standings wins.
In 2003 -- 04, Italy 's football league used a two - legged test match to determine one spot in the top level of its system, Serie A. Some leagues in continental Europe combine automatic promotion / relegation with test matches. For example, in the Netherlands, only one club is automatically relegated from its top level, the Eredivisie, each season, with the winner of the second flight being promoted. The next two lower - placed teams enter a promotion / relegation mini-league with high - placed teams from the Dutch First Division.
J. League in Japan used a test match series between the third - from - bottom team in J1 and third - place team in J2 (see J. League Promotion / Relegation Series) from 2004 to 2008. The Promotion / Relegation Series concept dates as far back as 1965 and the first season of the Japan Soccer League.
The Japan Football League, the current Japanese third division, uses the Promotion / Relegation Series only when the number of clubs in the league needs to be filled with clubs from the Japanese Regional Leagues.
A new Promotion / Relegation Series will occur beginning with the 2012 season of J. League Division 2, conditional on the top two JFL teams fulfilling J. League club criteria. In turn, J2 will implement a playoff in the style of England for the 3rd - to 6th - place clubs.
Mexico 's top flight league, Liga MX, is contested annually by 18 teams. In each of two annual tournaments, every team plays every other team in the league once (17 games), after which the top eight teams advance to the Liguilla.
In the Liguilla, all rounds are home - and - away. Teams are drawn so the best team plays the worst, the second - best plays the second - worst, and so on. After one round, the teams are redrawn so the best remaining team again plays the worst remaining one and the second - best faces the second - worst in the semi-finals. The two winners of this round play each other for the championship.
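The draw logic of the Liguilla -- best remaining seed against worst remaining seed, re-paired after every round -- can be sketched as follows. Winners are chosen by seed purely so the example runs end to end; in reality every tie is decided home - and - away.

```python
# Sketch of the Liguilla draw: best remaining seed plays worst
# remaining seed, with survivors re-paired after every round.

def pair(teams):
    """Pair best vs worst, second-best vs second-worst, and so on."""
    teams = sorted(teams)               # lower number = better seed
    n = len(teams)
    return [(teams[i], teams[n - 1 - i]) for i in range(n // 2)]

round_of_8 = list(range(1, 9))          # seeds 1-8 after 17 matches
survivors = round_of_8
while len(survivors) > 1:
    matches = pair(survivors)
    print("Ties:", matches)
    survivors = [min(a, b) for a, b in matches]  # assume better seed wins
print("Champion:", survivors[0])
```

Run end to end, this pairs (1, 8), (2, 7), (3, 6), (4, 5) in the quarter-finals, then (1, 4) and (2, 3) in the semi-finals, matching the re-draw described above.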
There is no playoff between the Apertura and Clausura winner. As a result, the league crowns two champions each year. After each Clausura, the team with the lowest points - per - game total for the previous six tournaments (three years, counting only Liga MX games) is relegated to Ascenso MX to be replaced by that league 's champion (if eligible).
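The relegation rule is a simple ratio, illustrated below with invented figures: over six 17 - game tournaments (102 league games), the club with the lowest points - per - game total goes down.

```python
# Relegation by points-per-game over six tournaments (102 games).
# Club names and totals are invented.
records = {
    "Club X": (140, 102),   # (points, games) across six tournaments
    "Club Y": (118, 102),
    "Club Z": (110, 102),
}

relegated = min(records, key=lambda c: records[c][0] / records[c][1])
print(relegated, round(records[relegated][0] / records[relegated][1], 4))
# -> Club Z 1.0784
```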
In the Netherlands, a playoff was introduced in the 2005 -- 06 season. It is used to determine which teams from the Eredivisie qualify for European football. The playoff system has been criticized by clubs, players and fans because it increases the number of matches. Under the original playoff format, it was possible, though thoroughly unlikely, that the runner - up would not qualify for Europe; the following year, the format was changed so that the second - place team was assured of no worse than a UEFA Cup berth. Starting in 2008 -- 09, the format was changed yet again. The champion goes directly to the Champions League; the runner - up enters the second qualification round of the CL; the number three enters the fourth (and last) qualification round of the UEFA Europa League (EL; the new name of the UEFA Cup from 2009 -- 10 onward); and the number four goes to the third qualification round of the EL. The only play - off is for the clubs placed 5th through 8th; the winner of that play - off receives a ticket for the second qualification round of the EL.
Playoffs are also part of the promotion and relegation structure between the Eredivisie and the Eerste Divisie, the two highest football leagues in the Netherlands.
The Scottish Premier League (SPL) experimented briefly with playoffs in the mid-1990s, with only one team -- Dundee United -- achieving promotion through them (Partick Thistle were relegated at their expense). The SPL did not conduct playoffs again for the remainder of its existence.
After the SPL and Scottish Football League (SFL) merged in 2013 to form the Scottish Professional Football League (SPFL), the SPL 's successor league, the Scottish Premiership, launched a promotion / relegation playoff linked with the second level, the Scottish Championship (formerly the Scottish First Division). The bottom team from the Premiership is automatically relegated and is replaced by the top team from the Championship, provided that club meets Premiership entry criteria. The second, third and fourth placed teams from the Championship hold a knockout playoff consisting of two - legged ties. The winner of this playoff then faces the second - from - bottom Premiership team, also over two legs, with the winner of that tie taking up the final Premiership place (again, assuming that the Championship club meets Premiership criteria).
The three lower divisions of the SPFL -- the Championship, Scottish League One (formerly the Second Division) and Scottish League Two (formerly the Third Division) -- continue with the promotion / relegation playoff system their predecessor leagues used in the SFL era. In the Championship / League One and League One / League Two, while the champions are automatically promoted and the bottom team relegated, there are playoffs of the second - bottom teams against the second, third and fourth placed teams from the league below. Home and away ties decide semi-finals and a final, and the overall winner plays in the higher league the following season, with the loser in the lower league.
Beginning with the 2014 -- 15 season, promotion and relegation between the SPFL and the Scottish regional leagues were introduced. Following the end of the league season, the winners of the fifth - level Highland and Lowland Leagues compete in a two - legged playoff. The winner then enters a two - legged playoff against the bottom team from Scottish League Two, with the winner of that tie either remaining in or promoted to League Two.
Long before the SPL era, two situations arose in which the top two teams in the table had to share the title, as neither goal average nor goal difference had been instituted to break ties. The first was the inaugural season, in which Dumbarton and Rangers both earned 29 points and had to play off for the title. The match ended in a 2 - 2 draw and both teams shared the title. The second happened 19 years later, in the Second Division, when Leith Athletic and Raith Rovers both earned 33 points. This time, the clubs chose not to play off. In 1915 goal average was finally instituted.
For the 2010 -- 11 season, the Segunda División experimented with promotion playoffs between the 3rd to 6th placed teams, similar to the rules in the English and Italian systems. However, because reserve teams are allowed to compete in the same football league system, lower - placed teams may take part in the playoff when reserve sides (which are ineligible for promotion) finish within the 3rd to 6th places.
At a lower level, playoffs in Segunda División B take place to decide the divisional title between the 4 group winners, and to decide which other teams are promoted.
Previously, a play - off system had been used in which the teams finishing 3rd and 4th from last in La Liga played off against the teams finishing 3rd and 4th in the Segunda División. This system had been introduced in the 1980s but ended in 1998 -- 99.
In Major League Soccer in the United States and Canada, the top five teams in each of the league 's two conferences qualified for the playoffs at the end of the regular season from the 2012 season to the 2014 season. Under this system, the conferences have separate playoff brackets. Since the 2015 season, six teams per conference qualify (12 teams in total), and Audi is the playoffs ' official sponsor.
In the first round of the postseason knockout tournament, the fourth - place team in each conference hosts the fifth - place team from the same conference in a one - off match, with the winner advancing to the Conference Semifinals. From the 2015 season the third - place team plays host to the sixth in the other one - off match, with the two winners advancing to the Conference Semifinals.
The Conference Semifinals and the Conference Championships are conducted under a home - and - away, aggregate - goal format. For each conference, the top seed plays the first - round winner, and the 2nd seed faces the 3rd seed in the Conference Semifinal series, with the lower seeded team hosting the first game. From the 2015 season the top seed plays the lowest remaining seed, and the 2nd plays the next - lowest seed in the Conference Semifinals.
The team that scores the most goals in the home - and - away series advances to the Conference Championship, which expanded from a one - off match to a two - legged series starting in 2012. If the teams are tied after 90 minutes of the second leg in either the Conference Semifinal or Conference Championship, a 30 - minute extra time period (divided into two 15 - minute periods) is played, followed by a penalty - kick shootout if necessary. As in the Conference Semifinals, the lower seed in the Conference Championship hosts the first leg.
The winner of each conference plays for the MLS Cup, the league championship. Since 2012, the MLS Cup has been hosted by the conference champion with the most table points during the regular season.
In the case of ties after regulation in the First Round and MLS Cup, 30 minutes of extra time (divided into two 15 - minute periods) is played, followed by a penalty - kick shootout if necessary, to determine the winner.
Historically MLS did not use the away goals rule in any playoff series, but it has begun to do so in 2014 to be consistent with international practice.
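A minimal sketch of the tie - resolution order described above (aggregate goals, then away goals since 2014, then extra time and a shootout) might look like the following. The function and its argument conventions are invented for illustration and simplify the real competition rules:

```python
# Hedged sketch: resolving a two-legged MLS playoff series.
# leg1, leg2 are (goals_A, goals_B) tuples; leg 1 is at A's ground,
# leg 2 at B's ground.
def series_winner(leg1, leg2, extra_time=None, shootout=None):
    """extra_time: (goals_A, goals_B) scored in extra time, if played.
    shootout: 'A' or 'B', if a penalty shootout was needed."""
    a = leg1[0] + leg2[0]
    b = leg1[1] + leg2[1]
    if a != b:                       # aggregate goals decide first
        return 'A' if a > b else 'B'
    # Away-goals tiebreaker (adopted by MLS in 2014): A's away goals are
    # those scored in leg 2; B's away goals are those scored in leg 1.
    if leg2[0] != leg1[1]:
        return 'A' if leg2[0] > leg1[1] else 'B'
    # Still level: 30 minutes of extra time, then penalties.
    if extra_time and extra_time[0] != extra_time[1]:
        return 'A' if extra_time[0] > extra_time[1] else 'B'
    return shootout

print(series_winner((1, 0), (1, 2)))  # 'A' -- level 2-2, but A has an away goal
```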
The defunct Women 's Professional Soccer (WPS), which operated only in the U.S., conducted a four - team stepladder tournament consisting of one - off knockout matches. The third seed hosted the fourth seed in the first round. The winner of that game advanced to the "Super Semifinal '', hosted by the second seed. The Super Semifinal winner traveled to the top seed for the championship game. The replacement of WPS, the National Women 's Soccer League (which launched in 2013), has a more standard four - team knockout playoff in which the winners of two one - off semifinals advance to the one - off final.
Playoffs are used throughout Australia in Australian rules football to determine the premiership. The term finals is most commonly used to describe them.
In each league, between four and eight teams (depending on league size) qualify for the finals based on the league ladder at the end of the season. Australian rules football leagues employ finals systems which act as a combination between a single elimination tournament for lower - ranked teams and a double elimination tournament for higher - ranked teams in order to provide teams with an easier pathway to the Grand Final as reward for strong performances throughout the season. Finals are decided by single matches, rather than series.
The Australian Football League, which is the top level of the sport, currently has eight teams qualify for the finals under a system designed by the league in 2000. From 1931 to 1999, variants of the McIntyre System were used to accommodate four, five, six and eight teams, and prior to 1930, six different finals systems were used.
In most other leagues, from state - level leagues such as the South Australian National Football League and West Australian Football League, down to local suburban leagues, it is most common for either four or five teams to qualify for the finals. In these cases the Page -- McIntyre final four system or the McIntyre final five system are used universally.
The Australian Football League (which was known until 1990 as the Victorian Football League) was the first league to introduce regular finals when it was established in 1897. The South Australian National Football League introduced finals in 1898, and other leagues soon followed.
Prior to 1897, the premiership was generally awarded to the team with the best overall win - loss record at the end of the season. If two teams had finished with equal records, a playoff match for the premiership was required: this occurred in the Challenge Cup in 1871, the SAFA in 1889 and 1894, and in the VFA in 1896.
The teams finishing in fourth and fifth place in the regular season face each other in the wildcard game. The winner of the wildcard game faces the team that finished in third place in the first round of the play - offs. The winner of the first round faces the team that finished in second place during the regular season, and the winner of that round faces the team that finished in first place for the championship in the Korean Series.
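The stepladder structure described above can be sketched as follows. This is an illustration only; the function is invented, and 'W' stands for the winner of the previous round:

```python
# Hedged sketch of a stepladder playoff: each round's winner climbs to
# face the next-higher regular-season finisher.
def stepladder(seeds):
    """seeds: list ordered from best (index 0) to worst."""
    matches = [(seeds[-2], seeds[-1])]    # wildcard game: 4th vs 5th
    for seed in reversed(seeds[:-2]):     # then 3rd, 2nd, and finally 1st
        matches.append((seed, 'W'))
    return matches

print(stepladder([1, 2, 3, 4, 5]))
# [(4, 5), (3, 'W'), (2, 'W'), (1, 'W')] -- the last tie is the Korean Series
```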
Before 1950 the original Japanese Baseball League had been a single - table league of franchises. After it was reorganized into the Nippon Professional Baseball (NPB) system, a series of playoffs ensued between the champions of the Central League and Pacific League.
Before the playoff system was developed in both professional leagues, the Pacific League had used a playoff system on two occasions. The first was from 1973 to 1982, when a split season was played, with a 5 - game playoff between the winning teams from the two halves of the season (unless one team won both halves, in which case no playoff was needed). The second was from 2004 to 2006, when the top three teams played a two - stage stepladder knockout (3 games in the first stage and 5 games in the second stage) to decide the League Champion (and the team playing in the Japan Series). Under this system, the Seibu Lions (now Saitama Seibu Lions), Chiba Lotte Marines and Hokkaido Nippon Ham Fighters, which claimed the Pacific League Championship, each went on to win the Japan Series in the same season. The success of this playoff system convinced the Central League to consider a similar approach.

In 2007, a new playoff system, named the "Climax Series '', was introduced to both professional leagues in NPB to decide the teams that would compete for the Japan Series. The Climax Series largely adopted the rules of the Pacific League playoff, with one important change: each League championship is awarded to the team finishing the regular season at the top of its league, regardless of its fate in the playoffs. This means that the two League Champions are not guaranteed to reach the Japan Series. The Chunichi Dragons took advantage of this in the first Climax Series season, finishing second in the regular season but sweeping the Hanshin Tigers and the League Champion Yomiuri Giants in the Central League to win a place in the Japan Series; they subsequently defeated the Hokkaido Nippon Ham Fighters to claim their first Japan Series title in 52 years.
In 2008, the Climax Series format was slightly changed: the second stage is now played over a maximum of six games, with the League Champion starting with an automatic one - game advantage.
Major League Baseball (MLB) itself does not use the term "tournament '' for postseason action. Instead they use the term "postseason '' as the title of the official elimination tournament held after the conclusion of Major League Baseball 's regular season. Since the 2012 season, it has consisted of a first round single - elimination knockout game between the two wildcards in each league, a best - of - 5 second round series called the Division Series, and two rounds of best - of - seven series for the League Championship and World Series.
MLB uses a "2 - 3 - 2 '' format for the final two rounds of its postseason tournament. In the Majors, the singular term "playoff '' is reserved for the rare situation in which two (or more) teams find themselves tied at the end of the regular season and are forced to have a tiebreaking playoff game (or games) to determine which team will advance to the postseason. Thus, in the majors, a "playoff '' is actually part of the regular season and may be called a "pennant playoff ''. However, the plural term "Playoffs '' is conventionally used by fans and media to refer to baseball 's postseason tournament (and has always been used by minor league baseball for its own postseason play), not including the "World Series '' (see below), so this article defers to that usage.
MLB is the oldest of the major American professional sports, dating back to the 1870s. As such, it is steeped in tradition. The final series to determine its champion has been called the "World Series '' (originally "World 's Championship Series '' and then "World 's Series '') as far back as the National League 's contests with the American Association during the 1880s. The "Playoffs '' determine which two teams play in the "World Series ''.
Taiwan 's playoff is different from many such competitions because of the league 's split - season format. The winners of the first half - season and of the second half - season are eligible to play in the playoffs, but if the best overall team has not won either half - season, it qualifies for a wild - card series against the weaker half - season winner, with the winner of this series advancing to the Taiwan Series to face the other half - season winner. If the first - and second - half winners are different, but one of them is also the best overall team, then both teams progress directly to the Taiwan Series. Finally, if one team wins both halves of the season, a playoff takes place between the second - and third - best teams for the right to face that team in the final series; in this case the team that won both halves begins the Taiwan Series with an automatic one - game advantage.
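The qualification branches described above can be summarised in a short sketch. The function below is purely illustrative, and the returned labels are placeholders:

```python
# Hedged sketch of Taiwan Series qualification under the split-season
# format.  Arguments are team identifiers; names are placeholders.
def taiwan_series_entrants(first_half, second_half, best_overall):
    if first_half == second_half:
        # One team swept both halves: the 2nd- and 3rd-best teams play
        # off for the right to face it, and the sweeping team starts the
        # Taiwan Series with a one-game advantage.
        return {'series': (first_half, 'winner of 2nd-vs-3rd playoff'),
                'advantage': first_half}
    if best_overall in (first_half, second_half):
        # The best overall team won a half: both half-winners go
        # straight to the Taiwan Series.
        return {'series': (first_half, second_half), 'advantage': None}
    # The best overall team won neither half: it plays a wild-card
    # series against the weaker half-season winner; the winner then
    # faces the other half-season winner.
    return {'series': ('wild-card series winner', 'other half winner'),
            'advantage': None}
```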
The present organization known as the National Basketball Association, then called the BAA (Basketball Association of America), had its inaugural season in 1946 -- 47. Teams have always had different strengths of schedule from each other; currently, a team plays a team outside its conference twice, a team within its conference but outside its division three or four times, and a team from its own division four times.
In the current system, eight clubs from each of the league 's two conferences qualify for the playoffs, with separate playoff brackets for each conference. In the 2002 -- 03 season, the first - round series were expanded from best - of - 5 to best - of - 7; all other series have always been best - of - 7. In all series, home games alternate between the two teams in a 2 - 2 - 1 - 1 - 1 format.
The 2 - 3 - 2 finals format was used from the 1985 Finals through 2013, copying the format that was then in effect in the National Hockey League. Prior to 1985, almost all finals were played in the 2 - 2 - 1 - 1 - 1 format (although the 1971 Finals between Milwaukee and Baltimore were on an alternate - home basis, some 1950s finals used the 2 - 3 - 2 format, and the 1975 Golden State - Washington and the 1978 and 1979 Seattle - Washington Finals were on a 1 - 2 - 2 - 1 - 1 basis). Also, prior to the 1980s, East and West playoffs were on an alternate - home basis except for those series in which distance made the 2 - 2 - 1 - 1 - 1 format more practical. Since 2014, the NBA Finals have returned to the 2 - 2 - 1 - 1 - 1 format.
Teams are seeded according to their regular - season record. Through the 2014 -- 15 season, the three division champions and best division runner - up received the top four seeds, with their ranking based on regular - season record. The remaining teams were seeded strictly by regular - season record. However, if the best division runner - up had a better record than other division champs, it could be seeded as high as second. Beginning in 2015 -- 16, the NBA became the first major American league to eliminate automatic playoff berths for division champions; the top eight teams overall in each conference now qualify for the playoffs, regardless of divisional alignment.
Top flight basketball leagues elsewhere also employ playoff systems mimicking the NBA 's. However, most leagues are not divided into divisions and conferences, instead employing a double round robin format akin to league association football, unlike the NBA, where the division of teams into divisions and conferences leads to different strengths of schedule per team. Teams are seeded on regular season record. The playoff structure can be single - elimination or a best - of series; if the playoffs are not held at a predetermined venue, the higher seed has home court advantage.
Aside from the playoffs, some leagues also have a knockout tournament akin to the FA Cup running in parallel to the regular season. These are not considered playoffs.
In the EuroLeague, the teams that advance from the regular season play a best - of - 5 playoff in a 2 -- 2 -- 1 format. From the semifinals on, however, it is a single elimination tournament held at a predetermined venue. Still other leagues also have a relegation playoff.
In NCAA Division I basketball conferences, a playoff or "postseason tournament '' is held after the regular season; these are held at a predetermined venue, and are single - elimination tournaments; higher seeds may be afforded byes. The winners, and some losers which are selected as "at - large bids '', play in the NCAA tournament, which is also single - elimination and held at predetermined venues.
In the WNBA Playoffs, the league 's best 8 teams, regardless of conference, compete, seeded by regular season record. The top two seeds get double byes and the next two seeds first - round byes. The first two rounds are one - off games, and the league semifinals and Finals are best - of - 5 on a 2 - 2 - 1 basis.
In the Canadian Football League, the playoffs begin in November. After the regular season, the top team from each division has an automatic home game berth in the Division Final, and a bye week during the Division Semifinal. The second - place team from each division hosts the third - place team in the Division Semifinal, unless the fourth - place team from the opposite division finishes with a better record. This "crossover rule '' does not come into play if the teams have identical records -- there are no tiebreakers. While the format means that it is possible for two teams in the same division to play for the Grey Cup, so far only one crossover team has won the divisional semifinal game. The winners of each Division 's Semifinal game then travel to play the first place teams in the Division Finals. Since 2005, the Division Semifinals and Division Finals have been sponsored by Scotiabank and are branded as the "Scotiabank East Championship '' and "Scotiabank West Championship ''. The two division champions then face each other in the Grey Cup game, which is held on the third or fourth Sunday of November.
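A minimal sketch of the crossover rule follows, assuming standard table points; the function name is invented for illustration:

```python
# Hedged sketch of the CFL crossover rule: the fourth-place team in one
# division takes the third-place team's playoff spot in the other
# division only with a strictly better record (there are no tiebreakers).
def semifinal_visitor(third_place_points, crossover_points):
    """Return which team visits the Division Semifinal."""
    if crossover_points > third_place_points:
        return 'crossover team from the other division'
    return 'third-place team in its own division'

print(semifinal_visitor(18, 20))  # crossover team from the other division
print(semifinal_visitor(18, 18))  # third-place team (ties do not cross over)
```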
The Edmonton Eskimos are notable for qualifying for the CFL playoffs every year from 1972 to 2005, a record in North American pro sports. The Eskimos are also notable for being the first crossover team to ever win the divisional semifinal game.
The National Hockey League playoff system is an elimination tournament competition for the Stanley Cup, consisting of four rounds of best - of - seven series. The first three rounds determine which team from each conference will advance to the final round, dubbed the Stanley Cup Final. The winner of that series becomes the NHL and Stanley Cup champion. The reigning Cup Champions are the Pittsburgh Penguins.
Since 2014, the Conference Quarterfinals have consisted of four match - ups in each conference, based on divisional seedings (# 1 vs. # 4, and # 2 vs. # 3). The division winner with the best record in the conference plays the lowest wild - card seed, while the other division winner plays the top wild - card seed (wild - card teams, which are de facto 4th seeds, may cross over to another division within the conference). In the Conference Semifinals, the four remaining teams in the conference face each other. In the third round, the Conference Finals, the two surviving teams play each other, with the conference champions proceeding to the Stanley Cup Final.
For the first two rounds, the higher - seeded team has home - ice advantage (regardless of point record). Thereafter, it goes to the team with the better regular season record. In all rounds the team with home - ice advantage hosts Games 1, 2, 5 and 7, while the opponent hosts Games 3, 4 and 6 (Games 5 -- 7 are played "if necessary '').
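The home - ice pattern can be expressed as a one - line rule; the following sketch is illustrative only:

```python
# Sketch of the 2-2-1-1-1 home-site pattern described above: the team
# with home-ice advantage hosts games 1, 2, 5 and 7.
def host(game_number, advantaged_team, other_team):
    return advantaged_team if game_number in (1, 2, 5, 7) else other_team

for g in range(1, 8):
    print(g, host(g, 'Home-ice team', 'Opponent'))
```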
In the United Kingdom, the Elite Ice Hockey League playoffs are an elimination tournament where the draw is based on the finishing position of teams in the league. Of the 10 teams which compete, the top 8 qualify for the playoffs. The first round (the quarter - finals) are played over two legs (home and away) where the team who finished in 1st place in the regular season plays the team which finished 8th, 2nd plays 7th and so on, with the aggregate score deciding which team progresses.
The semi-finals and final are held over the course of a single weekend at the National Ice Centre in Nottingham. Each consists of a single game with the losing team being eliminated, with the two semi-final games being played on the Saturday and the final on the Sunday. There is also a third - place game held earlier on the Sunday between the losing teams from the semi-finals. Unlike in the NHL, the winners of the Elite League playoffs are not considered to be the league champions for that season (that title goes to the team which finishes in first place in the league), rather the playoffs are considered to be a separate competition although being crowned playoff champions is a prestigious accolade nonetheless. The most recent playoff champions are the Sheffield Steelers.
NASCAR implemented a "playoff '' system beginning in 2004, which they coined the "Chase for the NEXTEL Cup ''. When first introduced, only NASCAR 's top series used the system. In the original version of the Chase (2004 -- 2006), following the 26th race of the season, all drivers in the top 10 and any others within 400 points of the leader got a spot in the 10 - race playoff. Like the current system, drivers in the Chase had their point totals adjusted. However, it was based on the number of points at the conclusion of the 26th race. The first - place driver in the standings led with 5,050 points; the second - place driver started with 5,045. Incremental five - point drops continued through 10th place with 5,005 points.
The first major change to the Chase was announced by NASCAR chairman and CEO Brian France on January 22, 2007. After 26 races, the top 12 drivers advanced to contend for the points championship and points were reset to 5000. Each driver within the top 12 received an additional 10 points for each win during the "regular season '', or first 26 races, thus creating a seeding based on wins. As in previous years, the Chase consisted of 10 races and the driver with the most points at the conclusion of the 10 races was the NEXTEL Cup Series Champion. Under the points system then in use, drivers could earn 5 bonus points for leading the most laps, and 5 bonus points for leading a single lap. Brian France explained why NASCAR made the changes to the chase:
"The adjustments taken (Monday) put a greater emphasis on winning races. Winning is what this sport is all about. Nobody likes to see drivers content to finish in the top 10. We want our sport -- especially during the Chase -- to be more about winning. ''
Beginning with the 2008 season, the playoff became known as the "Chase for the Sprint Cup '' due to the NEXTEL / Sprint merger.
The next format of the Chase was announced by France on January 26, 2011, along with several other changes, most significantly to the points system. After 26 races, 12 drivers still advanced to the Chase, but the qualifying criteria changed, as well as the number of base points that drivers received at the points reset.
Under this system, only the top 10 drivers in points automatically qualified for the Chase. They were joined by two "wild card '' qualifiers, specifically the two drivers ranked from 11th through 20th in points who had the most race wins (with tiebreakers used if needed to select exactly two qualifiers). These drivers then had their base points reset to 2,000 instead of the previous 5,000, reflecting the greatly reduced points available from each race (a maximum of 48 for the race winner, as opposed to a maximum of 195 in the pre-2011 system). After the reset, the 10 automatic qualifiers received 3 bonus points for each race win, while the wild card qualifiers received no bonus.
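As an arithmetic illustration of the two reset schemes described above (the function names are invented):

```python
# Illustrative arithmetic for the Chase points resets.
def chase_seed_2004(position):
    """2004-2006: the points leader was reset to 5,050, with 5-point
    drops per position through 10th place (5,005)."""
    return 5050 - 5 * (position - 1)

def chase_seed_2011(wins, wildcard=False):
    """2011-2013: base reset of 2,000, plus 3 bonus points per
    regular-season win for the ten automatic qualifiers only."""
    return 2000 if wildcard else 2000 + 3 * wins

print(chase_seed_2004(1), chase_seed_2004(10))              # 5050 5005
print(chase_seed_2011(4), chase_seed_2011(4, wildcard=True))  # 2012 2000
```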
On January 30, 2014, even more radical changes to the Chase were announced; these took effect for the 2014 season.
The Chase for the Sprint Cup has been generally panned since its inception, as many drivers and owners have criticized the declining importance of the first 26 races, as well as very little change in schedule from year to year. Mike Fisher, the director of the NASCAR Research and Development Center, has been one of the more vocal critics of the system, saying that "Due to NASCAR having the same competitors on the track week in, week out, a champion emerges. In stick - and - ball sports, every team has a different schedule, so head - to - head series are necessary to determine a champion. That does not apply to auto racing. ''
NASCAR extended the Chase format to its other two national touring series, the Xfinity Series and Camping World Truck Series, beginning in 2016. The formats used in the two lower series are broadly similar to the format used in the Cup Series, but have some significant differences.
Starting with the 2017 season, NASCAR abandoned the term "Chase '', instead calling its final series the "playoffs ''.
Play - offs are used to decide the premiers of the National Rugby League (NRL) in Australasia, where they are known as finals (also as semi finals or semis) -- as in Australian rules football, the participating teams only come from within a single division, and the tournament is staged as single matches rather than series. Currently, in the NRL, eight teams qualify for the finals; starting with the 2012 season, the system was changed from the McIntyre Final Eight to the same system used by the AFL.
Previously, the term play - off was used in the NSWRL competition to describe matches played as tie - breakers to determine qualification for the finals series. Since 1995, points differential has decided finals qualification, so play - offs are no longer held.
The European Super League rugby league competition has used a play - off system to decide its champion since 1998. The original play - off format featured the top five highest - ranked teams after the regular season rounds. Starting in 2002, the play - offs added an extra spot to allow the top six to qualify. With the addition of two new teams for the 2009 season, the play - offs expanded to eight teams. The next format, scrapped after the 2014 season, worked as follows:
Week One
Week Two
Week Three
Week Four
* Opponents decided by the QPO winner (in Week 1) that finished higher in the regular season
Beginning in 2015, the Super League season was radically reorganised, and more closely integrated with that of the second - level Championship. Following a home - and - away season of 22 matches, the top eight clubs in Super League now enter a single round - robin mini-league known as the Super 8s, with the top four teams after that stage entering a knockout play - off to determine the champion. The four bottom teams in Super League at the end of the home - and - away season are joined by the top four from the Championship after its home - and - away season. These eight teams play their own single - round - robin mini-league known as The Qualifiers; at its end, the top three teams are assured of places in the next season 's Super League, with the fourth - and fifth - place teams playing a single match billed as the "Million Pound Game '', with the winner also playing in Super League in the following season.
The two tiers directly below Super League, the Championship and League 1 (the latter of which was known as Championship 1 from 2009 -- 2014) -- formerly the National Leagues until the 2009 addition of a French club to the previously all - British competition -- used the old top six system to determine which teams were promoted between its levels through the 2014 season. After that season, both leagues abandoned the top six system. Before the 2008 season, when Super League established a franchising system and ended automatic promotion and relegation in Super League, the National Leagues also used this system to determine the team that earned promotion to Super League. The top six system involved the following:
Week One
Week Two
Week Three
Week Four
Since 2015, all clubs in Super League and the Championship play a 22 - match home - and - away season. Upon the end of the home - and - away season, the clubs will split into three leagues, with two of them including Championship clubs. As previously noted, the Super 8s will feature the top eight Super League sides. The second league, The Qualifiers, will include the bottom four clubs from Super League and the top four from the Championship, whilst the third will feature the remaining eight Championship sides. The bottom two leagues will begin as single round - robin tournaments. In The Qualifiers, the top three sides will either remain in or be promoted to Super League, with the fourth - and fifth - place teams playing the aforementioned "Million Pound Game '' for the final Super League place. In the third league, the sides compete for the Championship Shield, with the top four teams after the round - robin phase entering a knockout playoff for the Shield. The bottom two teams are relegated to League 1.
League 1 currently conducts a 15 - match, single round - robin regular season. At that time, the league splits in two. The top eight clubs play in their own Super 8s, also contested as a single round robin. At the end of the Super 8s, the top club earns the season title and immediate promotion to the Championship. The second - through fifth - place clubs contest a knockout playoff for the second place in the Championship. The bottom eight clubs play their own single round - robin phase; at its end, the top two teams play a one - off match for the League 1 Shield.
In the Aviva Premiership, the top four teams qualify for the play - offs, although they are not referred to by that name. The tournament is a Shaughnessy playoff: the team that finished first after the league stage plays the team that finished fourth, while the team that finished second plays the team that finished third in the semi-finals, with the higher - ranked team having home advantage. The winners of these semi-finals qualify for the Premiership Final at Twickenham, whose winner is crowned champion of the league.
Through the 2016 -- 17 season, the second - level RFU Championship also used play - offs; unlike the Premiership, the Championship officially used the term "play - offs ''. At the end of the league stage, the top teams advanced to a series of promotion play - offs. From the first season of the Championship in 2009 -- 10 to 2011 -- 12, the top eight teams advanced; since 2012 -- 13, the top four have advanced. A relegation play - off involving the bottom four teams existed through the 2011 -- 12 season, but was scrapped from 2012 -- 13 on.
The original promotion play - offs divided the eight teams into two groups of four each, with the teams within each group playing a home - and - away mini-league. The top two teams in each group advanced to a knockout phase. In 2010, the semi-finals were one - off matches; in 2011, they became two - legged. The top team in each pool played the second - place team from the other group in the semi-finals; the winners advanced to the two - legged final, where the ultimate winner earned promotion to the Premiership (assuming that the team met the minimum criteria for promotion).
In the first year of the play - offs in 2010, all eight teams started equal. After that season, it was decided to reward teams for their performance in league play. In 2011 and 2012, the top two teams at the end of the league stage carried over 3 competition points to the promotion play - offs; the next two teams carried over 2; the next two carried over 1; and the final two teams carried over none. (Points were earned using the standard bonus points system.)
The relegation play - offs, like the first stage of the promotion play - offs, were conducted as a home - and - away league, with the bottom team at the end of league play relegated to National League 1. As with the 2010 promotion play - offs, that season 's relegation play - offs started all teams equal. In 2011 and 2012, each team in the relegation play - offs carried over 1 competition point for every win in the league season.
Beginning with the 2012 -- 13 season, the pool stage of the promotion playoffs was abolished, with the top four sides directly entering the semi-finals. The format of the knockout stage remained unchanged from 2012, with two - legged semi-finals followed by a two - legged final. At the other end of the table, the bottom club is now automatically relegated.
Effective with the 2017 -- 18 season, the promotion play - offs will be scrapped for a minimum of three seasons, to be replaced with automatic promotion for the club finishing atop the league at the end of the home - and - away season (provided said club meets minimum Premiership standards).
The highest level of French rugby union, the Top 14, expanded its playoffs starting with the 2009 -- 10 season from a four - team format to six teams. In the new system, the top two teams after the double round - robin season receive first - round byes. The first - round matches involve the third - through sixth - place teams, bracketed so that 3 hosts 6 and 4 hosts 5. The winners then advance to face the top two teams in the semifinals, which are held at nominally neutral sites (a traditional feature of the French playoffs), although in the 2011 -- 12 season, the semifinals were held at Stadium de Toulouse, occasionally used as a "big - game '' venue by traditional Top 14 power Stade Toulousain. The winners of these semifinals qualify for the final at Stade de France (though in 2016, the final was at Camp Nou in Barcelona due to a conflict with UEFA Euro 2016), where the winner is crowned champion of the league and receives the Bouclier de Brennus. Before 2009 -- 10, the playoff format was identical to that of the English Premiership, with the exception of neutral sites for the semifinals.
Beginning in 2017 -- 18, only the bottom club is automatically relegated to Rugby Pro D2. The second - from - bottom Top 14 side plays a one - off match against the runner - up of the Pro D2 playoffs for the final place in the next Top 14 season.
Pro D2 adopted the Top 14 playoff system effective in 2017 -- 18, though with all matches held at the higher seed 's home field. The playoff champion earns automatic promotion; as noted above, the runner - up enters a one - off match for potential promotion to Top 14. Previously, Pro D2 used a four - team playoff, involving the second - through fifth - place teams, to determine the second of two teams promoted to the next season 's Top 14, with the regular - season champions earning automatic promotion. Under this system, the promotion semifinals were held at the home fields of the second - and third - place teams, and the promotion final was held at a neutral site.
The Pro14, originally known as the Celtic League and later as Pro12, adopted a four - team playoff starting with the 2009 -- 10 season. The format was essentially identical to that of the English Premiership. Through the 2013 -- 14 season, the final was held at a ground chosen by the top surviving seed, with the caveat that the venue must have a capacity of at least 18,000. In 2012 -- 13, top seed Ulster could not use its regular home ground of Ravenhill for that reason (the ground was later expanded to meet the requirement). The league changed to using a predetermined site for its championship final in 2014 -- 15.
With the addition of two South African sides in 2017 -- 18, the league split into two conferences and expanded its playoffs to six teams. The top team of each conference earns a bye into the semifinals, where they will host the winners of matches between the second - and third - place teams from the other conferences (with the second - place team hosting the third - place team from the opposite conference).
Both domestic competitions in New Zealand rugby -- the semi-professional Mitre 10 Cup (formerly Air New Zealand Cup and ITM Cup) and the nominally amateur Heartland Championship -- use a playoff system to determine their champions, although the term "playoff '' is also not used in New Zealand, with "finals '' used instead.
In the 2006 Air New Zealand Cup, the first season of the revamped domestic structure in that country, the top six teams after Round One of the competition automatically qualified for the finals, officially known as Round Three. Their relative seeding was determined by their standings at the end of the Top Six phase of Round Two. The teams that finished below the top six entered repechage pools in Round Two, with the winner of each pool taking up one of the final two finals slots. The seventh seed was the repechage winner with the better record, and the eighth seed was the other repechage winner.
From 2007 onward, the former Rounds One and Two were collapsed into a single pool phase of play in which all teams participated. In 2007 and 2008, the top eight teams advanced to the playoffs; in what was intended to be the final season of the Air New Zealand Cup format in 2009, the Shaughnessy format was used, with the top four advancing to the finals. The New Zealand Rugby Union (NZRU) ultimately decided to stay with the previous format for the rebranded 2010 ITM Cup, with the same four - team playoff as in 2009. Starting in 2011, the NZRU split the ITM Cup into two seven - team leagues, the top - level Premiership and second - level Championship, and instituted promotion and relegation in the ITM Cup (a feature of the country 's former National Provincial Championship). The competition was renamed the Mitre 10 Cup in 2016.
The playoffs in each season format have consisted of a single - elimination tournament. The teams are bracketed in the normal fashion, with the higher seed receiving home - field advantage. In 2007 and 2008, the playoff was rebracketed after the quarterfinals, with the highest surviving seed hosting the lowest surviving seed and the second - highest surviving seed hosting the third surviving seed. The winners of these semifinals qualify for the Cup Final (2006 -- 10) or Premiership / Championship Final (2011 --), held at the home ground of the higher surviving seed. From 2011 onward, the winner of the Championship Final is promoted to the Premiership, replacing that league 's bottom team.
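The rebracketing rule used in 2007 and 2008 can be sketched as follows (illustrative only, with lower numbers denoting higher seeds):

```python
# Hedged sketch of the 2007-08 rebracketing: after the quarter-finals,
# the highest surviving seed hosts the lowest, and the second-highest
# hosts the third.
def semifinal_pairings(survivors):
    s = sorted(survivors)                 # lower number = higher seed
    return [(s[0], s[3]), (s[1], s[2])]   # (host, visitor) pairs

print(semifinal_pairings([1, 4, 6, 7]))  # [(1, 7), (4, 6)]
```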
Because the 2011 season ran up against that year 's Rugby World Cup in New Zealand, the competition window was truncated, with only the top two teams in each division advancing to the final match. The Shaughnessy finals series returned to both divisions in 2012, and is currently used in non-World Cup years.
In the Heartland Championship, teams play for two distinct trophies -- the more prestigious Meads Cup and the Lochore Cup. The 12 Heartland Championship teams are divided into two pools for round - robin play in Round One, with the top three in each pool advancing to the Meads Cup and the bottom three dropping to the Lochore Cup.
Round Two in both the Meads and Lochore Cups is an abbreviated round - robin tournament, with each team playing only the teams it did not play in Round One. The top four teams in the Meads Cup pool at the end of Round Two advance to the Meads Cup semifinals; the same applies for the Lochore Cup contestants.
The semifinals of both cups are seeded 1 vs 4 and 2 vs 3, with the higher seeds earning home field advantage. The semifinal winners advance to their respective cup final, hosted by the higher surviving seed.
Throughout the pre-2011 history of Super Rugby -- both in the Super 12 and Super 14 formats -- the competition 's organiser, SANZAR (renamed SANZAAR in 2016), held a Shaughnessy playoff involving the top four teams. The top two teams on the league ladder each hosted a semifinal, with the top surviving team hosting the final.
In May 2009, SANZAR announced that it would adopt an expanded playoff when the competition added a new Australian team for the 2011 season. Through 2015, the Super Rugby playoff involved six teams -- the winners of each of three conferences (Australia, New Zealand and South Africa conferences), plus the three non-winners with the most competition points without regard to conference affiliation.
The top two conference winners received a first - round bye; each played at home against the winner of an elimination match involving two of the four other playoff teams. As in the previous system, the final was hosted by the top surviving seed.
Further expansion of the competition in 2016 to 18 teams, with one extra entry from South Africa and new teams based in Argentina and Japan, saw the playoff bracket expand to eight teams. The teams were split into African and Australasian groups, with the Argentine and Japanese teams joining the African group. Each group in turn was divided into two conferences (Australia, New Zealand, Africa 1, Africa 2). Conference winners received the top four playoff seeds, and were joined by the top three remaining Australasian teams and the top remaining team from the African group on table points, again without regard to conference affiliation. The higher seed still hosted all playoff matches, including the final.
With the contraction of the league to 15 teams for 2018, with one Australian and two South African teams being axed, the playoff format changed yet again. The number of conferences was reduced from four to three -- Australia, New Zealand and South Africa, with the Argentine team joining the South Africa conference and the Japanese team joining the Australia conference. The playoff will remain at eight teams, with the three conference winners joined by five "wildcards '', specifically the top remaining teams without regard to conference affiliation. The conference winners and the top wildcard will host quarterfinals, with all remaining matches hosted by the higher seed.
|
where is the best baker in america filmed | Best Baker in America - Wikipedia
Best Baker in America is an American cooking competition television series that airs on Food Network.
The first season of the series premiered on September 27, 2017, and was presented by Bon Appétit magazine editor Adam Rapoport, who also served as a judge alongside Food Network chefs Jason Smith and Marcela Valladolid. The second season premiered on May 7, 2018, with Rapoport replaced as host by chef Scott Conant and with a rotating lineup of special guest chefs serving as the third judge.
|
which of the following products is not a major export commodity for belgium | Economy of Argentina - Wikipedia
The economy of Argentina is an upper - middle income economy for fiscal year 2016 according to the World Bank. It is Latin America 's third largest economy, and the second largest in South America behind Brazil.
The country benefits from rich natural resources, a highly literate population, an export - oriented agricultural sector, and a diversified industrial base. Argentina 's economic performance has historically been very uneven: high economic growth alternated with severe recessions, particularly during the late twentieth century, while income maldistribution and poverty increased. Early in the twentieth century, Argentina had one of the ten highest per capita GDP levels in the world, at par with Canada and Australia and surpassing both France and Italy. Today (2018), Argentina is plagued by a currency crisis that has involved a potential bailout from the International Monetary Fund; the currency declined by 18 % over a period of 12 days in May 2018, to more than 25 Argentine pesos per U.S. dollar.
Argentina is considered a frontier market by the FTSE Global Equity Index (2018), and is one of the G - 20 major economies.
Prior to the 1880s, Argentina was a relatively isolated backwater, dependent on the salted meat, wool, leather, and hide industries for both the greater part of its foreign exchange and the generation of domestic income and profits. The Argentine economy began to experience swift growth after 1880 through the export of livestock and grain commodities, as well as through British and French investment, marking the beginning of a fifty - year era of significant economic expansion and mass European immigration.
During its most vigorous period, from 1880 to 1905, this expansion resulted in a 7.5-fold growth in GDP, averaging about 8 % annually. One important measure of development, GDP per capita, rose from 35 % of the United States average to about 80 % during that period. Growth then slowed considerably, such that by 1941 Argentina 's real per capita GDP was roughly half that of the U.S. Even so, from 1890 to 1950 the country 's per capita income was similar to that of Western Europe; although income in Argentina remained considerably less evenly distributed.
The Great Depression caused Argentine GDP to fall by a fourth between 1929 and 1932. Having recovered its lost ground by the late 1930s partly through import substitution, the economy continued to grow modestly during World War II (in contrast to the recession caused by the previous world war). The war led to a reduced availability of imports and higher prices for Argentine exports that combined to create a US $1.6 billion cumulative surplus, a third of which was blocked as inconvertible deposits in the Bank of England by the Roca -- Runciman Treaty. Benefiting from innovative self - financing and government loans alike, value added in manufacturing nevertheless surpassed that of agriculture for the first time in 1943, employed over 1 million by 1947, and allowed the need for imported consumer goods to decline from 40 % of the total to 10 % by 1950.
The populist administration of Juan Perón nationalized the Central Bank, railways, and other strategic industries and services from 1945 to 1955. The subsequent enactment of developmentalism after 1958, though partial, was followed by a promising fifteen years. Inflation first became a chronic problem during this period (it averaged 26 % annually from 1944 to 1974); but though it did not become fully "developed, '' from 1932 to 1974 Argentina 's economy grew almost fivefold (or 3.8 % in annual terms) while its population only doubled. While unremarkable, this expansion was well - distributed and so resulted in several noteworthy changes in Argentine society - most notably the development of the largest proportional middle class (40 % of the population by the 1960s) in Latin America as well as the region 's highest - paid, most unionized working class.
The economy, however, declined during the military dictatorship from 1976 to 1983, and for some time afterwards. The dictatorship 's chief economist, José Alfredo Martínez de Hoz, advanced a corrupt, anti-labor policy of financial liberalization that increased the debt burden and interrupted industrial development and upward social mobility. Over 400,000 companies of all sizes went bankrupt by 1982, and neoliberal economic policies prevailing from 1983 through 2001 failed to reverse the situation.
Record foreign debt interest payments, tax evasion, and capital flight resulted in a balance of payments crisis that plagued Argentina with severe stagflation from 1975 to 1990, including a bout of hyperinflation in 1989 and 1990. Attempting to remedy this situation, economist Domingo Cavallo pegged the peso to the U.S. dollar in 1991 and limited the growth in the money supply. His team then embarked on a path of trade liberalization, deregulation, and privatization. Inflation dropped to single digits and GDP grew by one third in four years.
External economic shocks, as well as a dependency on volatile short - term capital and debt to maintain the overvalued fixed exchange rate, diluted benefits, causing erratic economic growth from 1995 and the eventual collapse in 2001. That year and the next, the economy suffered its sharpest decline since 1930; by 2002, Argentina had defaulted on its debt, its GDP had declined by nearly 20 % in four years, unemployment reached 25 %, and the peso had depreciated 70 % after being devalued and floated.
Argentina 's socio - economic situation has since been steadily improving. Expansionary policies and commodity exports triggered a rebound in GDP from 2003 onward. This trend has been largely maintained, creating over five million jobs and encouraging domestic consumption and fixed investment. Social programs were strengthened, and a number of important firms privatized during the 1990s were renationalized beginning in 2003. These include the postal service, AySA (the water utility serving Buenos Aires), Pension funds (transferred to ANSES), Aerolíneas Argentinas, the energy firm YPF, and the railways.
The economy nearly doubled from 2002 to 2011, growing an average of 7.1 % annually and around 9 % for five consecutive years between 2003 and 2007. Real wages rose by around 72 % from their low point in 2003 to 2013. The global recession did affect the economy in 2009, with growth slowing to nearly zero; but high economic growth then resumed, and GDP expanded by around 9 % in both 2010 and 2011. Foreign exchange controls, austerity measures, persistent inflation, and downturns in Brazil, Europe, and other important trade partners, contributed to slower growth beginning in 2012, however. Growth averaged just 1.3 % from 2012 to 2014, and rose to 2.4 % in 2015.
The Argentine government bond market is based on GDP - linked bonds, and investors, both foreign and domestic, netted record yields amid renewed growth. Argentine debt restructuring offers in 2005 and 2010 resumed payments on the majority of its almost US $100 billion in defaulted bonds and other debt from 2001.
Holdouts controlling 7 % of the bonds, including some small investors, hedge funds, and vulture funds led by Paul Singer 's Cayman Islands - based NML Capital Limited, rejected the 2005 and 2010 offers to exchange their defaulted bonds. Singer, who demanded US $832 million for Argentine bonds purchased for US $49 million in the secondary market in 2008, attempted to seize Argentine government assets abroad and sued to stop payments from Argentina to the 93 % who had accepted the earlier swaps despite the steep discount. Bondholders who instead accepted the 2005 offer of 30 cents on the dollar, had by 2012 received returns of about 90 % according to estimates by Morgan Stanley. Argentina settled with virtually all holdouts in February 2016 at a cost of US $9.3 billion; NML received US $2.4 billion, a 392 % return on the original value of the bonds.
While the Argentine Government considers debt left over from illegitimate governments unconstitutional odious debt, it has continued servicing this debt despite the annual cost of around US $14 billion and despite being nearly locked out of international credit markets with annual bond issues since 2002 averaging less than US $2 billion (which precludes most debt roll over).
Argentina has nevertheless continued to hold successful bond issues, as the country 's stock market, consumer confidence, and overall economy continue to grow. The country 's successful, US $16.5 billion bond sale in April 2016 was the largest in emerging market history.
In May 2018, the Argentine government asked the International Monetary Fund for intervention in the form of an emergency loan for a thirty billion dollar bailout, as reported by Bloomberg.
In May 2018, official estimated inflation peaked at 25 percent a year, and on May 4 Argentina 's central bank raised interest rates on pesos from 27.25 percent to 40 percent, the highest rate in the world, after the national currency had lost 18 % of its value since the beginning of the year.
In the general context of rising interest rates on US Treasury bonds and a strengthening US dollar relative to other currencies, this crisis is believed likely to have a significant medium - term impact on other emerging economies, such as Brazil, Mexico, India, China and Indonesia.
Argentina is one of the world 's major agricultural producers, ranking among the top producers and exporters of beef, citrus fruit, grapes, honey, maize, sorghum, soybeans, squash, sunflower seeds, wheat, and yerba mate. Agriculture accounted for 9 % of GDP in 2010, and around one fifth of all exports (not including processed food and feed, which are another third). Commercial harvests reached 103 million tons in 2010, of which over 54 million were oilseeds (mainly soy and sunflower), and over 46 million were cereals (mainly maize, wheat, and sorghum).
Soy and its byproducts, mainly animal feed and vegetable oils, are major export commodities with one fourth of the total; cereals added another 10 %. Cattle - raising is also a major industry, though mostly for domestic consumption; beef, leather and dairy were 5 % of total exports. Sheep - raising and wool are important in Patagonia, though these activities have declined by half since 1990. Biodiesel, however, has become one of the fastest growing agro-industrial activities, with over US $2 billion in exports in 2011.
Fruits and vegetables made up 4 % of exports: apples and pears in the Río Negro valley; rice, oranges and other citrus in the northwest and Mesopotamia; grapes and strawberries in Cuyo (the west), and berries in the far south. Cotton and tobacco are major crops in the Gran Chaco, sugarcane and chile peppers in the northwest, and olives and garlic in the west. Yerba mate tea (Misiones), tomatoes (Salta) and peaches (Mendoza) are grown for domestic consumption. Organic farming is growing in Argentina, and the nearly 3 million hectares (7.5 million acres) of organic cultivation are second only to Australia 's. Argentina is the world 's fifth - largest wine producer, and fine wine production has taken major leaps in quality. A growing export, total viticulture potential is far from having been met. Mendoza is the largest wine region, followed by San Juan.
Government policy towards the lucrative agrarian sector is a subject of, at times, contentious debate in Argentina. A grain embargo by farmers protesting an increase in export taxes for their products began in March 2008, and, following a series of failed negotiations, the strikes and lockouts largely subsided only with the defeat of the export tax hike in the Senate on 16 July.
Argentine fisheries bring in about a million tons of catch annually, and are centered on Argentine hake, which makes up 50 % of the catch; pollock, squid, and centolla crab are also widely harvested. Forestry has a long history in every Argentine region apart from the pampas, accounting for almost 14 million m3 of roundwood harvests. Eucalyptus, pine, and elm (for cellulose) are also grown, mainly for domestic furniture, as well as paper products (1.5 million tons). Fisheries and logging each account for 2 % of exports.
Mining and other extractive activities, such as gas and petroleum, are growing industries, increasing from 2 % of GDP in 1980 to around 4 % today. The northwest and San Juan Province are the main regions of activity. Coal is mined in Santa Cruz Province. Metals and minerals mined include borate, copper, lead, magnesium, sulfur, tungsten, uranium, zinc, silver, titanium, and gold, whose production was boosted after 1997 by the Bajo de Alumbrera mine in Catamarca Province and Barrick Gold investments a decade later in San Juan. Metal ore exports soared from US $200 million in 1996 to US $1.2 billion in 2004, and to over US $3 billion in 2010.
Around 35 million m3 each of petroleum and petroleum fuels are produced, as well as 50 billion m3 of natural gas, making the nation self - sufficient in these staples, and generating around 10 % of exports. The most important oil fields lie in Patagonia and Cuyo. A network of pipelines (next to Mexico 's, the second - longest in Latin America) sends raw product to Bahía Blanca, center of the petrochemical industry, and to the La Plata - Greater Buenos Aires - Rosario industrial belt.
Manufacturing is the largest single sector in the nation 's economy (15 % of GDP), and is well - integrated into Argentine agriculture, with half the nation 's industrial exports being agricultural in nature. Based on food processing and textiles during its early development in the first half of the 20th century, industrial production has become highly diversified in Argentina. Leading sectors by production value are: food processing and beverages; motor vehicles and auto parts; refinery products and biodiesel; chemicals and pharmaceuticals; steel and aluminium; industrial and farm machinery; and electronics and home appliances. The latter include over three million big - ticket items, as well as an array of electronics, kitchen appliances and cellular phones, among others.
Argentina 's auto industry produced 791,000 motor vehicles in 2013, and exported 433,000 (mainly to Brazil, which in turn exported a somewhat larger number to Argentina); Argentina 's domestic new auto market reached a record 964,000 in 2013. Beverages are another significant sector, and Argentina has long been among the top five wine producing countries in the world; beer overtook wine production in 2000, and today leads by nearly two billion liters a year to one. Other manufactured goods include: glass and cement; plastics and tires; lumber products; textiles; tobacco products; recording and print media; furniture; apparel and leather.
Most manufacturing is organized in the 314 industrial parks operating nationwide as of 2012, a fourfold increase over the past decade. Nearly half the industries are based in the Greater Buenos Aires area, although Córdoba, Rosario, and Ushuaia are also significant industrial centers; the latter city became the nation 's leading center of electronics production during the 1980s. The production of computers, laptops, and servers grew by 160 % in 2011, to nearly 3.4 million units, and covered two - thirds of local demand. Argentina has also become an important manufacturer of cell phones, providing about 80 % of all devices sold in the country. Another important rubric historically dominated by imports -- farm machinery -- was similarly replaced by domestic production, which covered 60 % of demand by 2013. Production of cell phones, computers, and similar products is actually an "assembly '' industry, with the majority of the higher - technology components being imported and the designs of products originating from foreign countries. High labour costs for Argentine assembly work tend to limit product sales penetration to Latin America, where regional trade treaties exist.
Construction permits nationwide covered over 15 million m2 (160 million ft2) in 2013. The construction sector accounts for over 5 % of GDP, and two - thirds of construction is for residential buildings.
Argentine electric output totaled over 133 billion kWh in 2013, generated in large part from well - developed natural gas and hydroelectric resources. Nuclear energy is also of high importance, and the country is one of the largest producers and exporters, alongside Canada and Russia, of cobalt - 60, a radioactive isotope widely used in cancer therapy.
The service sector is the largest contributor to total GDP, accounting for over 60 %. Argentina enjoys a diversified service sector, which includes well - developed social, corporate, financial, insurance, real estate, transport, communication services, and tourism.
The telecommunications sector has been growing at a fast pace, and the economy benefits from widespread access to communications services. These include: 77 % of the population with access to mobile phones, 95 % of whom use smartphones; Internet (over 32 million users, or 75 % of the population); and broadband services (accounting for nearly all 14 million accounts). Regular telephone services, with 9.5 million lines, and mail services are also robust. Total telecom revenues reached more than US $17.8 billion in 2013, and while only one in three retail stores in Argentina accepted online purchases in 2013, e-commerce reached US $4.5 billion in sales.
Trade in services remained in deficit, however, with US $15 billion in service exports in 2013 and US $19 billion in imports. Business Process Outsourcing became the leading Argentine service export, and reached US $3 billion. Advertising revenues from contracts abroad were estimated at over US $1.2 billion.
Tourism is an increasingly important sector and provided 4 % of direct economic output (over US $17 billion) in 2012; around 70 % of tourism sector activity by value is domestic.
Argentine banking, whose deposits exceeded US $120 billion in December 2012, developed around public sector banks, but is now dominated by the private sector. The private sector banks account for most of the 80 active institutions (over 4,000 branches) and hold nearly 60 % of deposits and loans, and as many foreign - owned banks as local ones operate in the country. The largest bank in Argentina by far, however, has long been the public Banco de la Nación Argentina. Not to be confused with the Central Bank, this institution now accounts for 30 % of total deposits and a fifth of the banking system 's loan portfolio.
During the 1990s, Argentina 's financial system was consolidated and strengthened. Deposits grew from less than US $15 billion in 1991 to over US $80 billion in 2000, while outstanding credit (70 % of it to the private sector) tripled to nearly US $100 billion.
The banks largely lent US dollars and took deposits in Argentine pesos, and when the peso lost most of its value in early 2002, many borrowers again found themselves hard pressed to keep up. Delinquencies tripled to about 37 %. Over a fifth of deposits had been withdrawn by December 2001, when Economy Minister Domingo Cavallo imposed a near freeze on cash withdrawals. The lifting of the restriction a year later was bittersweet: it was greeted calmly, if with some umbrage that the funds had not been freed at their full U.S. dollar value. Some fared worse, as owners of the now - defunct Velox Bank defrauded their clients of up to US $800 million.
Credit in Argentina is still relatively tight. Lending has been increasing 40 % a year since 2004, and delinquencies are down to less than 2 %. Still, credit outstanding to the private sector is, in real terms, slightly below its 1998 peak, and as a percent of GDP (around 18 %) quite low by international standards. The prime rate, which had hovered around 10 % in the 1990s, hit 67 % in 2002. Although it returned to normal levels quickly, inflation, and more recently, global instability, have been affecting it again. The prime rate was over 20 % for much of 2009, and around 17 % since the first half of 2010.
Partly a function of this and past instability, Argentines have historically held more deposits overseas than domestically. The estimated US $173 billion in overseas accounts and investment exceeded the domestic monetary base (M3) by nearly US $10 billion in 2012.
According to World Economic Forum 's 2017 Travel & Tourism Competitiveness Report, tourism generated over US $22 billion, or 3.9 % of GDP, and the industry employed more than 671,000 people, or approximately 3.7 % of the total workforce. Tourism from abroad contributed US $5.3 billion, having become the third largest source of foreign exchange in 2004. Around 5.7 million foreign visitors arrived in 2017, reflecting a doubling in visitors since 2002 despite a relative appreciation of the peso.
Argentines, who have long been active travelers within their own country, accounted for over 80 %, and international tourism has also seen healthy growth (nearly doubling since 2001). Stagnant for over two decades, domestic travel increased strongly in the last few years, and visitors are flocking to a country seen as affordable, exceptionally diverse, and safe.
Foreign tourism, both to and from Argentina, is increasing as well. INDEC recorded 5.2 million foreign tourist arrivals and 6.7 million departures in 2013; of these, 32 % arrived from Brazil, 19 % from Europe, 10 % from the United States and Canada, 10 % from Chile, 24 % from the rest of the Western Hemisphere, and 5 % from the rest of the world. Around 48 % of visitors arrived by commercial flight, 40 % by motor travel (mainly from neighboring Brazil), and 12 % by sea. Cruise liner arrivals are the fastest growing type of foreign tourism to Argentina; a total of 160 liners carrying 510,000 passengers arrived at the Port of Buenos Aires in 2013, an eightfold increase in just a decade.
Electricity generation in Argentina totaled 133.3 billion kWh in 2013. The electricity sector in Argentina constitutes the third largest power market in Latin America. It still relies mainly on centralised generation from natural gas (51 %), hydroelectricity (28 %), and oil - fired plants (12 %). Reserves of shale gas and oil in the Vaca Muerta oil field and elsewhere are estimated to be the world 's third - largest.
Despite the country 's large untapped wind and solar potential, new renewable energy technologies and distributed energy generation are barely exploited. Wind energy is the fastest growing among new renewable sources. Fifteen wind farms have been developed since 1994 in Argentina, the only country in the region to produce wind turbines. The 55 MW of installed capacity in these farms in 2010 will increase by 895 MW upon the completion of new wind farms begun that year. Solar power is being promoted with the goal of expanding installed solar capacity from 6 MW to 300 MW, and total renewable energy capacity from 625 MW to 3,000 MW.
Argentina is in the process of commissioning large centralised energy generation and transmission projects. An important number of these projects are being financed by the government through trust funds, while independent private initiative is limited as it has not fully recovered yet from the effects of the Argentine economic crisis.
The first of the country 's three nuclear reactors was inaugurated in 1974, and in 2015 nuclear power generated 5 % of the country 's energy output.
The electricity sector was unbundled into generation, transmission and distribution by the reforms carried out in the early 1990s. Generation occurs in a competitive and mostly liberalized market in which 75 % of the generation capacity is owned by private utilities. In contrast, the transmission and distribution sectors are highly regulated and much less competitive than generation.
Argentina 's transport infrastructure is relatively advanced, and at a higher standard than the rest of Latin America. There are over 230,000 km (144,000 mi) of roads (not including private rural roads) of which 72,000 km (45,000 mi) are paved, and 2,800 kilometres (1,700 mi) are expressways, many of which are privatized tollways. Having tripled in length in the last decade, multilane expressways now connect several major cities with more under construction. Expressways are, however, currently inadequate to deal with local traffic, as over 12 million motor vehicles were registered nationally as of 2012 (the highest, proportionately, in the region).
The railway network has a total length of 37,856 kilometres (23,523 mi), though at the network 's peak this figure was 47,000 km (29,204 mi). After decades of declining service and inadequate maintenance, most intercity passenger services shut down in 1992 following the privatization of the country 's railways and the breaking up of the state rail company, while thousands of kilometers fell into disuse. Outside Greater Buenos Aires most rail lines still in operation are freight related, carrying around 23 million tons a year. The metropolitan rail lines in and around Buenos Aires remained in great demand owing in part to their easy access to the Buenos Aires Underground, and the commuter rail network with its 833 kilometres (518 mi) length carries around 1.4 million passengers daily.
In April 2015, by overwhelming majority the Argentine Senate passed a law which re-created Ferrocarriles Argentinos as Nuevos Ferrocarriles Argentinos, effectively re-nationalising the country 's railways. In the years leading up to this move, the country 's railways had seen significant investment from the state, purchasing new rolling stock, re-opening lines closed under privatization and re-nationalising companies such as the Belgrano Cargas freight operator. Some of these re-opened services include the General Roca Railway service to Mar del Plata, the Tren a las Nubes tourist train and the General Mitre Railway service from Buenos Aires to Córdoba, while brand new services include the Posadas - Encarnación International Train.
Inaugurated in 1913, the Buenos Aires Underground was the first underground rail system built in Latin America, the Spanish - speaking world and the Southern Hemisphere. No longer the most extensive in South America, its 60 kilometres (37 mi) of track carry a million passengers daily.
Argentina has around 11,000 km (6,835 mi) of navigable waterways, and these carry more cargo than do the country 's freight railways. This includes an extensive network of canals, though Argentina is blessed with ample natural waterways as well, the most significant among these being the Río de la Plata, Paraná, Uruguay, Río Negro, and Paraguay rivers. The Port of Buenos Aires, inaugurated in 1925, is the nation 's largest; it handled 11 million tons of freight and transported 1.8 million passengers in 2013.
Aerolíneas Argentinas is the country 's main airline, providing both extensive domestic and international service. Austral Líneas Aéreas is Aerolíneas Argentinas ' subsidiary, with a route system that covers almost all of the country. LADE is a military - run commercial airline that flies extensive domestic services. The nation 's 33 airports handled air travel totalling 25.8 million passengers in 2013, of which domestic flights carried over 14.5 million; the nation 's two busiest airports, Jorge Newbery and Ministro Pistarini International Airports, boarded around 9 million passengers each.
Argentine exports are fairly well diversified. However, although agricultural raw materials are over 20 % of the total exports, agricultural goods still account for over 50 % of exports when processed foods are included. Soy products alone (soybeans, vegetable oil) account for almost one fourth of the total. Cereals, mostly maize and wheat, which were Argentina 's leading export during much of the twentieth century, make up less than one tenth now.
Industrial goods today account for over a third of Argentine exports. Motor vehicles and auto parts are the leading industrial export, and over 12 % of the total merchandise exports. Chemicals, steel, aluminum, machinery, and plastics account for most of the remaining industrial exports. Trade in manufactures has historically been in deficit for Argentina, however, and despite the nation 's overall trade surplus, its manufacturing trade deficit exceeded US $30 billion in 2011. Accordingly, the system of non-automatic import licensing was extended in 2011, and regulations were enacted for the auto sector establishing a model by which a company 's future imports would be determined by their exports (though not necessarily in the same rubric).
Argentina was a net energy importer until 1987; its fuel exports began increasing rapidly in the early 1990s and today account for about an eighth of the total, with refined fuels making up about half of that. Exports of crude petroleum and natural gas have recently been around US $3 billion a year. Rapidly growing domestic energy demand and a gradual decline in oil production resulted in a US $3 billion energy trade deficit in 2011 (the first in 17 years) and a US $6 billion energy deficit in 2013.
Argentine imports have historically been dominated by the need for industrial and technological supplies, machinery, and parts, which have averaged US $50 billion since 2011 (two - thirds of total imports). Consumer goods including motor vehicles make up most of the rest. Trade in services has historically been in deficit for Argentina, and in 2013 this deficit widened to over US $4 billion with a record US $19 billion in service imports. The nation 's chronic current account deficit was reversed during the 2002 crisis, and an average current account surplus of US $7 billion was logged between 2002 and 2009; this surplus later narrowed considerably, and has been slightly negative since 2011.
Foreign direct investment in Argentina is divided nearly evenly between manufacturing (36 %), natural resources (34 %), and services (30 %). The chemical and plastics sector (10 %) and the automotive sector (6 %) lead foreign investment in local manufacturing; oil and gas (22 %) and mining (5 %), in natural resources; telecommunications (6 %), finance (5 %), and retail trade (4 %), in services. Spain was the leading source of foreign direct investment in Argentina, accounting for US $22 billion (28 %) in 2009; the U.S. was the second leading source, with $13 billion (17 %); and China grew to become the third - largest source of FDI by 2011. Investments from the Netherlands, Brazil, Chile, and Canada have also been significant; in 2012, foreign nationals held a total of around US $112 billion in direct investment.
Several bilateral agreements play an important role in promoting U.S. private investment. Argentina has an Overseas Private Investment Corporation (OPIC) agreement and an active program with the U.S. Export - Import Bank. Under the 1994 U.S. -- Argentina Bilateral Investment Treaty, U.S. investors enjoy national treatment in all sectors except shipbuilding, fishing, nuclear - power generation, and uranium production. The treaty allows for international arbitration of investment disputes.
Foreign direct investment (FDI) in Argentina, which averaged US $5.7 billion from 1992 to 1998 and reached US $24 billion in 1999 (reflecting the purchase of 98 % of YPF stock by Repsol), fell during the crisis to US $1.6 billion in 2003. FDI then accelerated, reaching US $8 billion in 2008. The global crisis cut this figure to US $4 billion in 2009; but inflows recovered to US $6.2 billion in 2010 and US $8.7 billion in 2011, with FDI in the first half of 2012 up by a further 42 %.
FDI volume remained below the regional average as a percent of GDP even as it recovered, however; Kirchner Administration policies and difficulty in enforcing contractual obligations had been blamed for this modest performance. The nature of foreign investment in Argentina nevertheless shifted significantly after 2000, and whereas over half of FDI during the 1990s consisted of privatizations and mergers and acquisitions, foreign investment in Argentina became the most technologically oriented in the region -- with 51 % of FDI in the form of medium and high - tech investment (compared to 36 % in Brazil and 3 % in Chile).
The economy recovered strongly from the 2001 -- 02 crisis, and was the 21st largest in purchasing power parity terms in 2011; its per capita income on a purchasing power basis was the highest in Latin America. A lobby representing US creditors who refused to accept Argentina 's debt - swap programmes has campaigned to have the country expelled from the G20. These holdouts include numerous vulture funds which had rejected the 2005 offer, and had instead resorted to the courts in a bid for higher returns on their defaulted bonds. These disputes had led to a number of liens against central bank accounts in New York and, indirectly, to reduced Argentine access to international credit markets.
The government under President Mauricio Macri announced that it was seeking a new loan from the IMF in order to avoid another economic crash similar to the one in 2001. The May 2018 announcement came at a time of high inflation and rising interest rates. The loan would reportedly be worth $30 billion.
Following 25 years of boom and bust stagnation, Argentina 's economy doubled in size from 2002 to 2013, and officially, income poverty declined from 54 % in 2002 to 5 % by 2013; an alternative measurement conducted by CONICET found that income poverty declined instead to 15.4 %. Poverty measured by living conditions improved more slowly, however, decreasing from 17.7 % in the 2001 Census to 12.5 % in the 2010 Census. Argentina 's unemployment rate similarly declined from 25 % in 2002 to an average of around 7 % since 2011 largely because of both growing global demand for Argentine commodities and strong growth in domestic activity.
Given its ongoing dispute with holdout bondholders, the government has become wary of sending assets to foreign countries (such as the presidential plane, or artworks sent to foreign exhibitions) in case they might be impounded by courts at the behest of holdouts.
The government has been accused of manipulating economic statistics.
Official CPI inflation figures released monthly by INDEC have been a subject of political controversy since 2007. Official inflation data are disregarded by leading union leaders, even in the public sector, when negotiating pay rises. Some private - sector estimates put inflation for 2010 at around 25 %, much higher than the official 10.9 % rate for 2010. Inflation estimates from Argentina 's provinces are also higher than the government 's figures. The government stands by the validity of its data, but has called in the International Monetary Fund to help it design a new nationwide index to replace the current one.
The official government CPI is calculated based on 520 products; however, the controversy arises from the fact that these products are not specified, and thus it is unclear how many of them are subject to price caps and subsidies. Economic analysts have been prosecuted for publishing estimates that disagree with official statistics. The government enforces a fine of up to 500,000 pesos for providing what it calls "fraudulent inflation figures ''. Beginning in 2015, the government again began to call for competitive bids from the private sector to provide a weekly independent inflation index.
High inflation has been a weakness of the Argentine economy for decades. Inflation has been unofficially estimated to be running at around 25 % annually since 2008, despite official statistics indicating less than half that figure; these would be the highest levels since the 2002 devaluation. A committee was established in 2010 in the Argentine Chamber of Deputies by opposition Deputies Patricia Bullrich, Ricardo Gil Lavedra, and others to publish an alternative index based on private estimates. Food price increases, particularly that of beef, began to outstrip wage increases in 2010, leading Argentines to decrease beef consumption per capita from 69 kg (152 lb) to 57 kg (125 lb) annually and to increase consumption of other meats.
Consumer inflation expectations of 28 to 30 % led the national mint to buy banknotes of its highest denomination (100 pesos) from Brazil at the end of 2010 to keep up with demand. The central bank pumped at least 1 billion pesos into the economy in this way during 2011.
As of June 2015, the government said that inflation was at 15.3 %, approximately half that of some independent estimates. Inflation remained at around 18.6 % in 2015 according to an IMF estimate; but following a sharp devaluation enacted by the Mauricio Macri administration on December 17, inflation reignited during the first half of 2016, reaching 42 % according to the Finance Ministry.
Supermarkets in Argentina have adopted electronic price tags, allowing prices to be updated more quickly.
In relation to other Latin American countries, Argentina has a moderate to low level of income inequality. Its Gini coefficient of about 0.427 (2014) is reported to be the lowest among Latin American countries. The social gap is worst in the suburbs of the capital, where beneficiaries of the economic rebound live in gated communities, and many of the poor (particularly undocumented immigrants) live in slums known as villas miserias.
In the mid-1970s, the most affluent 10 % of Argentina 's population had an income 12 times that of the poorest 10 %. That figure had grown to 18 times by the mid-1990s, and by 2002, the peak of the crisis, the income of the richest segment of the population was 43 times that of the poorest. These heightened levels of inequality had improved to 26 times by 2006, and to 16 times at the end of 2010. Economic recovery after 2002 was thus accompanied by significant improvement in income distribution: in 2002, the richest 10 % absorbed 40 % of all income, compared to 1.1 % for the poorest 10 %; by 2010, the former received 29 % of income, and the latter, 1.8 %.
Argentina has an inequality - adjusted human development index of 0.680, compared to 0.542 and 0.661 for neighboring Brazil and Chile, respectively. The 2010 Census found that poverty by living conditions still affects 1 in 8 inhabitants, however; and while the official, household survey income poverty rate (based on US $100 per person per month, net) was 4.7 % in 2013, the National Research Council estimated income poverty in 2010 at 22.6 %, with private consulting firms estimating that in 2011 around 21 % fell below the income poverty line. The World Bank estimated that, in 2013, 3.6 % subsisted on less than US $3.10 per person per day.
|
why are there only 5 continents in the olympics | Olympic symbols - wikipedia
The Olympic symbols are icons, flags and symbols used by the International Olympic Committee (IOC) to elevate the Olympic Games. Some -- such as the flame, fanfare, and theme -- are more commonly used during Olympic competition, but others, such as the flags, can be seen throughout the years.
The Olympic motto is the hendiatris Citius, Altius, Fortius, which is Latin for "Faster, Higher, Stronger ''. It was proposed by Pierre de Coubertin upon the creation of the International Olympic Committee in 1894. Coubertin borrowed it from his friend Henri Didon, a Dominican priest who was an athletics enthusiast. Coubertin said "These three words represent a programme of moral beauty. The aesthetics of sport are intangible. '' The motto was introduced in 1924 at the Olympic Games in Paris. A more informal but well - known motto, also introduced by Coubertin, is "The most important thing is not to win but to take part! '' Coubertin got this motto from a sermon by the Bishop of Pennsylvania during the 1908 London Games.
The rings are five interlocking rings, coloured blue, yellow, black, green, and red on a white field, known as the "Olympic rings ''. The symbol was originally designed in 1912 by de Coubertin. He appears to have intended the rings to represent the five participating regions: Africa, Asia, America, Oceania and Europe. According to Coubertin, the colours of the rings together with the white of the background included the colours composing every competing nation 's flag at the time. Upon its initial introduction, Coubertin stated the following in the August 1912 edition of Olympique:
... the six colours (including the flag 's white background) combined in this way reproduce the colours of every country without exception. The blue and yellow of Sweden, the blue and white of Greece, the tricolour flags of France, England, the United States, Germany, Belgium, Italy and Hungary, and the yellow and red of Spain are included, as are the innovative flags of Brazil and Australia, and those of ancient Japan and modern China. This, truly, is an international emblem.
In his article published in the Olympic Revue, the official magazine of the International Olympic Committee, in November 1992, the American historian Robert Barney explains that the idea of the interlaced rings came to Pierre de Coubertin when he was in charge of the USFSA, an association founded by the union of two French sports associations and, until 1925, responsible for representing the International Olympic Committee in France: the emblem of the union was two interlaced rings (like the vesica piscis of typical interlaced marriage rings), originally the idea of Swiss psychiatrist Carl Jung: for him, the ring symbolized continuity and the human being.
The 1914 Congress was suspended due to the outbreak of World War I, but the symbol and flag were later adopted. They would first officially debut at the Games of the VII Olympiad in Antwerp, Belgium, in 1920.
The symbol 's popularity and widespread use began during the lead - up to the 1936 Summer Olympics in Berlin. Carl Diem, president of the Organizing Committee of the 1936 Summer Olympics, wanted to hold a torchbearers ' ceremony in the stadium at Delphi, site of the famous oracle, where the Pythian Games were also held. For this reason he ordered construction of a milestone with the Olympic rings carved in the sides, and that a torchbearer should carry the flame along with an escort of three others from there to Berlin. The ceremony was celebrated but the stone was never removed. Later, two American authors, Lynn and Gray Poole, when visiting Delphi in the late 1950s, saw the stone and reported in their History of the Ancient Games that the Olympic rings design came from ancient Greece. This has become known as "Carl Diem 's Stone ''. This created a myth that the symbol had an ancient Greek origin.
The current view of the International Olympic Committee (IOC) is that the symbol "reinforces the idea '' that the Olympic Movement is international and welcomes all countries of the world to join. As can be read in the Olympic Charter, the Olympic symbol represents the union of the "five continents '' of the world and the meeting of athletes from throughout the world at the Olympic Games. However, no continent is represented by any specific ring. Prior to 1951, the official handbook stated that each colour corresponded to a particular continent: blue for Europe, yellow for Asia, black for Africa, green for Australia and Oceania and red for the Americas; this was removed because there was no evidence that Coubertin had intended it (the quotation above was probably an afterthought). Nevertheless, the logo of the Association of National Olympic Committees places the logo of each of its five continental associations inside the ring of the corresponding colour.
The Olympic flag was created by Pierre de Coubertin in 1913.
The Olympic flag has a white background, with five interlaced rings in the centre: blue, yellow, black, green and red. This design is symbolic; it represents the five continents of the world, united by Olympism, while the six colours are those that appear on all the national flags of the world at the present time.
There are specific Olympic flags that are displayed by cities that will be hosting the next Olympic games. During each Olympic closing ceremony in what is traditionally known as the Antwerp Ceremony, the flag is passed from the mayor of one host city to the next host, where it will then be taken to the new host and displayed at city hall. These flags should not be confused with the larger Olympic flags designed and created specifically for each games, which are flown over the host stadium and then retired. Because there is no specific flag for this purpose, the flags flown over the stadiums generally have subtle differences, including minor color variations, and, more noticeably, the presence (or lack) of white outlines around each ring.
The first Olympic flag was presented at the 1920 Summer Olympics by the city of Antwerp, Belgium. At the end of the Games, the flag could not be found and a new Olympic flag had to be made for the 1924 Summer Olympics in Paris. Despite it being a replacement, the IOC officially still calls this the "Antwerp Flag '' instead of the "Paris Flag ''. It was passed on to the next organizing city of the Summer Olympics or Winter Olympics until the 1952 Winter Olympics in Oslo, Norway, when a separate Olympic flag was created to be used only at the Winter Olympics (see below). The 1924 flag then continued to be used at the Summer Olympics until the Games of Seoul 1988 when it was retired.
In 1997, at a banquet hosted by the US Olympic Committee, a reporter was interviewing Hal Haig Prieste who had won a bronze medal in platform diving as a member of the 1920 US Olympic team. The reporter mentioned that the IOC had not been able to find out what had happened to the original Olympic flag. "I can help you with that, '' Prieste said, "It 's in my suitcase. '' At the end of the Antwerp Olympics, spurred on by teammate Duke Kahanamoku, he climbed a flagpole and stole the Olympic flag. For 77 years the flag was stored away in the bottom of his suitcase. The flag was returned to the IOC by Prieste, by then 103 years old, in a special ceremony held at the 2000 Games in Sydney. The original Antwerp Flag is now on display at the Olympic Museum in Lausanne, Switzerland, with a plaque thanking him for donating it.
The Oslo flag was presented to the IOC by the mayor of Oslo, Norway, during the 1952 Winter Olympics. Since then, it has been passed to the next organizing city for the Winter Olympics. Currently, the actual Oslo flag is kept preserved in a special box, and a replica has been used during recent closing ceremonies instead.
As a successor to the Antwerp Flag, the Seoul flag was presented to the IOC at the 1988 Summer Olympics by the city of Seoul, South Korea, and has since then been passed on to the next organizing city of the Summer Olympics. The Seoul flag is currently on display at the Tokyo Metropolitan Government Building.
As a successor to the Seoul Flag, the Rio flag was presented to the IOC at the 2016 Summer Olympics by the city of Rio de Janeiro, Brazil, and has since then been passed on to the next organizing city of the Summer Olympics, Tokyo.
For the inaugural Youth Olympic Games, an Olympic flag was created for the junior version of the Games. The flag is similar to the Olympic flag, but has the host city and year on it and was first presented to Singapore by IOC President Jacques Rogge. During the closing ceremony on 26 August 2010, Singapore officials presented it to the next organizing committee, Nanjing 2014.
For the inaugural winter Youth Olympic Games, an Olympic flag was presented to the IOC at the 2012 Winter Youth Olympics by the city of Innsbruck, Austria, and has since then been passed on to the next organizing city of the Winter Youth Olympics.
The modern tradition of moving the Olympic flame via a relay system from Greece to the Olympic venue began with the Berlin Games in 1936. Months before the Games are held, the Olympic flame is lit on a torch, with the rays of the Sun concentrated by a parabolic reflector, at the site of the Ancient Olympics in Olympia, Greece. The torch is then taken out of Greece, most often to be taken around the country or continent where the Games are held. The Olympic torch is carried by athletes, leaders, celebrities, and ordinary people alike, and at times in unusual conditions, such as being electronically transmitted via satellite for Montreal 1976, submerged underwater without being extinguished for Sydney 2000, or in space and at the North Pole for Sochi 2014. On the final day of the torch relay, the day of the Opening Ceremony, the Flame reaches the main stadium and is used to light a cauldron situated in a prominent part of the venue to signify the beginning of the Games.
The Olympic medals awarded to winners are another symbol associated with the Olympic games. The medals are made of gold - plated silver -- for the gold medals -- silver, or bronze, and are awarded to the top three finishers in a particular event. Each medal for an Olympiad has a common design, decided upon by the organizers for the particular games. From 1928 until 2000, the obverse side of the medals contained an image of Nike, the traditional goddess of victory, holding a palm in her left hand and a winner 's crown in her right. This design was created by Giuseppe Cassioli. For each Olympic games, the reverse side as well as the labels for each Olympiad changed, reflecting the host of the games.
In 2004, the obverse side of the medals changed to make more explicit reference to the Greek character of the games. In this design, the goddess Nike flies into the Panathenaic Stadium, reflecting the renewal of the games. The design was by Greek jewelry designer Elena Votsi.
Olympic diplomas are given to competitors placing fourth, fifth, and sixth since 1949, and to competitors placing seventh and eighth since 1981.
The "Olympic Hymn '', officially known as the "Olympic Anthem '', is played when the Olympic flag is raised. It was composed by Spyridon Samaras with words from a poem of the Greek poet and writer Kostis Palamas. Both the poet and the composer were the choice of Demetrius Vikelas, a Greek Pro-European and the first President of the IOC. The anthem was performed for the first time for the ceremony of opening of the 1896 Athens Olympic Games but was n't declared the official hymn by the IOC until 1958. In the following years, every hosting nation commissioned the composition of a specific Olympic hymn for their own edition of the Games until the 1960 Winter Olympics in Squaw Valley.
Beyond the host nations ' own hymns, several other composers have contributed notable Olympic anthems, fanfares, and other music, including Henry Mancini, Francis Lai, Marvin Hamlisch, Philip Glass, David Foster, Mikis Theodorakis, Ryuichi Sakamoto, Vangelis, Basil Poledouris, Michael Kamen, and Mark Watters.
The Games also have an oath: "We swear that we will take part in the Olympic Games in loyal competition, respecting the regulations which govern them and desirous of participating in them in the true spirit of sportsmanship for the honour of our country and for the glory of sports ''.
The kotinos (Greek: κότινος), is an olive branch, originally of wild olive - tree, intertwined to form a circle or a horse - shoe, introduced by Heracles. In the ancient Olympic Games there were no gold, silver, or bronze medals. There was only one winner per event, crowned with an olive wreath made of wild olive leaves from a sacred tree near the temple of Zeus at Olympia. Aristophanes in Plutus makes a sensible remark as to why victorious athletes are crowned with a wreath made of wild olive instead of gold. The victorious athletes were honoured, feted, and praised. Their deeds were heralded and chronicled so that future generations could appreciate their accomplishments.
Herodotus describes the following story which is relevant to the olive wreath. Xerxes was interrogating some Arcadians after the Battle of Thermopylae. He inquired why there were so few Greek men defending Thermopylae. The answer was "All other men are participating in the Olympic Games ''. And when asked "What is the prize for the winner? '', "An olive - wreath '' came the answer. Then Tigranes, one of his generals uttered a most noble saying: "Good heavens! Mardonius, what kind of men are these against whom you have brought us to fight? Men who do not compete for possessions, but for honour. ''
However, in later times, this was not their only reward; the athlete was rewarded with a generous sum of money by his country. The kotinos tradition was renewed specifically for the Athens 2004 Games, although in this case it was bestowed together with the gold medal. Apart from its use in the awards ceremonies, the kotinos was chosen as the 2004 Summer Olympics emblem.
The Olympic salute is a variant of the Roman salute, with the right arm and hand stretched and pointing upward, the palm outward and downward, with the fingers touching. However, unlike the Roman Salute, the arm is raised higher and at an angle to the right from the shoulder. The greeting is visible on the official posters of the games at Paris 1924 and Berlin 1936.
The Olympic salute has fallen out of use since World War II because of its strong resemblance to the Nazi salute. It was used for the last time by the French team in the opening ceremony of the 1948 Winter Olympics.
Since the 1968 Winter Olympics in Grenoble, France, the Olympic Games have had a mascot, usually an animal native to the area or occasionally human figures representing the cultural heritage. The first major mascot in the Olympic Games was Misha in the 1980 Summer Olympics in Moscow. Misha was used extensively during the opening and closing ceremonies, had a TV animated cartoon and appeared on several merchandise products. Nowadays, most of the merchandise aimed at young people focuses on the mascots, rather than the Olympic flag or organization logos.
The Olympic movement is very protective of its symbols, as many jurisdictions have given the movement exclusive trademark rights to any interlocking arrangement of five rings, and to usage of the word "Olympic ''. The rings are not eligible for copyright protection, both because of their date of creation and because five circles arranged in a pattern do not reach the threshold of originality required to be copyrighted.
The movement has taken action against numerous groups alleged to have violated their trademarks, including the Gay Games; the Minneapolis - based band The Hopefuls, formerly The Olympic Hopefuls; the Redneck Olympics or Redneck Games; Awana Clubs International, a Christian youth ministry who used the term for its competitive games; and Wizards of the Coast, publisher at the time of the IOC 's complaint of the card game Legend of the Five Rings.
In 1938, the Norwegian brewery Frydenlund patented a label for its root beer which featured the five Olympic rings. In 1952, when Norway was to host the Winter Olympics, the Olympic Committee was notified by Norway 's Patent Office that it was Frydenlund who owned the rights to the rings in that country. Today, the successor company Ringnes AS owns the rights to use the patented five rings on its root beer. In addition, a few other companies have been successful in using the Olympic name, such as Olympic Paint, which has a paintbrush in the form of a torch as its logo, and the former Greek passenger carrier Olympic Airlines.
Certain other sporting organizations and events have been granted permission by the IOC to use the word "Olympics '' in their name, such as the Special Olympics, an international sporting event held every four years for people with intellectual disabilities.
In recent years, organizing committees have also demanded the passing of laws to combat ambush marketing by non-official sponsors during the Games -- such as the London Olympic Games and Paralympic Games Act 2006 -- putting heavy restrictions on using any term or imagery that could constitute an unauthorized association with the games, including mere mentioning of the host city, the year, and others.
|
what member of congress is third in line to the presidency | United States Presidential line of succession - wikipedia
The United States presidential line of succession is the order in which persons may become or act as President of the United States if the incumbent President becomes incapacitated, dies, resigns, or is removed from office (by impeachment by the House of Representatives and subsequent conviction by the Senate). The line of succession is set by the United States Constitution and the Presidential Succession Act of 1947 as subsequently amended to include newly created cabinet offices. The succession follows the order of Vice President, Speaker of the House of Representatives, President pro tempore of the Senate, and then the heads of federal executive departments who form the Cabinet of the United States. The Cabinet currently has fifteen members, beginning with the Secretary of State, and followed by the rest in the order of their positions ' creation. Those heads of department who are ineligible to act as President are also excluded from the line of succession, most commonly because they are not natural - born U.S. citizens.
Several constitutional law experts have raised questions as to the constitutionality of the provisions that the Speaker of the House and the President pro tempore of the Senate succeed to the presidency, and in 2003 the Continuity of Government Commission raised a number of other issues with the current line of succession.
The current presidential order of succession begins with the Vice President, followed by the Speaker of the House of Representatives, the President pro tempore of the Senate, the Secretary of State, and then the remaining Cabinet officers.
Cabinet officers are in line according to the chronological order of their department 's creation or the department of which their department is the successor (the Department of Defense being successor to the Department of War, and the Department of Health and Human Services being successor to the Department of Health, Education and Welfare).
To be eligible to serve as President, a person must be a natural - born U.S. citizen, at least 35 years of age, and a resident within the United States for at least 14 years. These eligibility requirements are specified both in the U.S. Constitution, Article II, Section 1, Clause 5, and in the Presidential Succession Act of 1947 (3 U.S.C. § 19 (e)).
Acting officers may be eligible. In 2009, the Continuity of Government Commission, a private non-partisan think tank, reported,
The language in the current Presidential Succession Act is less clear than that of the 1886 Act with respect to Senate confirmation. The 1886 Act refers to "such officers as shall have been appointed by the advice and consent of the Senate to the office therein named... '' The current act merely refers to "officers appointed, by and with the advice and consent of the Senate. '' Read literally, this means that the current act allows for acting secretaries to be in the line of succession as long as they are confirmed by the Senate for a post (even for example, the second or third in command within a department). It is common for a second in command to become acting secretary when the secretary leaves office. Though there is some dispute over this provision, the language clearly permits acting secretaries to be placed in the line of succession. (We have spoken to acting secretaries who told us they had been placed in the line of succession.)
Two months after succeeding Franklin D. Roosevelt, President Harry S. Truman proposed that the Speaker of the House and the President pro tempore of the Senate be granted priority in the line of succession over the Cabinet so as to ensure the President would not be able to appoint his successor to the Presidency.
The Secretary of State and the other Cabinet officials are appointed by the President, while the Speaker of the House and the President pro tempore of the Senate are elected officials. The Speaker is chosen by the U.S. House of Representatives, and every Speaker has been a member of that body for the duration of their term as Speaker, though this is not technically a requirement; the President pro tempore is chosen by the U.S. Senate and customarily the Senator of the majority party with the longest record of continuous service fills this position. The Congress approved this change and inserted the Speaker and President pro tempore in line, ahead of the members of the Cabinet in the order in which their positions were established.
In his speech supporting the changes, Truman noted that the House of Representatives is more likely to be in political agreement with the President and Vice President than the Senate. The succession of a Republican to a Democratic Presidency would further complicate an already unstable political situation. However, when the changes to the succession were signed into law, they placed Republican House Speaker Joseph W. Martin first in the line of succession after the Vice President.
Some of Truman 's critics said that his ordering of the Speaker before the President pro tempore was motivated by his dislike of the then - current holder of the latter rank, Senator Kenneth McKellar. Further motivation may have been provided by Truman 's preference for House Speaker Sam Rayburn to be next in the line of succession, rather than Secretary of State Edward R. Stettinius, Jr.
The line of succession is mentioned in three places in the Constitution:
Article II, Section 1 of the United States Constitution provides that:
In case of the removal of the President from office, or of his death, resignation, or inability to discharge the powers and duties of the said office, the same shall devolve on the Vice President... until the disability be removed, or a President elected.
This originally left open the question whether "the same '' refers to "the said office '' or only "the powers and duties of the said office. '' Some historians, including Edward Corwin and John D. Feerick, have argued that the framers ' intention was that the Vice President would remain Vice President while executing the powers and duties of the Presidency; however, there is also much evidence to the contrary, the most compelling of which is Article I, section 3, of the Constitution itself, the relevant text of which reads:
The Vice President of the United States shall be President of the Senate, but shall have no vote, unless they be equally divided. The Senate shall chuse (sic) their other officers, and also a President pro tempore, in the absence of the Vice President, or when he shall exercise the office of President of the United States.
This text appears to answer the hypothetical question of whether the office or merely the powers of the Presidency devolved upon the Vice President on his succession. Thus, the 25th Amendment merely restates and reaffirms the validity of existing precedent, apart from adding new protocols for Presidential disability. Not everyone agreed with this interpretation when it was first actually tested, and it was left to Vice President John Tyler, the first presidential successor in U.S. history, to establish the precedent that was respected in the absence of the 25th Amendment.
Upon the death of President William Henry Harrison in 1841 and after a brief hesitation, Tyler took the position that he was the President and not merely acting President upon taking the presidential oath of office. However, some contemporaries -- including John Quincy Adams, Henry Clay, other members of Congress, Whig party leaders, and even Tyler 's own cabinet -- believed that he was only acting as President and did not hold the office itself.
Nonetheless, Tyler adhered to his position, even returning, unopened, mail addressed to the "Acting President of the United States '' sent by his detractors. Tyler 's view ultimately prevailed when the Senate voted to accept the title "President, '' and this precedent was followed thereafter. The question was finally resolved by Section 1 of the 25th Amendment, which specifies that "In case of the removal of the President from office or of his death or resignation, the Vice President shall become President. '' The Amendment does not specify whether officers other than the Vice President can become President rather than Acting President in the same set of circumstances. The Presidential Succession Act refers only to other officers acting as President rather than becoming President.
The Presidential Succession Act of 1792 was the first succession law passed by Congress. The act was contentious because the Federalists did not want the then Secretary of State, Thomas Jefferson, who had become the leader of the Democratic - Republicans, to follow the Vice President in the succession. There were also separation of powers concerns over including the Chief Justice of the United States in the line. The compromise they worked out established the President pro tempore of the Senate as next in line after the Vice President, followed by the Speaker of the House of Representatives.
In either case, these officers were to "act as President of the United States until the disability be removed or a president be elected. '' The Act called for a special election to be held in November of the year in which dual vacancies occurred (unless the vacancies occurred after the first Wednesday in October, in which case the election would occur the following year; or unless the vacancies occurred within the last year of the presidential term, in which case the next election would take place as regularly scheduled). The people elected President and Vice President in such a special election would have served a full four - year term beginning on March 4 of the next year, but no such election ever took place.
In 1881, after the death of President Garfield, and in 1885, after the death of Vice President Hendricks, there had been no President pro tempore in office, and as the new House of Representatives had yet to convene, no Speaker either, leaving no one at all in the line of succession after the vice president. When Congress convened in December 1885, President Cleveland asked for a revision of the 1792 act, which was passed in 1886. Congress replaced the President pro tempore and Speaker with officers of the President 's Cabinet with the Secretary of State first in line. In the first 100 years of the United States, six former Secretaries of State had gone on to be elected President, while only two congressional leaders had advanced to that office. As a result, changing the order of the line of succession seemed reasonable.
The Presidential Succession Act of 1947, signed into law by President Harry S. Truman, added the Speaker of the House and President pro tempore back in the line, but switched the two from the 1792 order. It remains the sequence used today. Since the reorganization of the military in 1947 had merged the War Department (which governed the Army) with the Department of the Navy into the Department of Defense, the Secretary of Defense took the place in the order of succession previously held by the Secretary of War. The office of Secretary of the Navy, which had existed as a Cabinet - level position since 1798, had become subordinate to the Secretary of Defense in the military reorganization, and so was dropped from the line of succession in the 1947 Succession Act.
Until 1971, the Postmaster General, the head of the Post Office Department, was a member of the Cabinet, initially the last in the presidential line of succession before new officers were added. Once the Post Office Department was re-organized into the United States Postal Service, a special agency independent of the executive branch, the Postmaster General ceased to be a member of the Cabinet and was thus removed from the line of succession.
The United States Department of Homeland Security was created in 2002. On March 9, 2006, pursuant to the renewal of the Patriot Act as Pub. L. 109 -- 177, the Secretary of Homeland Security was added to the line of succession. The order of Cabinet members in the line has always been the same as the order in which their respective departments were established. Despite custom, many in Congress had wanted the Secretary to be placed at number eight on the list -- below the Attorney General, above the Secretary of the Interior, and in the position held by the Secretary of the Navy prior to the creation of the Secretary of Defense -- because the Secretary, already in charge of disaster relief and security, would presumably be more prepared to take over the presidency than some of the other Cabinet secretaries. Despite this, the 2006 law explicitly specifies that the "Secretary of Homeland Security '' follows the "Secretary of Veterans Affairs '' in the succession, effectively at the end of the list.
While nine Vice Presidents have succeeded to the office upon the death or resignation of the President, and two Vice Presidents have temporarily served as acting President, no other officer has ever been called upon to act as President. On March 4, 1849, President Zachary Taylor 's term began, but he declined to be sworn in on a Sunday, citing religious beliefs, and the Vice President was not sworn either. As the last President pro tempore of the Senate, David Rice Atchison was thought by some to be next in line after the Vice President, and his tombstone claims that he was US President for the day. However, Atchison took no oath of office to the presidency either, and his term as Senate President pro tempore had by then expired.
In 1865, when Andrew Johnson assumed the Presidency on the death of Abraham Lincoln, the office of Vice President became vacant. At that time, the Senate President pro tempore was next in line to the presidency. In 1868, Johnson was impeached by the House of Representatives and subjected to trial in the Senate, and if he had been convicted and thereby removed from office, Senate President pro tempore Benjamin Wade would have become acting President. This posed a conflict of interest, as Wade 's "guilty '' vote could have been decisive in removing Johnson from office and giving himself presidential powers and duties. (Johnson was acquitted by a one - vote margin.)
In his book The Shadow Presidents, published in 1979, Michael Medved describes a situation that arose prior to the 1916 election, when the First World War was raging in Europe. In view of the international turmoil, President Woodrow Wilson thought that if he lost the election, it would be better for his opponent to begin his administration straight away, instead of waiting through the lame-duck period, which at that time lasted almost four months. Wilson and his aides formed a plan to exploit the rule of succession so that his rival Charles Evans Hughes could take over the Presidency as soon as the result of the election was clear. The plan was that Wilson would appoint Hughes to the post of Secretary of State. Wilson and his Vice President Thomas R. Marshall would then resign, and as the Secretary of State was at that time designated next in line of succession, Hughes would become President immediately. As it happened, Wilson won re-election, so the plan was never put into action.
Since the 25th Amendment 's ratification, its Second Section, which addresses Vice Presidential succession as noted above, has been invoked twice.
During the 1973 Vice Presidential vacancy, House Speaker Carl Albert was first in line. As the Watergate scandal made President Nixon's removal or resignation possible, Albert would have become Acting President and -- under Title 3, Section 19 (c) of the U.S. Code -- would have been able to "act as President until the expiration of the then current Presidential term '' on January 20, 1977. Albert openly questioned whether it was appropriate for him, a Democrat, to assume the powers and duties of the presidency when there was a public mandate for the Presidency to be held by a Republican. Albert announced that should he need to assume the Presidential powers and duties, he would do so only as a caretaker. However, with the nomination and confirmation of Gerald Ford to the Vice Presidency, which marked the first time Section 2 of the Twenty-fifth Amendment was invoked, this series of events was never tested. Albert again became first in line during the first four months of Ford's Presidency, before the confirmation of Vice President Nelson Rockefeller, which marked the second time Section 2 of the Twenty-fifth Amendment was invoked.
In 1981, when President Ronald Reagan was shot, Vice President George H.W. Bush was traveling in Texas. Secretary of State Alexander Haig responded to a reporter 's question regarding who was running the government by stating:
Constitutionally, gentlemen, you have the President, the Vice President and the Secretary of State in that order, and should the President decide he wants to transfer the helm to the Vice President, he will do so. He has not done that. As of now, I am in control here, in the White House, pending return of the Vice President and in close touch with him. If something came up, I would check with him, of course.
A bitter dispute ensued over the meaning of Haig 's remarks. Most people believed that Haig was referring to the line of succession and erroneously claimed to have temporary Presidential authority, due to his implied reference to the Constitution. Haig and his supporters, noting his familiarity with the line of succession from his time as White House Chief of Staff during Richard Nixon 's resignation, said he only meant that he was the highest - ranking officer of the Executive branch on - site, managing things temporarily until the Vice President returned to Washington.
Several constitutional law experts have raised questions as to the constitutionality of the provisions that the Speaker of the House and the President pro tempore of the Senate succeed to the presidency. James Madison, one of the authors of the Constitution, raised similar constitutional questions about the Presidential Succession Act of 1792 in a 1792 letter to Edmund Pendleton. Two of these issues can be summarized:
In 2003 the Continuity of Government Commission suggested that the current law has "at least seven significant issues... that warrant attention '', specifically:
|
when does jurassic park the fallen kingdom come out | Jurassic World: Fallen Kingdom - wikipedia
Jurassic World: Fallen Kingdom is a 2018 American science fiction adventure film directed by J.A. Bayona. The film is the sequel to Jurassic World (2015) and the fifth installment of the Jurassic Park film series, as well as the second installment of a planned Jurassic World trilogy. Derek Connolly and Jurassic World director Colin Trevorrow return as writers, with Trevorrow and original Jurassic Park director Steven Spielberg acting as executive producers. The film is set on the fictional island of Isla Nublar, off Central America's Pacific coast. Chris Pratt, Bryce Dallas Howard, B.D. Wong, and Jeff Goldblum reprise their roles from previous films in the series, with Ted Levine, Rafe Spall, Toby Jones, Justice Smith, James Cromwell, and Geraldine Chaplin joining the cast.
Filming took place from February to July 2017 in the United Kingdom and Hawaii. Fallen Kingdom premiered in Madrid, Spain on May 21, 2018, and is scheduled to be released in the United States on June 22, 2018, by Universal Pictures. An untitled sequel is set to be released on June 11, 2021.
After the demise of the Jurassic World theme park on Isla Nublar, off Central America's Pacific coast, the dinosaurs roam freely on the island for three years until an impending volcanic eruption threatens their existence. Former Jurassic World manager Claire Dearing creates a dinosaur rescue organization called the Dinosaur Protection Group and teams up with Benjamin Lockwood, the former partner of John Hammond, to bring the creatures to a sanctuary in America. Claire's boyfriend Owen Grady, a former dinosaur trainer at the park, joins the mission to locate Blue, the last of the Velociraptors that he trained. After Isla Nublar is destroyed by the eruption, however, Claire and Owen are betrayed and uncover a greater scheme when the creatures are auctioned off at Lockwood's estate. Disaster strikes when an extremely dangerous and intelligent hybrid dinosaur, the Indoraptor, escapes and begins a reign of terror across the estate.
During early conversations on the 2015 film Jurassic World, executive producer Steven Spielberg told director Colin Trevorrow that he was interested in having several more films made. In April 2014, Trevorrow announced that sequels had been discussed: "We wanted to create something that would be a little bit less arbitrary and episodic, and something that could potentially arc into a series that would feel like a complete story. '' Trevorrow hinted that Chris Pratt and Omar Sy could reprise their roles for the next few films and said he would direct the film if asked. Trevorrow later told Spielberg that he would only focus on directing one film in the series. In May 2015, Trevorrow announced that he would not direct another film in the series: "I would be involved in some way, but not as director. '' Trevorrow felt that different directors could bring different qualities to future films. Pratt is signed on for future films in the series, as is Ty Simpkins, who portrayed Gray in Jurassic World.
On June 3, 2015, Trevorrow stated that Jurassic World left many story possibilities open for the sequel's director that could potentially allow the film to take place in a different location, rather than on an island like previous films. Trevorrow hinted that the next film could involve dinosaurs being used by other companies for non-entertainment purposes, possibly in agriculture, medicine, and war: "I really like the idea that this group of geneticists aren't the only people who can make a dinosaur (...) when you think of the differences between Apple and PC -- the minute something goes open-source, there are all kinds of entities and interests that may be able to utilize that technology. ''
On June 8, 2015, Jurassic World producer Frank Marshall met with Trevorrow and Universal Studios to discuss a sequel. Later that month, Trevorrow did not deny that the film could involve "dinosaur soldiers '' and said the series is "not always gonna be about a Jurassic Park '', saying he felt that future films could explore the idea of dinosaurs and humans co-existing together. Trevorrow also hinted that the next film may not involve the Jurassic World theme park and said he would be interested in seeing a Jurassic Park film made by one of several Spanish horror film directors, whose names he did not mention.
On July 23, 2015, Universal announced that a fifth film is scheduled for a June 22, 2018, release date. It was also announced that Trevorrow would write the script with his writing partner Derek Connolly, as they did for Jurassic World; that the film would be produced by Frank Marshall; and that Spielberg and Trevorrow would act as executive producers. Universal also said that Chris Pratt and Bryce Dallas Howard would reprise their roles from the previous film. At the time of the film's announcement, Trevorrow said the series "isn't always going to be limited to theme parks '' and confirmed that the film would not involve "a bunch of dinosaurs chasing people on an island. That'll get old real fast. '' Trevorrow also spoke of the film's possible open-source storyline: "It's almost like InGen is Mac, but what if PC gets their hands on it? What if there are 15 different entities around the world who can make a dinosaur? ''
In August 2015, Howard said that the script was being written, and it was announced that the film would be released in the UK two weeks early, on June 7, 2018. Later that year, B.D. Wong said he "would be happy to return '' as Dr. Henry Wu, while Howard announced that filming would begin in 2017. Howard also said she would be interested in seeing characters from earlier Jurassic Park films return for the fifth film, saying, "I could see versions of the film where a lot of the characters come back. '' By October 2015, director J.A. Bayona was being considered to direct the film, although he chose instead to direct the World War Z sequel, a project to which he had already signed on. In January 2016, it was reported that Bayona could be a candidate to direct the film after he left the World War Z sequel.
In March 2016, London was being scouted as a possible filming location and setting for the film, and it was subsequently announced that filming would take place at a UK studio. On April 14, 2016, actor Jeff Goldblum said he had no plans to appear in the film as his character Ian Malcolm, although he said he was open to the possibility. On April 18, 2016, Bayona was announced as the film 's director, with Belén Atienza and Patrick Crowley joining Marshall as producers. Spielberg, Marshall, and Kathleen Kennedy had been impressed by Bayona 's 2012 film, The Impossible, and initially considered having him direct Jurassic World, but he declined as he felt there was not enough time for production. Trevorrow wanted Bayona to direct the film after seeing his 2007 horror film, The Orphanage. After Bayona was hired, Trevorrow said about the film, "We 're moving it into new territory. J.A. Bayona is an incredible director and I know he 'll push the boundaries of what a ' Jurassic ' movie is. I think it 's important that we take risks. A franchise must evolve or perish. '' In June 2016, actor Sam Neill was asked if he would return to the series as Dr. Alan Grant and responded, "You never say never, but I think it 's moved on. It 's different times. ''
The film, under the working title of Ancient Futures, was in full pre-production as of July 2016, with storyboards being designed. Production was scheduled to begin in Hawaii in February 2017. Wales was also confirmed as a filming location, including Brecon Beacons and Penbryn. Trevorrow stated that Hawaii would be used as a primary filming location, while U.K. shooting would be limited to studios, without the story taking place there. Trevorrow also said that the film would feature many dinosaurs that were not seen in previous films and denied that the film 's story would involve militarized dinosaurs, which would only be mentioned in the film.
For the film 's second half in which dinosaurs are transported by boats to the mainland, Ecuador and Peru had both been scouted as possible filming locations and settings, while Marshall thought that Cabo San Lucas would be ideal, but such locations ultimately did not work for the film 's story. Although the film would be partially shot in England, Spielberg felt that the country was too far from the fictional Isla Nublar to be used as the in - film setting during the second half, as he and the producers did not want the film to focus too much time on a boat. Crowley stated, "Rather than making it a movie about traveling on a boat, which is not very exciting, you needed to get to the place. ''
In September 2016, Bayona confirmed that the film would be the second chapter in a planned Jurassic World trilogy. Later that year, Marshall said that Wong was "probably going to come back, '' while Jurassic World composer Michael Giacchino confirmed that he would return to compose the fifth film. Óscar Faura was announced as the film 's cinematographer at the end of the year.
Trevorrow and Connolly began work on the script and devised the basic story during a road trip that they took in June 2015, immediately after the release of Jurassic World. Having directed Jurassic World, Trevorrow became familiar with how animatronics worked and wrote scenes into the sequel in a way that would allow for their use, as animatronics are incapable of certain actions such as running. In September 2015, Trevorrow said the film 's story was inspired by a quote from Dr. Alan Grant in the first film: "Dinosaurs and man, two species separated by 65 million years of evolution, have suddenly been thrown back into the mix together. How can we possibly have the slightest idea of what to expect? '' In his initial film treatment, Trevorrow had included story elements that Marshall and Crowley considered excessive for a single film, as the producers felt it was also important to include details about Owen and Claire 's lives after the events of Jurassic World.
After Bayona was hired, he began reading all of Michael Crichton 's novels -- including Jurassic Park and its sequel, The Lost World -- for inspiration and "to try to immerse myself in Crichton 's mind. '' Trevorrow and Connolly began working with Bayona in July 2016, to perfect the script to the director 's liking. Trevorrow stated that the film would be more "suspenseful and scary '' than its predecessor: "It 's just the way it 's designed; it 's the way the story plays out. I knew I wanted Bayona to direct it long before anyone ever heard that it was a possibility, so the whole thing was just built around his skillset. '' Trevorrow later described the film as "The Impossible meets The Orphanage with dinosaurs. ''
Bayona stated that with the first half of the film set on an island, "you have what you expect from a Jurassic movie, '' while the second half "moves to a totally different environment that feels more suspenseful, darker, claustrophobic, and even has this kind of gothic element, which I love. '' Bayona 's concept of gothic suspense for the film was influenced by Alfred Hitchcock films as well as the 1979 film Dracula. Bayona compared the film to The Empire Strikes Back and Star Trek II: The Wrath of Khan, which were both considered darker than their predecessors. Trevorrow said the film 's dinosaurs would be "a parable of the treatment animals receive today: the abuse, medical experimentation, pets, having wild animals in zoos like prisons, the use the military has made of them, animals as weapons. '' Trevorrow later said that with the film 's dinosaur auction, "The worst instincts of mankind are revealed. The first film was very clearly about corporate greed. This is just about human greed. ''
Marshall said that Bayona had incorporated his own ideas into the film's script, but stated that it is essentially the same original story devised by Trevorrow and Connolly. In September 2016, Trevorrow said that the film would be based on concepts from the novels and would include dialogue from the first novel. Trevorrow also stated that the story would be heavily inspired by the idea that "A mistake made a long time ago just can't be undone. '' Among Trevorrow's ideas was to include Jeff Goldblum's character of Ian Malcolm, who appeared in the franchise's earlier films. Goldblum had dialogue from the novel version of his character added into the film's script. Marshall stated that Trevorrow wrote Malcolm as "the 'Uh oh, danger, I told you so' kind of character, '' and Trevorrow said about the character, "I saw him as kind of Al Gore. He's got a beard now, and he's like, 'I told all of you this was going to be a disaster, and sure enough it is.' ''
In October 2016, casting was underway for the role of a nine - year - old girl. The following month, Toby Jones and Rafe Spall were in discussions about joining the cast. By that time, Tom Holland -- who previously starred in The Impossible -- had discussed having a possible role in the film, but he did not believe he would be available for filming because of scheduling conflicts. Jones and Spall were cast at the end of the year, along with Justice Smith.
Daniella Pineda, Ted Levine and James Cromwell were cast in early 2017, while Wong confirmed his return as Dr. Henry Wu. To maintain secrecy, the Ancient Futures title was used in the casting phase. During auditions, references to dinosaurs were replaced with animals such as lions and grizzly bears. To convince the studio that Pineda was right for the role of Zia, Bayona had her demonstrate that she could perform comedy and drama scenes, as well as improvisation. Pineda auditioned a total of seven times before receiving the role. She auditioned for Bayona, Atienza, and Crowley, and did not meet the cast until she arrived in England for filming.
In March 2017, Bayona announced that Geraldine Chaplin, who had roles in each of his previous films, had joined the cast. The next month, it was announced that Jeff Goldblum would be reprising his role as Dr. Ian Malcolm from the first two films. Bayona considered Malcolm a "great character! '' while Marshall said, "The world has changed a lot since Ian Malcolm went to Jurassic Park and we need his point of view now more than ever. He told us about chaos theory, he was right. ''
Filming began at Langley Business Centre in Slough, England, on February 23, 2017. A majority of filming in England took place at Pinewood Studios. Because of its large sound stages, Pinewood Studios was considered perfect for the film 's many interior scenes. After filming concluded in England, production moved to Hawaii, which was used as a primary filming location. Scenes shot in Hawaii were set on Isla Nublar, the fictional island featured in the first and fourth films. Scenes were also expected to be shot at Brecon Beacons National Park in Wales. The film was shot in CinemaScope, and is the first entry in the Jurassic Park series presented in 2.40: 1 aspect ratio. The film crew made use of Arri Alexa 65 cameras during the course of filming. Spielberg was shown scenes from the film during production and he offered his opinions to Bayona. During filming and in between takes, Bayona used an iPod to play different types of music on set to inspire the actors during certain scenes, as he had done for his previous films. Bayona also played sound effects from previous films in the series, including a T. rex roar that he sometimes used to get a natural reaction from the actors, who agreed to be scared by Bayona during certain takes to increase authenticity. In particular, Bayona played unexpected sounds and loud music to scare Smith for certain scenes, as his character is portrayed as easily frightened.
In April 2017, scenes were filmed at East Berkshire College in Berkshire, England. Later that month, filming took place at Hartland Park -- formerly the Pyestock jet engine test site -- in Fleet, Hampshire, England. Scenes were scheduled to be filmed on sets at Hawley Common, also in Hampshire. On May 10, 2017, it was reported that scenes were being filmed at Rock Barracks military base, near Woodbridge, Suffolk. Goldblum began filming his scenes on the same day. By that time, a leak had stated that the film would center around Blue, the last surviving raptor from Jurassic World, and Owen stopping her from being used for violence. On May 24, 2017, scenes were shot at Hampshire 's Blackbushe Airport, which stood in as an American airfield. Filming in the United Kingdom concluded on June 9, 2017. Up to that point, Trevorrow was present as an on - set writer for each day of production so he could aid Bayona with any possible script changes. Goldblum shot his scenes at Pinewood Studios, and concluded his shoot on the last day of filming in the United Kingdom.
Filming in Hawaii was underway as of June 13, 2017. On June 21, 2017, filming began at Heʻeia Kea Small Boat Harbor in Heʻeia, Hawaii. More than half of the harbor was closed for filming, which required the use of smoke machines. Scenes were scheduled to be shot at the harbor throughout the end of the month. On June 22, 2017, the film 's official title was announced as Jurassic World: Fallen Kingdom. At the time, filming was underway at Kualoa Ranch on the Hawaiian island of Oahu. In Hawaii, scenes in which characters are running were filmed with the use of the Edge Arm, a stabilized camera that was attached to a crane, which was mounted to a truck that drove alongside the actors. The specialized camera allowed for scenes to be shot steadily despite the truck driving over rough terrain.
The film includes a scene on Isla Nublar in which Claire and Franklin are riding in a ball - shaped Jurassic World Gyrosphere ride to evade dinosaurs. The scene was shot in Hawaii and England, and Bayona described it as one of the film 's biggest challenges. In Hawaii, the Edge Arm was used to film the actors riding in the Gyrosphere as it was hauled on a trailer to simulate its movement. In England, an outdoor roller coaster track with a drop was constructed for the Gyrosphere, which Howard and Smith rode in to shoot a scene in which the ride plummets off a cliff and into the water surrounding Isla Nublar. The final portion of the scene was shot at Pinewood Studios, where a large indoor tank was constructed and filled with water to depict the submerged ride. Pratt was aided by a diving instructor for the scene, which also involved Howard and Smith underwater. Filming in the tank lasted approximately one week and required 85 crew members.
On July 7, 2017, filming took place at Oahu 's Hālona Blowhole, where Pratt and Howard shot scenes on a beach. Filming concluded on July 8, 2017. Bayona said that making the film was the biggest challenge of his life.
Animatronics were used to depict many of the film 's dinosaurs. The film features more dinosaurs than any previous film in the series, as Bayona wanted to include several new dinosaurs not previously seen in earlier films, in addition to the fictional Indoraptor. The film also features more animatronic dinosaurs than any previous sequel. The animatronics used in the film were more technologically advanced than in the previous films. Special - effects artist Neal Scanlan served as the film 's creature effects creative supervisor and worked on the animatronic dinosaurs, while David Vickery of Industrial Light & Magic created versions of the dinosaurs through computer - generated imagery (CGI). Scanlan worked closely with Bayona and Vickery to create the creatures. Bayona stated that animatronics "are very helpful on set, especially for the actors so they have something to perform against. There 's an extra excitement if they can act in front of something real. ''
Among the animatronics used was a life - sized Tyrannosaurus rex, built by Scanlan 's team. The animatronic was controlled through joysticks, with the ability to breathe and move its head. Within the film 's story, the T. rex is portrayed as the same individual previously featured in Jurassic Park and Jurassic World. Scanlan 's team also created functional animatronic models of the Indoraptor and Blue. The movements of the Blue animatronic required the work of 15 puppeteers, who were hidden beneath it during filming. The Blue animatronic 's movements were rehearsed in advance of each scene. Scanlan 's team also made puppeteering aids, rod puppets, and several prop dinosaurs, all created in coordination with Vickery to ensure a consistent result between the practical effects and CGI. After reading fan thoughts on dinosaurs and speaking with children, Bayona realized that dinosaur textures and colors were frequently brought up, and stated, "I thought that was the area where I could play with. They feel somehow a little bit more exotic and richer in this movie. ''
For advice on veterinary procedures and animatronic movements, the filmmakers sought a veterinary surgeon who had experience with African wildlife. Jonathan Cranston, a Gloucestershire veterinary surgeon, was recommended for the position because of his experience with wildlife in South Africa. Cranston advised Bayona and the producers on how to choreograph several scenes to accurately depict complex veterinarian procedures that involved the dinosaurs. Cranston also worked closely with Pratt, Howard, Pineda and Smith to teach them how to perform such procedures. Additionally, Cranston advised the puppeteers on creating subtle and authentic animal movements, and also worked with Bayona on two scenes. Cranston was on set for 12 days, primarily at Pinewood Studios.
Jurassic World: Fallen Kingdom had its premiere at the WiZink Center in Madrid, Spain, on May 21, 2018. Its worldwide theatrical release is scheduled to begin in the United Kingdom on June 6, 2018, followed by the United States on June 22, 2018.
A six-second clip from the film was released on November 22, 2017. The first trailer was initially teased for a November 30, 2017 release, but that date was later confirmed to be incorrect. Several teaser trailers and a behind-the-scenes featurette were released in early December 2017, prior to the release of a full-length trailer on December 7. That month, Universal launched a website for the Dinosaur Protection Group that included miscellaneous information about the group and its effort to save the island's dinosaurs, as well as a video featuring Howard, Pineda and Smith as their characters. A second trailer aired during Super Bowl LII on February 4, 2018. A 30-second teaser trailer was released on April 13, 2018, announcing the release of a third full trailer on April 18.
Universal spent $185 million on partners for a global marketing campaign, more than double the cost of the previous film 's partner program. The campaign included nine partners which aired television commercials and sold products to promote the film. The partners included Dairy Queen, Doritos, Dr Pepper, Ferrero SpA, Jeep, Juicy Fruit, Kellogg 's, M&M 's, and Skittles. The global marketing campaign consisted of 1.3 billion items to promote the film, including 100 million boxes of Kellogg 's products and 15 million packages of Kinder Joy candy by Ferrero. Dairy Queen, a returning partner from the previous film, sold "Jurassic Chomp '' ice cream desserts in collectable cups, while Doritos marketed bags of chips that featured the film 's dinosaurs. Dr Pepper marketed soda cans that featured images of the film 's dinosaurs. For Super Bowl LII, Trevorrow directed a Jeep commercial starring Goldblum and featuring a T. rex. Within 24 hours of its release, the commercial received 39.7 million online views, which was more than any film trailer that was watched online following its Super Bowl LII television debut.
Licensing partners included Mattel, Lego, and Funko, all of which created toys based on the film. Mattel produced a variety of toys, including dinosaurs and action figures, as well as Barbie dolls featuring the likeness of Pratt and Howard as their characters. A mobile app titled Jurassic World Facts was released as a tie - in to Mattel 's dinosaur toys, which included symbols that could be scanned to collect facts about each creature. Lego is expected to release 13 Lego sets based on the film. A video game, Jurassic World Evolution, is scheduled to be released simultaneously with the film. A two - part virtual reality miniseries titled Jurassic World: Blue was released for Oculus VR headsets as a film tie - in. It was created by Felix & Paul Studios and Industrial Light and Magic, and features Blue on Isla Nublar at the time of the volcanic eruption.
In December 2017, a survey from Fandango indicated that Fallen Kingdom was one of the most anticipated films of 2018. In April 2018, BoxOffice Magazine projected the film to gross between $130 -- 150 million in its opening weekend in the United States and Canada, and a total of $325 -- 380 million for its final gross.
An untitled sequel, known as Jurassic World 3, is scheduled for release on June 11, 2021. Trevorrow will direct the film, and will write the screenplay with Emily Carmichael, based on a story by him and Connolly. Trevorrow and Spielberg will serve as executive producers for the film, with Marshall and Crowley as producers. Pratt and Howard will reprise their roles for the film.
|
india's first international stock exchange has been launched recently in which city | India International Exchange - Wikipedia
The India International Exchange (INX) is India's first international stock exchange, opened in 2017. It is located at the International Financial Services Centre (IFSC), GIFT City, in Gujarat. It is a wholly owned subsidiary of the Bombay Stock Exchange (BSE). The INX is initially headed by V. Balasubramanian, with other staff drawn from the BSE.
It was inaugurated on 9 January 2017 by Indian prime minister Narendra Modi, and trading operations were scheduled to begin on 16 January 2017. It was claimed to be the fastest exchange in the world and aimed to operate for 22 hours a day.
|
where was morocco love in the time of war filmed | Morocco: Love in Times of War - Wikipedia
Morocco: Love in Times of War is a war drama set primarily in 1920s Melilla, Morocco. Occurring during the Rif War, the series revolves around a group of nurses from Madrid, Spain, who are sent to Morocco by Queen Victoria Eugenia to open a hospital in the war-torn region of North Africa. The nurses learn firsthand the cruelty of war, but still find time for romance. The series debuted in 2017 on Antena 3 and in 2018 on Netflix.
In 1921, Morocco is being ravaged by the events of the Rif War. The Riffian resistance in the country has killed many soldiers of the Spanish Army. To remedy this, Queen Victoria Eugenie agrees to send a group of nurses from the Spanish Red Cross to Melilla in order to establish a hospital. This group of nurses is led by the Duchess of Victoria, María del Carmen Angoloti y Mesa, and is made up of members of Spain 's upper class.
The group arrives in Melilla and sets up a hospital in an old library. It is not long before the nurses are put into action. Despite all the injuries and casualties, they have not lost hope and still find time to seek romance with the soldiers and doctors who surround them. Eventually it is agreed that more lives can be saved by treating injured soldiers out on the front lines, before they are even brought into the hospital. None of the nurses are combat trained, but some of them will now have to figure out how to stay alive at the front.
|
what is the current height of mount everest | Mount Everest - Wikipedia
Mount Everest, known in Nepali as Sagarmāthā and in Tibetan as Chomolungma, is Earth 's highest mountain above sea level, located in the Mahalangur Himal sub-range of the Himalayas. The international border between China (Tibet Autonomous Region) and Nepal (Province No. 1) runs across its summit point.
The current official elevation of 8,848 m (29,029 ft), recognised by China and Nepal, was established by a 1955 Indian survey and subsequently confirmed by a Chinese survey in 1975. In 2005, China remeasured the rock height of the mountain, with a result of 8,844.43 m. There followed an argument between China and Nepal as to whether the official height should be the rock height (8,844 m, China) or the snow height (8,848 m, Nepal). In 2010, an agreement was reached by both sides that the height of Everest is 8,848 m, and Nepal recognises China's claim that the rock height of Everest is 8,844 m.
In 1865, Everest was given its official English name by the Royal Geographical Society, upon a recommendation by Andrew Waugh, the British Surveyor General of India. As there appeared to be several different local names, Waugh chose to name the mountain after his predecessor in the post, Sir George Everest, despite George Everest 's objections.
Mount Everest attracts many climbers, some of them highly experienced mountaineers. There are two main climbing routes, one approaching the summit from the southeast in Nepal (known as the "standard route '') and the other from the north in Tibet. While not posing substantial technical climbing challenges on the standard route, Everest presents dangers such as altitude sickness, weather, and wind, as well as significant hazards from avalanches and the Khumbu Icefall. As of 2017, nearly 300 people have died on Everest, many of whose bodies remain on the mountain.
The first recorded efforts to reach Everest 's summit were made by British mountaineers. As Nepal did not allow foreigners into the country at the time, the British made several attempts on the north ridge route from the Tibetan side. After the first reconnaissance expedition by the British in 1921 reached 7,000 m (22,970 ft) on the North Col, the 1922 expedition pushed the north ridge route up to 8,320 m (27,300 ft), marking the first time a human had climbed above 8,000 m (26,247 ft). Seven porters were killed in an avalanche on the descent from the North Col. The 1924 expedition resulted in one of the greatest mysteries on Everest to this day: George Mallory and Andrew Irvine made a final summit attempt on 8 June but never returned, sparking debate as to whether or not they were the first to reach the top. They had been spotted high on the mountain that day but disappeared in the clouds, never to be seen again, until Mallory 's body was found in 1999 at 8,155 m (26,755 ft) on the north face. Tenzing Norgay and Edmund Hillary made the first official ascent of Everest in 1953, using the southeast ridge route. Tenzing had reached 8,595 m (28,199 ft) the previous year as a member of the 1952 Swiss expedition. The Chinese mountaineering team of Wang Fuzhou, Gonpo, and Qu Yinhua made the first reported ascent of the peak from the north ridge on 25 May 1960.
The history of this area dates back to 800 BCE, when the ancient Kirati had their Kirata Kingdom in the Himalayan mountains. The Mahalangur range of the Himalaya is also known as Kirat area of eastern Nepal.
In 1715, the Qing Empire surveyed the mountain while mapping its territory and depicted it as Mount Qomolangma no later than 1719.
In 1802, the British began the Great Trigonometric Survey of India to fix the locations, heights, and names of the world 's highest mountains. Starting in southern India, the survey teams moved northward using giant theodolites, each weighing 500 kg (1,100 lb) and requiring 12 men to carry, to measure heights as accurately as possible. They reached the Himalayan foothills by the 1830s, but Nepal was unwilling to allow the British to enter the country due to suspicions of political aggression and possible annexation. Several requests by the surveyors to enter Nepal were turned down.
The British were forced to continue their observations from Terai, a region south of Nepal which is parallel to the Himalayas. Conditions in Terai were difficult because of torrential rains and malaria. Three survey officers died from malaria while two others had to retire because of failing health.
Nonetheless, in 1847, the British continued the survey and began detailed observations of the Himalayan peaks from observation stations up to 240 km (150 mi) distant. Weather restricted work to the last three months of the year. In November 1847, Andrew Waugh, the British Surveyor General of India made several observations from the Sawajpore station at the east end of the Himalayas. Kangchenjunga was then considered the highest peak in the world, and with interest he noted a peak beyond it, about 230 km (140 mi) away. John Armstrong, one of Waugh 's subordinates, also saw the peak from a site farther west and called it peak "b ''. Waugh would later write that the observations indicated that peak "b '' was higher than Kangchenjunga, but given the great distance of the observations, closer observations were required for verification. The following year, Waugh sent a survey official back to Terai to make closer observations of peak "b '', but clouds thwarted his attempts.
In 1849, Waugh dispatched James Nicolson to the area, who made two observations from Jirol, 190 km (120 mi) away. Nicolson then took the largest theodolite and headed east, obtaining over 30 observations from five different locations, with the closest being 174 km (108 mi) from the peak.
Nicolson retreated to Patna on the Ganges to perform the necessary calculations based on his observations. His raw data gave an average height of 9,200 m (30,200 ft) for peak "b '', but this did not consider light refraction, which distorts heights. However, the number clearly indicated that peak "b '' was higher than Kangchenjunga. Nicolson contracted malaria and was forced to return home without finishing his calculations. Michael Hennessy, one of Waugh 's assistants, had begun designating peaks based on Roman numerals, with Kangchenjunga named Peak IX. Peak "b '' now became known as Peak XV.
In 1852, stationed at the survey headquarters in Dehradun, Radhanath Sikdar, an Indian mathematician and surveyor from Bengal, was the first to identify Everest as the world 's highest peak, using trigonometric calculations based on Nicolson 's measurements. An official announcement that Peak XV was the highest was delayed for several years as the calculations were repeatedly verified. Waugh began work on Nicolson 's data in 1854, and along with his staff spent almost two years working on the numbers, having to deal with the problems of light refraction, barometric pressure, and temperature over the vast distances of the observations. Finally, in March 1856 he announced his findings in a letter to his deputy in Calcutta. Kangchenjunga was declared to be 8,582 m (28,156 ft), while Peak XV was given the height of 8,840 m (29,002 ft). Waugh concluded that Peak XV was "most probably the highest in the world ''. Peak XV (measured in feet) was calculated to be exactly 29,000 ft (8,839.2 m) high, but was publicly declared to be 29,002 ft (8,839.8 m) in order to avoid the impression that an exact height of 29,000 feet (8,839.2 m) was nothing more than a rounded estimate. Waugh is sometimes playfully credited with being "the first person to put two feet on top of Mount Everest ''.
While the survey wanted to preserve local names if possible (e.g., Kangchenjunga and Dhaulagiri), Waugh argued that he could not find any commonly used local name. Waugh 's search for a local name was hampered by Nepal and Tibet 's exclusion of foreigners. Many local names existed, including "Deodungha '' ("Holy Mountain '') in Darjeeling and the Tibetan "Chomolungma '', which appeared on a 1733 map published in Paris by the French geographer D'Anville. In the late 19th century, many European cartographers incorrectly believed that a native name for the mountain was Gaurishankar, a mountain between Kathmandu and Everest.
Waugh argued that because there were many local names, it would be difficult to favour one name over all others, so he decided that Peak XV should be named after Welsh surveyor Sir George Everest, his predecessor as Surveyor General of India. Everest himself opposed the name suggested by Waugh and told the Royal Geographical Society in 1857 that "Everest '' could not be written in Hindi nor pronounced by "the native of India ''. Waugh 's proposed name prevailed despite the objections, and in 1865, the Royal Geographical Society officially adopted Mount Everest as the name for the highest mountain in the world. The modern pronunciation of Everest (/ ˈɛvrɪst, ˈɛvər - /) is different from Sir George 's pronunciation of his surname (/ ˈiːvrɪst / EEV - rist).
The Tibetan name for Mount Everest is ཇོ་མོ་གླང་མ (IPA: t͡ɕhòmòlɑ́ŋmɑ̀; lit. "Holy Mother ''), whose official Tibetan pinyin form is Qomolangma. It is also popularly romanised as Chomolungma and (in Wylie) as Jo-mo-glang-ma or Jomo Langma. The official Chinese transcription is 珠穆朗玛峰 (traditional 珠穆朗瑪峰), whose pinyin form is Zhūmùlǎngmǎ Fēng. It is also infrequently simply translated into Chinese as Shèngmǔ Fēng (traditional 聖母峰, simplified 圣母峰, lit. "Holy Mother Peak ''). In 2002, the Chinese People's Daily newspaper published an article making a case against the use of "Mount Everest '' in English, insisting that the mountain should be referred to as Mount "Qomolangma '', based on the official form of the local Tibetan name. The article argued that British colonialists did not "first discover '' the mountain, as it had been known to the Tibetans and mapped by the Chinese as "Qomolangma '' since at least 1719.
In the early 1960s, the Nepalese government coined a Nepali name for Mount Everest, Sagarmāthā or Sagar-Matha (सागर-मथ्था).
The 8,848 m (29,029 ft) height given is officially recognised by Nepal and China, although Nepal plans a new survey.
In 1856, Andrew Waugh announced Everest (then known as Peak XV) as 8,840 m (29,002 ft) high, after several years of calculations based on observations made by the Great Trigonometric Survey.
The elevation of 8,848 m (29,029 ft) was first determined by an Indian survey in 1955, made closer to the mountain, also using theodolites. It was subsequently reaffirmed by a 1975 Chinese measurement of 8,848.13 m (29,029.30 ft). In both cases the snow cap, not the rock head, was measured. In May 1999 an American Everest Expedition, directed by Bradford Washburn, anchored a GPS unit into the highest bedrock. A rock head elevation of 8,850 m (29,035 ft), and a snow / ice elevation 1 m (3 ft) higher, were obtained via this device. Although it has not been officially recognised by Nepal, this figure is widely quoted. Geoid uncertainty casts doubt upon the accuracy claimed by both the 1999 and 2005 surveys.
A detailed photogrammetric map (at a scale of 1: 50,000) of the Khumbu region, including the south side of Mount Everest, was made by Erwin Schneider as part of the 1955 International Himalayan Expedition, which also attempted Lhotse. An even more detailed topographic map of the Everest area was made in the late - 1980s under the direction of Bradford Washburn, using extensive aerial photography.
On 9 October 2005, after several months of measurement and calculation, the Chinese Academy of Sciences and State Bureau of Surveying and Mapping officially announced the height of Everest as 8,844.43 m (29,017.16 ft) with an accuracy of ± 0.21 m (8.3 in). They claimed it was the most accurate and precise measurement to date. This height is based on the highest point of rock and not the snow and ice covering it. The Chinese team also measured a snow-ice depth of 3.5 m (11 ft), which is in agreement with a net elevation of 8,848 m (29,029 ft), since 8,844.43 m + 3.5 m ≈ 8,848 m. The snow and ice thickness varies over time, making a definitive height of the snow cap impossible to determine.
It is thought that the plate tectonics of the area are adding to the height and moving the summit northeastwards. Two accounts suggest the rates of change are 4 mm (0.16 in) per year (upwards) and 3 to 6 mm (0.12 to 0.24 in) per year (northeastwards), but another account mentions more lateral movement (27 mm or 1.1 in), and even shrinkage has been suggested.
The summit of Everest is the point at which earth 's surface reaches the greatest distance above sea level. Several other mountains are sometimes claimed to be the "tallest mountains on earth ''. Mauna Kea in Hawaii is tallest when measured from its base; it rises over 10,200 m (33,464.6 ft) when measured from its base on the mid-ocean floor, but only attains 4,205 m (13,796 ft) above sea level.
By the same measure of base to summit, Denali, in Alaska, also known as Mount McKinley, is taller than Everest as well. Despite its height above sea level of only 6,190 m (20,308 ft), Denali sits atop a sloping plain with elevations from 300 to 900 m (980 to 2,950 ft), yielding a height above base in the range of 5,300 to 5,900 m (17,400 to 19,400 ft); a commonly quoted figure is 5,600 m (18,400 ft). By comparison, reasonable base elevations for Everest range from 4,200 m (13,800 ft) on the south side to 5,200 m (17,100 ft) on the Tibetan Plateau, yielding a height above base in the range of 3,650 to 4,650 m (11,980 to 15,260 ft).
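The base-to-summit comparison is simple arithmetic, and a compact sketch makes the quoted ranges explicit. The figures below are the rough values cited in this section, not precise survey data, so the output should be read as an illustration of the reasoning rather than an authoritative measurement.

```python
# Height above base = summit elevation minus surrounding base elevation.
# All figures in metres, taken from the rough values quoted above.
peaks = {
    # name: (summit above sea level, (lowest base, highest base))
    "Everest": (8848, (4200, 5200)),
    "Denali": (6190, (300, 900)),
}

for name, (summit, (base_lo, base_hi)) in peaks.items():
    # A higher surrounding base gives a smaller rise, so the range inverts.
    rise_lo, rise_hi = summit - base_hi, summit - base_lo
    print(f"{name}: {rise_lo}-{rise_hi} m above base")
```

Run as-is, this reproduces the ranges quoted above: roughly 3,650 to 4,650 m for Everest and 5,300 to 5,900 m for Denali, which is why Denali is taller by the base-to-summit measure despite its lower summit elevation.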
The summit of Chimborazo in Ecuador is 2,168 m (7,113 ft) farther from earth 's centre (6,384.4 km (3,967.1 mi)) than that of Everest (6,382.3 km (3,965.8 mi)), because the earth bulges at the equator. This is despite Chimborazo having a peak 6,268 m (20,564.3 ft) above sea level versus Mount Everest 's 8,848 m (29,028.9 ft).
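The equatorial-bulge comparison can likewise be checked with a quick calculation on the WGS84 reference ellipsoid. The Python sketch below is illustrative only: it treats each summit elevation as a height above the ellipsoid and ignores local geoid undulation, so its results differ slightly from the figures quoted above, and the summit latitudes used are approximate.

```python
import math

# WGS84 reference ellipsoid (kilometres).
A = 6378.137               # equatorial radius
B = 6356.752               # polar radius
E2 = 1.0 - (B / A) ** 2    # first eccentricity squared

def geocentric_distance_km(lat_deg: float, height_km: float) -> float:
    """Distance from Earth's centre to a point at geodetic latitude
    lat_deg and height_km above the ellipsoid (geoid ignored)."""
    phi = math.radians(lat_deg)
    n = A / math.sqrt(1.0 - E2 * math.sin(phi) ** 2)  # prime vertical radius
    x = (n + height_km) * math.cos(phi)               # distance from spin axis
    z = (n * (1.0 - E2) + height_km) * math.sin(phi)  # distance from equatorial plane
    return math.hypot(x, z)

# Approximate summit latitudes; elevations above sea level as quoted above.
everest = geocentric_distance_km(27.988, 8.848)
chimborazo = geocentric_distance_km(-1.469, 6.268)
print(f"Everest:    {everest:.1f} km from Earth's centre")
print(f"Chimborazo: {chimborazo:.1f} km from Earth's centre")
print(f"Difference: {(chimborazo - everest) * 1000:.0f} m in Chimborazo's favour")
```

The sketch yields roughly 6,382 km for Everest and 6,384 km for Chimborazo, a difference of about 2 km in Chimborazo's favour; the remaining discrepancy from the quoted 2,168 m comes from the geoid and the approximate inputs.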
Geologists have subdivided the rocks comprising Mount Everest into three units called formations. Each formation is separated from the other by low - angle faults, called detachments, along which they have been thrust southward over each other. From the summit of Mount Everest to its base these rock units are the Qomolangma Formation, the North Col Formation, and the Rongbuk Formation.
The Qomolangma Formation, also known as the Jolmo Lungama Formation or the Everest Formation, runs from the summit to the top of the Yellow Band, about 8,600 m (28,200 ft) above sea level. It consists of greyish to dark grey or white, parallel laminated and bedded, Ordovician limestone inter layered with subordinate beds of recrystallised dolomite with argillaceous laminae and siltstone. Gansser first reported finding microscopic fragments of crinoids in this limestone. Later petrographic analysis of samples of the limestone from near the summit revealed them to be composed of carbonate pellets and finely fragmented remains of trilobites, crinoids, and ostracods. Other samples were so badly sheared and recrystallised that their original constituents could not be determined. A thick, white - weathering thrombolite bed that is 60 m (200 ft) thick comprises the foot of the "Third Step '', and base of the summit - pyramid of Everest. This bed, which crops out starting about 70 m (230 ft) below the summit of Mount Everest, consists of sediments trapped, bound, and cemented by the biofilms of micro-organisms, especially cyanobacteria, in shallow marine waters. The Qomolangma Formation is broken up by several high - angle faults that terminate at the low angle normal fault, the Qomolangma Detachment. This detachment separates it from the underlying Yellow Band. The lower five metres of the Qomolangma Formation overlying this detachment are very highly deformed.
The bulk of Mount Everest, between 7,000 and 8,600 m (23,000 and 28,200 ft), consists of the North Col Formation, of which the Yellow Band forms its upper part between 8,200 to 8,600 m (26,900 to 28,200 ft). The Yellow Band consists of intercalated beds of Middle Cambrian diopside - epidote - bearing marble, which weathers a distinctive yellowish brown, and muscovite - biotite phyllite and semischist. Petrographic analysis of marble collected from about 8,300 m (27,200 ft) found it to consist as much as five percent of the ghosts of recrystallised crinoid ossicles. The upper five metres of the Yellow Band lying adjacent to the Qomolangma Detachment is badly deformed. A 5 -- 40 cm (2.0 -- 15.7 in) thick fault breccia separates it from the overlying Qomolangma Formation.
The remainder of the North Col Formation, exposed between 7,000 to 8,200 m (23,000 to 26,900 ft) on Mount Everest, consists of interlayered and deformed schist, phyllite, and minor marble. Between 7,600 and 8,200 m (24,900 and 26,900 ft), the North Col Formation consists chiefly of biotite - quartz phyllite and chlorite - biotite phyllite intercalated with minor amounts of biotite - sericite - quartz schist. Between 7,000 and 7,600 m (23,000 and 24,900 ft), the lower part of the North Col Formation consists of biotite - quartz schist intercalated with epidote - quartz schist, biotite - calcite - quartz schist, and thin layers of quartzose marble. These metamorphic rocks appear to be the result of the metamorphism of Middle to Early Cambrian deep sea flysch composed of interbedded, mudstone, shale, clayey sandstone, calcareous sandstone, graywacke, and sandy limestone. The base of the North Col Formation is a regional low - angle normal fault called the "Lhotse detachment ''.
Below 7,000 m (23,000 ft), the Rongbuk Formation underlies the North Col Formation and forms the base of Mount Everest. It consists of sillimanite - K - feldspar grade schist and gneiss intruded by numerous sills and dikes of leucogranite ranging in thickness from 1 cm to 1,500 m (0.4 in to 4,900 ft). These leucogranites are part of a belt of Late Oligocene -- Miocene intrusive rocks known as the Higher Himalayan leucogranite. They formed as the result of partial melting of Paleoproterozoic to Ordovician high - grade metasedimentary rocks of the Higher Himalayan Sequence about 20 to 24 million years ago during the subduction of the Indian Plate.
Mount Everest consists of sedimentary and metamorphic rocks that have been faulted southward over continental crust composed of Archean granulites of the Indian Plate during the Cenozoic collision of India with Asia. Current interpretations argue that the Qomolangma and North Col formations consist of marine sediments that accumulated within the continental shelf of the northern passive continental margin of India before it collided with Asia. The Cenozoic collision of India with Asia subsequently deformed and metamorphosed these strata as it thrust them southward and upward. The Rongbuk Formation consists of a sequence of high - grade metamorphic and granitic rocks that were derived from the alteration of high - grade metasedimentary rocks. During the collision of India with Asia, these rocks were thrust downward and to the north as they were overridden by other strata; heated, metamorphosed, and partially melted at depths of over 15 to 20 kilometres (9.3 to 12.4 mi) below sea level; and then forced upward to surface by thrusting towards the south between two major detachments. The Himalayas are rising by about 5 mm per year.
There is very little native flora or fauna on Everest. A moss grows at 6,480 metres (21,260 ft) on Mount Everest. It may be the highest altitude plant species. An alpine cushion plant called Arenaria is known to grow below 5,500 metres (18,000 ft) in the region.
Euophrys omnisuperstes, a minute black jumping spider, has been found at elevations as high as 6,700 metres (22,000 ft), possibly making it the highest confirmed non-microscopic permanent resident on Earth. It lurks in crevices and may feed on frozen insects that have been blown there by the wind. There is a high likelihood of microscopic life at even higher altitudes.
Birds, such as the bar - headed goose, have been seen flying at the higher altitudes of the mountain, while others, such as the chough, have been spotted as high as the South Col at 7,920 metres (25,980 ft). Yellow - billed choughs have been seen as high as 7,900 metres (26,000 ft) and bar - headed geese migrate over the Himalayas. In 1953, George Lowe (part of the expedition of Tenzing and Hillary) said that he saw bar - headed geese flying over Everest 's summit.
Yaks are often used to haul gear for Mount Everest climbs. They can haul 100 kg (220 pounds), have thick fur and large lungs. One common piece of advice for those in the Everest region is to be on higher ground when around yaks and other animals, as they can knock people off the mountain if standing on the downhill edge of a trail. Other animals in the region include the Himalayan tahr which is sometimes eaten by the snow leopard. The Himalayan black bear can be found up to about 4,300 metres (14,000 ft) and the red panda is also present in the region. One expedition found a surprising range of species in the region including a pika and ten new species of ants.
In 2008, a new weather station at about 8,000 m (26,246 ft) altitude went online. The station's first data in May 2008 were: air temperature −17 °C (1 °F), relative humidity 41.3 percent, atmospheric pressure 382.1 hPa (38.21 kPa), wind direction 262.8°, wind speed 12.8 m/s (28.6 mph, 46.1 km/h), global solar radiation 711.9 W/m², and solar UVA radiation 30.4 W/m². The project was orchestrated by Stations at High Altitude for Research on the Environment (SHARE), which also placed the Mount Everest webcam in 2011. The solar-powered weather station is on the South Col.
One of the issues facing climbers is the frequent presence of high-speed winds. The peak of Mount Everest extends into the upper troposphere and penetrates the stratosphere, which can expose it to the fast and freezing winds of the jet stream. In February 2004 a wind speed of 280 km/h (175 mph) was recorded at the summit, and winds over 160 km/h (100 mph) are common. These winds can blow climbers off Everest. Climbers typically aim for a 7- to 10-day window in the spring and fall when the Asian monsoon season is either starting up or ending and the winds are lighter. The air pressure at the summit is about one-third what it is at sea level, and by Bernoulli's principle, the winds can lower the pressure further, causing an additional 14 percent reduction in oxygen to climbers. The reduction in oxygen availability comes from the reduced overall pressure, not a reduction in the ratio of oxygen to other gases.
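The "about one-third" figure can be sanity-checked with the International Standard Atmosphere (ISA) troposphere model. The sketch below is a rough estimate under ISA assumptions only; actual summit pressure varies considerably with season and weather, as the wind effects described above suggest.

```python
import math

# International Standard Atmosphere (ISA) troposphere constants.
P0 = 101_325.0    # sea-level standard pressure, Pa
T0 = 288.15       # sea-level standard temperature, K
L = 0.0065        # temperature lapse rate, K/m
G = 9.80665       # standard gravity, m/s^2
M = 0.0289644     # molar mass of dry air, kg/mol
R = 8.31446       # universal gas constant, J/(mol*K)

def isa_pressure_pa(height_m: float) -> float:
    """Pressure at height_m under the ISA troposphere model
    (valid below ~11 km, so it covers Everest's summit)."""
    return P0 * (1.0 - L * height_m / T0) ** (G * M / (R * L))

summit = isa_pressure_pa(8848)
print(f"Estimated summit pressure: {summit / 100:.0f} hPa "
      f"({summit / P0:.0%} of sea level)")
```

The model gives roughly 314 hPa at 8,848 m, about 31 percent of sea-level pressure, consistent with the one-third figure quoted above and of the same order as the 382.1 hPa reading from the lower-altitude weather station mentioned earlier.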
In the summer, the Indian monsoon brings warm wet air from the Indian Ocean to Everest 's south side. During the winter the west - southwest flowing jet stream shifts south and blows on the peak.
Because Mount Everest is the highest mountain in the world, it has attracted considerable attention and climbing attempts. A set of climbing routes has been established over several decades of climbing expeditions to the mountain. Whether the mountain was climbed in ancient times is unknown. It may have been climbed in 1924.
Everest's first confirmed summiting occurred in 1953, and interest from climbers increased thereafter. Despite the effort and attention poured into expeditions, only about 200 people had summited by 1987. Everest remained a difficult climb for decades, even for serious attempts by professional climbers and large national expeditions, which were the norm until the commercial era began in the 1990s.
By March 2012, Everest had been climbed 5,656 times with 223 deaths. Although lower mountains have longer or steeper climbs, Everest is so high the jet stream can hit it. Climbers can be faced with winds beyond 320 km / h (200 mph) when the weather shifts. At certain times of the year the jet stream shifts north, providing periods of relative calm at the mountain. Other dangers include blizzards and avalanches.
By 2013, The Himalayan Database recorded 6,871 summits by 4,042 different people.
In 1885, Clinton Thomas Dent, president of the Alpine Club, suggested that climbing Mount Everest was possible in his book Above the Snow Line.
The northern approach to the mountain was discovered by George Mallory and Guy Bullock on the initial 1921 British Reconnaissance Expedition. It was an exploratory expedition not equipped for a serious attempt to climb the mountain. With Mallory leading (and thus becoming the first European to set foot on Everest 's flanks) they climbed the North Col to an altitude of 7,005 metres (22,982 ft). From there, Mallory espied a route to the top, but the party was unprepared for the great task of climbing any further and descended.
The British returned for a 1922 expedition. George Finch climbed using oxygen for the first time. He ascended at a remarkable speed -- 290 metres (951 ft) per hour -- and reached an altitude of 8,320 m (27,300 ft), the first time a human was reported to have climbed higher than 8,000 m. Mallory and Col. Edward Felix Norton made a second unsuccessful attempt. Mallory was faulted for leading a group down from the North Col which got caught in an avalanche. Mallory was pulled down too, but survived. Seven native porters were killed.
The next expedition was in 1924. The initial attempt by Mallory and Geoffrey Bruce was aborted when weather conditions prevented the establishment of Camp VI. The next attempt was that of Norton and Somervell, who climbed without oxygen and in perfect weather, traversing the North Face into the Great Couloir. Norton managed to reach 8,550 m (28,050 ft), though he ascended only 30 m (98 ft) or so in the last hour. Mallory rustled up oxygen equipment for a last - ditch effort. He chose young Andrew Irvine as his partner.
On 8 June 1924, George Mallory and Andrew Irvine made an attempt on the summit via the North Col - North Ridge - Northeast Ridge route from which they never returned. On 1 May 1999, the Mallory and Irvine Research Expedition found Mallory 's body on the North Face in a snow basin below and to the west of the traditional site of Camp VI. Controversy has raged in the mountaineering community over whether one or both of them reached the summit 29 years before the confirmed ascent and safe descent of Everest by Sir Edmund Hillary and Tenzing Norgay in 1953.
In 1933, Lady Houston, a British millionairess, funded the Houston Everest Flight of 1933, which saw a formation of aircraft led by the Marquess of Clydesdale fly over the summit in an effort to deploy the British Union Flag at the top.
Early expeditions -- such as General Charles Bruce 's in the 1920s and Hugh Ruttledge 's two unsuccessful attempts in 1933 and 1936 -- tried to ascend the mountain from Tibet, via the North Face. Access was closed from the north to Western expeditions in 1950, after China took control of Tibet. In 1950, Bill Tilman and a small party which included Charles Houston, Oscar Houston, and Betsy Cowles undertook an exploratory expedition to Everest through Nepal along the route which has now become the standard approach to Everest from the south.
The Swiss Expedition of 1952, led by Edouard Wyss - Dunant, was granted permission to attempt a climb from Nepal. The expedition established a route through the Khumbu icefall and ascended to the South Col at an elevation of 7,986 m (26,201 ft). Raymond Lambert and Sherpa Tenzing Norgay were able to reach an elevation of about 8,595 m (28,199 ft) on the southeast ridge, setting a new climbing altitude record, though the summit remained out of reach. Tenzing 's experience proved useful when he was hired to be part of the British expedition in 1953.
In 1953, a ninth British expedition, led by John Hunt, returned to Nepal. Hunt selected two climbing pairs to attempt to reach the summit. The first pair, Tom Bourdillon and Charles Evans, came within 100 m (330 ft) of the summit on 26 May 1953, but turned back after running into oxygen problems. As planned, their work in route finding and breaking trail and their oxygen caches were of great aid to the following pair. Two days later, the expedition made its second and final assault on the summit with its second climbing pair, the New Zealander Edmund Hillary and Tenzing Norgay, a Nepali Sherpa climber from Darjeeling, India. They reached the summit at 11: 30 local time on 29 May 1953 via the South Col route. At the time, both acknowledged it as a team effort by the whole expedition, but Tenzing revealed a few years later that Hillary had put his foot on the summit first. They paused at the summit to take photographs and buried a few sweets and a small cross in the snow before descending.
News of the expedition 's success reached London on the morning of Queen Elizabeth II 's coronation, 2 June. Returning to Kathmandu a few days later, Hunt (a Briton) and Hillary (a New Zealander) discovered that they had been promptly knighted in the Order of the British Empire for the ascent. Tenzing, a Nepali Sherpa who was a citizen of India, was granted the George Medal by the UK. Hunt was ultimately made a life peer in Britain, while Hillary became a founding member of the Order of New Zealand. Hillary and Tenzing are also recognised in Nepal, where annual ceremonies in schools and offices celebrate their accomplishment.
The next successful ascent was on 23 May 1956 by Ernst Schmied and Juerg Marmet. This was followed by Dölf Reist and Hans - Rudolf von Gunten on 24 May 1957. Wang Fuzhou, Gonpo and Qu Yinhua of China made the first reported ascent of the peak from the North Ridge on 25 May 1960. The first American to climb Everest, Jim Whittaker, joined by Nawang Gombu, reached the summit on 1 May 1963.
In 1970 Japanese mountaineers conducted a major expedition. The centrepiece was a large "siege '' - style expedition led by Saburo Matsukata, working on finding a new route up the southwest face. Another element of the expedition was an attempt to ski Mount Everest. Despite a staff of over one hundred people and a decade of planning work, the expedition suffered eight deaths and failed to summit via the planned routes. However, Japanese expeditions did enjoy some successes. For example, Yuichiro Miura became the first man to ski down Everest from the South Col; he descended nearly 4,200 vertical feet from the South Col before falling and suffering serious injuries. Another success was an expedition that put four climbers on the summit via the South Col route. Miura 's exploits became the subject of a film, and he went on to become the oldest person to summit Mount Everest, in 2003 at age 70 and again in 2013 at the age of 80.
In 1975, Junko Tabei, a Japanese woman, became the first woman to summit Mount Everest.
The Polish climber Andrzej Zawada headed the first winter ascent of Mount Everest, the first winter ascent of an eight - thousander. The team of 20 Polish climbers and 4 Sherpas established a base camp on the Khumbu Glacier in early January 1980. On 15 January, the team managed to set up Camp III at 7,150 metres above sea level, but further progress was stopped by hurricane - force winds. The weather improved after 11 February, when Leszek Cichy, Walenty Fiut and Krzysztof Wielicki set up Camp IV on the South Col (7,906 m). Cichy and Wielicki started the final ascent at 6: 50 AM on 17 February. At 2: 40 PM Andrzej Zawada at base camp heard the climbers ' voices over the radio: "We are on the summit! Strong wind blows all the time. It is unimaginably cold. '' The successful winter ascent of Mount Everest started a new decade of winter Himalaism, which became a Polish specialisation. After 1980 Poles achieved ten first winter ascents of 8,000 - metre peaks, which earned Polish climbers the reputation of "Ice Warriors ''.
On 11 May 1996 eight climbers died after several expeditions were caught in a blizzard high up on the mountain. During the 1996 season, 15 people died while climbing on Mount Everest. These were the highest death tolls for a single event, and for a single season, until the sixteen deaths in the 2014 Mount Everest avalanche. The disaster gained wide publicity and raised questions about the commercialisation of climbing Mount Everest.
Journalist Jon Krakauer, on assignment from Outside magazine, was in one of the affected parties, and afterwards published the bestseller Into Thin Air, which related his experience. Anatoli Boukreev, a guide who felt impugned by Krakauer 's book, co-authored a rebuttal book called The Climb. The dispute sparked a debate within the climbing community.
In May 2004, Kent Moore, a physicist, and John L. Semple, a surgeon, both researchers from the University of Toronto, told New Scientist magazine that an analysis of weather conditions on 11 May suggested that freak weather caused oxygen levels to plunge approximately 14 percent.
One of the survivors was Beck Weathers, an American client of New Zealand - based guide service Adventure Consultants. Weathers was left for dead about 275 metres (900 feet) from Camp 4 at 7,950 metres (26,085 feet). After spending a night on the mountain, Weathers managed to find his way to Camp 4 with massive frostbite and vision impaired due to snow blindness. When he arrived at Camp 4, fellow climbers considered his condition terminal and left him in a tent to die overnight.
Before leaving Camp 4, Jon Krakauer heard Weathers calling for help from his tent. Weathers ' condition had not improved and an immediate descent to a lower elevation was deemed essential. A helicopter rescue was initially out of the question: Camp 4 was higher than the rated ceiling of any available helicopter, and a rescue flight would in any case have been extraordinarily dangerous. Eventually a rescue was organised thanks to a lieutenant colonel of the Nepalese Army, who conducted the second - highest known helicopter medical evacuation up to that time.
The storm 's impact on climbers on the North Ridge of Everest, where several climbers also died, was detailed in a first - hand account by British filmmaker and writer Matt Dickinson in his book The Other Side of Everest. 16 - year - old Mark Pfetzer was on the climb and wrote about it in his account, Within Reach: My Everest Story.
The 2015 feature film Everest, directed by Baltasar Kormákur, is based on the events of this disaster.
In 2006, twelve people died. One death in particular (see below) triggered an international debate and years of discussion about climbing ethics. The season was also remembered for the rescue of Lincoln Hall, who had been left by his climbing team and declared dead, but was later discovered alive and helped off the mountain.
There was an international controversy about the death of solo British climber David Sharp, who attempted to climb Mount Everest in 2006 but died in the attempt. The story broke out of the mountaineering community into popular media, with a snowballing series of interviews, allegations, critiques, and peace - making. The question was whether climbers that season had left a man to die, and whether he could have been saved. Sharp was said to have attempted the summit by himself, with no Sherpa or guide and fewer oxygen bottles than considered normal. He went with a low - budget Nepali guide firm that only provides support up to Base Camp, after which climbers go as a "loose group '', offering a high degree of independence. The manager of Sharp 's guide firm said Sharp did not take enough oxygen for his summit attempt and did not have a Sherpa guide. It is less clear who knew Sharp was in trouble, and whether those who did know were qualified or capable of helping him.
Double - amputee climber Mark Inglis said in an interview with the press on 23 May 2006 that his climbing party, and many others, had passed Sharp on 15 May, sheltering under a rock overhang 450 metres (1,480 ft) below the summit, without attempting a rescue. Inglis said 40 people had passed by Sharp, who might have been overlooked because climbers assumed he was the corpse nicknamed "Green Boots ''; Inglis was also unaware that Turkish climbers had tried to help Sharp even while in the process of helping an injured woman down (a Turkish climber named Burçak Poçan). There has also been some discussion about Himex in the commentary on Inglis and Sharp. Inglis later revised certain details of his initial comments, because he had been interviewed while he was "... physically and mentally exhausted, and in a lot of pain. He had suffered severe frostbite -- he later had five fingertips amputated. '' When Sharp 's possessions were examined, a receipt for $7,490 was found, believed to cover the whole cost of his climb; by comparison, most expeditions cost between $35,000 and $100,000, plus an additional $20,000 in other expenses ranging from gear to bonuses. It is estimated that Sharp summited Mount Everest on 14 May and began his descent, but by 15 May he was in trouble, being passed by climbers on their way up and down. On 15 May 2006 he is believed to have been suffering from hypoxia, about 1,000 feet from the summit on the north - side route.
"Dawa from Arun Treks also gave oxygen to David and tried to help him move, repeatedly, for perhaps an hour. But he could not get David to stand alone or even stand resting on his shoulders, and crying, Dawa had to leave him too. Even with two Sherpas it was not going to be possible to get David down the tricky sections below. ''
Some climbers who left him said that the rescue efforts would have been useless and only have caused more deaths. Beck Weathers of the 1996 Mount Everest disaster said that those who are dying are often left behind, and that he himself had been left for dead twice but was able to keep walking. The Tribune of India quoted someone who described what happened to Sharp as "the most shameful act in the history of mountaineering ''. In addition to Sharp 's death, at least nine other climbers perished that year, including multiple Sherpas working for various guiding companies.
"You are never on your own. There are climbers everywhere. ''
Much of this controversy was captured by the Discovery Channel while filming the television program Everest: Beyond the Limit. A crucial decision affecting the fate of Sharp is shown in the program, where an early - returning climber, Lebanese adventurer Maxim Chaya, is descending from the summit and radios to his base camp manager (Russell Brice) that he has found a frostbitten and unconscious climber in distress. Chaya is unable to identify Sharp, who had chosen to climb solo without any support and so did not identify himself to other climbers. The base camp manager assumes that Sharp is part of a group that has already calculated that they must abandon him, and informs the lone climber that there is no chance of him being able to help Sharp by himself. As Sharp 's condition deteriorates through the day and other descending climbers pass him, his opportunities for rescue diminish: his legs and feet curl from frostbite, preventing him from walking; the later descending climbers are lower on oxygen and lack the strength to offer aid; and time runs out for any Sherpas to return and rescue him.
David Sharp 's body remained just below the summit on the Chinese side next to "Green Boots ''; they shared a space in a small rock cave that was an ad hoc tomb for them. Sharp 's body was removed from the cave in 2007, according to the BBC, and since 2014, Green Boots has been missing, presumably removed or buried.
As the Sharp debate kicked off, on 26 May 2006 Australian climber Lincoln Hall was found alive, after being left for dead the day before. His team had assumed he died of cerebral edema and were instructed to cover him with rocks; there were no rocks around to do this, and he was abandoned. The erroneous news of his death was passed on to his family. The next day he was found by a party of four climbers (Dan Mazur, Andrew Brash, Myles Osborne and Jangbu Sherpa) who, giving up their own summit attempt, stayed with Hall and descended with him and a party of 11 Sherpas sent up to carry him down. Hall later fully recovered.
I was shocked to see a guy without gloves, hat, oxygen bottles or sleeping bag at sunrise at 28,200 - feet height, just sitting up there.
Lincoln greeted his fellow mountaineers with this:
I imagine you are surprised to see me here.
Lincoln Hall went on to live for several more years, often giving talks about his near - death experience and rescue, before dying from medical issues in 2012 at the age of 56 (born in 1955).
Similar heroic rescue actions have been recorded since Hall, including on 21 May 2007, when Canadian climber Meagan McGrath initiated the successful high - altitude rescue of Nepali Usha Bista. Recognising her heroic rescue, Major Meagan McGrath was selected as a 2011 recipient of the Sir Edmund Hillary Foundation of Canada Humanitarian Award, which recognises a Canadian who has personally or administratively contributed a significant service or act in the Himalayan Region of Nepal.
By the end of the 2010 climbing season, there had been 5,104 ascents to the summit by about 3,142 individuals, with 77 % of these ascents being accomplished since 2000. The summit was achieved in 7 of the 22 years from 1953 to 1974, and was not missed between 1975 and 2014. In 2007, the record number of 633 ascents was recorded, by 350 climbers and 253 sherpas.
A remarkable illustration of the explosion of popularity of Everest is provided by the numbers of daily ascents. Analysis of the 1996 Mount Everest disaster shows that part of the blame was on the bottleneck caused by the large number of climbers (33 to 36) attempting to summit on the same day; this was considered unusually high at the time. By comparison, on 23 May 2010, the summit of Mount Everest was reached by 169 climbers -- more summits in a single day than in the cumulative 31 years from the first successful summit in 1953 through 1983.
There have been 219 fatalities recorded on Mount Everest from the 1922 British Mount Everest Expedition through the end of 2010, a rate of 4.3 fatalities for every 100 summits (this is a general rate, and includes fatalities amongst support climbers, those who turned back before the peak, those who died en route to the peak and those who died while descending from the peak). Of the 219 fatalities, 58 (26.5 %) were climbers who had summited but did not complete their descent. Though the rate of fatalities has decreased since the year 2000 (1.4 fatalities for every 100 summits, with 3938 summits since 2000), the significant increase in the total number of climbers still means 54 fatalities since 2000: 33 on the northeast ridge, 17 on the southeast ridge, 2 on the southwest face, and 2 on the north face.
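The quoted rates follow directly from the counts given above; a minimal sketch, using only figures stated in this article (219 deaths, 5,104 ascents through 2010, and the post - 2000 route - by - route death counts), reproduces them:

```python
# Reproducing the fatality-rate arithmetic quoted above (all figures
# are taken from the surrounding text, not recomputed from new data).
total_deaths = 219                    # fatalities, 1922 through end of 2010
total_summits = 5104                  # ascents through end of 2010
deaths_since_2000 = 33 + 17 + 2 + 2   # by route, as listed: 54
summits_since_2000 = 3938

rate_overall = 100 * total_deaths / total_summits
rate_recent = 100 * deaths_since_2000 / summits_since_2000
print(f"Overall: {rate_overall:.1f} deaths per 100 summits")    # ~4.3
print(f"Since 2000: {rate_recent:.1f} deaths per 100 summits")  # ~1.4
```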
Nearly all attempts at the summit are done using one of the two main routes. The traffic seen by each route varies from year to year. In 2005 -- 07, more than half of all climbers elected to use the more challenging, but cheaper northeast route. In 2008, the northeast route was closed by the Chinese government for the entire climbing season, and the only people able to reach the summit from the north that year were athletes responsible for carrying the Olympic torch for the 2008 Summer Olympics. The route was closed to foreigners once again in 2009 in the run - up to the 50th anniversary of the Dalai Lama 's exile. These closures led to declining interest in the north route, and, in 2010, two - thirds of the climbers reached the summit from the south.
On 18 April 2014, an avalanche hit the area just below Camp 1 at around 01: 00 UTC (06: 30 local time), at an elevation of about 5,900 metres (19,400 ft). Sixteen people, all Nepalese guides, were killed in the avalanche, and nine more were injured. This was not the only tragedy in the region that year: over 43 people were killed in the 2014 Nepal snowstorm disaster, most of them not summiting but trekking the Annapurna Circuit.
One positive outcome of the season was a 13 - year - old girl, Malavath Purna, reaching the summit, breaking the record for the youngest female summiter. Additionally, one team used a helicopter to fly from south base camp to Camp 2 to avoid the Khumbu Icefall, then reached the Everest summit. This team had to use the south side because the Chinese had denied them a permit to climb. Its leader, the Chinese climber Jing Wang, donated tens of thousands of dollars to local hospitals, turning Chinese reluctance into a success for Nepal and demonstrating a new hybrid aviation - mountaineering technique; she was named the Nepalese "International Mountaineer of the Year ''.
Over 100 people summited Everest from China (the Tibet side), and six from Nepal, in the 2014 season. These included 72 - year - old Bill Burke, the Indian teenage girl Malavath Purna, and the Chinese woman Jing Wang. Another teenage girl summiter was Ming Kipa Sherpa, who had summited with her elder sister Lhakpa Sherpa in 2003, when Lhakpa held the record for the most Everest summits by a woman. (see also Santosh Yadav)
2015 was set to be a record - breaking season of climbs, with hundreds of permits issued in Nepal and many additional permits in Tibet (China). However, a magnitude 7.8 earthquake on 25 April 2015 effectively shut down the Everest climbing season. 2015 was the first time since 1974 with no spring summits, as all climbing teams pulled out after the quakes and avalanche.
One of the reasons for this was the high probability of aftershocks (over 50 percent according to the USGS). Just weeks after the first quake, the region was rattled again by a 7.3 magnitude quake and there were also many considerable aftershocks.
On 25 April 2015, an earthquake measuring 7.8 M triggered an avalanche that hit Everest Base Camp. Eighteen bodies were recovered from Mount Everest by the Indian Army mountaineering team. The avalanche began on Pumori, moved through the Khumbu Icefall on the southwest side of Mount Everest, and slammed into the South Base Camp.
The quakes trapped hundreds of climbers above the Khumbu icefall, and they had to be evacuated by helicopter as they ran low on supplies. The quake shifted the route through the icefall, making it essentially impassable to climbers. Bad weather also made helicopter evacuation difficult. The Everest tragedy was small compared to the overall impact on Nepal, with almost nine thousand dead and about 22,000 injured. In Tibet, by 28 April at least 25 had died, and 117 were injured. By 29 April 2015, the Tibet Mountaineering Association (North / Chinese side) closed Everest and other peaks to climbing, stranding 25 teams and about 300 people on the north side of Everest. On the south side, helicopters evacuated 180 people trapped at Camps 1 and 2.
On 24 August 2015 Nepal re-opened Everest to tourism, including mountain climbers. The only climbing permit for the autumn season was awarded to Japanese climber Nobukazu Kuriki, who had tried four times previously to summit Everest without success. He made his fifth attempt in October, but had to give up just 700 m (2,300 ft) from the summit due to "strong winds and deep snow ''. Kuriki noted the dangers of climbing Everest, having himself survived being stuck in a freezing snow hole for two days near the top, at the cost of all his fingertips and his thumb, lost to frostbite, which added further difficulty to his climb.
Some sections of the trail from Lukla to Everest Base Camp (Nepal) were damaged in the earthquakes earlier in the year and needed repairs to handle trekkers.
The Nepal Department of Tourism said by June 2016 that about 456 people had made it to the summit of Mount Everest, including 45 women. They noted some good summit windows; on one day, 19 May 2016, 209 climbers made it to the summit. By 11 May 2016 the lines were fixed on the south side of Everest, after which several hundred climbers made it up in the critical weather windows. Alan Arnette published his Everest report by year end, based on results from the now 93 - year - old record keeper Elizabeth Hawley, released in December 2016; her records indicate 641 people made it to the summit in early 2016.
On 11 May 2016, nine Sherpas summited Mount Everest. The next day another six persons reached the top. These were the first summitings since 2014, when 106 made it to the top. By 13 May, 42 climbers had reached the summit, and by 22 May, good weather had allowed over 400 climbers to reach the summit. However, about 30 climbers developed frostbite or became sick, and two climbers died from what was reported as possible altitude sickness. Among those who had to turn back was a science expedition attempting to study the link between hypoxia and cognitive decline. Although it did not run its course, the expedition did yield some clues about the effects of high - altitude acclimatisation on human blood.
Adrian Ballinger and Cory Richards were sponsored by Eddie Bauer to climb Everest, and they relayed information from the climb using the smartphone application Snapchat. Mount Everest has had a 3G wireless data communication network since 2010. Among other things, they reported that bottled oxygen, which they carried as emergency back - up in case they ran into trouble, was stolen from them, along with other bad behaviour on the mountain. Richards summited Mount Everest without oxygen and returned safely, and Ballinger made it almost to the top. Another famous mountaineer, British climber Kenton Cool, achieved his 12th Everest summit (the second - highest number of Everest summits for a foreigner, after Dave Hahn), and US celebrity mountaineer Melissa Arnot completed her sixth summit, achieving her personal goal of climbing Everest without supplementary bottled oxygen. This was the most summits by a non - Nepali, non - Chinese woman, and she became one of the first US women to summit and survive without supplementary oxygen.
In 1998, Francys Arsentiev had made it to the summit but died during the descent; her body became a well - known landmark nicknamed "Sleeping Beauty '' until she was buried on Everest in 2007 by one of the people who had tried to help her. Another woman from the Americas, the Ecuadorian climber Carla Perez, also summited Mount Everest in 2016 without supplementary oxygen. Perez and Arnot became the fifth and sixth women to summit Everest without supplementary oxygen. There is an ongoing discussion about the use of extra bottled oxygen in mountaineering. Also at issue is dexamethasone (Dex), which is valuable as a lifesaver because it reduces swelling in the brain if someone comes down with high - altitude cerebral edema (HACE). When American Bill Burke was interviewed about his attempt, he noted how one of his team members had overdosed on Dex, prompting a medical evacuation, while on his more recent expedition someone had carried 25 doses of Dex. He also noted it was hard to argue against large supplies of Dex, due to its life - saving properties against some types of altitude sickness, especially HACE.
An example of a death in which Dex was implicated was that of Dr. Eberhard Schaaf on Everest in 2012; Schaaf died on descent at the south summit from altitude sickness. Dex has a good reputation as a life saver and is commonly given to Everest climbers for its ability to intervene in the last desperate moments when altitude sickness sets in. For example, in the 2016 season Robert Gropel said he gave Dex to his wife (as reported by the Daily Telegraph) in an attempt to save her as they tried to descend Everest. Dex is just the tip of the iceberg: the UIAA has noted the use not only of dexamethasone but also of acetazolamide (Diamox), amphetamines, and alcohol, and another report noted Diamox use among trekkers. It is not really a matter of authorities being for or against medications, but of awareness, as misuse can cause drug interactions and various side effects. In particular, it has been noted that supplementary oxygen significantly lowers the death rate in ultra-high - altitude mountaineering and is generally not regulated as a drug, whereas the safe use of medications is less understood or even acknowledged in many cases. (see also: Effects of high altitude on humans)
"We estimate that during our informal survey on Everest spring 2012, at least two - thirds of climbers we contacted were prescribed several performance enhancing drugs (PEDs) and had intent to use them not for rescue, but to increase their chances of summit success ''
Mexican - American David Liaño Gonzalez (aka David Liano) summited for the sixth time, promoting a charity and carrying a Seattle Seahawks flag with him to the Everest summit. Another sports team represented at the Everest summit was the Denver Broncos, whose flag was unfurled by Kim Hess. Rounding out US mountaineering was news that a group of soldiers and veterans summited, including some who had been wounded in combat. A wounded British veteran, blind in one eye, was also trying to summit but gave up his bid to help some Indian climbers.
In 2016 the first climbers from Sri Lanka, Myanmar, and Tunisia reached the summit of Mount Everest. Only two other people from North Africa have summited Everest, one from Algeria and the other from Morocco. Alyssa Azar became the youngest Australian woman to summit Mount Everest. She returned to Australia safely, but it was a bittersweet achievement for Australia after the loss of another Australian woman who had also been trying to summit that May with her husband. The youngest Japanese woman also summited (and returned alive), at the age of 19. Another record - breaking woman in 2016 was the first woman from Thailand to summit Mount Everest, Napassaporn Chumnarnsit, who was granted an audience with the Prime Minister of Thailand for her achievement. The first person with cystic fibrosis also summited Mount Everest, on his third try. Also, a 61 - year - old summited with two artificial knees; he had been trying for several years and had lost his Nepali friend Sherpa Nawang Tenzing in the 2015 earthquakes. He was not alone in being grief - stricken, as many climbers connected with the Everest mountaineering community had lost climbing companions in two years of disasters. One who had narrowly survived the disasters himself climbed this year to bring attention to Lewy body dementia (DLB), the disease which afflicted his father.
A one - eyed British war veteran, Leslie Binns, rescued a woman from India who was in trouble on her descent, and tried to save another member of her ill - fated eleven - person expedition, which suffered three fatalities. About 500 metres from the summit, which he could see with his one eye, Binns heard a woman screaming for help, so he gave up his summit bid to help her down. She had run out of bottled oxygen and was getting frostbitten.
On 11 May 2016 a Calgary physician died in Tibet, at the Chinese - side base camp. A 25 - year - old Nepali named Phurba Sherpa, fixing lines near the Lhotse summit, fell to his death. A guide company, Arnold Coster Expeditions, suffered two fatalities, and a third client had to be airlifted out. One was a man from the Netherlands, and another was a South African - born Australian woman. Her husband had tried to save her, but he also ran into trouble and had to be airlifted out with medical complications. These deaths were very widely reported in international news and triggered public discussion about Everest mountaineering and tourism.
An Indian expedition from West Bengal suffered a great tragedy, with three fatalities in the single expedition, while a fourth member, the mother of an 11 - year - old, had to be rescued on her way down. At first it was reported that one had died and two were missing, but the other two were later located and had not survived. One British climber gave up his summit bid to help a Bengali woman who had fallen and was ailing on her descent; she was evacuated by the Himalayan Rescue Association and airlifted to Kathmandu with bad frostbite injuries. They were part of an eleven - member expedition from India, eight of whom had reached the summit, including the injured woman.
By late May 2016 most reports put the death toll for Everest climbers at five; a further death of a high - altitude worker on the Lhotse face during the season (Everest summiters sometimes need to climb the Lhotse face, depending on the route) gives a total of six known deaths on the Everest massif by the time the season drew to a close. Although not widely reported during May, a climber in Tibet had died on 11 May 2016, which makes it possibly six for Mount Everest and seven for the Everest massif. The Nepal ministry of tourism said five people died on the Nepali side.
2017 was the biggest season yet, permit - wise, yielding hundreds of summiters and a handful of deaths. On 27 May 2017, Kami Rita Sherpa made his 21st climb to the summit with the Alpine Ascents Everest Expedition, becoming one of three people in the world, along with Apa Sherpa and Phurba Tashi Sherpa, to have reached the summit of Mount Everest 21 times. The season had a tragic start with the death of famous climber Ueli Steck of Switzerland, who died from a fall during a warm - up climb. There was continued discussion about the nature of possible changes to the Hillary Step. The total number of summiters for 2017 was tallied at 648.
Famous Himalayan record keeper Elizabeth Hawley died in late January 2018. In 2018, Nepal may re-measure the height of Mount Everest, which is typically recognized as 29,029 feet, although re-measurements by various teams have come up with somewhat varying figures, including 29,022 and 29,035 feet. One of the issues is whether the height is that of the rock summit or includes the ice and snow, which can add a significant amount. The height of Everest may also be changing due to its position on tectonic plates, which can alternately raise or lower it depending on the type of tectonic event. Another goal many organizations have for 2018 is to remove trash from the mountain and surrounding nature areas.
One of the big activities is the trip to base camp (trekking), which reaches altitudes higher than the summits of many mountains. The other big activity is serious attempts to reach the top of Everest, along with the work of supporting those attempts. The peak time for summit bids is late May, because that is when the monsoon pushes the jet stream away from the mountain; there is another window later in the year, when the monsoon ends, but there is more snow then. Technology used in climbing includes crampons, fixed ropes, various cold - weather gear, bottled oxygen, and weather prediction. Predicting the weather is critical: one of the big disasters came in 1996, when a storm hit during a summit bid.
In modern times, there is greater on - demand logistical support available, such as internet access, but also some new challenges, like not offending the locals and watching out for oxygen - bottle thefts. Helicopter support has grown and the availability of helicopter rescues has increased, but there are limits on how high and in what weather helicopters are able to fly. Modern dangers include unexpected avalanches (these claimed many lives in 2014 and 2015), sudden onset of altitude sickness, and the classic climbing danger of falling. For a price, permits are available from both China, in the Tibet region, and from Nepal, and a multitude of mountaineering firms from all over the world operate on the mountain.
There were 334 climbing permits issued in Nepal in 2014; these were extended until 2019 due to the closure. In 2015 there were 357 permits to climb Everest, but the mountain was closed again because of the avalanche and earthquake, and these permits were given a two - year extension, to 2017 (not to 2019 as with the 2014 permits). The extension, especially requested by expedition firms (which in turn bring mountaineering resources into the country), was an example of the hospitality Nepalis have become famous for. Nepal is essentially a "fourth world '' country, as of 2015 one of the poorest non-African countries along with Haiti and Myanmar, and the 19th poorest country in the world overall. Despite this, Nepal has been very welcoming to tourists and a significant tourism industry has been established. Although there are some difficulties, especially in maintaining order in distant areas or controlling the actions of trouble - makers, there is a long history of going the extra kilometre for tourists.
In 2017 a permit evader who tried to climb Everest without the $11,000 permit faced, among other penalties, a $22,000 fine, bans, and a possible four years in jail after he was caught (he had made it up past the Khumbu icefall). In the end he was given a ten - year mountaineering ban in Nepal and allowed to return home, according to the British newspaper Daily Mail. The rogue mountaineer, who in fact had not climbed other mountains, said he was "ecstatic '' and that he would try again, but buy a permit next time.
Nepal permits by year:
The Chinese side in Tibet is also managed with permits for summiting Everest. They did not issue permits in 2008, due to the Olympic torch relay being taken to the summit of Mount Everest.
Mt. Everest has two main climbing routes, the southeast ridge from Nepal and the north ridge from Tibet, as well as many other less frequently climbed routes. Of the two main routes, the southeast ridge is technically easier and more frequently used. It was the route used by Edmund Hillary and Tenzing Norgay in 1953, and the first of the 15 routes to the top recognised by 1996. This was, however, a route decision dictated more by politics than by design, as the Chinese border was closed to the western world in the 1950s, after the People 's Republic of China invaded Tibet.
Most attempts are made during May, before the summer monsoon season. As the monsoon season approaches, the jet stream shifts northward, thereby reducing the average wind speeds high on the mountain. While attempts are sometimes made in September and October, after the monsoons, when the jet stream is again temporarily pushed northward, the additional snow deposited by the monsoons and the less stable weather patterns at the monsoons ' tail end make climbing extremely difficult.
The ascent via the southeast ridge begins with a trek to Base Camp at 5,380 m (17,700 ft) on the south side of Everest, in Nepal. Expeditions usually fly into Lukla (2,860 m) from Kathmandu and pass through Namche Bazaar. Climbers then hike to Base Camp, which usually takes six to eight days, allowing for proper altitude acclimatisation in order to prevent altitude sickness. Climbing equipment and supplies are carried by yaks, dzopkyos (yak - cow hybrids), and human porters to Base Camp on the Khumbu Glacier. When Hillary and Tenzing climbed Everest in 1953, the British expedition they were part of (comprising over 400 climbers, porters, and Sherpas at that point) started from the Kathmandu Valley, as there were no roads further east at that time.
Climbers spend a couple of weeks in Base Camp, acclimatising to the altitude. During that time, Sherpas and some expedition climbers set up ropes and ladders in the treacherous Khumbu Icefall.
Seracs, crevasses, and shifting blocks of ice make the icefall one of the most dangerous sections of the route. Many climbers and Sherpas have been killed in this section. To reduce the hazard, climbers usually begin their ascent well before dawn, when the freezing temperatures glue ice blocks in place.
Above the icefall is Camp I at 6,065 metres (19,900 ft).
From Camp I, climbers make their way up the Western Cwm to the base of the Lhotse face, where Camp II or Advanced Base Camp (ABC) is established at 6,500 m (21,300 ft). The Western Cwm is a flat, gently rising glacial valley, marked by huge lateral crevasses in the centre, which prevent direct access to the upper reaches of the Cwm. Climbers are forced to cross on the far right, near the base of Nuptse, to a small passageway known as the "Nuptse corner ''. The Western Cwm is also called the "Valley of Silence '' as the topography of the area generally cuts off wind from the climbing route. The high altitude and a clear, windless day can make the Western Cwm unbearably hot for climbers.
From ABC, climbers ascend the Lhotse face on fixed ropes, up to Camp III, located on a small ledge at 7,470 m (24,500 ft). From there, it is another 500 metres to Camp IV on the South Col at 7,920 m (26,000 ft).
From Camp III to Camp IV, climbers are faced with two additional challenges: the Geneva Spur and the Yellow Band. The Geneva Spur is an anvil - shaped rib of black rock named by the 1952 Swiss expedition. Fixed ropes assist climbers in scrambling over this snow - covered rock band. The Yellow Band is a section of interlayered marble, phyllite, and semischist, which also requires about 100 metres of rope to traverse.
On the South Col, climbers enter the death zone. Climbers making summit bids typically can endure no more than two or three days at this altitude. That 's one reason why clear weather and low winds are critical factors in deciding whether to make a summit attempt. If weather does not cooperate within these short few days, climbers are forced to descend, many all the way back down to Base Camp.
From Camp IV, climbers begin their summit push around midnight, with hopes of reaching the summit (still another 1,000 metres above) within 10 to 12 hours. Climbers first reach "The Balcony '' at 8,400 m (27,600 ft), a small platform where they can rest and gaze at peaks to the south and east in the early light of dawn. Continuing up the ridge, climbers are then faced with a series of imposing rock steps which usually forces them to the east into waist - deep snow, a serious avalanche hazard. At 8,750 m (28,700 ft), a small table - sized dome of ice and snow marks the South Summit.
From the South Summit, climbers follow the knife - edge southeast ridge along what is known as the "Cornice traverse '', where snow clings to intermittent rock. This is the most exposed section of the climb, and a misstep to the left would send one 2,400 m (7,900 ft) down the southwest face, while to the immediate right is the 3,050 m (10,010 ft) Kangshung Face. At the end of this traverse is an imposing 12 m (39 ft) rock wall, the Hillary Step, at 8,790 m (28,840 ft).
Hillary and Tenzing were the first climbers to ascend this step, and they did so using primitive ice climbing equipment and ropes. Nowadays, climbers ascend this step using fixed ropes previously set up by Sherpas. Once above the step, it is a comparatively easy climb to the top on moderately angled snow slopes -- though the exposure on the ridge is extreme, especially while traversing large cornices of snow. With increasing numbers of people climbing the mountain in recent years, the Step has frequently become a bottleneck, with climbers forced to wait significant amounts of time for their turn on the ropes, leading to problems in getting climbers efficiently up and down the mountain.
After the Hillary Step, climbers also must traverse a loose and rocky section that has a large entanglement of fixed ropes that can be troublesome in bad weather. Climbers typically spend less than half an hour at the summit to allow time to descend to Camp IV before darkness sets in, to avoid serious problems with afternoon weather, or because supplemental oxygen tanks run out.
The north ridge route begins from the north side of Everest, in Tibet. Expeditions trek to the Rongbuk Glacier, setting up base camp at 5,180 m (16,990 ft) on a gravel plain just below the glacier. To reach Camp II, climbers ascend the medial moraine of the east Rongbuk Glacier up to the base of Changtse, at around 6,100 m (20,000 ft). Camp III (ABC -- Advanced Base Camp) is situated below the North Col at 6,500 m (21,300 ft). To reach Camp IV on the North Col, climbers ascend the glacier to the foot of the col where fixed ropes are used to reach the North Col at 7,010 m (23,000 ft). From the North Col, climbers ascend the rocky north ridge to set up Camp V at around 7,775 m (25,500 ft). The route crosses the North Face in a diagonal climb to the base of the Yellow Band, reaching the site of Camp VI at 8,230 m (27,000 ft). From Camp VI, climbers make their final summit push.
Climbers face a treacherous traverse from the base of the First Step: ascending from 8,501 to 8,534 m (27,890 to 28,000 ft), to the crux of the climb, the Second Step, ascending from 8,577 to 8,626 m (28,140 to 28,300 ft). (The Second Step includes a climbing aid called the "Chinese ladder '', a metal ladder placed semi-permanently in 1975 by a party of Chinese climbers. It has been almost continuously in place since, and ladders have been used by virtually all climbers on the route.) Once above the Second Step the inconsequential Third Step is clambered over, ascending from 8,690 to 8,800 m (28,510 to 28,870 ft). Once above these steps, the summit pyramid is climbed by a snow slope of 50 degrees, to the final summit ridge along which the top is reached.
All routes share one spot: the summit itself. The summit of Everest has been described as "the size of a dining room table ''. The summit is capped with snow over ice over rock, and the layer of snow varies from year to year. The rock summit is made of Ordovician limestone and is a low - grade metamorphic rock, according to Montana State University. (see survey section for more on its height and about the Everest rock summit)
Below the summit there is an area known as "rainbow valley '', filled with dead bodies still wearing brightly coloured winter gear. The area above about 8,000 metres is commonly called the "death zone '', due to the high danger and the low oxygen that results from the low pressure.
Below the summit the mountain slopes downward to the three main sides, or faces, of Mount Everest: the North Face, the South - West Face, and the East / Kangshung Face.
At the higher regions of Mount Everest, climbers seeking the summit typically spend substantial time within the death zone (altitudes higher than 8,000 metres (26,000 ft)), and face significant challenges to survival. Temperatures can dip to very low levels, resulting in frostbite of any body part exposed to the air. Since temperatures are so low, snow is well - frozen in certain areas and death or injury by slipping and falling can occur. High winds at these altitudes on Everest are also a potential threat to climbers.
Another significant threat to climbers is low atmospheric pressure. The atmospheric pressure at the top of Everest is about a third of sea level pressure or 0.333 standard atmospheres (337 mbar), resulting in the availability of only about a third as much oxygen to breathe.
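The one - third oxygen figure follows from the fact that the oxygen fraction of air (about 20.9 percent) is essentially constant with altitude, so the partial pressure of oxygen scales directly with total pressure. A minimal sketch of that arithmetic, using the pressure figures quoted above:

```python
# Oxygen availability falls with total pressure because the O2 fraction
# of air (~20.9%) stays constant with altitude (pressures from the text).
SEA_LEVEL_HPA = 1013.25
O2_FRACTION = 0.209

summit_hpa = 337.0            # approximate summit pressure quoted above
po2_sea = O2_FRACTION * SEA_LEVEL_HPA
po2_summit = O2_FRACTION * summit_hpa
print(f"O2 partial pressure at sea level: {po2_sea:.0f} hPa")    # ~212
print(f"O2 partial pressure at summit:   {po2_summit:.0f} hPa")  # ~70
print(f"Ratio: {po2_summit / po2_sea:.2f}")                      # ~0.33
```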
Debilitating effects of the death zone are so great that it takes most climbers up to 12 hours to walk the distance of 1.72 kilometres (1.07 mi) from South Col to the summit. Achieving even this level of performance requires prolonged altitude acclimatisation, which takes 40 -- 60 days for a typical expedition. A sea - level dweller exposed to the atmospheric conditions at the altitude above 8,500 m (27,900 ft) without acclimatisation would likely lose consciousness within 2 to 3 minutes.
In May 2007, the Caudwell Xtreme Everest undertook a medical study of oxygen levels in human blood at extreme altitude. Over 200 volunteers climbed to Everest Base Camp, where various medical tests were performed to examine blood oxygen levels. A small team also performed tests on the way to the summit. Even at base camp, the low partial pressure of oxygen had a direct effect on blood oxygen saturation levels. At sea level, blood oxygen saturation is generally 98 -- 99 %. At base camp, blood saturation fell to between 85 and 87 %. Blood samples taken at the summit indicated very low oxygen levels in the blood. A side effect of low blood oxygen is a greatly increased breathing rate, often 80 -- 90 breaths per minute as opposed to a more typical 20 -- 30; exhaustion can set in from the mere effort of breathing.
Lack of oxygen, exhaustion, extreme cold, and climbing hazards all contribute to the death toll. An injured person who can not walk is in serious trouble, since rescue by helicopter is generally impractical and carrying the person off the mountain is very risky. People who die during the climb are typically left behind. As of 2006, about 150 bodies had never been recovered. It is not uncommon to find corpses near the standard climbing routes.
Debilitating symptoms consistent with high altitude cerebral oedema commonly present during descent from the summit of Mount Everest. Profound fatigue and late times in reaching the summit are early features associated with subsequent death.
A 2008 study noted that the "death zone '' is indeed where most Everest deaths occur, but also noted that most deaths occur during descent from the summit. A 2014 article in the magazine The Atlantic about deaths on Everest noted that while falling is one of the greatest dangers the death zone presents on all 8,000 - metre peaks, avalanches are a more common cause of death at lower altitudes. Everest climbing is nonetheless more deadly than BASE jumping, although some have combined extreme sports and Everest, including a beverage company that sponsored a wingsuit BASE jump off Everest (the jumper survived).
Despite this, Everest is safer for climbers than a number of peaks by some measurements, but it depends on the period. Some examples are Kangchenjunga, K2, Annapurna, Nanga Parbat, and the Eiger (especially the nordwand). Mont Blanc has more deaths each year than Everest, with over one hundred dying in a typical year and over eight thousand killed since records were kept. Some factors that affect total mountain lethality include the level of popularity of the mountain, the skill of those climbing, and the difficulty of the climb.
Another health hazard is retinal haemorrhages, which can damage eyesight and cause blindness. Up to a quarter of Everest climbers can experience retinal haemorrhages, and although they usually heal within weeks of returning to lower altitudes, in 2010 a climber went blind and ended up dying in the death zone.
At one o'clock in the afternoon, the British climber Peter Kinloch was on the roof of the world, in bright sunlight, taking photographs of the Himalayas below, "elated, cheery and bubbly ''. But Mount Everest is now his grave, because only minutes later, he suddenly went blind and had to be abandoned to die from the cold.
The team made a huge effort for the next 12 hours to try to get him down the mountain, but to no avail, as they were unsuccessful in getting him through the difficult sections. Even for the able, the Everest north - east ridge is recognised as a challenge. It is hard to rescue someone who has become incapacitated, and it can be beyond the ability of rescuers to save anyone in such a difficult spot. One way around this situation was demonstrated by two Nepali men in 2011, who had intended to paraglide off the summit; having run out of bottled oxygen and supplies, they had no choice but to go through with the plan. They successfully launched off the summit and para-glided down to Namche in just 42 minutes, without having to climb down the mountain.
Most expeditions use oxygen masks and tanks above 8,000 m (26,000 ft). Everest can be climbed without supplementary oxygen, but only by the most accomplished mountaineers and at increased risk. Humans do not think clearly with low oxygen, and the combination of extreme weather, low temperatures, and steep slopes often requires quick, accurate decisions. While about 95 percent of climbers who reach the summit use bottled oxygen in order to reach the top, about five percent of climbers have summited Everest without supplemental oxygen. The death rate is double for those who attempt to reach the summit without supplemental oxygen. Travelling above 8,000 feet altitude is a factor in cerebral hypoxia. This decrease of oxygen to the brain can cause dementia and brain damage, as well as other symptoms. One study found that Mount Everest may be the highest an acclimatised human could go, but also found that climbers may suffer permanent neurological damage despite returning to lower altitudes.
Brain cells are extremely sensitive to a lack of oxygen. Some brain cells start dying less than 5 minutes after their oxygen supply disappears. As a result, brain hypoxia can rapidly cause severe brain damage or death.
The use of bottled oxygen to ascend Mount Everest has been controversial. It was first used on the 1922 British Mount Everest Expedition by George Finch and Geoffrey Bruce who climbed up to 7,800 m (25,600 ft) at a spectacular speed of 1,000 vertical feet per hour (vf / h). Pinned down by a fierce storm, they escaped death by breathing oxygen from a jury - rigged set - up during the night. The next day they climbed to 8,100 m (26,600 ft) at 900 vf / h -- nearly three times as fast as non-oxygen users. Yet the use of oxygen was considered so unsportsmanlike that none of the rest of the Alpine world recognised this high ascent rate.
George Mallory described the use of such oxygen as unsportsmanlike, but he later concluded that it would be impossible for him to summit without it and consequently used it on his final attempt in 1924. When Tenzing and Hillary made the first successful summit in 1953, they used bottled oxygen, with the expedition 's physiologist Griffith Pugh referring to the oxygen debate as a "futile controversy '', noting that oxygen "greatly increases subjective appreciation of the surroundings, which after all is one of the chief reasons for climbing. '' For the next twenty - five years, bottled oxygen was considered standard for any successful summit.
... although an acclimatised lowlander can survive for a time on the summit of Everest without supplemental oxygen, one is so close to the limit that even a modicum of excess exertion may impair brain function.
Reinhold Messner was the first climber to break the bottled oxygen tradition and in 1978, with Peter Habeler, made the first successful climb without it. In 1980, Messner summited the mountain solo, without supplemental oxygen or any porters or climbing partners, on the more difficult northwest route. Once the climbing community was satisfied that the mountain could be climbed without supplemental oxygen, many purists then took the next logical step of insisting that this is how it should be climbed.
The aftermath of the 1996 disaster further intensified the debate. Jon Krakauer 's Into Thin Air (1997) expressed the author 's personal criticisms of the use of bottled oxygen. Krakauer wrote that the use of bottled oxygen allowed otherwise unqualified climbers to attempt to summit, leading to dangerous situations and more deaths. The disaster was partially caused by the sheer number of climbers (34 on that day) attempting to ascend, causing bottlenecks at the Hillary Step and delaying many climbers, most of whom summitted after the usual 14: 00 turnaround time. He proposed banning bottled oxygen except for emergency cases, arguing that this would both decrease the growing pollution on Everest -- many bottles have accumulated on its slopes -- and keep marginally qualified climbers off the mountain.
The 1996 disaster also introduced the issue of the guide 's role in using bottled oxygen.
Guide Anatoli Boukreev 's decision not to use bottled oxygen was sharply criticised by Jon Krakauer. Boukreev 's supporters (who include G. Weston DeWalt, who co-wrote The Climb) state that using bottled oxygen gives a false sense of security. Krakauer and his supporters point out that, without bottled oxygen, Boukreev could not directly help his clients descend. They state that Boukreev said that he was going down with client Martin Adams, but just below the south summit, Boukreev determined that Adams was doing fine on the descent and so descended at a faster pace, leaving Adams behind. Adams states in The Climb, "For me, it was business as usual, Anatoli 's going by, and I had no problems with that. ''
The low oxygen can cause a mental fog - like impairment of cognitive abilities described as "delayed and lethargic thought process, clinically defined as bradypsychia '' even after returning to lower altitudes. In severe cases, climbers can experience hallucinations. Some studies have found that high - altitude climbers, including Everest climbers, experience altered brain structure. The effects of high altitude on the brain, particularly if it can cause permanent brain damage, continue to be studied.
Although generally less popular than spring, Mount Everest has also been climbed in the autumn (also called the "post-monsoon season ''). For example, in 2010 Eric Larsen and five Nepali guides summited Everest in the autumn for the first time in ten years. The first mainland British ascent of Mount Everest (Hillary was from New Zealand), led by Chris Bonington, was an autumn ascent, in 1975. The autumn season, when the monsoon ends, is regarded as more dangerous because there is typically a lot of new snow, which can be unstable. However, this increased snow can make the season more popular with certain winter sports like skiing and snowboarding. Two Japanese climbers also summited in October 1973.
Chris Chandler and Bob Cormack summited Everest in October 1976 as part of the American Bicentennial Everest Expedition that year, the first Americans to make an autumn ascent of Mt. Everest according to the Los Angeles Times. By the 21st century, summer and autumn had become more popular for skiing and snowboarding attempts on Mount Everest. During the 1980s, climbing in autumn was actually more popular than in spring. The U.S. astronaut Karl Gordon Henize died in October 1993 on a fall (autumn) expedition conducting an experiment on radiation. The amount of background radiation increases at higher altitudes.
The mountain has also been climbed in the winter, but winter ascents are not popular because of the combination of cold, high winds and shorter days. By January the peak is typically battered by 170 mph (270 km / h) winds and the average temperature at the summit is around − 33 ° F (− 36 ° C).
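As a quick arithmetic check of the unit conversions quoted above (an illustrative calculation, not part of the source text):

$$170\ \text{mph} \times 1.609\ \tfrac{\text{km}}{\text{mi}} \approx 274\ \text{km/h}, \qquad T_{C} = (T_{F} - 32) \times \tfrac{5}{9} = (-33 - 32) \times \tfrac{5}{9} \approx -36.1\ ^{\circ}\text{C}$$

Both results round to the 270 km / h and − 36 ° C figures given in the text.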
By the end of the 2010 climbing season, there had been 5,104 ascents to the summit by about 3,142 individuals. Some notable "firsts '' by climbers are described below.
Summiting Everest with disabilities such as amputations and diseases has become popular in the 21st century, with stories like that of Sudarshan Gautam, a man with no arms who made it to the top in 2013. A teenager with Down 's syndrome made it to Base Camp, which has become a substitute for more extreme record - breaking because it offers many of the same thrills, including the trip to the Himalayas and rustic scenery. Danger lurks even at Base Camp, though, which was the site where dozens were killed in the 2015 Mount Everest avalanches. Others who have climbed Everest with amputations include Mark Inglis (no legs), Paul Hockey (one arm), and Arunima Sinha (one leg).
On 26 September 1988, having climbed the mountain via the south - east ridge, Jean - Marc Boivin made the first paraglider descent of Everest, in the process creating the record for the fastest descent of the mountain and the highest paraglider flight. Boivin said: "I was tired when I reached the top because I had broken much of the trail, and to run at this altitude was quite hard. '' Boivin ran 18 m (60 ft) from below the summit on 40 - degree slopes to launch his paraglider, reaching Camp II at 5,900 m (19,400 ft) in 12 minutes (some sources say 11 minutes). Boivin would not repeat this feat, as he was killed two years later, in 1990, while BASE jumping off Venezuela 's Angel Falls.
In 1991 four men in two balloons achieved the first hot - air balloon flight over Mount Everest. In one balloon were Andy Elson and Eric Jones (cameraman), and in the other were Chris Dewhirst and Leo Dickinson (cameraman). Dickinson went on to write a book about the adventure called Ballooning Over Everest. The hot - air balloons were modified to function at up to 40,000 feet altitude. Reinhold Messner called one of Dickinson 's panoramic views of Everest, captured on the now discontinued Kodak Kodachrome film, the "best snap on Earth '', according to UK newspaper The Telegraph. Dewhirst has offered to take passengers on a repeat of this feat for US $2.6 million per passenger.
In May 2005, pilot Didier Delsalle of France landed a Eurocopter AS350 B3 helicopter on the summit of Mount Everest. He needed to land for two minutes to set the Fédération Aéronautique Internationale (FAI) official record, but he stayed for about four minutes, twice. In this type of landing the rotors stay engaged, which avoids relying on the snow to fully support the aircraft. The flight set rotorcraft world records for the highest landing and the highest take - off.
Some press reports suggested that the report of the summit landing was a misunderstanding of a South Col landing, but he had also landed on South Col two days earlier, with this landing and the Everest records confirmed by the FAI. Delsalle also rescued two Japanese climbers at 4,880 m (16,000 ft) while he was there. One climber noted that the new record meant a better chance of rescue.
In 2011 two Nepalis paraglided from the Everest summit to Namche in 42 minutes. They had run out of oxygen and supplies, so it was a very fast way off the mountain. The duo won National Geographic Adventurers of the Year for 2012 for their exploits. After the paraglide they kayaked to the Indian Ocean, reaching the Bay of Bengal by the end of June 2011. One had never climbed, and one had never paraglided, but together they accomplished a ground - breaking feat. By 2013 footage of the flight had been shown on the television program Nightline.
In 2014, a team financed and led by mountaineer Wang Jing used a helicopter to fly from South base camp to Camp 2 to avoid the Khumbu Icefall, and thence climbed to the Everest summit. This climb immediately sparked outrage and controversy in much of the mountaineering world over the legitimacy and propriety of her climb. Nepal ended up investigating Wang, who initially denied the claim that she had flown to Camp 2, admitting only that some support crew were flown to that higher camp, over the Khumbu Icefall. In August 2014, however, she stated that she had flown to Camp 2 because the icefall was impassable. "If you do n't fly to Camp II, you just go home, '' she said in an interview. In that same interview she also insisted that she had never tried to hide this fact.
Her team had had to use the south side because the Chinese had denied them a permit to climb. Ultimately, the Chinese refusal may have been beneficial to Nepalese interests, allowing the government to showcase improved local hospitals and providing the opportunity for a new hybrid aviation / mountaineering style, triggering discussions about helicopter use in the mountaineering world. National Geographic noted that a village festooned Wang with honours after she donated US $30,000 to the town 's hospital. Wang won the International Mountaineer of the Year Award from the Nepal government in June 2014.
In 2016 the increased use of helicopters was noted, both for greater efficiency and for hauling material over the deadly Khumbu icefall. In particular, it was noted that flights saved icefall porters 80 trips while still increasing commercial activity at Everest. After many Nepalis died in the icefall in 2014, the government had wanted helicopters to handle more of the transportation to Camp 1, but this was not possible in 2015 because the deaths and the earthquake closed the mountain, so the plan was implemented in 2016 (helicopters did prove instrumental in rescuing many people in 2015, though). That summer Bell tested the 412EPI in a series of trials near Mount Everest, including hovering at 18,000 feet and flying as high as 20,000 feet.
Going with a "celebrity guide '', usually a well - known mountaineer typically with decades of climbing experience and perhaps several Everest summits, can cost over £ 100,000 as of 2015. On the other hand, a limited support service, offering only some meals at base camp and bureaucratic overhead like a permit, can cost as little as US $7,000 as of 2007. There are issues with the management of guiding firms in Nepal, and one Canadian woman was left begging for help when her guide firm, which she had paid perhaps US $40,000 to, could n't stop her from dying in 2012. She ran out of bottled oxygen after climbing for 27 hours straight. Despite decades of concern over inexperienced climbers, neither she nor the guide firm had summited Everest before. The Tibetan / Chinese side has been described as "out of control '' due to reports of thefts and threats. By 2015, Nepal was considering requiring that climbers have some experience and wanted to make the mountain safer, and especially increase revenue. One barrier to this is that low - budget firms make money not taking inexperienced climbers to the summit. (subscription required) Those turned away by Western firms can often find another firm willing to take them for a price -- that they return home soon after arriving after base camp, or part way up the mountain. Whereas a Western firm will convince those they deem incapable to turn back, other firms simply give people the freedom to choose.
Climbing Mount Everest can be a relatively expensive undertaking for climbers. Climbing gear required to reach the summit may cost in excess of US $8,000, and most climbers also use bottled oxygen, which adds around US $3,000. The permit to enter the Everest area from the south via Nepal costs US $10,000 to US $25,000 per person, depending on the size of the team. The ascent typically starts at one of the two base camps near the mountain, both of which are approximately 100 kilometres (60 mi) from Kathmandu and 300 kilometres (190 mi) from Lhasa (the two nearest cities with major airports). Transferring one 's equipment from the airport to the base camp may add as much as US $2,000.
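Summing the itemized figures above gives a rough floor for the fixed costs alone, before any guide service (an illustrative tally, not from the source; actual budgets vary):

$$8{,}000 + 3{,}000 + (10{,}000\ \text{to}\ 25{,}000) + 2{,}000 = 23{,}000\ \text{to}\ 38{,}000\ \text{USD}$$

The guided - service prices discussed below add substantially to this baseline.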
By 2016, most guiding services cost between US $35,000 -- 200,000. However, the services offered vary widely and it is "buyer beware '' when doing deals in Nepal, one of the poorest and least developed countries in the world. Tourism is about four percent of Nepal 's economy, but Everest is special in that an Everest porter can make nearly double the nation 's average wage in a region in which other sources of income are lacking.
Beyond this point, costs may vary widely. It is technically possible to reach the summit with minimal additional expenses, and there are "budget '' travel agencies which offer logistical support for such trips. However, this is considered difficult and dangerous (as illustrated by the case of David Sharp). Many climbers hire "full service '' guide companies, which provide a wide spectrum of services, including acquisition of permits, transportation to / from base camp, food, tents, fixed ropes, medical assistance while on the mountain, an experienced mountaineer guide, and even personal porters to carry one 's backpack and cook one 's meals. The cost of such a guide service may range from US $40,000 -- 80,000 per person. Since most equipment is moved by Sherpas, clients of full - service guide companies can often keep their backpack weights under 10 kilograms (22 lb), or hire a Sherpa to carry their backpack for them. By contrast, climbers attempting less commercialised peaks, like Denali, are often expected to carry backpacks over 30 kilograms (66 lb) and, occasionally, to tow a sled with 35 kilograms (77 lb) of gear and food.
According to Jon Krakauer, the era of commercialisation of Everest started in 1985, when the summit was reached by a guided expedition led by David Breashears that included Richard Bass, a wealthy 55 - year - old businessman and an amateur mountain climber with only four years of climbing experience. By the early - 1990s, several companies were offering guided tours to the mountain. Rob Hall, one of the mountaineers who died in the 1996 disaster, had successfully guided 39 clients to the summit before that incident.
The degree of commercialisation of Mount Everest is a frequent subject of criticism. Jamling Tenzing Norgay, the son of Tenzing Norgay, said in a 2003 interview that his late father would have been shocked to discover that rich thrill - seekers with no climbing experience were now routinely reaching the summit: "You still have to climb this mountain yourself with your feet. But the spirit of adventure is not there any more. It is lost. There are people going up there who have no idea how to put on crampons. They are climbing because they have paid someone $65,000. It is very selfish. It endangers the lives of others. ''
Reinhold Messner concurred in 2004: "You could die in each climb and that meant you were responsible for yourself. We were real mountaineers: careful, aware and even afraid. By climbing mountains we were not learning how big we were. We were finding out how breakable, how weak and how full of fear we are. You can only get this if you expose yourself to high danger. I have always said that a mountain without danger is not a mountain... High altitude alpinism has become tourism and show. These commercial trips to Everest, they are still dangerous. But the guides and organisers tell clients, ' Do n't worry, it 's all organised. ' The route is prepared by hundreds of Sherpas. Extra oxygen is available in all camps, right up to the summit. People will cook for you and lay out your beds. Clients feel safe and do n't care about the risks. ''
However, not all opinions on the subject among prominent mountaineers are strictly negative. For example, Edmund Hillary, who went on record saying that he has not liked "the commercialization of mountaineering, particularly of Mt. Everest '' and claimed that "Having people pay $65,000 and then be led up the mountain by a couple of experienced guides... is n't really mountaineering at all '', nevertheless noted that he was pleased by the changes brought to the Everest area by Westerners: "I do n't have any regrets because I worked very hard indeed to improve the condition for the local people. When we first went in there they did n't have any schools, they did n't have any medical facilities, all over the years we have established 27 schools, we have two hospitals and a dozen medical clinics and then we 've built bridges over wild mountain rivers and put in fresh water pipelines so in cooperation with the Sherpas we 've done a lot to benefit them. ''
One of the early guided summiters, Richard Bass (of Seven Summits fame) responded in an interview about Everest climbers and what it took to survive there, "Climbers should have high altitude experience before they attempt the really big mountains. People do n't realise the difference between a 20,000 - foot mountain and 29,000 feet. It 's not just arithmetic. The reduction of oxygen in the air is proportionate to the altitude alright, but the effect on the human body is disproportionate -- an exponential curve. People climb Denali (20,320 feet) or Aconcagua (22,834 feet) and think, ' Heck, I feel great up here, I 'm going to try Everest. ' But it 's not like that. ''
Some climbers have reported life - threatening thefts from supply caches. Vitor Negrete, the first Brazilian to climb Everest without oxygen and part of David Sharp 's party, died during his descent, and theft from his high - altitude camp may have contributed.
"Several members were bullied, gear was stolen, and threats were made against me and my climbing partner, Michael Kodas, making an already stressful situation even more dire. '' said one climber.
In addition to theft, Michael Kodas describes in his book High Crimes: The Fate of Everest in an Age of Greed (2008), unethical guides and Sherpas, prostitution and gambling at the Tibet Base Camp, fraud related to the sale of oxygen bottles, and climbers collecting donations under the pretense of removing trash from the mountain.
The Chinese side of Everest in Tibet was described as "out of control '' after one Canadian had all his gear stolen and was abandoned by his Sherpa. Another Sherpa helped him get off the mountain safely and gave him some spare gear. Other climbers have also reported missing oxygen bottles, which can be worth hundreds of dollars each. One problem is that hundreds of climbers pass by people 's tents. Weather can also damage people 's equipment or even blow it away.
In the late 2010s reports of theft of oxygen bottles from camps became more common. Westerners have sometimes struggled to understand the ancient culture and desperate poverty that drive some locals, some of whom hold a different concept of the value of a human life. For example, for just 1,000 rupees (£ 6.30) per person, several foreigners were forced to leave the lodge where they were staying and were tricked into believing they were being led to safety. Instead they were abandoned and died in a snowstorm.
On 18 April 2014, in one of the worst disasters ever to hit the Everest climbing community up to that time, 16 Sherpas died in Nepal in an avalanche that swept them off Mount Everest. In response to the tragedy numerous Sherpa climbing guides walked off the job and most climbing companies pulled out in respect for the Sherpa people mourning the loss. Some still wanted to climb, but there was too much controversy to continue that year. One of the issues that triggered the work action by Sherpas was unreasonable client demands during climbs.
Mount Everest has been host to other winter sports and adventuring besides mountaineering, including snowboarding, skiing, paragliding, and BASE jumping.
Yuichiro Miura became the first man to ski down Everest, in 1970. He descended nearly 4,200 vertical feet from the South Col before falling with extreme injuries. Stefan Gatt and Marco Siffredi snowboarded Mount Everest in 2001. Other Everest skiers include Davo Karničar of Slovenia, who completed a descent from the top to the southern base camp in 2000; Hans Kammerlander of Italy, on the north side in 1996; and Kit DesLauriers of the United States in 2006. In 2006 Tomas Olsson planned to ski down the north face, but his anchor broke while he was rappelling down a cliff in the Norton couloir at about 8,500 metres, resulting in his death from a two and a half kilometre fall. Marco Siffredi also died, in 2002, on his second snowboarding expedition.
Various types of gliding descents have slowly become more popular, and are noted for their rapid descents to lower camps. In 1986 Steve McKinney led an expedition to Mount Everest, during which he became the first person to fly a hang - glider off the mountain. Frenchman Jean - Marc Boivin made the first paraglider descent of Everest in September 1988, descending in minutes from the south - east ridge to a lower camp. In 2011, two Nepalese made a gliding descent from the Everest summit down 5,000 metres (16,400 ft) in 45 minutes. On 5 May 2013, the beverage company Red Bull sponsored Valery Rozov, who successfully BASE jumped off the mountain while wearing a wingsuit, setting a record for the world 's highest BASE jump in the process.
The southern part of Mt. Everest is regarded as one of several "hidden valleys '' of refuge designated by Padmasambhava, a ninth - century "lotus - born '' Buddhist saint.
Near the base of the north side of Everest lies Rongbuk Monastery, which has been called the "sacred threshold to Mount Everest, with the most dramatic views of the world. '' For Sherpas living on the slopes of Everest in the Khumbu region of Nepal, Rongbuk Monastery is an important pilgrimage site, accessed in a few days of travel across the Himalayas through Nangpa La.
Miyolangsangma, a Tibetan Buddhist "Goddess of Inexhaustible Giving '', is believed to have lived at the top of Mt Everest. According to Sherpa Buddhist monks, Mt Everest is Miyolangsangma 's palace and playground, and all climbers are only partially welcome guests, having arrived without invitation.
The Sherpa people also believe that Mt. Everest and its flanks are blessed with spiritual energy, and one should show reverence when passing through this sacred landscape. Here, the karmic effects of one 's actions are magnified, and impure thoughts are best avoided.
In 2015 the president of the Nepal Mountaineering Association warned that pollution, especially human waste, has reached critical levels. As much as "26,500 pounds of human excrement '' each season is left behind on the mountain. Human waste is strewn across the verges of the route to the summit, making the four sleeping areas on the route up Everest 's south side minefields of human excrement. Climbers above Base Camp -- for the 62 - year history of climbing on the mountain -- have most commonly either buried their excrement in holes they dug by hand in the snow, or slung it into crevasses, or simply defecated wherever convenient, often within meters of their tents. The only place where climbers can defecate without worrying about contaminating the mountain is Base Camp. At approximately 18,000 feet, Base Camp sees the most activity of all camps on Everest because climbers acclimate and rest there. In the late - 1990s, expeditions began using toilets that they fashioned from blue plastic 50 - gallon barrels fitted with a toilet seat and enclosed. The problem of human waste is compounded by the presence of more anodyne waste: spent oxygen tanks, abandoned tents, empty cans and bottles. The Nepalese government now requires each climber to pack out eight kilograms of waste when descending the mountain.
Nearby peaks include Lhotse, 8,516 m (27,940 ft); Nuptse, 7,855 m (25,771 ft); and Changtse, 7,580 m (24,870 ft), among others. Another nearby peak is Khumbutse, and many of the highest mountains in the world lie near Mount Everest. On the southwest side, a major feature of the lower slopes is the Khumbu icefall and glacier, a notorious obstacle to climbers on those routes that also lies near the base camps of those attempting the climb.
|
the world's first billionaire made his fortune in what industry | Billionaire - wikipedia
A billionaire, in countries that use the short scale number naming system, is a person with a net worth of at least one billion (1,000,000,000, i.e. a thousand million) units of a given currency, usually major currencies such as the United States dollar, the euro or the pound sterling. Additionally a centibillionaire (or centi - billionaire) is commonly used to reference a billionaire worth one hundred billion dollars (100,000,000,000). The American business magazine Forbes produces a complete global list of known U.S. dollar billionaires every year and updates an Internet version of this list in real time. The American oil magnate John D. Rockefeller became the world 's first confirmed U.S. dollar billionaire in 1916. As of 2017, there are over 2,000 U.S. dollar billionaires worldwide, with a combined wealth of over US $7.7 trillion. According to a 2017 Oxfam report, the top eight richest billionaires own as much combined wealth as "half the human race ''. Many current billionaires are extremely successful businesspeople, including Bill Gates, Jeff Bezos, Larry Ellison, Michael Dell, and Meg Whitman. They created their wealth through the ownership and control of major corporations. Some billionaires have inherited their wealth through family owned corporations such as Mars, Walton, Rockefeller, and Koch.
According to the Forbes report released in March 2017, there are currently 2,043 U.S. dollar billionaires worldwide, from 66 countries, with a combined net worth of $7.67 trillion, which is more than the combined GDP of 152 countries. The majority of billionaires are male, as fewer than 11 % (197 of 1,826) on the 2015 list were women. In 2015, there were ten LGBT billionaires. The United States has the largest number of billionaires of any country, with 536 as of 2015, while China, India and Russia are home to 213, 90 and 88 billionaires respectively. As of 2015, only 46 billionaires were under the age of 40, while the list of American - only billionaires, as of 2010, had an average age of 66.
According to a 2016 Oxfam report, the wealth of the poorest 50 % dropped by 41 % between 2010 and 2015, despite an increase in the global population of 400 million. In the same period, the wealth of the richest 62 people on the World 's Billionaires list increased by $500 bn (£ 350bn) to $1.76 tn. The number of billionaires whose combined wealth matches that of the poorest half has fallen dramatically from 388 as recently as 2010. More recently, in 2017 an Oxfam report noted that just eight billionaires own as much combined wealth as "half the human race ''.
The table below lists numerous statistics relating to billionaires, including the total number of known billionaires and the net worth of the world 's wealthiest individual for each year since 2008. Data for each year is from the annual Forbes list of billionaires, with currency figures given in U.S. dollars.
|
where is the jewish quarter in new york | New York City ethnic enclaves - wikipedia
Since its founding in 1625 by Dutch traders as New Amsterdam, New York City has been a major destination for immigrants of many nationalities who have formed ethnic enclaves, neighborhoods dominated by one ethnicity. Freed African American slaves also moved to New York City in the Great Migration and the later Second Great Migration and formed ethnic enclaves. These neighborhoods are set apart from the main city by differences such as food, goods for sale, or even language. Ethnic enclaves provide inhabitants with security and with work and social opportunities, but they can limit economic opportunity, discourage the development of English proficiency, and keep immigrants within their own culture.
As of 2000, 36 % of the population of New York City are immigrants. Jamaican Americans, Haitian Americans, Barbadian Americans, Guyanese Americans, Bahamian Americans, Grenadian Americans, Vincentian Americans and Trinidadian Americans have all formed Caribbean ethnic enclaves in New York. Asian ethnic groups with enclaves in New York include Chinese Americans, Japanese Americans, Filipino Americans, Indian Americans, Indo - Caribbean Americans, Afghan Americans, Burmese Americans, Bangladeshi Americans, Nepalese Americans, Sri Lankan Americans, Bhutanese Americans, Thai Americans, Pakistani Americans, Indonesian Americans, Malaysian Americans, Taiwanese Americans, Vietnamese Americans, Cambodian Americans and Korean Americans. European ethnic groups with ethnic enclaves include Greek Americans, Irish Americans, Italian Americans, Albanian Americans, Hungarian Americans, Polish Americans, Dutch Americans, German Americans, Ukrainian Americans and Russian Americans. Latin American groups with ethnic enclaves include Dominican Americans, Brazilian Americans, Salvadoran American, Ecuadorian American, Mexican Americans, Panamanian Americans, Colombian Americans, Peruvian Americans, Honduran Americans and Puerto Ricans. Middle Eastern ethnic groups that have formed ethnic enclaves include Palestinian Americans, Jordanian Americans, Egyptian Americans, Syrian Americans, Yemeni Americans and Lebanese Americans. There are also large enclaves of Jewish Americans, who immigrated from or whose ancestors immigrated from various countries. As many as 800 languages are spoken in New York, making it the most linguistically diverse city in the world.
New York City was founded in 1625 by Dutch traders as New Amsterdam. The settlement was a slow - growing but diverse village. The Netherlands never had a large emigrant population, however, and the colony attracted few Dutch and more people from other ethnic groups. As early as 1646, 18 languages were spoken in New Amsterdam, and ethnic groups within New Amsterdam included Dutch, Danes, English, Flemish, French, Germans, Irish, Italians, Norwegians, Poles, Portuguese, Scots, Swedes, Walloons, and Bohemians. The young, diverse village also became a seafarer 's town, with taverns and smugglers. After Peter Stuyvesant became Director, New Amsterdam began to grow more quickly, reaching a population of 1,500, growing to 2,000 by 1655, and approaching 9,000 in 1664, when the British seized the colony, renaming it New York.
Colonial New York City was also a center of religious diversity, including one of the first Jewish congregations, along with Philadelphia, Savannah, and Newport, in what was to become the United States.
The first recorded African Americans were brought to the present - day United States in 1619 as slaves. New York State began emancipating slaves in 1799, and by 1841 all slaves in New York State had been freed; many of New York 's emancipated slaves lived in or moved to Fort Greene, Brooklyn. All slaves in the United States were later freed in 1865, with the end of the American Civil War and the ratification of the Thirteenth Amendment. After the Civil War, African Americans left the South, where slavery had been the strongest, in large numbers. These movements are now known as the Great Migration, during the 1910s and 1920s, and the Second Great Migration, from the end of World War II until 1970.
After arriving in New York, the African Americans formed neighborhoods, partly due to the racism of landlords at the time. The socioeconomic center of these neighborhoods, and of all of "Black America '', was Harlem, in Northern Manhattan. Hamilton Heights, on Harlem 's western side, was a nicer part of Harlem, and Sugar Hill, named because its inhabitants enjoyed the "sweet life '', was the nicest part.
In the 1930s, after the Independent Subway System 's Eighth Avenue and Fulton Street subways opened, Harlem residents began to leave crowded Harlem for Brooklyn. The first neighborhood African Americans moved to in large numbers was Bedford - Stuyvesant, composed of the neighborhoods Bedford, Stuyvesant, Weeksville (which had an established African American community by the time of the New York Draft Riots), and Ocean Hill. From Bedford - Stuyvesant, African Americans moved into the surrounding neighborhoods, including Crown Heights, and Brownsville. After World War II, "white flight '' occurred, in which predominantly white residents moved to the suburbs and were replaced with minority residents. Neighborhoods that experienced this include Canarsie, Flatbush, and East Flatbush.
Queens also experienced "white flight ''. Jamaica and South Jamaica both underwent ethnic change. Some of Queens ' African American neighborhoods are housing projects or housing cooperatives, such as LeFrak City. Other African American neighborhoods include Laurelton, Cambria Heights, Hollis, Springfield Gardens, and St. Albans.
The Bronx also experienced white flight, which was largely confined to the South Bronx and occurred mostly in the 1970s.
Staten Island is home to the oldest continuously settled free black community in the United States, Sandy Ground. This community along the southwestern shore of Staten Island was once home to thousands of free black men and women, who came to Staten Island to work as oystermen. Members of this community also settled and established communities on the North Shore, such as West New Brighton and Port Richmond, after oysters became scarce in 1916. Many African Americans settled in several North Shore communities during the Great Migration, such as Arlington, Mariners Harbor, and New Brighton. Although the black community of Staten Island is mostly dispersed throughout the North Shore of the Island, some African Americans also live on the South Shore.
Many Ghanaian people have settled in Concourse Village in the Bronx since an influx of Ghanaians began in the 1980s and 1990s. With over 27,000 in New York City, Ghanaians are the city 's largest African immigrant group. Most live in the Bronx, Queens, and Brooklyn. In Concourse Village, the intersection of Sheridan Avenue and McClellan Street is considered the Ghanaian population 's center of commerce, and people also socialize at this intersection.
There is at least one community of West Africans in New York, concentrated in Le Petit Senegal in Harlem, Manhattan. The enclave is situated on 116th Street between St. Nicholas and 8th Avenues, and is home to a large number of Francophone West Africans.
An enclave of Liberians developed in Staten Island at the end of the 20th century, following the turbulent Liberian Civil War.
According to 2010 US Census data reported on brooklyn.com, there are approximately 370,000 (16.4 %) Caribbean descendants in Brooklyn. That figure includes persons who identify as Dominican (3.3 %), but does not include the (7.4 %) Puerto Rican population. Including Puerto Ricans, there are approximately 560,000 (23.8 %) persons of Caribbean descent in Brooklyn. Similar, but not identical, demographics exist in Miami, though there are fewer people of Cuban descent in New York.
New York City has large Guyanese, Surinamese, and Trinidadian communities, primarily Indo - Guyanese, Indo - Surinamese, Indo - Jamaican, and Indo - Trinidadian (Indo - Caribbean Americans). The largest is in Ozone Park, Queens, at 101st and Liberty Avenues; this neighborhood extends to Richmond Hill, along Liberty Avenue between Lefferts Boulevard and the Van Wyck Expressway. Guyanese and Trinidadians in New York City number around 227,582 as of 2014. Afro - Guyanese and Afro - Trinidadians live in neighborhoods like Canarsie or Flatbush in Brooklyn.
Indo - Guyanese, Indo - Jamaicans, Indo - Surinamese, and Indo - Trinidadians originated in India. After the abolition of slavery, South Asians were brought to Guyana, Suriname, Jamaica, Trinidad and Tobago, and other parts of the Caribbean to work as indentured servants. These South Asians were mostly Hindu, but Muslims and Christians were also brought from India. A majority of these South Asians spoke Bhojpuri or Caribbean Hindustani. The descendants of these indentured servants later immigrated to New York City and to other places around the world, such as Toronto. In NYC, they mostly live in Richmond Hill and Ozone Park, which have many Hindu, Muslim, and Christian residents.
According to the 2000 census, there are about 200,000 Haitians / Haitian Americans in Brooklyn, making it home to the largest number of Haitian immigrants in New York City. The neighborhood with the largest Haitian community in New York is Flatbush, Brooklyn. The 2010 US Census indicates that 3 % of Brooklynites are of Haitian descent. Haitian businesses and restaurants can be found on Flatbush Avenue, Nostrand Avenue, and Church Avenue. Other prominent Haitian neighborhoods include East Flatbush, Canarsie, and Kensington in Brooklyn and Springfield Gardens, Queens Village, and Cambria Heights in Queens.
New York State has the largest population of Jamaican Americans in the United States. About 3.5 % of the population of Brooklyn is of Jamaican heritage. In 1655, Jamaica was captured by the British, who brought African slaves in large numbers to work on plantations. The African slaves were emancipated in 1838, and owners started paying wages to workers, who were now free to immigrate to the United States. Seeing opportunity, many Jamaicans immigrated in the years following 1944, when the United States economy was rebuilding from World War II. After 1965, when immigration quotas were lifted, Jamaican immigration skyrocketed again.
Jamaican neighborhoods include Queens Village and Jamaica in Queens; Crown Heights, East Flatbush, Flatbush in Brooklyn, and Wakefield and Tremont in the Bronx.
As of 2013, there are more than 74,000 Bangladeshis in New York City, a majority of whom reside in the boroughs of Queens and Brooklyn. The Bangladeshis in New York tend to form enclaves in neighborhoods predominantly populated by Asian Indians. These enclaves include one in Kensington, Brooklyn, featuring Bangladeshi grocers, hairdressers, and halal markets. Kensington 's enclave, the biggest Bangladeshi enclave in the city, was formed in the mid-1990s as a small community of Bangladeshi shops, but the Bangladeshi population has now expanded to other boroughs as well. Bangladeshis have tried to leave a permanent legacy, making a failed attempt to rename McDonald Avenue after Sheikh Mujibur Rahman, the first president of Bangladesh, as well as nicknaming the area Bangla Town.
There is also an enclave on 73rd Street in Jackson Heights, Queens, as well as one on Hillside Avenue in Queens, and one in Parkchester, Bronx. As well as living alongside the Indians, Bangladeshis own many of the Indian restaurants in Brooklyn and Queens.
Until the late 20th century, the Chinese population was limited to one area in lower Manhattan. The New York metropolitan area contains the largest ethnic Chinese population outside of Asia, enumerating an estimated 735,019 individuals as of 2012, including at least 350,000 foreign born Chinese as of 2013, making them the city 's second largest ethnic group. The Chinese population in the New York City area is dispersed across at least 9 Chinatowns, comprising the original Manhattan Chinatown, three in Queens (the Flushing Chinatown, the Elmhurst Chinatown, and the newly emerged Chinatown in Corona), three in Brooklyn (the Sunset Park Chinatown, the Avenue U Chinatown, and the Bensonhurst Chinatown), and one each in Edison, New Jersey and Nassau County, Long Island, not to mention fledgling ethnic Chinese enclaves emerging throughout the New York City metropolitan area. Chinese Americans, as a whole, have had a (relatively) long tenure in New York City. New York City 's satellite Chinatowns in Queens and Brooklyn are thriving as traditionally urban enclaves, as large - scale Chinese immigration continues into New York.
The first Chinese immigrants came to lower Manhattan around 1870, looking for the "gold '' America had to offer. By 1880, the enclave around Five Points was estimated to have from 200 to as many as 1,100 members. However, the Chinese Exclusion Act, which went into effect in 1882, caused an abrupt decline in the number of Chinese who immigrated to New York and the rest of the United States. Later, in 1943, the Chinese were given a small quota, and the community 's population gradually increased until 1968, when the quota was lifted and the Chinese American population skyrocketed. Today, the Manhattan Chinatown (simplified Chinese: 纽约 华 埠; traditional Chinese: 紐約 華 埠; pinyin: Niŭyuē Huá Bù) is home to the largest concentration of Chinese people in the Western Hemisphere and is one of the oldest ethnic Chinese enclaves outside of Asia. Within Manhattan 's expanding Chinatown lies a "Little Fuzhou '' on East Broadway and surrounding streets, occupied predominantly by immigrants from the Fujian Province of Mainland China. Areas surrounding the "Little Fuzhou '' consist mostly of Cantonese immigrants from Guangdong Province, the earlier Chinese settlers, and in some areas moderately of Cantonese immigrants. In the past few years, however, the Cantonese dialect that has dominated Chinatown for decades is being rapidly swept aside by Mandarin, the national language of China and the lingua franca of most of the latest Chinese immigrants. The energy and population of Manhattan 's Chinatown are fueled by relentless, massive immigration from Mainland China, both legal and illegal in origin, propagated in large part by New York 's high density, extensive mass transit system, and huge economic marketplace.
The early settlers of Manhattan 's Chinatown were mostly from Hong Kong and from Taishan of the Guangdong Province of China, which are Cantonese - speaking, and also from Shanghai. They form most of the Chinese population of the area around Mott and Canal Streets. The later settlers, from Fuzhou, Fujian, form the Chinese population of the area bounded by East Broadway. Chinatown 's modern borders are roughly Grand Street on the north, Broadway on the west, Chrystie Street on the east, and East Broadway to the south. Little Fuzhou, a prime destination for immigrants from the Fujian Province of China, is another, Fuzhounese, enclave in Chinatown and the Lower East Side of Manhattan. Manhattan 's Little Fuzhou is centered on the street of East Broadway. The neighborhood is named for the western portion of the street, which is primarily populated by mainland Chinese immigrants (primarily Foochowese from Fuzhou, Fujian). The smaller, eastern portion has traditionally been home to a large number of Jews, Puerto Ricans, and African Americans.
The present Flushing Chinatown, in the Flushing area of the borough of Queens, was predominantly non-Hispanic white and Japanese until the 1970s when Taiwanese began a surge of immigration, followed by other groups of Chinese. By 1990, Asians constituted 41 % of the population of the core area of Flushing, with Chinese in turn representing 41 % of the Asian population. However, ethnic Chinese are constituting an increasingly dominant proportion. A 1986 estimate by the Flushing Chinese Business Association approximated 60,000 Chinese in Flushing alone. The popular styles of Chinese cuisine are ubiquitously accessible in Flushing Chinatown, including Taiwanese, Shanghainese, Hunanese, Szechuan, Cantonese, Fujianese, Xinjiang, Zhejiang, and Korean Chinese cuisine. Even the relatively obscure Dongbei style of cuisine indigenous to Northeast China is now available in Flushing Chinatown, as well as Mongolian cuisine. Mandarin Chinese (including Northeastern Mandarin), Fuzhou dialect, Min Nan Fujianese, Wu Chinese, Beijing dialect, Wenzhounese, Shanghainese, Suzhou dialect, Hangzhou dialect, Changzhou dialect, Cantonese, Taiwanese, and English are all prevalently spoken in Flushing Chinatown, while the Mongolian language is now emerging.
Elmhurst, another neighborhood in Queens, also has a large and growing Chinese community.
By 1988, 90 % of the storefronts on Eighth Avenue in Sunset Park, in southern Brooklyn, had been abandoned. Chinese immigrants then moved into this area, not only new arrivals from China but also residents of Manhattan 's Chinatown seeking refuge from high rents, drawn by Sunset Park 's cheap property costs and rents. They formed the Brooklyn Chinatown, which now extends for 20 blocks along Eighth Avenue, from 42nd to 62nd Streets. This relatively new but rapidly growing Chinatown was originally settled by Cantonese immigrants, like Manhattan 's Chinatown in the past, but is now being repopulated by Fujianese (including Fuzhou people) and Wenzhounese immigrants.
Another Chinatown has developed in southern Brooklyn, on Avenue U in the Homecrest area, as evidenced by the growing number of Chinese - run fruit markets, restaurants, beauty and nail salons, and computer and general electronics dealers, spread among a community formerly composed mainly of Georgians, Vietnamese, Italians, Russians, and Greeks. The population of Homecrest in 2013 was more than 40 % Chinese. Brooklyn 's third Chinatown is also emerging in southern Brooklyn, in the Bensonhurst neighborhood, below the BMT West End Line (D train) along 86th Street between 18th Avenue and Stillwell Avenue. Brooklyn 's second and third Chinatowns now hold an increasing majority of the borough 's Cantonese population as the Cantonese dissipate from the main Brooklyn Chinatown in Sunset Park. With the migration of Brooklyn 's Cantonese to Bensonhurst, along with new Chinese immigration, small clusters of Chinese people and businesses have grown in different parts of Bensonhurst, integrating with other ethnic groups and businesses. Smaller enclaves also exist in nearby Dyker Heights, Gravesend, and Bath Beach.
In Woodside, Queens, 13,000 of the neighborhood 's 85,000 residents (about 15 %) are Filipino. Woodside 's "Little Manila '' extends along Roosevelt Avenue.
The first Filipino settlement in the United States was Saint Malo, Louisiana, established in 1763. Mass immigration started in the late 19th century, to service the plantations of Hawaii and the farms of California. The immigration quota was later lowered to 50 Filipinos a year; however, Filipinos in the United States Navy were exempt. Filipinos therefore settled near naval bases and, facing discrimination, formed ethnic enclaves. The quota was raised in the second half of the 20th century, starting another wave of Filipino immigration, of people looking for political freedom and opportunity, which has continued to the present.
Indian Americans are another group that has settled in New York City, forming a few different ethnic enclaves. One of these is called "Curry Row '' and is in the East Village, Manhattan, centered on 6th Street between 1st and 2nd Avenues, another is called "Curry Hill '' or "Little India '', centered on Lexington Avenue between 26th and 31st Streets, and another is in Jackson Heights, Queens, centered on 74th Street between Roosevelt and 37th Avenue.
Richmond Hill, Queens is another "Little India '' community. This area has the largest Sikh population in the New York City area. It is also known as "Little Punjab ''. There is also a "Little Indo - Caribbean '' community in Richmond Hill, Queens with many Indo - Caribbean Americans.
Some of the region 's main centers of Indian culture are located in central New Jersey, particularly in Middlesex County. In Edison, New Jersey, ethnic Asian Indians represent more than 28 % of the population, the highest percentage of any place in the United States with more than 1,000 residents identifying their ancestry. The Oak Tree Road area, which crosses through Edison and Iselin is a growing cultural hub with high concentrations of Indian stores and restaurants.
There have been three major waves of Indian immigrants, the first between 1899 and 1913, the second after India was granted independence from the United Kingdom in 1947, and the third after the immigration quota for individual countries was lifted in 1965. As of 2010, the New York City metropolitan area contains the largest Asian Indian population in North America.
As of the 2000 Census, over half of the 37,279 people of Japanese ancestry in New York State lived in New York City.
As of 2011, the largest groups of Japanese residents within the city are in Astoria, Queens, and Yorkville, in the Upper East Side of Manhattan. As of the 2010 U.S. Census there were about 1,300 Japanese in Astoria, about 1,100 Japanese in Yorkville, and about 500 Japanese in the East Village. As of the same year, there were about 6,000 Japanese in Bergen County, New Jersey, and 5,000 Japanese in Westchester County, New York. As of that year most short - term Japanese business executives in Greater New York City resided in Midtown Manhattan or in New York City suburbs. In 2011 Dolnick and Semple wrote that while other ethnic groups in the New York City region cluster in specific areas, the Japanese were distributed "thinly '' and "without a focal point '' such as Chinatown for the Chinese.
New York City is home to the second largest population of ethnic Koreans outside of Korea. Koreans started immigrating with the signing of the Korean - American Treaty of Amity and Commerce, which allowed them to do so freely. The first wave of Korean immigration lasted from 1903 -- 1905, when 7,000 Koreans came to the United States. After this first wave, the 1907 "Gentlemen 's Agreement '' of President Theodore Roosevelt restricted Korean immigration to the United States. President Harry Truman repealed this in 1948, and from 1951 -- 1964 another wave of Koreans migrated to the United States; a third wave lasted from 1969 -- 1987. As economic conditions improved in Korea, many Koreans chose to stay.
Korean communities in New York include Koreatown in Manhattan; Bedford Park in the Bronx; and Sunnyside, Woodside, Elmhurst, Flushing, Murray Hill, Bayside, and Douglaston -- Little Neck in Queens. The Korean enclave in Flushing spread eastward across Queens and into Nassau County, forming a large Long Island Koreatown. In Murray Hill -- part of the large Long Island Koreatown -- the station of the same name on the Long Island Rail Road is close to a row of Korean - owned businesses and a mainly Korean - speaking community; the neighborhood culminates with Meokjagolmok (Restaurant Street), with two dozen restaurants, bars, cafes, a bakery, and some karaoke establishments.
Pakistani Americans have a large presence in New York, with the city (along with New Jersey) hosting the largest Pakistani population of any region in the United States. The population of Pakistanis is estimated at around 35,000; they are settled primarily in the boroughs of Queens (more specifically Jackson Heights) and Brooklyn (Coney Island Avenue). These numbers make Pakistani Americans the fifth largest Asian American group in New York City. As of 2006, 50,000 people of Pakistani descent were said to be living in New York City, a figure that rises to 70,000 when illegal immigrants are also included. Pakistani migration to New York has occurred heavily only in the past two to three decades, reflecting the history of Pakistani migration elsewhere in the country; "Little Pakistans '', or ethnic enclaves populated by Pakistanis, tend to be populated by other South Asian Americans as well, including Indians and Bangladeshis, and thus are dominated by South Asian culture. Pakistani restaurants, grocery markets and halal shops abound in such areas.
Many Sri Lankan people settle in Tompkinsville, Staten Island, which has one of the highest concentrations of Sri Lankans outside of their native country. More than 5,000 Sri Lankans live in Staten Island. The Sri Lankan commercial center is at the corner of Victory Boulevard and Cebra Avenue. They often hold festive New Year celebrations on Staten Island, including a traditional oil - lighting ceremony, live baila music, and competitive events like coconut - scraping and bun - eating contests.
There is a community of Vietnamese at the Bowery in an area unofficially known as "Little Saigon. '' The area is overshadowed by neighboring Chinatown and is relatively indistinguishable from it. The area, however, is marked by an abundance of Vietnamese restaurants.
Many European ethnic groups have formed enclaves in New York. These include Albanian, Croatian, German, Hungarian, Greek, Irish, Italian, Jewish, Polish, Russian, and Ukrainian.
Albanians first immigrated to the United States from Southern Italy, Greece, and Kosovo in the 1920s. Later, in the 1990s, after the fall of communism in Eastern Europe, many Albanians flocked to the United States. Two neighborhoods that became Albanian are Belmont and Pelham Parkway.
In April 2012, it was reported by the New York Times that 9,500 people in the Bronx identify themselves as Albanian. Many live near Pelham Parkway and Allerton Ave in the Bronx.
Germans started immigrating to the United States in the 17th century, and by the late 19th century Germany was the country of origin for the largest number of immigrants to the United States. In fact, over one million Germans entered the United States in the 1850s alone.
German American ethnic enclaves in New York City include the now - defunct Little Germany, in Manhattan and the extant Yorkville, Manhattan. Little Germany, or as it was called in German, Kleindeutschland, was positioned in the Lower East Side, around Tompkins Square, in what would later become known as Alphabet City. The General Slocum disaster in 1904 wiped out the social core of the neighborhood, and many Germans moved to Yorkville. Yorkville, part of the Upper East Side, is bounded (roughly) by 79th Street to the south, 96th Street and Spanish Harlem to the north, the East River to the east, and Third Avenue to the west. The main artery of the neighborhood, 86th Street, has been called the "German Broadway ''. For much of the 20th century, Yorkville was inhabited by German and Hungarian Americans.
The Queens neighborhoods of Ridgewood and Glendale include small populations of Germans. Ridgewood notably includes Gottschee expatriates from modern - day Slovenia.
Astoria, Queens, is home to the largest concentration of Greek Americans in New York. Walking down a street there in the 1970s, one would see Greek restaurants, Hellenic clubs, and many Greek - owned businesses. Now, Astoria has become more diverse, with Mexican Americans, Colombian Americans, Pakistani Americans, and Russian Americans, among others, all calling Astoria home. Many Greeks are leaving Astoria for Whitestone, Queens, but many of the buildings in Astoria are still owned by Greeks.
The largest Greek migration to the United States began around 1910 and ended around 1930, with most migrating for the economic opportunity, but as living conditions in Greece improved in the 1980s, Greek migration slowed. However, Astoria remains New York 's "Greektown. ''
There is a significant orthodox Jewish Hungarian population in the rapidly growing neighborhood of Borough Park, Brooklyn. In December 2012, the stretch of 13th Avenue from 36th to 60th Streets was co-named Raoul Wallenberg Way in honor of the Swedish diplomat who saved 100,000 Hungarian Jews during the Holocaust. Many of these survivors settled in Borough Park after the war and raised their families here. There is also a Hungarian population in Williamsburg, Brooklyn and an affluent population in Yorkville, Manhattan.
Irish Americans make up approximately 5.3 % of New York City 's population, composing the second largest non-Hispanic white ethnic group. Irish Americans first came to America in colonial years (pre-1776), with immigration rising in the 1820s due to poor living conditions in Ireland. But the largest wave of Irish immigration came after the Great Famine in 1845.
After they came, Irish immigrants often crowded into subdivided homes meant for single families, and cellars, attics, and alleys all became homes for some of them. In fact, New York once had more Irishmen than Dublin itself. The Irish in New York developed a particular reputation for joining the New York City Police Department as well as the New York Fire Department.
Bay Ridge, Brooklyn, was originally developed as a resort for wealthy Manhattanites in 1879, but instead became a family - oriented Italian - and Irish - American community. Another large Irish - American community is located in Woodlawn, Bronx, but Woodlawn also has a mix of different ethnic groups. One large Irish community in Manhattan was Hell 's Kitchen. Other sizable Irish - American communities include Belle Harbor and Breezy Point, both in Queens. Two big Irish communities are Marine Park and neighboring Gerritsen Beach.
At 8.3 % of the population, Italian Americans compose the largest European American ethnic group in New York City, and are the largest ethnic group in Staten Island (Richmond County), making it the most Italian county in the United States, with 37.7 % of the population reporting Italian American ancestry.
Though Italian immigration began as early as the 17th century, with Pietro Cesare Alberti, from Venice, being the first reported Italian living in the New Amsterdam colony, effective immigration started around 1860 with the founding of the Kingdom of Italy. Italian immigration then skyrocketed, and remained high until 1921, when Congress passed the Emergency Quota Act, which slowed the immigration of Italians. Most of the Italian immigrants to New York were from Southern Italy, often from cities in Sicily or around Naples.
At one time, Little Italy in Manhattan had over 40,000 Italians and covered seventeen blocks. In fact, much of the Lower East Side in general and, until recently, Greenwich Village contained a high Italian population. Increasing rent prices, gentrification, and the enlargement of Chinatown have resulted in the shrinking of Little Italy. Little Italy is now concentrated around Mulberry Street between Kenmare and Grand streets, with about 5,000 Italian Americans. Italian Harlem, which was once home to over 100,000 Italian - Americans, has also largely disappeared since the 1970s, with the exception of Pleasant Avenue. East New York, Flatbush, Brooklyn, and Brownsville, Brooklyn also had sizable Italian communities that gradually shrank by the 1970s, though pockets of the older Italian - American communities still exist in these neighborhoods.
Today, Italian neighborhoods with large Italian - American populations include Morris Park, Bronx; Fordham, Bronx, around Arthur Avenue; Country Club, Bronx; Pelham Bay, Bronx; Bay Ridge, Brooklyn; Bensonhurst, Brooklyn; Williamsburg, Brooklyn and East Williamsburg; Dyker Heights, Brooklyn, the city 's largest Italian neighborhood (as of 2009); Cobble Hill, Brooklyn and Carroll Gardens, Brooklyn; Canarsie, Brooklyn; Astoria, Howard Beach, Middle Village, Whitestone and Ozone Park, Queens; and much of Staten Island.
Lapskaus Boulevard in Bay Ridge, Brooklyn recalls a Norwegian enclave, which became mostly assimilated in the late 20th century. At its peak the area was home to 60,000 Norwegians. This neighborhood also hosted New York 's principal Finnish enclave.
Polish American communities in New York include Greenpoint ("Little Poland '') and North Williamsburg in Brooklyn; Maspeth and Ridgewood (around both Fresh Pond Road and Forest Avenue) in Queens; and the East Village near 7th Street in Manhattan.
Brooklyn has several Russian American communities, including Bay Ridge, Gravesend, Sheepshead Bay, and Midwood. Staten Island 's Russian American communities are in South Beach and New Dorp. The largest Russian - speaking community in the United States is Brighton Beach. Many Russians in New York are Jews from the former Soviet Union, which broke up in 1991, and most still retain at least part of their Russian culture. The primary language of Brighton Beach is Russian, as seen from businesses, clubs, and advertisements. A significant portion of the community is not proficient in English, and about 98 % speak Russian as their native language.
There is a small Ukrainian American community in the East Village, centered on Second Avenue between 6th and 10th Streets. The community was there when the East Village was still referred to as the Lower East Side, and was a moderately large community. Though it has since declined, the number of Ukrainians in the neighborhood may have been as high as 60,000 after World War II.
Many ethnic enclaves in New York City are Latin American - centric. Latin American ethnic groups with enclaves in New York include Argentinians, Brazilians, Colombians, Dominicans, Peruvians, Salvadorans, Ecuadorians, Haitians, Mexicans, and Puerto Ricans.
More than half of the population of Jackson Heights, Queens, are immigrants, primarily South Asians, and Latin Americans, including Argentinians, Colombians, and Uruguayans.
Most Brazilian Americans in New York can be found in two areas -- in Astoria, Queens, and on a section of West 46th Street between Fifth Avenue and Sixth Avenue in Midtown Manhattan. In Astoria, the area around 36th Avenue and 30th Street is the most Brazilian in character, despite the prevalence of other ethnic groups, like Bengali, Pakistani, Indian, Mexican, Arab, Japanese, Korean, Greek, Dominican, and Italian people. The top three languages in Astoria are Bengali, Spanish, and Brazilians ' native Portuguese. The other Brazilian neighborhood, 46th Street between Fifth and Sixth Avenues, was officially named "Little Brazil '', but resident Brazilians call it "Rua 46. ''
One of many Latin American groups represented in New York, Colombian Americans have a very strong presence in Jackson Heights and a nearby neighborhood, Elmhurst, especially along Roosevelt Avenue.
New York City also has some Salvadoran American ethnic enclaves such as the one in Flushing; others are in Corona, Williamsburg, and Parkchester. There is a sizable Honduran American population in the South Bronx and West Bronx.
Immigration records of Dominicans in the United States date from the late 19th century, and New York City has had a Dominican community since the 1930s. Large - scale immigration of Dominicans began after the death of dictator Rafael Trujillo in 1961. Other catalysts of Dominican immigration were the invasion of Santo Domingo in 1965 and the regime of Joaquín Balaguer from 1966 to 1978. In part due to these catalysts, from the 1970s until the early 1990s, Dominicans were the largest group of immigrants coming into New York City. Today, Dominicans compose 7 % of New York 's population and are its largest immigrant group. Major Dominican neighborhoods in New York include Washington Heights and Inwood in Manhattan; Bushwick, Williamsburg, Sunset Park, and East New York in Brooklyn; Corona, Sunnyside, and Woodside in Queens; and the West Bronx, particularly the Morris Heights, Highbridge, and Fordham - Bedford sections. In fact, Dominicans are the dominant Hispanic group in many areas of the Bronx west of the Grand Concourse.
The Dominican population of Woodside is concentrated on three blocks of identical apartment buildings. Other immigrant groups found in large numbers in Woodside include Irish, Chinese, Korean, Muslim, Mexican, and Colombian residents.
The South Bronx is another neighborhood with a Dominican population. During the 1970s, the area, while heavily populated by Puerto Ricans and African Americans, became infamous for poverty and arson, much of the latter set by landlords seeking insurance money on "coffin ship '' buildings. By 1975, the South Bronx was the most devastated urban landscape in America and had experienced the largest population drop in urban history outside the aftermath of war. The South Bronx has since recovered from most of the damage done in the 1970s.
By 1984, the traditionally heavily Italian neighborhood of Corona had instead become heavily Dominican. Corona experienced rapid economic growth of 59 %, compared with 7 % for the rest of the city, and as of 2006 had the most overcrowded school district in the city.
The Dominican population of Washington Heights is significant, and candidates for political office in the Dominican Republic will run parades up Broadway.
In some of these neighborhoods, shops advertise in Spanish and English, the Dominican flag is hung from windows, storefronts, and balconies, and the primary language is Dominican Spanish.
New York City has a few Ecuadorian American ethnic enclaves, and there are over 137,000 Ecuadorians in the city as of 2013, making them the sixth largest ethnic population in the city. A part of Southside Williamsburg in Brooklyn is Ecuadorian in nature, with Spanish being the primary language of most Ecuadorians in the area, bodegas advertising goods in Spanish, and churches advertising bingo games in Spanish.
Other Ecuadorian neighborhoods include Tremont in the Bronx, as well as several neighborhoods in Queens, including Jackson Heights, Corona, and Ridgewood. Corona 's Ecuadorian community, notably, is the fastest - growing, with parts of Corona being over 25 % Ecuadorian.
Mexican Americans, as of 2004, were New York 's fastest growing ethnic group, with 186,000 immigrants as of 2013; they were also the third largest Hispanic group in New York City, after Puerto Ricans and Dominicans. Close to 80 % of New York Mexicans were born outside the United States, and more than 60 % of Mexican New Yorkers reside in Brooklyn and Queens.
In Brooklyn, Sunset Park and Flatbush have the highest concentration of Mexicans, and Bushwick and Brighton Beach also have significant Mexican populations. In Queens, Elmhurst, East Elmhurst, and Jackson Heights have the largest Mexican populations, but Corona and Kew Gardens also have sizable communities. Spanish Harlem in Manhattan, around 116th Street and Second Avenue, has a large community of Mexicans, which is still small compared to the area 's predominant Puerto Rican population; Staten Island has a large Mexican community in the Port Richmond, West Brighton, and Tompkinsville areas.
The densest population of Mexicans in the city is in Sunset Park, Brooklyn, in an area bounded by Second and Fifth Avenues and by 35th and 63rd Streets. This area is centered around a Fifth Avenue commercial strip. The main church is Basilica of Our Lady of Perpetual Help, with over 3,000 Mexican Catholic parishioners.
Puerto Ricans have been immigrating to New York since 1838, though they did not arrive in large numbers until the 20th century. In 1910 only 500 Puerto Ricans lived in New York, but by 1970 that number had skyrocketed to over 800,000, and 40 % of those lived in the Bronx. Unlike in the other four boroughs, Puerto Rican populations are significant throughout the Bronx, though there are slightly higher concentrations in the South Bronx. The first group of Puerto Ricans immigrated to New York City in the mid-19th century, when Puerto Rico was a Spanish colony and its people were Spanish subjects, and as such they were immigrants. The following wave of Puerto Ricans moved to New York City after the Spanish -- American War in 1898. No longer Spanish subjects and citizens of Spain, they were now Puerto Rican citizens of an American possession and needed passports to travel to the mainland United States. That changed in 1917, when the United States Congress approved the Jones - Shafroth Act, which gave Puerto Ricans on the island U.S. citizenship with certain limitations; Puerto Ricans living in the mainland U.S., however, were given full American citizenship and were allowed to seek political office in the states in which they resided. Two months later, when Congress passed the Selective Service Act, conscription was extended to Puerto Ricans both on the island and in the U.S., and Puerto Rican men 18 years and older were expected to serve in the military during World War I. The Jones - Shafroth Act also allowed Puerto Ricans to travel between Puerto Rico and the United States mainland without a passport, thereby making them migrants rather than immigrants. The advent of air travel was one of the principal factors that led to the largest wave of Puerto Rican migration to New York City, in the 1950s, known as "The Great Migration ''. Although Florida has drawn off some of this population, there has been a resurgence in Puerto Rican migration to New York and New Jersey; consequently, the New York City metropolitan area 's Puerto Rican population grew from 1,177,430 in 2010 to a Census - estimated 1,201,850 in 2012, maintaining by a significant margin its status as the most important cultural and demographic center for Puerto Rican Americans outside San Juan.
Brooklyn has several neighborhoods with a Puerto Rican presence; many of the ethnic Puerto Rican neighborhoods in Brooklyn formed before those in the South Bronx because of the demand for labor at the Brooklyn Navy Yard in the 1940s and 1950s. Bushwick has the highest concentration of Puerto Ricans in Brooklyn. Other neighborhoods with significant populations include Williamsburg, East New York, Brownsville, Coney Island, Red Hook, Sunset Park, and parts of Bay Ridge. In Williamsburg, Graham Avenue is nicknamed "Avenue of Puerto Rico '' because of the high density and strong ethnic enclave of Puerto Ricans who have lived in the neighborhood since the 1950s; the Puerto Rican Day Parade is also hosted on the avenue.
Ridgewood, Queens, also has a significant Puerto Rican population, which is now spreading to other places in Central Queens such as Maspeth, Glendale, and Middle Village; as does neighboring community Bushwick, Brooklyn. Other neighborhoods in Queens such as Woodhaven also have a sizable population.
Puerto Rican neighborhoods in Manhattan include Spanish Harlem and Loisaida. Spanish Harlem was "Italian Harlem '' from the 1880s until the 1940s. By 1940, however, the name "Spanish Harlem '' was becoming widespread, and by 1950, the area was predominantly Puerto Rican and African American. Loisaida is an enclave east of Avenue A that originally comprised German, Jewish, Irish, and Italian working - class residents who lived in tenements without running water; the German presence, already in decline, virtually ended after the General Slocum disaster in 1904. Since then, the community has become Puerto Rican and Latino in character, despite the gentrification that has affected the East Village and the Lower East Side since the late 20th century.
Staten Island has a fairly large Puerto Rican population along the North Shore, especially in the Mariners ' Harbor, Arlington, Elm Park, Graniteville, Port Richmond & Stapleton neighborhoods, where the population is in the 20 % range.
In New York and many other cities, Puerto Ricans usually live in close proximity with Dominicans and African Americans. High concentrations of Puerto Ricans are also present in numerous public housing developments throughout the city.
In some places in the South Bronx, Spanish is the primary language. Throughout the 1970s, the South Bronx became known as the epitome of urban decay, but has since made a recovery.
Several Middle Eastern ethnic groups have immigrated to New York and formed neighborhoods with high concentrations of people of Arab descent. Between the 1870s and the 1920s, the first wave of Arab immigrants brought mostly Syrians and Lebanese to New York City; the majority of them were Christian. Many of the Syrian immigrants settled on Washington Street, where the city 's first Arab neighborhood was formed. During the second immigration wave, in the 1960s, Little Syria became more affluent and moved to the area around Atlantic Avenue. After a time, the Arab inhabitants of this area moved to other parts of the city, such as Astoria and Bay Ridge. There are now around 160,000 Arabs in New York City and more than 480,000 in New York State. According to the Arab American Institute, the population of people who identify themselves as Arab grew by 23 % between 2000 and 2008.
Astoria, Queens, has an Egyptian American community, dubbed "Little Egypt '', centered on Steinway Street between Broadway and Astoria Boulevard. It features many Middle Eastern and North African cafés, restaurants, and shops, including other businesses from countries like Morocco, Lebanon, and Syria.
On Atlantic Avenue between the East River and Grand Army Plaza, there is also a significant population of Middle Easterners, and a few shops from this era, such as Sahadi 's, still exist on the street. A small part of this community remains in the neighborhoods of Boerum Hill and Park Slope. There are also significant Middle Eastern populations in Midwood, Brooklyn and Bay Ridge, Brooklyn; Bay Ridge in particular has a rapidly growing Arab population, including many Yemenis and Palestinians.
In the other boroughs, Staten Island has a Palestinian community in the New Springville area, and there are many Arab restaurants in Manhattan.
See Arab Americans & Arab immigration to the United States.
North Williamsburg is an ethnic enclave centered on Israeli Americans, and there is also a small community of Israelis centered on Kings Highway, also in Brooklyn. Israelis first immigrated to the United States after 1948, and the country has experienced two large waves of immigration from Israel: in the first, during the 1950s and early 1960s, 300,000 Israelis immigrated to the United States; in the second, starting in the mid-1970s and lasting through the present, 100,000 to 500,000 Israelis have immigrated.
The first Jews arrived in New York City in 1654, when it was still New Amsterdam, from Recife (Brazil) following the First Anglo - Dutch War; a decade later this community produced the first known civil rights case in the New World, when a Jew named Asser Levy successfully appealed to the New Amsterdam colonial council for the right to serve in the army. Later German immigration brought large communities of Ashkenazi Jews. The first wave of Jewish immigration to America, lasting until 1820, brought fewer than 15,000 Jews, who were fleeing religious persecution in Brazil, Portugal, Spain, Bordeaux, Jamaica, England, Curaçao, Holland, and the former Poland (Rzeczpospolita Obojga Narodów) conquered by the Russian Empire; they founded communities in New York, Newport, Charleston, Savannah, and Philadelphia. From 1820 to 1880 came the second wave, in which a quarter million German Jews migrated to America. A third major wave brought Sephardi Jews from the Balkans and the Middle East after the Turkish revolution. From 1881 to 1924, over 2,000,000 Eastern European Jews immigrated, fleeing anti-semitic persecution in their home countries, and the outbreak of World War I and, later, the Holocaust caused many German Jews to immigrate to the United States as well. In a later wave from Eastern Europe, from 1985 to 1990, over 140,000 Jews immigrated from the Soviet Union, and 50,000 Jews a year still immigrate to the United States.
New York today has the second largest number of Jews of any metropolitan area, behind Gush Dan (the Tel Aviv metropolitan area) in Israel. Borough Park, Brooklyn (also known as Boro Park), is one of the largest Orthodox Jewish communities in the world, and Crown Heights, Brooklyn, also has a large Orthodox Jewish community. Flatbush, Williamsburg, and Midwood in Brooklyn; Riverdale in the Bronx; Forest Hills, Kew Gardens Hills, Kew Gardens, and Fresh Meadows in Queens; and the Upper East Side, the Upper West Side, and Washington Heights (the last because of the proximity of the renowned Yeshiva University) in Manhattan are also home to Jewish communities. Another neighborhood, the Lower East Side, though presently known as a mixing pot for people of many nationalities, including German, Puerto Rican, Italian, and Chinese, was once a primarily Jewish neighborhood. Although the Jewish community of Staten Island is dispersed throughout the island, enclaves of Hasidic Jews are found in the Willowbrook, New Springville, Eltingville, and New Brighton areas.
|
who played adam sandler's daughter in click | Click (2006 film) - wikipedia
Click is a 2006 American science fiction comedy - drama film directed by Frank Coraci, written by Steve Koren and Mark O'Keefe, and produced by Adam Sandler, who also starred in the lead role. The film co-stars Kate Beckinsale as his wife Donna and Christopher Walken as Morty. Sandler plays an overworked architect who neglects his family. When he acquires a universal remote that enables him to "fast forward '' through unpleasant or outright dull parts of his life, he soon learns that those seemingly bad moments he skips over contained valuable time with his family and important life lessons. Throughout the story, Morty explains how the remote works and issues warnings.
Filming began in late 2005 and was finished by early 2006. The film was released in the United States on June 23, 2006, by Columbia Pictures. It was nominated for an Academy Award for Best Makeup, making this the only Sandler film to be nominated for an Oscar.
Michael Newman (Adam Sandler) is a hardworking architect, married to his longtime sweetheart, Donna Newman (Kate Beckinsale) with two children, Ben and Samantha. Michael is easily pushed around by his overbearing boss, John Ammer (David Hasselhoff), and often chooses work over family. While going in search of a universal remote control at the retail store Bed Bath & Beyond, Michael falls onto a bed and then proceeds to the section marked "Beyond ''. He befriends a mysterious man named Morty (Christopher Walken), who gives him the remote control for free and warns him that it can never be returned.
To Michael 's amazement, he finds that this remote can actually control time and space, as if reality were a television program. He can revisit past memories, lower the volume of surrounding noise, and freeze or fast forward through time. At first, Michael uses the remote to have fun, but then starts to use it for his benefit. He uses its universal language adaptor to interpret for foreign clients, or to relive moments from his childhood: during a camping trip at Lake Winnipesaukee, his parents had a barbecue, but the other kids had declined his invitation in favor of another family that was better off. This affected Michael 's adult personality; he is determined to succeed in order to avoid raising his family in the same squalor his parents did. Morty attempts to explain the error of Michael 's work ethic, and reminds him not to take Donna for granted by showing him some of Michael 's past girlfriends, who were homely or had awful personalities. However, Michael ultimately uses the remote to skip quarrels with Donna, to avoid suffering a cold by skipping to the point at which he recovers, and to skip a family dinner in order to finish an important project. Later, Morty reveals that when Michael fast - forwards through time, his body is on "auto - pilot '', meaning his mind skips ahead while his body goes through the motions of everyday life. After seeing his children upset that he can not afford the new bicycles he promised them, Michael decides to use the remote to fast forward to when Ammer delivers on his promised partnership, but learns that this was one whole year ahead. Michael finds out that, during the span of that year, he entered marriage counseling; his children now prefer to watch CSI instead of Dragon Tales; and he missed the death of his dog. To make matters worse, the remote begins fast - forwarding on its own, a feature that automatically follows Michael 's preferences based on how he used it. Michael 's various attempts to dispose of or destroy the remote fail, giving him the resolve to regain agency over his life.
The next day, Ammer tells Michael he is retiring and suggests that one day Michael may end up CEO. Momentarily forgetting his plan to outfox the remote, Michael says he would like to end up CEO; the remote reacts accordingly and fast - forwards to the year 2017. In the future Michael has achieved everything he wished for and is now the new CEO of the company, after Ammer moves to Morocco with Donna 's best friend, Janine (Jennifer Coolidge). Michael has all the material wealth and luxuries he could ever want, albeit at the cost of his family. Michael has become unhealthy and obese over the intervening ten years, and discovers that he lives alone in a penthouse. After discovering his new life, and unaware that he and Donna have divorced, Michael wishes to go back to his house, and the remote fast forwards him there. However, when he returns to his old house, Michael is resented by everyone in his family and realizes the divorce has happened. He also discovers that Donna has married Bill, who coached Ben 's swim team. While visiting his family, Michael meets Ben, who is now an overweight teenager with self - esteem issues from copying his father 's eating habits, and Samantha, who has grown into a beautiful but moody young woman with attitude problems similar to Ben 's. He then fights with Donna and Bill before confronting Morty, who reveals that his life choices caused him to become an overweight workaholic who neglected his family. The new family dog then pounces on him, causing him to fall and hit his head on a brick wall, knocking him into a coma.
Having "learned '' from Michael when he skipped his cold, the remote fast forwards through Michael 's coma and transports him to 2023, where he has finally woken up. Michael had suffered a heart attack while in a coma, but is no longer obese. Ben is also in shape and is now Michael 's partner. Michael is devastated when Ben tells him his father Ted (Henry Winkler) has died, and Michael, while visiting his father 's grave, tries to use the remote to go to the moment when Ted was on his deathbed, but Morty appears, saying that it will not take him there; the time travel function only works for events Michael was physically present. Michael uses the remote to take him to the point when he last saw Ted alive, which is when Ted made an impromptu visit to his son 's and grandson 's office. Michael sees that, while on auto - pilot, he had remained emotionally distant, and brusquely rejected his father 's offer for a night out with him. Even coldly telling Ted, that he "always knew '', when Ted said he would teach Michael the infamish coin trick (something that Ted is extremely prideful of). Heartbroken, Ted walked away with Ben. Although aghast, Michael "pauses '' Ted to get in his last words and tell his unaware father that he loves him too and will miss him.
At the graveyard, Morty reveals he is in fact the Angel of Death. Upset with his life, Michael begs to go to a "good place '', and fast forwards to Ben 's wedding in the year 2029. Although Michael is glad, he also sees Samantha calling Bill "Dad '', and the shock triggers a second heart attack. When Michael awakens in a hospital wing, Morty appears and tells him that he has chosen his path and is powerless to do anything about it. Michael 's family arrives, and Ben tells his dad their business is in trouble, so he has cancelled his honeymoon to leave temporarily and patch things up. Michael tries to tell Ben it is not fair for him to neglect his wife, but to no avail. After Ben and Samantha leave, Michael struggles out of the bed and feebly chases after them. Not wanting Ben to make the same mistakes he did, Michael tries to reach Ben before he leaves for his flight. Although Morty protests that Michael will die unless he goes back to the hospital, Michael insists he needs to speak his last words to his family. Michael reaches his family and collapses, but manages to convince Ben that family must come first; he reassures the others that he still loves them, and Morty approaches to take him. The family weeps as Michael dies.
Michael wakes up in the bed onto which he had collapsed at Bed Bath & Beyond, convinced that the events have all been a crazy dream. As he wakes up he sees a store employee (Nick Swardson), who earlier said he had no friends, so Michael picks him up and throws him onto the bed in a playful manner. He cheerfully drives through town and reminds his parents that his door is always open for them. When he arrives back at his own home, he finds the remote on the kitchen counter along with a note by Morty, who says "good guys need a break ''. Michael now realizes his experience was not a dream, but a warning. He throws the remote in the trash, and it does not reappear, indicating that he has ultimately made the right choice. The film ends with Michael offering to play with his kids.
On February 15, 2003, Frank Coraci was hired and set to direct Click, with Steve Koren and Mark O'Keefe writing the script. Adam Sandler, Jack Giarraputo and Neal H. Moritz produced the film with a budget of $82.5 million for release in 2006. On March 11, 2004, it was announced that Adam Sandler, Emilio Cast, Kate Beckinsale, Christopher Walken, David Hasselhoff, Henry Winkler, Julie Kavner, Sean Astin, Jennifer Coolidge, Sophie Monk, Michelle Lombardo, Joseph Castanon, Jonah Hill, Jake Hoffman, Danielle Tatum McCann, Lorraine Nicholson, Katie Cassidy, Cameron Monaghan, Rachel Dratch, Nick Swardson, Rob Schneider and Billy Slaughter had joined the film. James Earl Jones joined the cast on March 14 to play himself and the narrator. On April 8, 2005, it was announced that Rupert Gregson - Williams would compose the music for the film. On May 8, Columbia Pictures acquired distribution rights to the film. Click was filmed in Shreveport, Louisiana.
The film 's plot is similar to a story from the Goosebumps book series, also entitled "Click '', which was made into an episode of the franchise 's television series in 1997. The similarity prompted widespread discussion over whether the earlier material influenced, or was borrowed for, the 2006 film.
The film screened out of competition at the San Sebastian International Film Festival.
Rotten Tomatoes gave the film a score of 32 % based on 167 reviews. The average score is a 4.7 out of 10, and the consensus is: "This latest Adam Sandler vehicle borrows shamelessly from It 's a Wonderful Life and Back to the Future, and fails to produce the necessary laughs that would forgive such imitation. '' Metacritic gave it a score of 45 out of 100, which indicates "mixed or average reviews ''. Audiences polled by CinemaScore gave the film an average grade of "B + '' on an A+ to F scale. Click grossed $137,355,633 in the United States and $100,325,666 internationally, with a total gross of $237,681,299 worldwide.
|
what hotel was the basis for the shining | The Stanley Hotel - wikipedia
The Stanley Hotel is a 142 - room Colonial Revival hotel in Estes Park, Colorado, United States of America. Approximately five miles from the entrance to Rocky Mountain National Park, the Stanley offers panoramic views of Lake Estes, the Rockies and especially Long 's Peak. It was built by Freelan Oscar Stanley of Stanley Steamer fame and opened on July 4, 1909, catering to the American upper class at the turn of the century. The hotel and its surrounding structures are listed on the National Register of Historic Places.
The Stanley Hotel hosted the horror novelist Stephen King, serving as inspiration for the Overlook Hotel in his 1977 bestseller The Shining and its 1980 film adaptation of the same name, as well as the location for the 1997 miniseries. Today, it includes a restaurant, spa, and bed - and - breakfast, and provides guided tours which feature the history and alleged paranormal activity of the site.
In 1903, the Yankee steam - powered car inventor Freelan Oscar Stanley (1849 - 1940) was stricken with a life - threatening resurgence of tuberculosis. The most highly recommended treatment of the day was fresh, dry air with lots of sunlight and a hearty diet. Therefore, like many "lungers '' of his day, Stanley resolved to take the curative air of the Colorado Rockies. He and his wife Flora arrived in Denver in March and, in June, decided to spend the rest of the summer in the mountains, in Estes Park. Over the course of the season, Stanley 's health improved dramatically. Impressed by the beauty of the valley and grateful for his recovery, he decided to return every year. He lived to the age of 91, dying of a heart attack in Newton, Massachusetts, in 1940, one year after his wife.
By 1907, Stanley had recovered completely. However, not content with the rustic accommodations, lazy pastimes and relaxed social scene of their new summer home, Stanley resolved to turn Estes Park into a resort town. In 1907, construction began on the Hotel Stanley, a 48 - room grand hotel that catered to the class of wealthy urbanites who composed the Stanleys ' social circle back east.
The land was officially purchased in 1908 through the representatives of Lord Dunraven, the Anglo - Irish peer who had originally acquired it by stretching the provisions of the Homestead Act and pre-emption rights. Between 1872 and 1884, Dunraven claimed 15,000 acres (61 km²) of the Estes Valley in an unsuccessful attempt to create a private hunting preserve, making him one of the largest foreign holders of American lands. Unpopular with the local ranchers and farmers, Dunraven left the area for the last time in 1884, relegating the ranch to the management of an overseer. Dunraven 's presence in Colorado had become so well known in the United States that his situation was parodied in Charles King 's novel Dunraven Ranch (1892) as well as James A. Michener 's Centennial (1974). His reputation was such that, when Stanley suggested "The Dunraven '' as a name for his new hotel, 180 people signed a buckskin petition requesting that he name it for himself instead.
The structure was completed in 1909 and featured a hydraulic elevator, dual electric and gas lighting, running water, a telephone in every guest room and a fleet of specially - designed Stanley "Model Z '' Mountain Wagons to bring guests from the train depot twenty miles away; all of this at a time when Estes Park was little more than a locale for hunters and naturalists. Originally, Stanley chose an ocher color for the hotel 's exterior with white accents and trim. The hotel was not equipped with heat until 1983 and closed for the winter every year. The presence of the hotel and Stanley 's own involvement greatly contributed to the growth of Estes Park (incorporated in 1917) and the creation of the Rocky Mountain National Park (established in 1915).
Stanley operated the hotel almost as a pastime remarking once that he spent more money than he made each summer. In 1926, he sold the Stanley to a private company incorporated for the sole purpose of running it. The venture failed and, in 1929, Stanley purchased his property out of foreclosure selling it again, in 1930, to fellow auto and hotel magnate Roe Emery of Denver. During Emery 's tenure as owner, the structures were painted white inside and out and most of the original electro - gas fixtures were replaced.
The Stanley Hotel National Register Historic District contains eleven contributing buildings including the main hotel, a concert hall, carriage house and The Lodge, a smaller bed - and - breakfast originally called Stanley Manor. The buildings were designed by F.O. Stanley with the professional assistance of Denver architect T. Robert Wieger, Henry "Lord Cornwallis '' Rogers, and contractor Frank Kirchoff. The site was chosen for its vantage overlooking the Estes valley and Long 's Peak within the National Park. The main building is a steel - frame structure with wood cladding resting upon a granite masonry foundation. Wood for flooring, clapboarding and finishing was brought from Kirchoff 's Denver Lumberyard and the Bluff City Lumber Company of Pine Bluff, Arkansas. The Griffith sawmill near Bierstadt Lake and Stanley 's own Hidden Valley lumber operation, located in the future national park, supplied framing material. The materials were brought to Lyons, Colorado by rail and thence to Estes Park by mule - drawn wagon. Simultaneously, Stanley oversaw the construction of a hydroelectric power plant which brought electricity to Estes Park for the first time in 1909. Stanley claimed that his hotel was the first to be powered entirely by electricity from the lighting to the kitchens. Water was supplied by the Black Canyon Creek which was dammed in 1906.
The style of the campus is so - called Colonial Revival. Although rare in the western United States, F.O. Stanley chose the Colonial Revival for its fashionable popularity in New England where he had already designed his own home and a social club in the style. The hotel 's clientele would presumably, like the Stanleys, have identified the style with New England respectability and sophistication in contrast to the rusticity of the surrounding town. At one time, Stanley planned to build another, more economical hotel in Estes Park as well as a headquarters and residence for the superintendent of the Rocky Mountain National Park in the same style, to harmonize with his grand hotel. While the forms and layouts of the buildings are suited to their modern uses, their ornamentation exhibits the stylistic hallmarks of Georgian or Federal architecture from the staunch symmetry of the south elevation to the cupola, dormers, Palladian windows, side - and fan - lights, scroll brackets and "swan 's neck '' pediments that articulate the exterior.
The floor plan of the main hotel building was laid out to accommodate the various activities popular with the American upper class at the turn of the twentieth century, and the spaces are decorated accordingly. The music room, for instance, with its cream - colored walls (originally green and white), large windows and fine, classical plaster - work, was designed for letter - writing during the day and chamber music at night: cultured pursuits perceived as feminine. On the other hand, the smoking lounge (today the Piñon Room) and adjoining billiard room, with their dark stained - wood elements and granite fireplace, were designated for enjoyment by male guests. Stanley himself, having been raised in a Protestant household in the teetotaling State of Maine and having recovered from a serious lung disease, did not smoke cigars or drink alcohol, but these were essential after - dinner activities for most men at the time. Billiards, however, was among Stanley 's most cherished pastimes.
The layout was also determined by air circulation. The window at the top of the grand stair provides a pleasant breeze across the lobby, French doors in the state rooms open onto shaded verandas and the two curving staircases connecting the guest corridors prevent stagnant air in the upper floors. Although the hotel was finally updated with central heating in 1983, guests still depend on Rocky Mountain breezes for cooling in the summer. Also completed in 1983, the hotel 's service tunnel connects the basement level to the staff entrance. It is cut directly through the granite on which the hotel sits.
The concert hall, east of the hotel, was built by Stanley in 1909 with the assistance of Henry "Lord Cornwallis '' Rogers, the same architect who designed his summer cottage. According to popular legend, it was built as a gift for Flora Stanley who was an avid pianist despite her failing eyesight. The interior is decorated in the same manner as the smaller music room and somewhat resembles that of the Boston Symphony Hall (McKim, Mead & White, 1900) with which the Stanleys may have been familiar. The stage features a trap door, used for theatrical entrances and exits. The lower level once housed a bowling alley. This feature has long since disappeared but it possibly resembled the one at the Stanley 's Hunnewell Club in Newton, pictures of which are archived in the Newton Free Library. The hall underwent extensive repair and renovation in the 2000s.
Once called Stanley Manor, this smaller hotel between the main structure and the concert hall is a 2: 3 scaled - down version of the main hotel. Unlike its model, the manor was fully heated from completion in 1910 which may indicate that Stanley planned to use it as a winter resort when the main building was closed for the season. However, unlike many other Colorado mountain towns now famous for their winter sports, Estes Park never attracted off - season visitors in Stanley 's day and the manor remained empty for much of the year. Today it is called The Lodge and serves as a bed - and - breakfast off - limits to the public.
In 2015, the open area in front of the hotel, originally a driveway for Stanley Steamers and a promenade for guests to enjoy mountain views, was replaced with a hedge maze. This non-historic feature was added to evoke the hotel 's connection to The Shining. Although a hedge maze features prominently in Stanley Kubrick 's version of The Shining, no such feature can be found in Stephen King 's novel. The lawn of the Overlook Hotel, as King imagined it, was adorned with topiary animals and it was set that way for the shooting of the miniseries in 1996. Historically, neither feature existed at the Stanley. The design was the result of a competition, chosen from amongst 300 entrants from across the globe. The winning submission was designed by New York architect Mairim Dallaryan Standing.
In 2016, John Cullen, owner of the Stanley Hotel, announced a competition for a sculpture to be the centerpiece of the terrace in front of the hotel. Sculptors Sutton Betti and Daniel Glanz won the competition with a sculpture of F.O. Stanley holding one of his violins. The sculpture was installed and dedicated on September 29, 2016. Another hotel was used for filming the outside shots, and the Stanley for inside shots.
On October 30, 1974, horror writer Stephen King and his wife Tabitha spent one night at the Stanley Hotel during their one year of residency in Boulder, Colorado. They were driving through Estes Park as night approached when they happened upon the Stanley Hotel. They decided to book a room and, upon checking in, discovered that they were the only overnight guests. "They were just getting ready to close for the season, and we found ourselves the only guests in the place -- with all those long, empty corridors. '' He and his wife were served dinner in an empty dining room accompanied by canned orchestral music. "Except for our table all the chairs were up on the tables. So the music is echoing down the hall, and, I mean, it was like God had put me there to hear that and see those things. '' That night, according to King, "I dreamed of my three - year - old son running through the corridors, looking back over his shoulder, eyes wide, screaming. He was being chased by a fire - hose. I woke up with a tremendous jerk, sweating all over, within an inch of falling out of bed. I got up, lit a cigarette, sat in a chair looking out the window at the Rockies, and by the time the cigarette was done, I had the bones of The Shining firmly set in my mind. '' "Any big hotels have got scandals, '' King wrote in The Shining. "Just like every big hotel has got a ghost. Why? Hell, people come and go. Sometimes one of ' em will pop off in his room, heart attack or stroke or something like that. Hotels are superstitious places. ''
The Shining was published in 1977 and became the third great success of King 's career, after Carrie and ' Salem 's Lot. The Overlook Hotel, the fictional hotel in King 's book inspired by the Stanley, is an evil entity haunted by its many victims. Room 217 of the Overlook features prominently in the novel, having been the room where King spent the night at the Stanley; this is the room on the second floor in the center of the west wing, with a balcony overlooking the south terrace. Room 217 remains the hotel 's most requested accommodation.
In 1980, The Shining became the basis for a film adaptation directed by Stanley Kubrick. Kubrick 's vision for the movie differed from King 's significantly in many ways, including the portrayal of the Overlook Hotel. The exteriors of Kubrick 's Overlook were supplied by the Timberline Lodge on the slopes of Mt. Hood in Oregon. Inspiration for the interior sets (erected at Elstree Studios in England) came from the 1927 Ahwahnee Hotel in Yosemite National Park. The management of the Timberline Lodge, fearful that guests would refuse to stay in their Room 217 if it were featured in a horror movie, insisted that Kubrick change the haunted room in the film to Room 237.
In 1997, The Shining television miniseries was produced, with the Stanley Hotel as the primary shooting location. A playhouse version of the main building, which adorned the lawn of the Overlook Hotel in the miniseries, can be viewed in the basement of the Stanley.
The Hotel has also been used as a filming location for other movies and TV shows; most notably, as the fictional "Hotel Danbury '' of Aspen, Colorado, in the 1994 film Dumb and Dumber.
From 2013 to 2015, the hotel property hosted the Stanley Film Festival, an independent horror film festival operated by the Denver Film Society, held in early May. The festival featured screenings, panels, student competitions, audience awards and receptions. The Stanley Film Festival was put on hiatus in 2016, and canceled for 2017.
The historic Stanley Concert Hall serves as venue for various musical groups such as country - punk band Murder By Death which has performed a Shining - themed series of concerts in the space.
Bravo 's cooking competition, Top Chef, also used the Stanley as a venue for Episode 10 of Season 15, all of which took place in various locations in Colorado.
Despite a peaceful early history, in the years following the publication of The Shining, The Stanley Hotel has gained a reputation among paranormal investigators for frequent activity. The hotel offers guided "Ghost Tours '' to guests and visitors which feature spaces reputed to be exceptionally active. The hotel has also served as a location for numerous paranormal investigation shows such as Ghost Hunters and Ghost Adventures.
The Stanley Hotel has hosted the following persons of note:
|
when can the term of lok sabha be extended beyond five years | Lok Sabha - Wikipedia
Party composition (16th Lok Sabha): the governing coalition, the National Democratic Alliance, held 335 seats; the opposition parties held 210 seats in total, comprising the United Progressive Alliance (50), the Janata Parivar parties (6), unaligned parties (144), and others (10).
The Lok Sabha (House of the People) is the Lower house of India 's bicameral Parliament, with the Upper house being the Rajya Sabha. Members of the Lok Sabha are elected by adult universal suffrage and a first - past - the - post system to represent their respective constituencies, and they hold their seats for five years or until the body is dissolved by the President on the advice of the council of ministers. The house meets in the Lok Sabha Chambers of the Sansad Bhavan in New Delhi.
The maximum strength of the House envisaged by the Constitution of India is 552, made up of up to 530 members representing the states, up to 20 members representing the Union Territories, and not more than two members of the Anglo - Indian community nominated by the President of India if, in his / her opinion, that community is not adequately represented in the House. Under the current laws, the strength of the Lok Sabha is 545, including the two seats reserved for members of the Anglo - Indian community. The total elective membership is distributed among the states in proportion to their population. A total of 131 seats (about 24 %) are reserved for representatives of Scheduled Castes (84) and Scheduled Tribes (47). The quorum for the House is 10 % of the total membership.
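Both the reserved - seat share and the quorum follow directly from the figures above; as a quick arithmetic check, using only the current strength of 545 stated in this paragraph:

$$
\frac{84 + 47}{545} \;=\; \frac{131}{545} \;\approx\; 0.24 \ (\text{about } 24\,\%),
\qquad
\text{quorum} \;=\; 0.10 \times 545 \;=\; 54.5 \;\Rightarrow\; 55 \text{ members}.
$$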
The Lok Sabha, unless sooner dissolved, continues for five years from the date appointed for its first meeting, and the expiration of that five - year period operates as a dissolution of the House. However, while a proclamation of emergency is in operation, this period may be extended by Parliament by law for a period not exceeding one year at a time, and not extending, in any case, beyond a period of six months after the proclamation has ceased to operate.
An exercise to redraw Lok Sabha constituencies ' boundaries has been carried out by the Delimitation Commission based on the Indian census of 2001. This exercise, which was supposed to be carried out after every census, was suspended in 1976 following a constitutional amendment to avoid adverse effects on the family planning programme which was being implemented. The 16th Lok Sabha was elected in May 2014 and is the latest to date.
The Lok Sabha has its own television channel, Lok Sabha TV, headquartered within the premises of Parliament.
A major portion of the Indian subcontinent was under British rule from 1858 to 1947. During this period, the office of the Secretary of State for India (along with the Council of India) was the authority through whom British Parliament exercised its rule in the Indian sub-continent, and the office of Viceroy of India was created, along with an Executive Council in India, consisting of high officials of the British government. The Indian Councils Act 1861 provided for a Legislative Council consisting of the members of the Executive Council and non-official members. The Indian Councils Act 1892 established legislatures in each of the provinces of British India and increased the powers of the Legislative Council. Although these Acts increased the representation of Indians in the government, their power still remained limited, and the electorate very small. The Indian Councils Act 1909 and the Government of India Act 1919 further expanded the participation of Indians in the administration. The Indian Independence Act, passed by the British parliament on 18 July 1947, divided British India (which did not include the Princely States) into two new independent countries, India and Pakistan, which were to be dominions under the Crown until they had each enacted a new constitution. The Constituent Assembly was divided into two for the separate nations, with each new Assembly having sovereign powers transferred to it for the respective dominion.
The Constitution of India was adopted on 26 November 1949 and came into effect on 26 January 1950, proclaiming India to be a sovereign, democratic republic. This contained the founding principles of the law of the land which would govern India in its new form, which now included all the princely states which had not acceded to Pakistan.
According to Article 79 (Part V - The Union.) of the Constitution of India, the Parliament of India consists of the President of India and the two Houses of Parliament known as the Council of States (Rajya Sabha) and the House of the People (Lok Sabha).
The Lok Sabha (House of the People) was duly constituted for the first time on 17 April 1952, after the first general elections, held from 25 October 1951 to 21 February 1952.
Article 84 (Part V. -- The Union) of the Indian Constitution sets the qualifications for being a member of the Lok Sabha: a person must be a citizen of India and make the prescribed oath or affirmation, must be not less than 25 years of age, and must possess such other qualifications as may be prescribed in that behalf by or under any law made by Parliament.
However, a member can be disqualified of being a member of Parliament:
A seat in the Lok Sabha will become vacant in the following circumstances: (during normal functioning of the House)
Furthermore, as per Article 101 (Part V. -- The Union) of the Indian Constitution, a person can not be: (1) a member of both Houses of Parliament, and provision shall be made by Parliament by law for the vacation by a person who is chosen a member of both Houses of his seat in one House or the other; or (2) a member both of Parliament and of a House of the Legislature of a State.
System of elections in Lok Sabha
Members of the Lok Sabha are directly elected by the people of India on the basis of universal suffrage. For the purpose of holding direct elections to the Lok Sabha, each state is divided into territorial constituencies. In this respect, the Constitution of India makes the following two provisions: first, each state is allotted a number of seats in the Lok Sabha such that the ratio between that number and the population of the state is, so far as practicable, the same for all states; and second, each state is divided into territorial constituencies in such a manner that the ratio between the population of each constituency and the number of seats allotted to it is, so far as practicable, the same throughout the state.
Note: The expression population here refers to the population ascertained at the preceding census (2001 Census) of which relevant figure have been published.
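Expressed as a formula, the two provisions reduce to a single proportionality rule. The following is an illustrative sketch only; the symbols $S$, $p_i$, and $P$ are notation introduced here for clarity, not drawn from the constitutional text. With $S$ the total elective seats, $p_i$ the population of state $i$, and $P$ the total population:

$$
\frac{\text{seats}_i}{p_i} \approx \text{constant for all } i
\quad\Longrightarrow\quad
\text{seats}_i \approx S \cdot \frac{p_i}{P},
$$

and the same rule is applied within each state when dividing its allotted seats into constituencies of roughly equal population.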
The Lok Sabha has certain powers that make it more powerful than the Rajya Sabha: motions of no confidence against the government can be introduced and passed only in the Lok Sabha; and money bills can be introduced only in the Lok Sabha, with the Rajya Sabha limited to recommending amendments within fourteen days, which the Lok Sabha may accept or reject.
In conclusion, it is clear that the Lok Sabha is more powerful than the Rajya Sabha in almost all matters. Even in those matters in which the Constitution has placed both Houses on an equal footing, the Lok Sabha has more influence due to its greater numerical strength. This is typical of parliamentary democracies, in which the lower house is generally more powerful than the upper.
The Rules of Procedure and Conduct of Business in Lok Sabha, and Directions issued by the Speaker from time to time thereunder, regulate the procedure in the Lok Sabha. The items of business, notice of which is received from the Ministers / Private Members and admitted by the Speaker, are included in the daily List of Business, which is printed and circulated to members in advance. The time for the various items of business to be taken up in the House is allotted by the House on the recommendations of the Business Advisory Committee. The Speaker presides over the sessions of the House and regulates procedure.
Three sessions of the Lok Sabha take place each year: the Budget Session (roughly February to May), the Monsoon Session (roughly July to September), and the Winter Session (roughly November to December).
When in session, Lok Sabha holds its sittings usually from 11 A.M. to 1 P.M. and from 2 P.M. to 6 P.M. On some days the sittings are continuously held without observing lunch break and are also extended beyond 6 P.M. depending upon the business before the House. Lok Sabha does not ordinarily sit on Saturdays and Sundays and other closed holidays.
The first hour of every sitting is called Question Hour. Asking questions in Parliament is the free and unfettered right of members, and during Question Hour they may ask questions of ministers on different aspects of administration and government policy in the national and international spheres. Every minister whose turn it is to answer questions has to stand up and answer for his department 's acts of omission or commission.
Questions are of three types -- Starred, Unstarred and Short Notice. A Starred Question is one to which a member desires an oral answer in the House and which is distinguished by an asterisk mark. An unstarred Question is one which is not called for oral answer in the house and on which no supplementary questions can consequently be asked. An answer to such a question is given in writing. Minimum period of notice for starred / unstarred question is 10 clear days. If the questions given notice of are admitted by the Speaker, they are listed and printed for answer on the dates allotted to the Ministries to which the subject matter of the question pertains.
The normal period of notice does not apply to short notice questions, which relate to matters of urgent public importance. However, a Short Notice Question may be answered on short notice only if so permitted by the Speaker and if the Minister concerned is prepared to answer it at shorter notice. A short notice question is taken up for answer immediately after the Question Hour, in the period popularly known as the Zero Hour.
Zero Hour: The time immediately following the Question Hour has come to be known as "Zero Hour ''. It starts at around 12 noon (hence the name) and members can, with prior notice to the Speaker, raise issues of importance during this time. Typically, discussions on important Bills, the Budget, and other issues of national importance take place from 2 pm onwards.
After the Question Hour, the House takes up miscellaneous items of work before proceeding to the main business of the day. These may consist of one or more of the following: Adjournment Motions, Questions involving breaches of Privileges, Papers to be laid on the Table, Communication of any messages from Rajya Sabha, Intimations regarding President 's assent to Bills, Calling Attention Notices, Matters under Rule 377, Presentation of Reports of Parliamentary Committee, Presentation of Petitions, miscellaneous statements by Ministers, Motions regarding elections to Committees, Bills to be withdrawn or introduced.
The main business of the day may be consideration of a Bill or financial business or consideration of a resolution or a motion.
Legislative proposals in the form of a Bill can be brought forward either by a Minister or by a private member. In the former case it is known as Government Bill and in the latter case it is known as a Private Members ' Bill. Every Bill passes through three stages -- called three readings -- before it is passed. To become law it must be passed by both the Houses of Parliament, Lok Sabha and Rajya Sabha, and then assented to by the president.
The presentation, discussion of, and voting on the annual General and Railways budgets -- followed by the passing of the Appropriations Bill and the Finance Bill -- is a long, drawn - out process that takes up a major part of the time of the House during its Budget Session every year.
Among other kinds of business that come up before the House are resolutions and motions. Resolutions and motions may be brought forward by Government or by private members. Government may move a resolution or a motion for obtaining the sanction to a scheme or opinion of the House on an important matter of policy or on a grave situation. Similarly, a private member may move a resolution or motion in order to draw the attention of the House and of the Government to a particular problem. The last two and half hours of sitting on every Friday are generally allotted for transaction of private members ' business. While private members ' bills are taken up on one Friday, private members ' resolutions are taken up on the succeeding Friday, and so on.
A Half - an - Hour Discussion can be raised on a matter of sufficient public importance which has been the subject of a recent question in Lok Sabha irrespective of the fact whether the question was answered orally or the answer was laid on the Table of the House and the answer which needs elucidation on a matter of fact. Normally not more than half an hour is allowed for such a discussion. Usually, half - an - hour discussion is listed on Mondays, Wednesdays and Fridays only. In one session, a member is allowed to raise not more than two half - an - hour discussions. During the discussion, the member, who has given notice, makes a short statement and not more than four members, who have intimated earlier and have secured one of the four places in the ballot, are permitted to ask a question each for further elucidating any matter of fact. Thereafter, the Minister concerned replies. There is no formal motion before the House nor voting.
Members may raise discussions on matters of urgent public importance with the permission of the Speaker. Such discussions may take place on two days in a week. No formal motion is moved in the House nor is there any voting on such a discussion.
After the member who initiates discussion on an item of business has spoken, other members can speak on that item of business in such order as the Speaker may call upon them. Only one member can speak at a time and all speeches are directed to the Chair. A matter requiring the decision of the House is decided by means of a question put by the Speaker on a motion made by a member.
A division is one of the forms in which the decision of the House is ascertained. Normally, when a motion is put to the House, members for and against it indicate their opinion by saying "Aye '' or "No '' from their seats. The Chair goes by the voices and declares that the motion is either accepted or rejected by the House. If a member challenges the decision, the Chair orders that the lobbies be cleared. Then the division bell is rung, and an entire network of bells installed in the various parts and rooms of Parliament House and the Parliament House Annexe rings continuously for three and a half minutes. Members and Ministers rush to the Chamber from all sides. After the bell stops, all the doors to the Chamber are closed and nobody can enter or leave the Chamber till the division is over. The Chair then puts the question a second time and declares whether, in its opinion, the "Ayes '' or the "Noes '' have it. If the opinion so declared is again challenged, the Chair asks for the votes to be recorded by operating the Automatic Vote Recording Equipment.
With the announcement of the Speaker for recording the votes, the Secretary - General presses the button of a key board. Then a gong sounds serving as a signal to members for casting their votes. For casting a vote each member present in the Chamber has to press a switch and then operate one of the three push buttons fixed in his seat. The push switch must be kept pressed simultaneously until the gong sounds for the second time after 10 seconds. There are two Indicator Boards installed in the wall on either side of the Speaker 's Chair in the Chamber. Each vote cast by a member is flashed here. Immediately after the votes are cast, they are totalled mechanically and the details of the results are flashed on the Result Indicator Boards installed in the railings of the Speaker 's and Diplomatic Galleries.
Divisions are normally held with the aid of the Automatic Vote Recording Equipment. Where so directed by the Speaker, in terms of the relevant provisions in the Rules of Procedure of Lok Sabha, divisions may be held either by distribution of 'Aye'/'No' and 'Abstention' slips to members in the House or by members recording their votes by going into the lobbies. There is an Indicator Board in the machine room showing the name of each member. The result of a division and the vote cast by each member with the aid of the Automatic Vote Recording Equipment appear on this Board as well. A photograph of the Indicator Board is taken immediately; the photograph is later enlarged, and the names of members who voted 'Aye' and 'No' are determined with its help and incorporated in the Lok Sabha Debates.
Three versions of Lok Sabha Debates are prepared: the Hindi version, the English version and the Original version. Only the Hindi and English versions are printed. The Original version, in cyclostyled form, is kept in the Parliament Library for record and reference. The Hindi version contains all Questions asked and Answers given thereto in Hindi and the speeches made in Hindi, as well as verbatim Hindi translations of Questions and Answers and of speeches made in English or in regional languages. The English version contains Lok Sabha proceedings in English and the English translation of the proceedings which take place in Hindi or in any regional language. The Original version, however, contains proceedings in Hindi or in English as they actually take place in the House and also the English/Hindi translation of speeches made in regional languages.
If conflicting legislation is enacted by the two Houses, a joint sitting is held to resolve the differences. In such a session, the members of the Lok Sabha would generally prevail, since the Lok Sabha includes more than twice as many members as the Rajya Sabha.
Speaker and Deputy Speaker
As per Article 93 of the Indian Constitution, the Lok Sabha has a Speaker and a Deputy Speaker. In the Lok Sabha, the lower House of the Indian Parliament, both presiding officers -- the Speaker and the Deputy Speaker -- are elected from among its members by a simple majority of members present and voting in the House. As such, no specific qualifications are prescribed for being elected the Speaker; the Constitution only requires that the Speaker should be a member of the House. But an understanding of the Constitution and the laws of the country, and of the rules of procedure and conventions of Parliament, is considered a major asset for the holder of the office of the Speaker. Vacation and resignation of, and removal from, the offices of Speaker and Deputy Speaker are covered by Article 94 of the Constitution of India: a Speaker or Deputy Speaker should vacate his/her office (a) if he/she ceases to be a member of the House of the People, (b) if he/she resigns, or (c) if he/she is removed from office by a resolution of the House of the People passed by a majority.
The Speaker of Lok Sabha is at once a member of the House and also its Presiding Officer. The Speaker of the Lok Sabha conducts the business in the house. He / she decides whether a bill is a money bill or not. He / she maintains discipline and decorum in the house and can punish a member for their unruly behaviour by suspending them. He / she permits the moving of various kinds of motions and resolutions like the motion of no confidence, motion of adjournment, motion of censure and calling attention notice as per the rules. The Speaker decides on the agenda to be taken up for discussion during the meeting. It is the Speaker of the Lok Sabha who presides over joint sittings called in the event of disagreement between the two Houses on a legislative measure. Following the 52nd Constitution amendment, the Speaker is vested with the power relating to the disqualification of a member of the Lok Sabha on grounds of defection. The Speaker makes obituary references in the House, formal references to important national and international events and the valedictory address at the conclusion of every Session of the Lok Sabha and also when the term of the House expires. Though a member of the House, the Speaker does not vote in the House except on those rare occasions when there is a tie at the end of a decision. Till date, the Speaker of the Lok Sabha has not been called upon to exercise this unique casting vote. While the office of Speaker is vacant due to absence / resignation / removal, the duties of the office shall be performed by the Deputy Speaker or, if the office of Deputy Speaker is also vacant, by such member of the House of the People as the President may appoint for the purpose.
Shri G.V. Mavalankar was the first Speaker of Lok Sabha (15 May 1952 - 27 February 1956) and Shri M. Ananthasayanam Ayyangar was the first Deputy Speaker of Lok Sabha (30 May 1952 -- 7 March 1956). In the 16th Lok Sabha, Sumitra Mahajan was elected Speaker on 3 June 2014, becoming its second woman Speaker, and Shri M. Thambidurai was elected Deputy Speaker.
The Lok Sabha has also a separate non-elected Secretariat staff.
Lok Sabha is constituted after the general election as follows:
Currently elected members of 16th Lok Sabha by their political party (As of 23 October 2017):
|
skeeter davis the end of the world fallout 4 | The End of the World (Skeeter Davis song) - wikipedia
"The End of the World '' is a country pop song written by Arthur Kent and Sylvia Dee, for American singer Skeeter Davis. It had success in the 1960s and spawned many covers.
"The End of the World '' was written by Arthur Kent and Sylvia Dee; the latter drew on her sorrow from her father 's death.
Davis recorded her version on June 8, 1962, at the RCA Studios in Nashville, produced by Chet Atkins, and featuring Floyd Cramer. Released by RCA Records in December 1962, "The End of the World '' peaked in March 1963 at No. 2 on the Billboard Hot 100 (behind "Our Day Will Come '' by Ruby & the Romantics), No. 2 on the Billboard country singles, No. 1 on Billboard 's easy listening, and No. 4 on Billboard 's rhythm and blues. It is the first, and, to date, only time that a song cracked the Top 10 on all four Billboard charts. Billboard ranked the record as the No. 3 song of 1963.
In the Skeeter Davis version, after she sings the whole song through in the key of B - flat, the song modulates up by a half step to the key of B, where Skeeter speaks the first two lines of the final stanza, before singing the rest of the stanza, ending the song.
Davis's recording of "The End of the World" was played at Atkins's funeral in an instrumental by Marty Stuart, and at Davis's own funeral at the Ryman Auditorium. Her version has been featured in several TV shows, video games and films including Girl, Interrupted, Riding in Cars with Boys, Daltry Calhoun, An American Affair, The Boat That Rocked, Mad Men, Under the Dome and Fallout 4, in addition to the TV spot for Wayward Pines season 2 and the opening credits of the BYU TV series Granite Flats.
In 1975, American pop music duo Carpenters released a cover of "The End of the World '' as a promotional single from their live album Live in Japan. It was recorded at the Festival Hall, Osaka, Japan.
In 1990, British singer Sonia covered "End of the World". The fifth and final single from her debut album, Everybody Knows, it reached number 18 in the UK, the same chart position as the original. The single's B-side "Can't Help the Way That I Feel" also appeared on Sonia's debut album. This was her final single with Stock Aitken Waterman.
A No. 2 hit in Sweden in September 1966 via a local cover by Mike Wallace & the Caretakers, "The End of the World '' has also been remade by a number of other artists including Jessica Andersson, Anika (as B - side to her single "Yang Yang '' and on her album Anika), Eddy Arnold, Best Coast, Debby Boone, Brilliant, Carola (in Finnish as "Maailmain ''), Chantal Pary (in French "Le jour se lèvera), the Carpenters, Rivers Cuomo, Bobby Darin, Lana Del Rey, Barbara Dickson, Dion, Mary Duff, Allison Durbin, Judith Durham, Exposé, Agnetha Fältskog, Rosie Flores, Emi Fujita, Girls, Nina Gordon, Herman 's Hermits (as the B - side of "I 'm Henery the Eighth, I Am ''), Grethe & Jørgen Ingmann (released on the b - side to Eurovision Song Contest winner song 1963; Dansevise), Satoko Ishimine, Joni James, Cyndi Lauper, Brenda Lee, Vic Dana, Lobo, Julie London, Claudine Longet, Loretta Lynn, Al Martino, Johnny Mathis, Anne Mattila (in Finnish as "Maailmain ''), Imelda May, Maywood, John Cougar Mellencamp, Anita Meyer, the Mills Brothers, Ronnie Milsap, Dorothy Moore, Mud, Anne Murray, Leigh Nash, Nomeansno, Patti Page, Helen Shapiro, Anne Shelton, Vonda Shepard, Nancy Sinatra, Sonia, the Tokens, Twiggy, Twinkle, the Vanguards, Bobby Vinton, Jeff Walker, Dottie West, Lena Zavaroni, and Tracy Huang (黃鶯 鶯). In 2009 Susan Boyle remade "The End of the World '' for her debut album, I Dreamed a Dream.
A cover version by Allison Paige peaked at number 72 on the Billboard Hot Country Singles & Tracks chart in May 2000.
The Brazilian band Roupa Nova recorded a Portuguese-language cover version in 1997, titled "O Sonho Acabou".
There have been three Cantonese versions covered by three different Hong Kong singers, namely "長 相思 '' by Pauline Chan (陳寶珠) in 1968, "含羞 草 '' by Annabelle Lui (雷 安娜) in 1987 and "冬 戀 '' by Danny Chan (陳百強) in 1988.
There have also been three Mandarin versions: "打 不動 你 的 心" by Hong Kong veteran female singer Rebecca Pan (潘迪華) in 1965, "後 會 無期" by another Hong Kong female singer, G.E.M. Tang (鄧紫棋), in 2014, and "星 夢 之 光", with lyrics written by SNH48 member Wu Yanwen (吳燕 文) and performed by SNH48 themselves in 2015.
Cyndi Lauper covered the song in 2016; the cover is part of her album Detour.
The song was used as the opening and closing theme for the political thriller radio drama Pandemic, produced by BBC Radio 4. It was also used in the 1999 drama film Girl, Interrupted, as well as in the Stephen King / Steven Spielberg CBS TV series Under the Dome (season one, episode five, "Blue on Blue"). An abbreviated version of the song is the theme music for the TV series Granite Flats. The song was used for the main title and credit sequences in the 2008 film An American Affair, and it appears on the in-game radio in the video game Fallout 4. In episode 8 ("End of the World") of the 2015 TV series The Man in the High Castle, an American singer performs the song in Japanese. The song was also used in the second part of the 2015 Japanese dark fantasy action horror film Attack on Titan, in the Lost episode "What Kate Did", in the Wayward Pines episode "Walcott Prep" (season 2, episode 9), at the end of season 3, episode 7 of HBO's The Leftovers, and at the end of episode 12 of the third season of Mad Men, "The Grown-Ups". It also appears in the closing credits of Darren Aronofsky's controversial film mother! (2017).
|
why does each element have different emission spectrum | Emission spectrum - wikipedia
The emission spectrum of a chemical element or chemical compound is the spectrum of frequencies of electromagnetic radiation emitted due to an atom or molecule making a transition from a high energy state to a lower energy state. The photon energy of the emitted photon is equal to the energy difference between the two states. There are many possible electron transitions for each atom, and each transition has a specific energy difference. This collection of different transitions, leading to different radiated wavelengths, makes up an emission spectrum. Each element's emission spectrum is unique. Therefore, spectroscopy can be used to identify the elements in matter of unknown composition. Similarly, the emission spectra of molecules can be used in chemical analysis of substances.
In physics, emission is the process by which a higher energy quantum mechanical state of a particle becomes converted to a lower one through the emission of a photon, resulting in the production of light. The frequency of light emitted is a function of the energy of the transition. Since energy must be conserved, the energy difference between the two states equals the energy carried off by the photon. The energy states of the transitions can lead to emissions over a very large range of frequencies. For example, visible light is emitted by the coupling of electronic states in atoms and molecules (then the phenomenon is called fluorescence or phosphorescence). On the other hand, nuclear shell transitions can emit high energy gamma rays, while nuclear spin transitions emit low energy radio waves.
The emittance of an object quantifies how much light is emitted by it. This may be related to other properties of the object through the Stefan -- Boltzmann law. For most substances, the amount of emission varies with the temperature and the spectroscopic composition of the object, leading to the appearance of color temperature and emission lines. Precise measurements at many wavelengths allow the identification of a substance via emission spectroscopy.
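A minimal numerical sketch of that relation in Python, assuming the standard Stefan-Boltzmann form M = σT⁴ for an ideal black body (real objects additionally need an emissivity factor, and the example temperature is purely illustrative):

# Radiant exitance of a hot body via the Stefan-Boltzmann law: M = sigma * T^4.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiant_exitance(temperature_kelvin, emissivity=1.0):
    """Power emitted per unit area, in W/m^2; emissivity=1 is an ideal black body."""
    return emissivity * SIGMA * temperature_kelvin ** 4

print(f"{radiant_exitance(5772):.3e} W/m^2")  # ~6.3e7 W/m^2, roughly the Sun's surface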
Emission of radiation is typically described using semi-classical quantum mechanics: the particle 's energy levels and spacings are determined from quantum mechanics, and light is treated as an oscillating electric field that can drive a transition if it is in resonance with the system 's natural frequency. The quantum mechanics problem is treated using time - dependent perturbation theory and leads to the general result known as Fermi 's golden rule. The description has been superseded by quantum electrodynamics, although the semi-classical version continues to be more useful in most practical computations.
When the electrons in the atom are excited, for example by being heated, the additional energy pushes the electrons to higher energy orbitals. When the electrons fall back down and leave the excited state, energy is re-emitted in the form of a photon. The wavelength (or equivalently, frequency) of the photon is determined by the difference in energy between the two states. These emitted photons form the element 's spectrum.
The fact that only certain colors appear in an element's atomic emission spectrum means that only certain frequencies of light are emitted. Each of these frequencies is related to energy by the formula:

$E_{\text{photon}} = h\nu$

where $E_{\text{photon}}$ is the energy of the photon, $\nu$ is its frequency, and $h$ is Planck's constant. It follows that only photons with specific energies are emitted by the atom. The principle of the atomic emission spectrum explains the varied colors in neon signs, as well as chemical flame test results (described below).
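As a quick numerical illustration of this formula (a minimal Python sketch; the example frequency, roughly that of red light, is chosen for illustration and is not from the article):

# Photon energy E = h * nu for a frequency in the visible range.
h = 6.62607015e-34  # Planck's constant, J*s (exact SI value)
c = 2.99792458e8    # speed of light, m/s

nu = 4.57e14  # Hz, roughly the frequency of red light

E_joules = h * nu                  # photon energy in joules
E_eV = E_joules / 1.602176634e-19  # the same energy in electronvolts
wavelength_nm = (c / nu) * 1e9     # corresponding wavelength in nanometres

print(f"E = {E_joules:.3e} J = {E_eV:.2f} eV, lambda = {wavelength_nm:.0f} nm")

Running this gives about 1.89 eV and 656 nm, which is why a transition of that energy appears as red light.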
The frequencies of light that an atom can emit depend on the states the electrons can be in. When excited, an electron moves to a higher energy level or orbital; when the electron falls back to its ground level, the light is emitted.
The visible emission spectrum of hydrogen illustrates this. If only a single atom of hydrogen were present, then only a single wavelength would be observed at a given instant. Several of the possible emissions are observed because the sample contains many hydrogen atoms that are in different initial energy states and reach different final energy states. These different combinations lead to simultaneous emissions at different wavelengths.
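A short sketch of how those hydrogen wavelengths follow from energy differences between states. It assumes the textbook Bohr-model level formula E_n = -13.6057 eV / n², which the article does not state explicitly; transitions ending on n = 2 give the visible (Balmer) lines:

# Visible (Balmer) emission lines of hydrogen from energy level differences.
h = 6.62607015e-34    # Planck's constant, J*s
c = 2.99792458e8      # speed of light, m/s
eV = 1.602176634e-19  # joules per electronvolt

def level_energy(n):
    """Hydrogen energy level in joules (Bohr model): E_n = -13.6057 eV / n^2."""
    return -13.6057 * eV / n ** 2

# An electron falling from n = 3..6 down to n = 2 emits one Balmer-series photon.
for n_upper in range(3, 7):
    delta_E = level_energy(n_upper) - level_energy(2)  # energy released, J
    wavelength_nm = h * c / delta_E * 1e9              # from E = h*c/lambda
    print(f"n={n_upper} -> n=2: {wavelength_nm:.1f} nm")

The loop prints roughly 656, 486, 434 and 410 nm: the red, blue-green and violet lines of hydrogen's visible spectrum.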
As well as the electronic transitions discussed above, the energy of a molecule can also change via rotational, vibrational, and vibronic (combined vibrational and electronic) transitions. These energy transitions often lead to closely spaced groups of many different spectral lines, known as spectral bands. Unresolved band spectra may appear as a spectral continuum.
Light consists of electromagnetic radiation of different wavelengths. Therefore, when elements or their compounds are heated, either in a flame or by an electric arc, they emit energy in the form of light. Analysis of this light with the help of a spectroscope gives a discontinuous spectrum. A spectroscope, or spectrometer, is an instrument used for separating the components of light, which have different wavelengths. The spectrum appears as a series of lines called the line spectrum. This line spectrum is called an atomic spectrum when it originates from an atom in elemental form. Each element has a different atomic spectrum. The production of line spectra by the atoms of an element indicates that an atom can radiate only certain amounts of energy. This leads to the conclusion that bound electrons cannot have just any amount of energy but only certain definite amounts.
The emission spectrum can be used to determine the composition of a material, since it is different for each element of the periodic table. One example is astronomical spectroscopy: identifying the composition of stars by analysing the received light. The emission spectrum characteristics of some elements are plainly visible to the naked eye when these elements are heated. For example, when platinum wire is dipped into a strontium nitrate solution and then inserted into a flame, the strontium atoms emit a red color. Similarly, when copper is inserted into a flame, the flame becomes green. These definite characteristics allow elements to be identified by their atomic emission spectrum. Not all emitted light is perceptible to the naked eye, as the spectrum also includes ultraviolet and infrared radiation. An emission spectrum is formed when an excited gas is viewed directly through a spectroscope.
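The identification idea can be sketched as a simple lookup: compare measured line wavelengths against a reference table of known emission lines. The wavelengths and tolerance below are illustrative approximations, not values from the article:

# Toy element identification: match observed emission lines (in nm) against
# a small reference table of strong visible lines (approximate values).
REFERENCE_LINES = {
    "hydrogen": [656.3, 486.1, 434.0],
    "sodium":   [589.0, 589.6],
    "copper":   [510.6, 515.3, 521.8],
}

def identify(observed_nm, tolerance=1.0):
    """Return elements whose reference lines account for every observed line."""
    matches = []
    for element, lines in REFERENCE_LINES.items():
        if all(any(abs(obs - ref) <= tolerance for ref in lines)
               for obs in observed_nm):
            matches.append(element)
    return matches

print(identify([656.5, 486.0]))  # -> ['hydrogen']

A real instrument would work with calibrated spectra and line intensities, but the principle is the same: each element's set of wavelengths acts as a fingerprint.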
Emission spectroscopy is a spectroscopic technique which examines the wavelengths of photons emitted by atoms or molecules during their transition from an excited state to a lower energy state. Each element emits a characteristic set of discrete wavelengths according to its electronic structure, and by observing these wavelengths the elemental composition of the sample can be determined. Emission spectroscopy developed in the late 19th century and efforts in theoretical explanation of atomic emission spectra eventually led to quantum mechanics.
There are many ways in which atoms can be brought to an excited state. Interaction with electromagnetic radiation is used in fluorescence spectroscopy, protons or other heavier particles in Particle - Induced X-ray Emission and electrons or X-ray photons in Energy - dispersive X-ray spectroscopy or X-ray fluorescence. The simplest method is to heat the sample to a high temperature, after which the excitations are produced by collisions between the sample atoms. This method is used in flame emission spectroscopy, and it was also the method used by Anders Jonas Ångström when he discovered the phenomenon of discrete emission lines in the 1850s.
Although the emission lines are caused by a transition between quantized energy states and may at first look very sharp, they do have a finite width, i.e. they are composed of more than one wavelength of light. This spectral line broadening has many different causes.
Emission spectroscopy is often referred to as optical emission spectroscopy because what is being emitted is light.
Emission lines from hot gases were first discovered by Ångström, and the technique was further developed by David Alter, Gustav Kirchhoff and Robert Bunsen.
See the history of spectroscopy for details.
The solution containing the relevant substance to be analysed is drawn into the burner and dispersed into the flame as a fine spray. The solvent evaporates first, leaving finely divided solid particles which move to the hottest region of the flame where gaseous atoms and ions are produced. Here electrons are excited as described above. It is common for a monochromator to be used to allow for easy detection.
On a simple level, flame emission spectroscopy can be observed using just a flame and samples of metal salts. This method of qualitative analysis is called a flame test. For example, sodium salts placed in the flame will glow yellow from sodium ions, while strontium (used in road flares) ions color it red. Copper wire will create a blue colored flame; however, in the presence of chloride it gives green (a molecular contribution by CuCl).
The emission coefficient is a coefficient in the power output per unit time of an electromagnetic source, a calculated value in physics. The emission coefficient of a gas varies with the wavelength of the light. It has units of m s⁻³ sr⁻¹. It is also used as a measure of environmental emissions (by mass) per MWh of electricity generated; see: emission factor.
In Thomson scattering a charged particle emits radiation under incident light. The particle may be an ordinary atomic electron, so emission coefficients have practical applications.
If X dV dΩ dλ is the energy scattered by a volume element dV into solid angle dΩ between wavelengths λ and λ + dλ per unit time, then the emission coefficient is X.
The values of X in Thomson scattering can be predicted from incident flux, the density of the charged particles and their Thomson differential cross section (area / solid angle).
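A sketch of that prediction, assuming the standard Thomson differential cross section dσ/dΩ = r_e²(1 + cos²θ)/2, which the article does not spell out; the flux and density values are arbitrary illustrative inputs:

import math

# Emission coefficient for Thomson scattering:
# X = (incident flux) * (electron density) * (differential cross section).
R_E = 2.8179403262e-15  # classical electron radius, m

def thomson_dsigma_domega(theta):
    """Thomson differential cross section in m^2/sr at scattering angle theta."""
    return 0.5 * R_E ** 2 * (1.0 + math.cos(theta) ** 2)

flux = 1.0e20  # incident flux, per m^2 per s (illustrative)
n_e = 1.0e18   # electron number density, per m^3 (illustrative)

theta = math.pi / 2  # light scattered through 90 degrees
X = flux * n_e * thomson_dsigma_domega(theta)
print(f"emission coefficient X at 90 degrees: {X:.3e} per m^3 per sr per s")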
A warm body emitting photons has a monochromatic emission coefficient relating to its temperature and total power radiation. This is sometimes called the second Einstein coefficient, and can be deduced from quantum mechanical theory.
|
who was the ruler of india when the english east india company was formed | East India Company - wikipedia
The East India Company (EIC), also known as the Honourable East India Company (HEIC) or the British East India Company and informally as John Company, was an English and later British joint - stock company, formed to trade with the East Indies (in present - day terms, Maritime Southeast Asia), but ended up trading mainly with Qing China and seizing control of large parts of the Indian subcontinent.
Originally chartered as the "Governor and Company of Merchants of London trading into the East Indies '', the company rose to account for half of the world 's trade, particularly in basic commodities including cotton, silk, indigo dye, salt, spices, saltpetre, tea, and opium. The company also ruled the beginnings of the British Empire in India.
The company received a Royal Charter from Queen Elizabeth I on 31 December 1600, coming relatively late to trade in the Indies. Before them, the Portuguese Estado da Índia had traded there for much of the 16th century, and the first of half a dozen Dutch companies sailed to trade there from 1595; these amalgamated in March 1602 into the United East Indies Company (VOC), which introduced the first permanent joint stock from 1612 (meaning investment in shares did not need to be returned, but could be traded on a stock exchange). By contrast, wealthy merchants and aristocrats owned the EIC's shares. Initially the government owned no shares and had only indirect control until 1657, when permanent joint stock was established.
During its first century of operation, the focus of the company was trade, not the building of an empire in India. Company interests turned from trade to territory during the 18th century as the Mughal Empire declined in power and the East India Company struggled with its French counterpart, the French East India Company (Compagnie française des Indes orientales) during the Carnatic Wars of the 1740s and 1750s. The battles of Plassey and Buxar, in which the British defeated the Bengali powers, left the company in control of Bengal and a major military and political power in India. In the following decades it gradually increased the extent of the territories under its control, controlling the majority of the Indian subcontinent either directly or indirectly via local puppet rulers under the threat of force by its Presidency armies, much of which were composed of native Indian sepoys.
By 1803, at the height of its rule in India, the British East India company had a private army of about 260,000 -- twice the size of the British Army, with Indian revenues of £ 13,464,561, and expenses of £ 14,017,473. The company eventually came to rule large areas of India with its private armies, exercising military power and assuming administrative functions. Company rule in India effectively began in 1757 and lasted until 1858, when, following the Indian Rebellion of 1857, the Government of India Act 1858 led to the British Crown 's assuming direct control of the Indian subcontinent in the form of the new British Raj.
Despite frequent government intervention, the company had recurring problems with its finances. It was dissolved in 1874 as a result of the East India Stock Dividend Redemption Act passed one year earlier, as the Government of India Act had by then rendered it vestigial, powerless, and obsolete. The official government machinery of British India had assumed its governmental functions and absorbed its armies.
Soon after the defeat of the Spanish Armada in 1588, captured Spanish and Portuguese ships with their cargoes enabled English voyagers to potentially travel the globe in search of riches. London merchants presented a petition to Queen Elizabeth I for permission to sail to the Indian Ocean. The aim was to deliver a decisive blow to the Spanish and Portuguese monopoly of Far Eastern Trade. Elizabeth granted her permission and on 10 April 1591 James Lancaster in the Edward Bonaventure with two other ships sailed from Torbay around the Cape of Good Hope to the Arabian Sea on one of the earliest English overseas Indian expeditions. Having sailed around Cape Comorin to the Malay Peninsula, they preyed on Spanish and Portuguese ships there before returning to England in 1594.
The biggest capture that galvanised English trade was the seizing of the great Portuguese Carrack Madre de Deus by Sir Walter Raleigh and the Earl of Cumberland at the Battle of Flores (1592). When she was brought in to Dartmouth she was the largest vessel that had been seen in England and her cargo consisted of chests filled with jewels, pearls, gold, silver coins, ambergris, cloth, tapestries, pepper, cloves, cinnamon, nutmeg, benjamin, red dye, cochineal and ebony. Equally valuable was the ship 's rutter containing vital information on the China, India, and Japan trades. These riches aroused the English to engage in this opulent commerce.
In 1596, three more English ships sailed east but were all lost at sea. The following year, however, saw the arrival of Ralph Fitch, an adventurer merchant who, along with his companions, had made a remarkable fifteen-year overland journey to Mesopotamia, the Persian Gulf, the Indian Ocean, India and Southeast Asia. Fitch was then consulted on Indian affairs and gave even more valuable information to Lancaster.
On 22 September 1599, a group of merchants met and stated their intention "to venture in the pretended voyage to the East Indies (the which it may please the Lord to prosper), and the sums that they will adventure '', committing £ 30,133. Two days later, "the Adventurers '' reconvened and resolved to apply to the Queen for support of the project. Although their first attempt had not been completely successful, they nonetheless sought the Queen 's unofficial approval to continue. They bought ships for their venture and increased their capital to £ 68,373.
The Adventurers convened again a year later, on 31 December, and this time they succeeded; the Queen granted a Royal Charter to "George, Earl of Cumberland, and 215 Knights, Aldermen, and Burgesses '' under the name, Governor and Company of Merchants of London trading with the East Indies. For a period of fifteen years, the charter awarded the newly formed company a monopoly on English trade with all countries east of the Cape of Good Hope and west of the Straits of Magellan. Any traders in breach of the charter without a licence from the company were liable to forfeiture of their ships and cargo (half of which went to the Crown and the other half to the company), as well as imprisonment at the "royal pleasure ''.
The governance of the company was in the hands of one governor and 24 directors or "committees '', who made up the Court of Directors. They, in turn, reported to the Court of Proprietors, which appointed them. Ten committees reported to the Court of Directors. According to tradition, business was initially transacted at the Nags Head Inn, opposite St Botolph 's church in Bishopsgate, before moving to India House in Leadenhall Street.
Sir James Lancaster commanded the first East India Company voyage in 1601 aboard the Red Dragon. After the capture of a rich 1,200-ton Portuguese carrack in the Malacca Straits, the trade from the booty enabled the voyagers to set up two "factories" -- one at Bantam on Java and another in the Moluccas (Spice Islands) -- before leaving. They returned to England in 1603 to learn of Elizabeth's death, but Lancaster was knighted by the new King James I. By this time the war with Spain had ended, but the company had successfully and profitably breached the Spanish and Portuguese monopoly, and new horizons were opened for the English.
In March 1604 Sir Henry Middleton commanded the second voyage. General William Keeling, a captain during the second voyage, led the third voyage aboard the Red Dragon from 1607 to 1610 along with the Hector under Captain William Hawkins and the Consent under Captain David Middleton.
Early in 1608 Alexander Sharpeigh was appointed captain of the company 's Ascension, and general or commander of the fourth voyage. Thereafter two ships, Ascension and Union (captained by Richard Rowles) sailed from Woolwich on 14 March 1607 -- 08. This expedition would be lost.
Initially, the company struggled in the spice trade because of the competition from the already well - established Dutch East India Company. The company opened a factory in Bantam on the first voyage, and imports of pepper from Java were an important part of the company 's trade for twenty years. The factory in Bantam was closed in 1683. During this time ships belonging to the company arriving in India docked at Surat, which was established as a trade transit point in 1608.
In the next two years, the company established its first factory in south India in the town of Machilipatnam on the Coromandel Coast of the Bay of Bengal. The high profits reported by the company after landing in India initially prompted James I to grant subsidiary licences to other trading companies in England. But in 1609 he renewed the charter given to the company for an indefinite period, including a clause that specified that the charter would cease to be in force if the trade turned unprofitable for three consecutive years.
English traders frequently engaged in hostilities with their Dutch and Portuguese counterparts in the Indian Ocean. The company achieved a major victory over the Portuguese in the Battle of Swally in 1612, at Suvali in Surat. The company decided to explore the feasibility of gaining a territorial foothold in mainland India, with official sanction from both Britain and the Mughal Empire, and requested that the Crown launch a diplomatic mission.
In 1612, James I instructed Sir Thomas Roe to visit the Mughal Emperor Nur - ud - din Salim Jahangir (r. 1605 -- 1627) to arrange for a commercial treaty that would give the company exclusive rights to reside and establish factories in Surat and other areas. In return, the company offered to provide the Emperor with goods and rarities from the European market. This mission was highly successful, and Jahangir sent a letter to James through Sir Thomas Roe:
Upon which assurance of your royal love I have given my general command to all the kingdoms and ports of my dominions to receive all the merchants of the English nation as the subjects of my friend; that in what place soever they choose to live, they may have free liberty without any restraint; and at what port soever they shall arrive, that neither Portugal nor any other shall dare to molest their quiet; and in what city soever they shall have residence, I have commanded all my governors and captains to give them freedom answerable to their own desires; to sell, buy, and to transport into their country at their pleasure. For confirmation of our love and friendship, I desire your Majesty to command your merchants to bring in their ships of all sorts of rarities and rich goods fit for my palace; and that you be pleased to send me your royal letters by every opportunity, that I may rejoice in your health and prosperous affairs; that our friendship may be interchanged and eternal.
The company, which benefited from the imperial patronage, soon expanded its commercial trading operations. It eclipsed the Portuguese Estado da Índia, which had established bases in Goa, Chittagong, and Bombay, which Portugal later ceded to England as part of the dowry of Catherine of Braganza on her marriage to King Charles II. The East India Company also launched a joint attack with the Dutch United East India Company (VOC) on Portuguese and Spanish ships off the coast of China, which helped secure EIC ports in China. The company established trading posts in Surat (1619), Madras (1639), Bombay (1668), and Calcutta (1690). By 1647, the company had 23 factories, each under the command of a factor or master merchant and governor, and 90 employees in India. The major factories became the walled forts of Fort William in Bengal, Fort St George in Madras, and Bombay Castle.
In 1634, the Mughal emperor extended his hospitality to the English traders to the region of Bengal, and in 1717 completely waived customs duties for their trade. The company 's mainstay businesses were by then cotton, silk, indigo dye, saltpetre, and tea. The Dutch were aggressive competitors and had meanwhile expanded their monopoly of the spice trade in the Straits of Malacca by ousting the Portuguese in 1640 -- 41. With reduced Portuguese and Spanish influence in the region, the EIC and VOC entered a period of intense competition, resulting in the Anglo - Dutch Wars of the 17th and 18th centuries.
Within the first two decades of the 17th century, the Dutch East India Company or Vereenigde Oostindische Compagnie, (VOC) was the wealthiest commercial operation in the world with 50,000 employees worldwide and a private fleet of 200 ships. It specialised in the spice trade and gave its shareholders 40 % annual dividend.
The British East India Company was fiercely competitive with the Dutch and French throughout the 17th and 18th centuries over spices from the Spice Islands. Spices, at the time, could only be found on these islands; pepper, ginger, nutmeg, cloves and cinnamon could bring profits as high as 400 percent from one voyage.
The tension was so high between the Dutch and British East India companies that it escalated into at least four Anglo-Dutch Wars between them: 1652-1654, 1665-1667, 1672-1674 and 1780-1784.
The Dutch company maintained that profit must support the cost of war, which came from trade, which in turn produced the profit.
Competition arose in 1635 when Charles I granted a trading licence to Sir William Courteen, which permitted the rival Courteen association to trade with the east at any location in which the EIC had no presence.
In an act aimed at strengthening the power of the EIC, King Charles II granted the EIC (in a series of five acts around 1670) the rights to autonomous territorial acquisitions, to mint money, to command fortresses and troops and form alliances, to make war and peace, and to exercise both civil and criminal jurisdiction over the acquired areas.
In 1689 a Mughal fleet commanded by Sidi Yaqub attacked Bombay. After a year of resistance the EIC surrendered in 1690, and the company sent envoys to Aurangzeb 's camp to plead for a pardon. The company 's envoys had to prostrate themselves before the emperor, pay a large indemnity, and promise better behaviour in the future. The emperor withdrew his troops, and the company subsequently re-established itself in Bombay and set up a new base in Calcutta.
Eventually, the East India Company seized control of Bengal and slowly the whole Indian subcontinent with its private armies, composed primarily of Indian sepoys. As historian William Dalrymple observes,
We still talk about the British conquering India, but that phrase disguises a more sinister reality. It was not the British government that seized India at the end of the 18th century, but a dangerously unregulated private company headquartered in one small office, five windows wide, in London, and managed in India by an unstable sociopath -- (Robert) Clive.
In 1613, during the rule of Tokugawa Hidetada of the Tokugawa shogunate, the British ship Clove, under the command of Captain John Saris, was the first British ship to call on Japan. Saris was the chief factor of the EIC 's trading post in Java, and with the assistance of William Adams, a British sailor who had arrived in Japan in 1600, he was able to gain permission from the ruler to establish a commercial house in Hirado on the Japanese island of Kyushu:
We give free license to the subjects of the King of Great Britaine, Sir Thomas Smythe, Governor and Company of the East Indian Merchants and Adventurers forever safely come into any of our ports of our Empire of Japan with their shippes and merchandise, without any hindrance to them or their goods, and to abide, buy, sell and barter according to their own manner with all nations, to tarry here as long as they think good, and to depart at their pleasure.
However, unable to obtain Japanese raw silk for import to China and with their trading area reduced to Hirado and Nagasaki from 1616 onwards, the company closed its factory in 1623.
In September 1695, Captain Henry Every, an English pirate on board the Fancy, reached the Straits of Bab - el - Mandeb, where he teamed up with five other pirate captains to make an attack on the Indian fleet on return from the annual pilgrimage to Mecca. The Mughal convoy included the treasure - laden Ganj - i - Sawai, reported to be the greatest in the Mughal fleet and the largest ship operational in the Indian Ocean, and its escort, the Fateh Muhammed. They were spotted passing the straits en route to Surat. The pirates gave chase and caught up with Fateh Muhammed some days later, and meeting little resistance, took some £ 50,000 to £ 60,000 worth of treasure.
Every continued in pursuit and managed to overhaul Ganj - i - Sawai, which resisted strongly before eventually striking. Ganj - i - Sawai carried enormous wealth and, according to contemporary East India Company sources, was carrying a relative of the Grand Mughal, though there is no evidence to suggest that it was his daughter and her retinue. The loot from the Ganj - i - Sawai had a total value between £ 325,000 and £ 600,000, including 500,000 gold and silver pieces, and has become known as the richest ship ever taken by pirates.
In a letter sent to the Privy Council by Sir John Gayer, then governor of Bombay and head of the East India Company, Gayer claims that "it is certain the Pirates... did do very barbarously by the People of the Ganj - i - Sawai and Abdul Ghaffar 's ship, to make them confess where their money was. '' The pirates set free the survivors who were left aboard their emptied ships, to continue their voyage back to India.
When the news arrived in England it caused an outcry. To appease Aurangzeb, the East India Company promised to pay all financial reparations, while Parliament declared the pirates hostis humani generis ("enemies of the human race ''). In mid-1696 the government issued a £ 500 bounty on Every 's head and offered a free pardon to any informer who disclosed his whereabouts. When the East India Company later doubled that reward, the first worldwide manhunt in recorded history was underway.
The plunder of Aurangzeb 's treasure ship had serious consequences for the English East India Company. The furious Mughal Emperor Aurangzeb ordered Sidi Yaqub and Nawab Daud Khan to attack and close four of the company 's factories in India and imprison their officers, who were almost lynched by a mob of angry Mughals, blaming them for their countryman 's depredations, and threatened to put an end to all English trading in India. To appease Emperor Aurangzeb and particularly his Grand Vizier Asad Khan, Parliament exempted Every from all of the Acts of Grace (pardons) and amnesties it would subsequently issue to other pirates.
An 18th - century depiction of Henry Every, with the Fancy shown engaging its prey in the background
British pirates that fought during the Child 's War engaging the Ganj - i - Sawai
Depiction of Captain Every 's encounter with the Mughal Emperor 's granddaughter after his September 1695 capture of the Mughal trader Ganj - i - Sawai
The prosperity that the officers of the company enjoyed allowed them to return to Britain and establish sprawling estates and businesses, and to obtain political power. The company developed a lobby in the English parliament. Under pressure from ambitious tradesmen and former associates of the company (pejoratively termed Interlopers by the company), who wanted to establish private trading firms in India, a deregulating act was passed in 1694.
This allowed any English firm to trade with India, unless specifically prohibited by act of parliament, thereby annulling the charter that had been in force for almost 100 years. By an act that was passed in 1698, a new "parallel '' East India Company (officially titled the English Company Trading to the East Indies) was floated under a state - backed indemnity of £ 2 million. The powerful stockholders of the old company quickly subscribed a sum of £ 315,000 in the new concern, and dominated the new body. The two companies wrestled with each other for some time, both in England and in India, for a dominant share of the trade.
It quickly became evident that, in practice, the original company faced scarcely any measurable competition. The companies merged in 1708, by a tripartite indenture involving both companies and the state. Under this arrangement, the merged company lent to the Treasury a sum of £ 3,200,000, in return for exclusive privileges for the next three years, after which the situation was to be reviewed. The amalgamated company became the United Company of Merchants of England Trading to the East Indies.
In the following decades there was a constant battle between the company lobby and the Parliament. The company sought a permanent establishment, while the Parliament would not willingly allow it greater autonomy and so relinquish the opportunity to exploit the company 's profits. In 1712, another act renewed the status of the company, though the debts were repaid. By 1720, 15 % of British imports were from India, almost all passing through the company, which reasserted the influence of the company lobby. The licence was prolonged until 1766 by yet another act in 1730.
At this time, Britain and France became bitter rivals. Frequent skirmishes between them took place for control of colonial possessions. In 1742, fearing the monetary consequences of a war, the British government agreed to extend the deadline for the licensed exclusive trade by the company in India until 1783, in return for a further loan of £ 1 million. Between 1756 and 1763, the Seven Years ' War diverted the state 's attention towards consolidation and defence of its territorial possessions in Europe and its colonies in North America.
The war took place on Indian soil, between the company troops and the French forces. In 1757, the Law Officers of the Crown delivered the Pratt - Yorke opinion distinguishing overseas territories acquired by right of conquest from those acquired by private treaty. The opinion asserted that, while the Crown of Great Britain enjoyed sovereignty over both, only the property of the former was vested in the Crown.
With the advent of the Industrial Revolution, Britain surged ahead of its European rivals. Demand for Indian commodities was boosted by the need to sustain the troops and the economy during the war, and by the increased availability of raw materials and efficient methods of production. As home to the revolution, Britain experienced higher standards of living. Its spiralling cycle of prosperity, demand and production had a profound influence on overseas trade. The company became the single largest player in the British global market. William Henry Pyne notes in his book The Microcosm of London (1808) that:
On 1 March 1801, the debts of the East India Company amounted to £5,393,989, their effects to £15,404,736, and their sales had increased since February 1793 from £4,988,300 to £7,602,041.
Sir John Banks, a businessman from Kent who negotiated an agreement between the king and the company, began his career in a syndicate arranging contracts for victualling the navy, an interest he kept up for most of his life. He knew that Samuel Pepys and John Evelyn had amassed a substantial fortune from the Levant and Indian trades.
He became a Director and later, as Governor of the East India Company in 1672, he arranged a contract which included a loan of £ 20,000 and £ 30,000 worth of saltpetre -- also known as potassium nitrate, a primary ingredient in gunpowder -- for the King "at the price it shall sell by the candle '' -- that is by auction -- where bidding could continue as long as an inch - long candle remained alight.
Outstanding debts were also agreed, and the company was permitted to export 250 tons of saltpetre. Again in 1673, Banks successfully negotiated another contract for 700 tons of saltpetre at £37,000 between the king and the company. So urgent was the need to supply the armed forces in the United Kingdom, America and elsewhere that the authorities sometimes turned a blind eye to the untaxed sales. One governor of the company was even reported as saying in 1864 that he would rather have the saltpetre made than the tax on salt.
The Seven Years ' War (1756 -- 63) resulted in the defeat of the French forces, limited French imperial ambitions, and stunted the influence of the Industrial Revolution in French territories. Robert Clive, the Governor General, led the company to a victory against Joseph François Dupleix, the commander of the French forces in India, and recaptured Fort St George from the French. The company took this respite to seize Manila in 1762.
By the Treaty of Paris, France regained the five establishments captured by the British during the war (Pondichéry, Mahe, Karikal, Yanam and Chandernagar) but was prevented from erecting fortifications and keeping troops in Bengal (art. XI). Elsewhere in India, the French were to remain a military threat, particularly during the War of American Independence, and up to the capture of Pondichéry in 1793 at the outset of the French Revolutionary Wars without any military presence. Although these small outposts remained French possessions for the next two hundred years, French ambitions on Indian territories were effectively laid to rest, thus eliminating a major source of economic competition for the company.
In its first century and half, the EIC used a few hundred soldiers as guards. The great expansion came after 1750, when it had 3,000 regular troops. By 1763, it had 26,000; by 1778, it had 67,000. It recruited largely Indian troops, and trained them along European lines. The military arm of the East India Company quickly developed to become a private corporate armed force, and was used as an instrument of geo - political power and expansion, rather than its original purpose as a guard force, and became the most powerful military force in the Indian subcontinent. As it increased in size the army was divided into the Presidency Armies of Bengal, Madras and Bombay each recruiting their own infantry, cavalry, and artillery units. The navy also grew significantly, vastly expanding its fleet and although made up predominantly of heavily armed merchant vessels, called East Indiamen, it also included warships.
The company, fresh from a colossal victory, and with the backing of its own private well - disciplined and experienced army, was able to assert its interests in the Carnatic region from its base at Madras and in Bengal from Calcutta, without facing any further obstacles from other colonial powers.
It continued to experience resistance from local rulers during its expansion. Robert Clive led company forces against Siraj Ud Daulah, the last independent Nawab of Bengal, Bihar, and Midnapore district in Odisha to victory at the Battle of Plassey in 1757, resulting in the conquest of Bengal. This victory estranged the British and the Mughals, since Siraj Ud Daulah was a Mughal feudatory ally.
With the gradual weakening of the Marathas in the aftermath of the three Anglo - Maratha wars, the British also secured the Ganges - Jumna Doab, the Delhi - Agra region, parts of Bundelkhand, Broach, some districts of Gujarat, the fort of Ahmmadnagar, province of Cuttack (which included Mughalbandi / the coastal part of Odisha, Garjat / the princely states of Odisha, Balasore Port, parts of Midnapore district of West Bengal), Bombay (Mumbai) and the surrounding areas, leading to a formal end of the Maratha empire and firm establishment of the British East India Company in India.
Hyder Ali and Tipu Sultan, the rulers of the Kingdom of Mysore, offered much resistance to the British forces. Having sided with the French during the Revolutionary War, the rulers of Mysore continued their struggle against the company with the four Anglo - Mysore Wars. Mysore finally fell to the company forces in 1799, in the fourth Anglo - Mysore war during which Tipu Sultan was killed.
The last vestiges of local administration were restricted to the northern regions of Delhi, Oudh, Rajputana, and Punjab, where the company 's presence was ever increasing amidst infighting and offers of protection among the remaining princes. The hundred years from the Battle of Plassey in 1757 to the Indian Rebellion of 1857 were a period of consolidation for the company, during which it seized control of the entire Indian subcontinent and functioned more as an administrator and less as a trading concern.
A cholera pandemic began in Bengal, then spread across India by 1820. 10,000 British troops and countless Indians died during this pandemic. Between 1760 and 1834 only some 10 % of the East India Company 's officers survived to take the final voyage home.
In the early 19th century the Indian question of geopolitical dominance and empire holding remained with the East India Company. The three independent armies of the company 's Presidencies, with some locally raised irregular forces, expanded to a total of 280,000 men by 1857. The troops were first recruited from mercenaries and low - caste volunteers, but in time the Bengal Army in particular was composed largely of high - caste Hindus and landowning Muslims.
Within the Army British officers, who initially trained at the company 's own academy at the Addiscombe Military Seminary, always outranked Indians, no matter how long the Indians ' service. The highest rank to which an Indian soldier could aspire was Subadar - Major (or Rissaldar - Major in cavalry units), effectively a senior subaltern equivalent. Promotion for both British and Indian soldiers was strictly by seniority, so Indian soldiers rarely reached the commissioned ranks of Jamadar or Subadar before they were middle aged at best. They received no training in administration or leadership to make them independent of their British officers.
During the wars against the French and their allies in the late eighteenth and early nineteenth centuries, the East India Company 's armies were used to seize the colonial possessions of other European nations, including the islands of Réunion and Mauritius.
There was a systemic disrespect in the company for the spreading of Protestantism, although it fostered respect for Hindus and Muslims, castes, and ethnic groups. Tensions between the EIC and local religious and cultural groups grew in the 19th century as the Protestant revival grew in Great Britain. These tensions erupted in the Indian Rebellion of 1857, and the company ceased to exist when it was dissolved through the East India Stock Dividend Redemption Act 1873.
In the 18th century, Britain had a huge trade deficit with Qing dynasty China and so, in 1773, the company created a British monopoly on opium buying in Bengal, India, by prohibiting the licensing of opium farmers and private cultivation. The monopoly system established in 1799 continued with minimal changes until 1947. As the opium trade was illegal in China, Company ships could not carry opium to China. So the opium produced in Bengal was sold in Calcutta on condition that it be sent to China.
Despite the Chinese ban on opium imports, reaffirmed in 1799 by the Jiaqing Emperor, the drug was smuggled into China from Bengal by traffickers and agency houses such as Jardine, Matheson & Co and Dent & Co. in amounts averaging 900 tons a year. The proceeds of the drug - smugglers landing their cargoes at Lintin Island were paid into the company 's factory at Canton and by 1825, most of the money needed to buy tea in China was raised by the illegal opium trade.
The company established a group of trading settlements centred on the Straits of Malacca called the Straits Settlements in 1826 to protect its trade route to China and to combat local piracy. The settlements were also used as penal settlements for Indian civilian and military prisoners.
In 1838 with the amount of smuggled opium entering China approaching 1,400 tons a year, the Chinese imposed a death penalty for opium smuggling and sent a Special Imperial Commissioner, Lin Zexu, to curb smuggling. This resulted in the First Opium War (1839 -- 42). After the war Hong Kong island was ceded to Britain under the Treaty of Nanking and the Chinese market opened to the opium traders of Britain and other nations. The Jardines and Apcar and Company dominated the trade, although P&O also tried to take a share. A Second Opium War fought by Britain and France against China lasted from 1856 until 1860 and led to the Treaty of Tientsin, which legalised the importation of opium. Legalisation stimulated domestic Chinese opium production and increased the importation of opium from Turkey and Persia. This increased competition for the Chinese market led to India 's reducing its opium output and diversifying its exports.
The company employed many junior clerks, known as "writers '', to record the details of accounting, managerial decisions, and activities related to the company, such as minutes of meetings, copies of Company orders and contracts, and filings of reports and copies of ship 's logs. Several well - known British scholars and literary men had Company writerships, such as Henry Thomas Colebrooke in India and Charles Lamb in England. One Indian writer of some importance in the 19th century was Ram Mohan Roy, who learned English, Sanskrit, Persian, Arabic, Greek, and Latin.
Though the company was becoming increasingly bold and ambitious in putting down resisting states, it was becoming clearer that the company was incapable of governing the vast expanse of the captured territories. The Bengal famine of 1770, in which one - third of the local population died, caused distress in Britain. Military and administrative costs mounted beyond control in British - administered regions in Bengal because of the ensuing drop in labour productivity.
At the same time, there was commercial stagnation and trade depression throughout Europe. The directors of the company attempted to avert bankruptcy by appealing to Parliament for financial help. This led to the passing of the Tea Act in 1773, which gave the company greater autonomy in running its trade in the American colonies, and allowed it an exemption from tea import duties which its colonial competitors were required to pay.
When the American colonists and tea merchants were told of this Act, they boycotted the company tea. Although the price of tea had dropped because of the Act, it also validated the Townshend Acts, setting the precedent for the king to impose additional taxes in the future. The arrival of tax - exempt Company tea, undercutting the local merchants, triggered the Boston Tea Party in the Province of Massachusetts Bay, one of the major events leading up to the American Revolution.
By the Regulating Act of 1773 (later known as the East India Company Act 1773), the Parliament of Great Britain imposed a series of administrative and economic reforms; this clearly established Parliament 's sovereignty and ultimate control over the company. The Act recognised the company 's political functions and clearly established that the "acquisition of sovereignty by the subjects of the Crown is on behalf of the Crown and not in its own right ''.
Despite stiff resistance from the East India lobby in parliament and from the company 's shareholders, the Act passed. It introduced substantial governmental control and allowed British India to be formally under the control of the Crown, but leased back to the company at £ 40,000 for two years. Under the Act 's most important provision, a governing Council composed of five members was created in Calcutta. The three members nominated by Parliament and representing the Government 's interest could, and invariably would, outvote the two Company members. The Council was headed by Warren Hastings, the incumbent Governor, who became the first Governor - General of Bengal, with an ill - defined authority over the Bombay and Madras Presidencies. His nomination, made by the Court of Directors, would in future be subject to the approval of a Council of Four appointed by the Crown. Initially, the Council consisted of Lt. General Sir John Clavering, The Honourable Sir George Monson, Sir Richard Barwell, and Sir Philip Francis.
Hastings was entrusted with the power of peace and war. British judges and magistrates would also be sent to India to administer the legal system. The Governor General and the council would have complete legislative powers. The company was allowed to maintain its virtual monopoly over trade in exchange for the biennial sum and was obligated to export a minimum quantity of goods yearly to Britain. The costs of administration were to be met by the company. The company initially welcomed these provisions, but the annual burden of the payment contributed to the steady decline of its finances.
The East India Company Act 1784 (Pitt 's India Act) had two key aspects: it differentiated the company 's political functions from its commercial activities, subordinating the company in political matters directly to the British government through a new Board of Control; and it laid the foundations for a centralised and bureaucratic British administration of India.
Pitt 's Act was deemed a failure because it quickly became apparent that the boundaries between government control and the company 's powers were nebulous and highly subjective. The government felt obliged to respond to humanitarian calls for better treatment of local peoples in British - occupied territories. Edmund Burke, a former East India Company shareholder and diplomat, was moved to address the situation and introduced a new Regulating Bill in 1783. The bill was defeated amid lobbying by company loyalists and accusations of nepotism in the bill 's recommendations for the appointment of councillors.
The Act of 1786 (26 Geo. 3 c. 16) enacted the demand of Earl Cornwallis that the powers of the Governor - General be enlarged to empower him, in special cases, to override the majority of his Council and act on his own special responsibility. The Act enabled the offices of the Governor - General and the Commander - in - Chief to be jointly held by the same official.
This Act clearly demarcated borders between the Crown and the company. After this point, the company functioned as a regularised subsidiary of the Crown, with greater accountability for its actions and reached a stable stage of expansion and consolidation. Having temporarily achieved a state of truce with the Crown, the company continued to expand its influence to nearby territories through threats and coercive actions. By the middle of the 19th century, the company 's rule extended across most of India, Burma, Malaya, Singapore, and British Hong Kong, and a fifth of the world 's population was under its trading influence. In addition, Penang, one of the states in Malaya, became the fourth most important settlement, a presidency, of the company 's Indian territories.
The company 's charter was renewed for a further 20 years by the Charter Act of 1793. In contrast with the legislative proposals of the previous two decades, the 1793 Act was not a particularly controversial measure, and made only minimal changes to the system of government in India and to British oversight of the company 's activities. Sale of liquor was forbidden without licence. The Act also provided that the staff of the Board of Control should henceforth be paid from the Indian revenues.
The aggressive policies of Lord Wellesley and the Marquess of Hastings led to the company 's gaining control of all India (except for the Punjab and Sindh), and some part of the then kingdom of Nepal under the Sugauli Treaty. The Indian Princes had become vassals of the company. But the expense of wars leading to the total control of India strained the company 's finances. The company was forced to petition Parliament for assistance. This was the background to the Charter Act of 1813, which, among other things, asserted the sovereignty of the British Crown over the Indian territories held by the company, renewed the company 's charter for a further twenty years while ending its monopoly on Indian trade except for the trade in tea and the trade with China, and required the company to admit Christian missionaries to India.
The Industrial Revolution in Britain, the consequent search for markets, and the rise of laissez - faire economic ideology form the background to the Government of India Act 1833 (3 & 4 Will. 4 c. 85). The Act removed the company 's remaining trade monopolies and divested it of all its commercial functions, renewed its political and administrative authority for another twenty years, invested the Board of Control with full power and authority over the company, and redesignated the Governor - General of Bengal as the Governor - General of India.
British influence continued to expand; in 1845, Great Britain purchased the Danish colony of Tranquebar. The company had at various stages extended its influence to China, the Philippines, and Java. It had solved its critical lack of cash needed to buy tea by exporting Indian - grown opium to China. China 's efforts to end the trade led to the First Opium War (1839 -- 1842).
The English Education Act, passed by the Council of India in 1835, reallocated the funds that the East India Company was required to spend on education and literature in India.
The Charter Act of 1853 (16 & 17 Vict. c. 95) provided that British India would remain under the administration of the company in trust for the Crown until Parliament should decide otherwise. It also introduced a system of open competition as the basis of recruitment for civil servants of the company and thus deprived the Directors of their patronage system.
Under the act, for the first time the legislative and executive powers of the governor general 's council were separated. It also added six additional members to the governor general 's executive committee.
The Indian Rebellion of 1857 (also known as the Indian Mutiny) resulted in widespread devastation in India: many condemned the East India Company for permitting the events to occur. In the aftermath of the Rebellion, under the provisions of the Government of India Act 1858, the British Government nationalised the company. The Crown took over its Indian possessions, its administrative powers and machinery, and its armed forces.
The company remained in existence in vestigial form, continuing to manage the tea trade on behalf of the British Government (and the supply of Saint Helena) until the East India Stock Dividend Redemption Act 1873 came into effect, on 1 January 1874. This Act provided for the formal dissolution of the company on 1 June 1874, after a final dividend payment and the commutation or redemption of its stock. The Times commented on 8 April 1873:
It accomplished a work such as in the whole history of the human race no other trading Company ever attempted, and such as none, surely, is likely to attempt in the years to come.
In the 1980s, a group of investors purchased the rights to the moribund corporate brand and founded a clothing company, which lasted until the 1990s. The corporate vestiges were again purchased by another group of investors who opened their first store in 2010.
The company 's headquarters in London, from which much of India was governed, was East India House in Leadenhall Street. After occupying premises in Philpot Lane from 1600 to 1621; in Crosby House, Bishopsgate, from 1621 to 1638; and in Leadenhall Street from 1638 to 1648, the company moved into Craven House, an Elizabethan mansion in Leadenhall Street. The building had become known as East India House by 1661. It was completely rebuilt and enlarged in 1726 -- 9; and further significantly remodelled and expanded in 1796 -- 1800. It was finally vacated in 1860 and demolished in 1861 -- 62. The site is now occupied by the Lloyd 's building.
In 1607, the company decided to build its own ships and leased a yard on the River Thames at Deptford. By 1614, the yard having become too small, an alternative site was acquired at Blackwall: the new yard was fully operational by 1617. It was sold in 1656, although for some years East India Company ships continued to be built and repaired there under the new owners.
In 1803, an Act of Parliament, promoted by the East India Company, established the East India Dock Company, with the aim of establishing a new set of docks (the East India Docks) primarily for the use of ships trading with India. The existing Brunswick Dock, part of the Blackwall Yard site, became the Export Dock; while a new Import Dock was built to the north. In 1838 the East India Dock Company merged with the West India Dock Company. The docks were taken over by the Port of London Authority in 1909, and closed in 1967.
The East India College was founded in 1806 as a training establishment for "writers '' (i.e. clerks) in the company 's service. It was initially located in Hertford Castle, but moved in 1809 to purpose - built premises at Hertford Heath, Hertfordshire. In 1858 the college closed; but in 1862 the buildings reopened as a public school, now Haileybury and Imperial Service College.
The East India Company Military Seminary was founded in 1809 at Addiscombe, near Croydon, Surrey, to train young officers for service in the company 's armies in India. It was based in Addiscombe Place, an early 18th - century mansion. The government took it over in 1858, and renamed it the Royal Indian Military College. In 1861 it was closed, and the site was subsequently redeveloped.
In 1818, the company entered into an agreement by which those of its servants who were certified insane in India might be cared for at Pembroke House, Hackney, London, a private lunatic asylum run by Dr George Rees until 1838, and thereafter by Dr William Williams. The arrangement outlasted the company itself, continuing until 1870, when the India Office opened its own asylum, the Royal India Asylum, at Hanwell, Middlesex.
The East India Club in London was formed in 1849 for officers of the company. The Club still exists today as a private gentlemen 's club with its club house situated at 16 St. James 's Square, London.
The East India Company was one of the most powerful and enduring organisations in history and had a long - lasting impact on the Indian Subcontinent, with both positive and harmful effects. Although dissolved by the East India Stock Dividend Redemption Act 1873 following the rebellion of 1857, it had stimulated the growth of the British Empire. Its armies were to become the armies of British India after 1857, and it played a key role in introducing English as an official language in India. This also led to Macaulayism in the Indian subcontinent.
Once the East India Company took over Bengal under the Treaty of Allahabad (1765), it collected taxes, which it used to fund further expansion into the rest of India, and no longer had to rely on venture capital from London. It returned a high profit to those who had risked the original money on the earlier ventures into Bengal.
During the first century of the East India Company 's expansion in India, most people in India lived under regional kings or Nawabs. By the late 18th century many Mughal rulers were weak in comparison with the rapidly expanding Company, which took over cities and land and later built railways, roads and bridges. The first railway, a 21 - mile (33.8 km) line of the Great Indian Peninsula Railway (incorporated in 1849), ran between Bombay (Mumbai) and Tannah (Thane) from 1853. The Company sought quick profits because its financial backers in England took high risks: their money could be lost through shipwrecks, wars or calamities as easily as it could be multiplied.
The increasingly large territory that the Company was annexing and taxing was still also run by the local Nawabs; in essence, it was a dual administration. Between 1765 and 1772 Robert Clive gave the responsibility of tax collecting, diwani, to an Indian deputy, and judicial and police responsibilities to other Indian deputies. The Company concentrated on its new power of collecting revenue and left the other responsibilities to the Indian agencies. These were the first steps in a British takeover of power in India that would shape the centuries to come. In 1772 the Company made Warren Hastings, who had been in India with the Company since 1750, its first governor general, to manage and oversee all of the annexed lands, and the dual administration system came to an end.
Hastings learned Urdu and Persian and took great interest in preserving ancient Sanskrit manuscripts and having them translated into English. He employed many Indians as officials.
In legal matters, Hastings applied Sanskrit texts to Hindus and Arabic texts to Muslims, an approach to civil law that is still used in Indian, Pakistani and Bangladeshi courts today. Hastings also annexed lands and kingdoms and enriched himself in the process; his enemies in London used this against him to have him impeached (see Impeachment of Warren Hastings).
Charles Cornwallis, widely remembered for having surrendered to George Washington in 1781, replaced Hastings. Cornwallis distrusted Indians and replaced Indian officials with Englishmen. He introduced a system of personal land ownership for Indians. This change caused much conflict, since most cultivators were illiterate and had no idea why they had suddenly changed from land owners into land renters.
Mughal rulers often had to choose between fighting the Company and losing everything, or cooperating with the Company and receiving a large pension but losing the throne. The British East India Company gradually took over most of India by threat, intimidation, bribery or outright war.
The East India Company was the first company to record the Chinese usage of orange - flavoured tea, which led to the development of Earl Grey tea.
The East India Company introduced a system of merit - based appointments that provided a model for the British and Indian civil service.
Widespread corruption and the looting of Bengal 's resources and treasures during the company 's rule resulted in poverty. Famines, such as the Great Bengal famine of 1770 and subsequent famines during the 18th and 19th centuries, became more widespread, chiefly because of exploitative agriculture promulgated by the policies of the East India Company and the forced cultivation of opium in place of grain.
(Image gallery: depictions of the company 's flag by Downman (1685), Lens (1700), National Geographic (1917), Rees (1820) and Laurie (1842), and flag variants for the periods 1600 -- 1707, 1707 -- 1801 and 1801 -- 1874.)
The English East India Company flag changed with history, with a canton based on the flag of the contemporary Kingdom, and a field of 9 to 13 alternating red and white stripes.
From the period of 1600, the canton consisted of a St George 's Cross representing the Kingdom of England. With the Acts of Union 1707, the canton was updated to be the new Union Flag -- consisting of an English St George 's Cross combined with a Scottish St Andrew 's cross -- representing the Kingdom of Great Britain. After the Acts of Union 1800 that joined Ireland with Great Britain to form the United Kingdom, the canton of the East India Company flag was altered accordingly to include a Saint Patrick 's Saltire replicating the updated Union Flag representing the United Kingdom of Great Britain and Ireland.
Regarding the field of the flag, there has been much debate and discussion regarding the number and order of the stripes. Historical documents and paintings show many variations from 9 to 13 stripes, with some images showing the top stripes being red and others showing the top stripe being white.
At the time of the American Revolution the East India Company flag was nearly identical to the Grand Union Flag. Historian Charles Fawcett argued that the East India Company Flag inspired the Stars and Stripes.
The East India Company 's original coat of arms was granted in 1600. The blazon of the arms is as follows:
"Azure, three ships with three masts, rigged and under full sail, the sails, pennants and ensigns Argent, each charged with a cross Gules; on a chief of the second a pale quarterly Azure and Gules, on the 1st and 4th a fleur - de-lis or, on the 2nd and 3rd a leopard or, between two roses Gules seeded Or barbed Vert. '' The shield had as a crest: "A sphere without a frame, bounded with the Zodiac in bend Or, between two pennants flottant Argent, each charged with a cross Gules, over the sphere the words DEUS INDICAT '' (Latin: God Indicates). The supporters were two sea lions (lions with fishes ' tails) and the motto was DEO DUCENTE NIL NOCET (Latin: Where God Leads, Nothing Harms).
The East India Company 's arms, granted in 1698, were: "Argent a cross Gules; in the dexter chief quarter an escutcheon of the arms of France and England quarterly, the shield ornamentally and regally crowned Or. '' The crest was: "A lion rampant guardant Or holding between the forepaws a regal crown proper. '' The supporters were: "Two lions rampant guardant Or, each supporting a banner erect Argent, charged with a cross Gules. '' The motto was AUSPICIO REGIS ET SENATUS ANGLIÆ (Latin: Under the auspices of the King and the Senate of England).
(Images: the HEIC merchant 's mark on an East India Company coin, a 1791 half pice, and on a blue Scinde Dawk postage stamp of 1852.)
When the East India Company was chartered in 1600, it was still customary for individual merchants or members of companies such as the Company of Merchant Adventurers to have a distinguishing merchant 's mark which often included the mystical "Sign of Four '' and served as a trademark. The East India Company 's merchant mark consisted of a "Sign of Four '' atop a heart within which was a saltire between the lower arms of which were the initials "EIC ''. This mark was a central motif of the East India Company 's coinage and forms the central emblem displayed on the Scinde Dawk postage stamps.
Ships of the East India Company were called East Indiamen or simply "Indiamen ''.
During the French Revolutionary and Napoleonic Wars, the East India Company arranged for letters of marque for its vessels such as the Lord Nelson. This was not so that they could carry cannon to fend off warships, privateers, and pirates on their voyages to India and China (that they could do without permission) but so that, should they have the opportunity to take a prize, they could do so without being guilty of piracy. Similarly, the Earl of Mornington, an East India Company packet ship of only six guns, also sailed under a letter of marque.
In addition, the company had its own navy, the Bombay Marine, equipped with warships such as Grappler. These vessels often accompanied vessels of the Royal Navy on expeditions, such as the Invasion of Java.
At the Battle of Pulo Aura, which was probably the company 's most notable naval victory, Nathaniel Dance, Commodore of a convoy of Indiamen and sailing aboard the Warley, led several Indiamen in a skirmish with a French squadron, driving them off. Some six years earlier, on 28 January 1797, five Indiamen, the Woodford, under Captain Charles Lennox, the Taunton - Castle, Captain Edward Studd, Canton, Captain Abel Vyvyan, Boddam, Captain George Palmer, and Ocean, Captain John Christian Lochner, had encountered Admiral de Sercey and his squadron of frigates. On this occasion the Indiamen also succeeded in bluffing their way to safety, and without any shots even being fired. Lastly, on 15 June 1795, the General Goddard played a large role in the capture of seven Dutch East Indiamen off St Helena.
East Indiamen were large and strongly built and when the Royal Navy was desperate for vessels to escort merchant convoys it bought several of them to convert to warships. Earl of Mornington became HMS Drake. Other examples include:
Their design as merchant vessels meant that their performance in the warship role was underwhelming and the Navy converted them to transports.
Unlike all other British Government records, the records from the East India Company (and its successor the India Office) are not in The National Archives at Kew, London, but are held by the British Library in London as part of the Asia, Pacific and Africa Collections. The catalogue is searchable online in the Access to Archives catalogues. Many of the East India Company records are freely available online under an agreement that the Families in British India Society has with the British Library. Published catalogues exist of East India Company ships ' journals and logs, 1600 -- 1834; and of some of the company 's daughter institutions, including the East India Company College, Haileybury, and Addiscombe Military Seminary.
The Asiatic Journal and Monthly Register for British India and its Dependencies, first issued in 1816, was sponsored by the East India Company, and includes much information relating to the EIC.
|
what issues divided the western and eastern christian churches | East -- West Schism - Wikipedia
The East -- West Schism, also called the Great Schism and the Schism of 1054, was the break of communion between what are now the Catholic Church and Eastern Orthodox churches, which has lasted since the 11th century. The Schism was the culmination of theological and political differences between the Christian East and West which had developed over the preceding centuries.
A succession of ecclesiastical differences and theological disputes between the Greek East and Latin West pre-dated the formal rupture that occurred in 1054. Prominent among these were the issues of the source of the Holy Spirit, whether leavened or unleavened bread should be used in the Eucharist, the Bishop of Rome 's claim to universal jurisdiction, and the place of the See of Constantinople in relation to the Pentarchy.
In 1053, the first step was taken in the process which led to formal schism: the Greek churches in southern Italy were forced either to close or to conform to Latin practices. In retaliation, the Ecumenical Patriarch of Constantinople Michael I Cerularius ordered the closure of all Latin churches in Constantinople. In 1054, the papal legate sent by Leo IX travelled to Constantinople for purposes that included refusing Cerularius the title of "Ecumenical Patriarch '' and insisting that he recognize the Pope 's claim to be the head of all the churches. The main purpose of the papal legation was to seek help from the Byzantine Emperor in view of the Norman conquest of southern Italy and to deal with recent attacks by Leo of Ohrid against the use of unleavened bread and other Western customs, attacks that had the support of Cerularius. Historian Axel Bayer says the legation was sent in response to two letters, one from the Emperor seeking assistance in arranging a common military campaign by the eastern and western empires against the Normans, and the other from Cerularius. On the refusal of Cerularius to accept the demand, the leader of the legation, Cardinal Humbert of Silva Candida, O.S.B., excommunicated him, and in return Cerularius excommunicated Humbert and the other legates. This was only the first act in a centuries - long process that eventually became a complete schism.
The validity of the Western legates ' act is doubtful, since Pope Leo had died and Cerularius ' excommunication applied only to the legates personally. Still, the Church split along doctrinal, theological, linguistic, political, and geographical lines, and the fundamental breach has never been healed, with each side sometimes accusing the other of having fallen into heresy and of having initiated the division. The Crusades, the Massacre of the Latins in 1182, the West 's retaliation in the Sacking of Thessalonica in 1185, the capture and Siege of Constantinople in 1204, and the imposition of Latin patriarchs made reconciliation more difficult. Establishing Latin hierarchies in the Crusader states meant that there were two rival claimants to each of the patriarchal sees of Antioch, Constantinople, and Jerusalem, making the existence of schism clear. Several attempts at reconciliation did not bear fruit. In 1965, Pope Paul VI and the Ecumenical Patriarch of Constantinople Athenagoras I nullified the anathemas of 1054, although this nullification of measures taken against a few individuals was essentially a goodwill gesture and did not constitute any sort of reunion. Contacts between the two sides continue: every year a delegation from each joins in the other 's celebration of its patronal feast, Saints Peter and Paul (29 June) for Rome and Saint Andrew (30 November) for Constantinople, and there have been a number of visits by the head of each to the other. The efforts of the Ecumenical Patriarchs towards reconciliation with the Catholic Church have often been the target of sharp criticism from some fellow Orthodox.
The schism between the Western and Eastern Mediterranean Christians resulted from a variety of political, cultural and theological factors which transpired over centuries. Historians regard the mutual excommunications of 1054 as the terminal event. It is difficult to agree on an exact date at which the start of the schism became apparent. It may have started as early as the Quartodeciman controversy at the time of Victor of Rome (c. 180). Orthodox apologists point to this incident as an example of claims by Rome to papal primacy and its rejection by Eastern Churches.
Sporadic schisms in the common unions took place under Pope Damasus I in the 4th and 5th centuries. Disputes about theological and other questions led to schisms between the Churches in Rome and Constantinople for 37 years, from 482 to 519 (the Acacian Schism). Most sources agree that the separation between East and West was clearly evident by the Photian schism, which lasted for 13 years, from 866 to 879.
Apart from Rome in the West, "many major Churches of the East claim to have been founded by the apostles: Antioch by Peter and Paul, Alexandria by Mark, Constantinople by Andrew, Cyprus by Barnabas, Ethiopia by Matthew, India by Thomas, Edessa in eastern Syria by Thaddeus, Armenia by Bartholomew, Georgia by Simon the Zealot. '' Famous also are the seven churches of Asia (the Roman province of Asia), mentioned in the opening chapters of the Book of Revelation.
While the church at Rome claimed a special authority over the other churches, the extant documents of that era yield "no clear - cut claims to, or recognition, of papal primacy. ''
Towards the end of the 2nd century, Victor, the Bishop of Rome, attempted to resolve the Quartodeciman controversy. The question was whether to celebrate Easter concurrently with the Jewish Passover, as Christians in the Roman province of Asia did, or to wait until the following Sunday, as was unanimously decreed by synods held in other Eastern provinces, such as those of Palestine and Pontus, the acts of which were still extant at the time of Eusebius, and in Rome. The pope attempted to excommunicate the churches in Asia, which refused to accept the observance on Sunday. Other bishops rebuked him for doing so. Laurent Cleenewerck comments:
Victor obviously claimed superior authority, probably from St. Peter, and decided -- or at least "attempted '' to excommunicate a whole group of Churches because they followed a different tradition and refused to conform. One could therefore argue that the Great schism started with Victor, continued with Stephen and remained underground until the ninth century! But the question is this: even if Victor was not acting wisely, did he not have the power to "cut off whole Churches ''? This is what Roman Catholics argue with the implication that such an excommunication would be ontologically meaningful and put someone "outside the Catholic Church ''. Yet, we do not see bishops "pleading '' but indeed "sharply rebuking '' and "admonishing '' Victor. Ultimately this is why his letters of excommunication came to no effect. Nevertheless it is possible to read in Eusebius ' account the possibility that St. Irenaeus recognized that Victor could indeed "cut off whole Churches '' and that such excommunication would have been ontologically meaningful... In the end, it took some patience and an Ecumenical Council to achieve what Victor could not achieve by his threat to excommunicate.
Despite Victor 's failure to carry out his intent to excommunicate the Asian churches, many Catholic apologists point to this episode as evidence of papal primacy and authority in the early Church, citing the fact that none of the bishops challenged his right to excommunicate but rather questioned the wisdom and charity of his action.
The opinion of the Bishop of Rome was often sought, especially when the patriarchs of the Eastern Mediterranean were locked in fractious dispute. However, the Bishop of Rome 's opinion was by no means accepted automatically. The bishops of Rome never obviously belonged to either the Antiochian or the Alexandrian schools of theology, and usually managed to steer a middle course between whatever extremes were being propounded by theologians of either school. Because Rome was remote from the centres of Christianity in the eastern Mediterranean, it was frequently hoped its bishop would be more impartial. For instance, in 431, Cyril, the patriarch of Alexandria, appealed to Pope Celestine I, as well as the other patriarchs, charging Constantinople Patriarch Nestorius with heresy, which was dealt with at the Council of Ephesus.
In 342, Pope Julius I wrote: "The custom has been for word to be written first to us (in the case of bishops under accusation, and notably in apostolic churches), and then for a just sentence to be passed from this place ''.
In 382 a synod in Rome protested against the raising of Constantinople to a position above that of Alexandria, and spoke of Rome as "the apostolic see ''. Pope Siricius (384 -- 399) claimed for papal decretals the same binding force as decisions of synods, Pope Innocent I (401 -- 417) said that all major judicial cases should be reserved for the see of Rome, and Pope Boniface I (418 -- 422) declared that the church of Rome stands to "the churches throughout the world as the head to its members '' and that bishops everywhere, while holding the one same episcopal office, must "recognise those to whom, for the sake of ecclesiastical discipline, they should be subject ''. Celestine I (r. 422 -- 432) considered that the condemnation of Nestorius by his own Roman synod in 430 was sufficient, but consented to the general council as "of benefit in manifesting the faith ''. Pope Leo I and his successors rejected canon 28 of the Council of Chalcedon, as a result of which it was not officially recorded even in the East until the 6th century. The Acacian schism (484 -- 519), when, "for the first time, West lines up against East in a clear - cut fashion '', ended with acceptance of a declaration insisted on by Pope Hormisdas (514 -- 523) that "I hope I shall remain in communion with the apostolic see in which is found the whole, true, and perfect stability of the Christian religion ''. Earlier, in 494, Pope Gelasius I (492 -- 496) wrote to Byzantine Emperor Anastasius, distinguishing the power of civil rulers from that of the bishops (called "priests '' in the document), with the latter supreme in religious matters; he ended his letter with: "And if it is fitting that the hearts of the faithful should submit to all priests in general who properly administer divine affairs, how much the more is obedience due to the bishop of that see which the Most High ordained to be above all others, and which is consequently dutifully honoured by the devotion of the whole Church. '' Pope Nicholas I (858 -- 867) made it clear that he believed the power of the papacy extended "over all the earth, that is, over every church ''.
In 330, Emperor Constantine moved the imperial capital to Byzantium, a strategically located city on the Bosporus. He renamed the capital Nova Roma ("New Rome ''), but the city would become known as Constantinople. The centre of gravity in the empire was fully recognised to have completely shifted to the eastern Mediterranean. Rome lost the Senate to Constantinople and lost its status and gravitas as imperial capital.
The bishop of Byzantium was under the authority of the metropolitan of Heraclea when in 330 Roman Emperor Constantine I moved his residence to this town, which, rebuilt on a larger scale, became known as Constantinople. Thereafter, the bishop 's connection with the imperial court meant that he was able to free himself from ecclesiastical dependency on Heraclea and in little more than half a century to obtain recognition of next - after - Rome ranking from the First Council of Constantinople (381), held in the new capital. No Western bishop took part in this council, and the Latin Church recognized it as ecumenical only in the mid-6th century. It decreed: "The Bishop of Constantinople, however, shall have the prerogative of honour after the Bishop of Rome; because Constantinople is New Rome '', thus raising it above the sees of Alexandria and Antioch. This has been described as sowing the seed for the ecclesiastical rivalry between Constantinople and Rome that was a factor leading to the schism between East and West. The website of the Orthodox Church in America says that the Bishop of Byzantium was elevated to Patriarch already in the time of Constantine.
Disunion in the Roman Empire contributed to disunion in the Church. Theodosius the Great, who in 380 established Nicene Christianity as the official religion of the Roman Empire (see Edict of Thessalonica), was the last Emperor to rule over a united Roman Empire. Following the death of Theodosius in 395, the Empire was divided for the final time into western and eastern halves. In the 4th century, the Roman emperor (reigning in Constantinople) started to control the Church in his territory.
The patriarchs of Constantinople often tried to adopt an imperious position over the other patriarchs, provoking their resistance. For example, in 431 Cyril, the patriarch of Alexandria, impeached Nestorius, the patriarch of Constantinople, for heresy.
Alexandria 's objections to Constantinople 's promotion, which led to a constant struggle between the two sees in the first half of the 5th century, were supported by Rome, which proposed the theory that the most important sees were the three Petrine ones, of Rome, Antioch, and Alexandria, with Rome in first place.
However, the power of the patriarch of Constantinople continued to grow. Eastern Orthodox state that the 28th canon of the Council of Chalcedon (451) explicitly proclaimed the equality of the Bishops of Rome and Constantinople, and that it established the highest court of ecclesiastical appeal in Constantinople. The patriarch of the imperial capital succeeded in his efforts to become the leading bishop in the Byzantine Empire: he "headed a vast curia and other bishops who resided in Constantinople constituted a permanent synod, which became the real governing body of the church ''.
Patriarch John IV of Constantinople, who died in 595, assumed the title of "Ecumenical Patriarch ''.
The idea that with the transfer of the imperial capital from Rome to Constantinople, primacy in the Church was also transferred, is found in undeveloped form as early as John Philoponus (c. 490 -- c. 570). It was enunciated in its most advanced form by Photios I of Constantinople (c. 810 -- c. 893). Constantinople, as the seat of the ruler of the empire and therefore of the world, was the highest among the patriarchates and, like the emperor, had the right to govern them.
After the Roman Emperor Constantine the Great legalized Christianity (with the Edict of Milan), he summoned the First Ecumenical Council at Nicaea in 325. The bishops at the council confirmed the position of the metropolitan sees of Rome and Alexandria as having authority outside their own province, and also the existing privileges of the churches in Antioch and the other provinces. These sees were later called Patriarchates. These were given an order of precedence: Rome, as capital of the empire was naturally given first place, then came Alexandria and Antioch. In a separate canon the Council also approved the special honor given to Jerusalem over other sees subject to the same metropolitan.
The Roman emperor Theodosius I convened the second ecumenical council (Constantinople I) at the imperial capital city in 381. The council elevated the see of Constantinople to a position ahead of the other chief metropolitan sees except that of Rome, thus raising it above the sees of Alexandria and Antioch. This action has been described as sowing the seed for the ecclesiastical rivalry between Constantinople and Rome which was ultimately a factor leading to the schism between East and West. The council demarcated the territory within the praetorian prefecture of the East into five canonical territories corresponding to the five civil dioceses: the Diocese of Egypt (metropolis in Alexandria), the Diocese of the East (metropolis in Antioch), the Diocese of Asia (metropolis in Ephesus), the Diocese of Pontus (metropolis in Caesarea Cappadociae), and the Diocese of Thrace (metropolis in Heraclea, later under Constantinople). Mentioning the churches in the civil dioceses of Asia, Pontus and Thrace, the council decreed that the synod of each province should manage the ecclesiastical affairs of that province alone, except for the privileges already recognized for the sees of Alexandria and Antioch.
No Western bishops attended the council and no legate of the bishop of Rome was present. The Latin Church recognized the council as ecumenical about 150 years later, in the mid-6th century.
Rome 's Tome of Leo (449) was highly regarded, and formed the basis for the Council of Chalcedon formulation. But it was not universally accepted and was even called "impious '' and "blasphemous '' by those who condemned the council that approved and accepted it. The next ecumenical council corrected a possible imbalance in Pope Leo 's presentation. Although the Bishop of Rome was well respected even at this early date, the East holds that the concept of the primacy of the Roman See and Papal Infallibility were only developed much later.
The disputed canon 28 of the Council of Chalcedon in 451, confirming the authority already held by Constantinople, granted its archbishop jurisdiction over Pontus and Thrace.
The council also ratified an agreement between Antioch and Jerusalem, whereby Jerusalem held jurisdiction over three provinces, numbering it among the five great sees. As thus interpreted, there were now five patriarchs presiding over the Church within the Byzantine Empire, in the following order of precedence: the Patriarch of Rome, the Patriarch of Constantinople, the Patriarch of Alexandria, the Patriarch of Antioch and the Patriarch of Jerusalem.
Although Leo I, whose delegates were absent when this resolution was passed, recognized the council as ecumenical and confirmed its doctrinal decrees, he rejected its canon 28 on the ground that it contravened the sixth canon of Nicaea and infringed the rights of Alexandria and Antioch. However, by that time Constantinople, the permanent residence of the emperor, had in reality enormous influence, and had it not been for the opposition of Rome, its bishop could easily have been given first place among all the bishops.
This canon would remain a constant source of friction between East and West, until the mutual excommunications of 1054 made it irrelevant in that regard; but controversy about its applicability to the authority of the patriarchate of Constantinople still continues.
The same disputed canon also recognized an authority of Constantinople over bishops of dioceses "among the barbarians '', which has been variously interpreted as referring either to all areas outside the Byzantine Empire or only to those in the vicinity of Pontus, Asia and Thrace or to non-Greeks within the empire.
Canon 9 of the Council also declared: "If a bishop or clergyman should have a difference with the metropolitan of the province, let him have recourse to the Exarch of the Diocese, or to the throne of the Imperial City of Constantinople, and there let it be tried. '' This has been interpreted as conferring on the see of Constantinople a greater privilege than what any council ever gave Rome, or as of much lesser significance than that.
After the Council of Chalcedon (451), the position of the Patriarchate of Alexandria was weakened by a division in which the great majority of its Christian population followed the form of Christianity that its opponents called Monophysitism.
In 476, when the last emperor of the western part of the Roman Empire was deposed and the western imperial insignia were sent to Constantinople, there was once again a single Roman Emperor. However, he had little power in the West, which was ruled almost entirely by various Germanic tribes. In the opinion of Randall R. Cloud, the permanent separation of the Greek East from the Latin West was "the fundamental reason for the estrangement that soon followed between the Greek and the Latin Christians ''.
The dominant language of the West was Latin, while that of the East was Greek. Soon after the fall of the West to invaders, the number of individuals who spoke both languages dwindled, and communication between East and West grew much more difficult. With linguistic unity gone, cultural unity began to crumble as well. The two halves of the Church were naturally divided along similar lines; they developed different rites and had different approaches to religious doctrines. Although the schism was still centuries away, its outlines were already perceptible.
In the areas under his control, Justinian I established caesaropapism as the constitution of the Church in a scheme according to which the emperor "had the right and duty of regulating by his laws the minutest detail of worship and discipline, and also of dictating the theological opinions to be held in the Church ''. According to the Westminster Dictionary of Theological Terms, this caesaropapism was "a source of contention between Rome and Constantinople that led to the schism of 1054 ''. Explicit approval of the emperor in Constantinople was required for consecration of bishops within the empire. During the period called the Byzantine Papacy, this applied to the bishops of Rome, most of whom were of Greek or Syrian origin. Resentment in the West against the Byzantine emperor 's governance of the Church is shown as far back as the 6th century, when "the tolerance of the Arian Gothic king was preferred to the caesaropapist claims of Constantinople ''. The origins of the distinct attitudes in West and East are sometimes traced back even to Augustine of Hippo, who "saw the relationship between church and state as one of tension between the ' city of God ' and the ' city of the world ' '', and Eusebius, who "saw the state as the protector of the church and the emperor as God 's vicar on earth ''.
By 661, Muslim Arabs had taken over the territories assigned to the patriarchates of Alexandria, Antioch and Jerusalem, which thereafter were never more than partially and temporarily recovered. In 732, Emperor Leo III the Isaurian, in revenge for the opposition of Pope Gregory III to the emperor 's iconoclast policies, transferred Sicily, Calabria and Illyria from the patriarchate of Rome (whose jurisdiction until then extended as far east as Thessalonica) to that of Constantinople. The Constantinople patriarchate, after expanding eastward at the time of the Council of Chalcedon to take in Pontus and the Roman province of Asia, which at that time were still under the emperor 's control, thus expanded equally to the west, and was practically coextensive with the Byzantine Empire.
The West 's rejection of the Quinisext Council of 692 led to pressure from the Eastern Empire on the West to reject many Latin customs as non-Orthodox. The Latin practices that had got the attention of the other Patriarchates and that had been condemned by this Council included the practice of celebrating Mass on weekdays in Lent (rather than having Pre-Sanctified Liturgies); fasting on Saturdays throughout the year; omitting the "Alleluia '' in Lent; depicting Christ as a lamb; using unleavened bread. Larger disputes were revealed regarding Eastern and Western attitudes toward celibacy for priests and deacons, with the Council affirming the right of married men to become priests (though forbidding priests to marry and forbidding bishops to live with their wives) and prescribing deposition for anyone who attempted to separate a clergyman other than a bishop from his wife, or for any cleric other than a bishop who dismissed his wife.
Pope Sergius I, who was of Syrian ancestry, rejected the council. Emperor Justinian II ordered his arrest, but the attempt was thwarted.
In 694, in Visigothic Spain, the council was ratified by the Eighteenth Council of Toledo at the urging of the king, Wittiza. Fruela I of Asturias reversed the decision of Toledo sometime during his reign (757 -- 768).
The primary causes of the schism were disputes over conflicting claims of jurisdiction, in particular over papal authority -- Pope Leo IX claimed he held authority over the four Eastern patriarchs -- and over the insertion of the Filioque clause into the Nicene Creed by the Western patriarch in 1014. Eastern Orthodox today state that Council of Chalcedon canon 28 explicitly proclaimed the equality of the Bishops of Rome and Constantinople and that it established the highest court of ecclesiastical appeal in Constantinople. Council of Ephesus canon 7 declared:
It is unlawful for any man to bring forward, or to write, or to compose a different (ἑτέραν) Faith as a rival to that established by the holy Fathers assembled with the Holy Ghost in Nicæa. But those who shall dare to compose a different faith, or to introduce or offer it to persons desiring to turn to the acknowledgment of the truth, whether from Heathenism or from Judaism, or from any heresy whatsoever, shall be deposed, if they be bishops or clergymen; bishops from the episcopate and clergymen from the clergy; and if they be laymen, they shall be anathematized.
Eastern Orthodox today state that this canon of the Council of Ephesus explicitly prohibited modification of the Nicene Creed drawn up by the first Ecumenical Council in 325, whose wording, it is claimed, but not its substance, had been modified by the second Ecumenical Council, with additions such as "who proceeds from the Father ''.
Eastern Orthodox argue that canon 7 of the First Council of Ephesus explicitly prohibited modification by any man (as opposed to an ecumenical church council) of the Nicene Creed drawn up by the first Ecumenical Council in 325. In reality, the Council made no exception for an ecumenical council or any other body of bishops, and the Greeks participating in the Council of Florence emphatically denied that even an ecumenical council had the power to add anything to the creed. The creed quoted in the Acts of the Council of Ephesus of 431 (the third ecumenical council) is that of the first ecumenical council, that of Nicaea (325), without the modifications that the second ecumenical council, held in Constantinople in 381, is understood to have made to it, such as the addition of "who proceeds from the Father ''. Eastern Orthodox theologians state that this change of the wording of the churches ' original creed was made to address teachings outside of the church, in particular the teaching of Macedonius I of Constantinople, which the council claimed was a distortion of the church 's teaching on the Holy Spirit; it was not a change of the orthodoxy of the churches ' original creed. Thus the word ἑτέραν in the seventh canon of the later Council of Ephesus is understood as meaning "different '' or "contradictory '' and not "another '' in the sense of mere explanatory additions to the already existing creed. Some scholars hold that the additions attributed to the First Council of Constantinople were adopted only with the 451 Council of Chalcedon, 20 years after that of Ephesus, and even that the Council of Ephesus, in which Alexandrian influence was dominant, was by this canon excluding the Constantinopolitan Creed, which eventually annexed the name and fame of the creed adopted at Nicaea.
Many other issues increased tensions.
In Eastern Christendom, the teaching of papal supremacy is said to be based on the pseudo-Isidorian Decretals, documents attributed to early popes but actually forged, probably in the second quarter of the 9th century, with the aim of defending the position of bishops against metropolitans and secular authorities. The Orthodox East contests the teaching that Peter was the Patriarch of Rome, a title that the West, too, does not give him. Early sources such as St. Irenaeus can be interpreted as describing Pope Linus as the first bishop of Rome and Pope Cletus the second. The Oxford Dictionary of Popes states: "In the late 2nd or early 3rd cent. the tradition identified Peter as the first bishop of Rome. This was a natural development once the monarchical episcopate, i.e. government of the local church by a single bishop, as distinct from a group of presbyter bishops, finally emerged in Rome in the mid-2nd cent. The earlier tradition, however, which placed Peter and Paul in a class apart as the pioneers who together established the Roman church and its ministry, was never lost sight of. '' St. Peter was according to tradition bishop of Antioch at one point, and was then succeeded by Evodius and Ignatius. The Eastern Orthodox do not hold the primacy of the Pope of Rome over the Eastern church; they teach that the Pope of Rome is the first among equals. The first seven Ecumenical Councils were held in the East and called by the Eastern Emperors; Roman pontiffs never presided over any of them.
Three councils were held, two by Constantinople and one by Rome. Rome attempted to replace a seated patriarch with one amenable to its position in the Filioque dispute. The Orthodox responded by denouncing the replacement and excommunicating the pope who convened the Roman council, denouncing the pope 's attempt to control affairs outside the purview of Rome, and denouncing the addition of the Filioque as a heresy. Each church recognizes its own council (s) as legitimate and does not recognize the other 's council (s).
In 1053 Leo of Ohrid, at the instigation, according to J.B. Bury, of Patriarch Michael Cerularius of Constantinople, wrote to Bishop John of Trani a letter, intended for all the Latin bishops, including the pope, in which he attacked Western practices such as using unleavened bread for the Eucharist, and fasting rules that differed from those in Constantinople, while Cerularius himself closed all Latin churches in Constantinople.
In response, Leo IX wrote the letter In terra pax of 2 September 1053, addressed to Cerularius and Leo of Ohrid, in which he speaks at length of the privileges granted through Saint Peter to the see of Rome. In one of the 41 sections of his letter he also speaks of privileges granted by the emperors, quoting from the Donation of Constantine document, which he believed to be genuine (section 20). Some scholars say that this letter was never actually dispatched, but was set aside, and that the papal reply actually sent was the softer but still harsh letter Scripta tuae of January 1054.
The advance of the Norman conquest of southern Italy constituted a threat to the possessions of both the Byzantine Empire and the papacy, each of which sought the support of the other. Accordingly, conciliatory letters, the texts of which have not been preserved, were written to the pope by the emperor and Cerularius. In his January 1054 reply to the emperor, Quantas gratias, Leo IX asks for his assistance against the Normans and complains of what the pope saw as Cerularius 's arrogance. In his reply to Cerularius, he upbraided the patriarch for trying to subject the patriarchs of Alexandria and Antioch to himself and for adopting the title of Ecumenical Patriarch, and insisted on the primacy of the see of Rome.
These two letters were entrusted to a delegation of three legates headed by the undiplomatic Humbert of Silva Candida. They were given friendship and support by the emperor but were spurned by the patriarch. Finally, on 16 July 1054, three months after Pope Leo 's death in April 1054 and nine months before the next pope took office, they laid on the altar of Hagia Sophia, which was prepared for celebration of the Divine Liturgy, a bull of excommunication of Cerularius and his supporters. At a synod held on 20 July 1054, Cerularius in turn excommunicated the legates. In reality, only Michael may have been excommunicated along with his then - living adherents.
At the time of the excommunications, many contemporary historians, including Byzantine chroniclers, did not consider the event significant.
Efforts were made in subsequent centuries by emperors, popes and patriarchs to heal the rift between the churches. However, a number of factors and historical events worked to widen the separation over time.
"Even after 1054 friendly relations between East and West continued. The two parts of Christendom were not yet conscious of a great gulf of separation between them.... The dispute remained something of which ordinary Christians in East and West were largely unaware ''.
There was no single event that marked the breakdown. Rather, the two churches slid into and out of schism over a period of several centuries, punctuated with temporary reconciliations.
Starting from the late 11th century, the dependency of the Byzantine Empire on the naval forces of the Republic of Venice and, to a lesser extent, the Republics of Genoa and Pisa led to the predominance of Roman Catholic merchants in Byzantium (they received major trading concessions beginning in the 1080s), subsequently causing economic and social upheaval. Together with the perceived arrogance of the Italians, this fueled popular resentment among the middle and lower classes both in the countryside and in the cities.
By the second half of the 12th century, the practically uncontrollable rivalry between competitors from the different city states had escalated to the point of Italians raiding the quarters of other Italians in the capital, and the draconian retaliatory measures taken by the Byzantine authorities led to a further deterioration of inter-religious relations in the city.
When in 1182 the regency of the empress mother Maria of Antioch, an ethnic Frenchwoman notorious for the favoritism she showed to Latin merchants and the great aristocratic land - owners, was deposed by Andronikos I Komnenos in the wake of popular support, the new emperor allowed mobs to massacre the hated foreigners. Henceforth Byzantine foreign policy was invariably perceived in the West as sinister and anti-Latin.
During the Fourth Crusade, Latin crusaders and Venetian merchants sacked Constantinople itself, looting the Church of Holy Wisdom (Hagia Sophia) and various other Orthodox holy sites, and converting them to Latin Catholic worship. The crusaders also destroyed the Imperial Library of Constantinople. Various holy artifacts from these Orthodox holy places were taken to the West. The crusaders also appointed a Latin Patriarch of Constantinople. This event and the final treaty established the Latin Empire of the East and the Latin Patriarch of Constantinople (with various other Crusader states). Later some religious artifacts were sold in Europe to finance or fund the Latin Empire in Byzantium, as can be seen when Emperor Baldwin II of Constantinople sold the relic of the Crown of Thorns while in France, trying to raise new funds to maintain his hold on Byzantium. The Latin Empire was terminated in 1261 by Byzantine Emperor Michael VIII Palaiologos. This attack on the heart of the Byzantine Empire is seen as a factor that led to its conquest by Ottoman Muslims.
In northern Europe, the Teutonic Knights, after their successes in the northern crusades, attempted to conquer the Orthodox Russian Republics of Pskov and Novgorod, an enterprise endorsed by Pope Gregory IX. One of the major defeats they suffered was the Battle of the Ice in 1242. Catholic Sweden also undertook several campaigns against Orthodox Novgorod. There were also conflicts between Catholic Poland and Orthodox Russia. Such conflicts solidified the schism between East and West.
The Second Council of Lyon was convoked to act on a pledge by Michael VIII to reunite the Eastern church with the West. Wishing to end the Great Schism that divided Rome and Constantinople, Gregory X had sent an embassy to Michael VIII, who had reconquered Constantinople, putting an end to the remnants of the Latin Empire in the East, and he asked Latin despots in the East to curb their ambitions.
On 29 June (the Feast of Saints Peter and Paul, the patronal feast of popes), Gregory X celebrated a Mass in St John 's Church, in which both sides took part. The council declared that the Roman church possessed "the supreme and full primacy and authority over the universal Catholic Church. ''
The union effected was "a sham and a political gambit '', a fiction maintained by the emperor to prevent westerners from recovering the city of Constantinople, which they had lost just over a decade before, in 1261. It was fiercely opposed by clergy and people and never put into effect, in spite of a sustained campaign by Patriarch John XI of Constantinople (John Bekkos), a convert to the cause of union, to defend the union intellectually, and vigorous and brutal repression of opponents by Michael. In 1278 Pope Nicholas III, learning of the fictitious character of Greek conformity, sent legates to Constantinople, demanding the personal submission of every Orthodox cleric and adoption of the Filioque, as already the Greek delegates at Lyon had been required to recite the Creed with the inclusion of Filioque and to repeat it two more times. Emperor Michael 's attempts to resolve the schism ended when Pope Martin IV, seeing that the union was only a sham, excommunicated Michael VIII in 1281 in support of Charles of Anjou 's attempts to mount a new campaign to retake the Eastern Roman provinces lost to Michael. Michael VIII 's son and successor Andronicus II repudiated the union, and Bekkos was forced to abdicate, being eventually exiled and imprisoned until his death in 1297.
In the 15th century, the eastern emperor John VIII Palaiologos, pressed hard by the Ottoman Turks, was keen to ally himself with the West, and to do so he arranged with Pope Eugene IV for discussions about reunion to be held again, this time at the Council of Ferrara - Florence. After several long discussions, the emperor managed to convince the Eastern representatives to accept the Western doctrines of Filioque, Purgatory and the supremacy of the Papacy. On 6 June 1439 an agreement was signed by all the Eastern bishops present but one, Mark of Ephesus, who held that Rome continued in both heresy and schism. It seemed that the Great Schism had been ended. However, upon their return, the Eastern bishops found their agreement with the West broadly rejected by the populace and by civil authorities (with the notable exception of the Emperors of the East who remained committed to union until the Fall of Constantinople two decades later). The union signed at Florence has never been accepted by the Eastern churches.
In May 1453, the capital of the Eastern Roman Empire fell to the invading Ottoman Empire. But Orthodox Christianity was already entrenched in Russia, whose political and de facto religious centre had shifted from Kiev to Moscow. The Russian Church, a part of the Church of Constantinople until the mid-15th century, was granted full independence (autocephaly) and elevated to the rank of Patriarchate in 1589. The Russian political and ecclesiastical elite came to view Moscow as the Third Rome, a legitimate heir to Constantinople and Byzantium.
Under Ottoman rule, the Orthodox Church acquired the status of an autonomous millet, specifically the Rum Millet. The Ecumenical Patriarch became the ruler (millet başı) of all the Orthodox Christian subjects of the empire, including non-Greeks. Upon conquering Constantinople, Mehmed II assumed the legal function of the Byzantine Emperors and appointed Patriarch Gennadius II. The sultans enhanced the temporal powers of the Greek Orthodox hierarchy, which came to be politically beholden solely to the Ottoman sultan and, along with other Ottoman Greek nobles, came to run the Balkan Orthodox domains of the Ottoman Empire. As a result, the entire Orthodox communion of the Balkans and the Near East became isolated from the rest of Christendom. For the next four hundred years, it would be confined within the Islamic world, with which it had little in common religiously or culturally.
In Russia, anti-Catholic sentiment became entrenched by the Polish intervention during the Time of Troubles in the early 17th century, which was seen as an attempt to convert Moscow to Roman Catholicism. The modern Russian national holiday, Unity Day, was established on the day of the church celebration in honour of the Our Lady of Kazan icon, which is believed to have miraculously saved Moscow from outright Polish conquest in 1612. Patriarch Hermogenes of Moscow was martyred by the Poles and their supporters during this period (see also Polish -- Lithuanian -- Muscovite Commonwealth).
The doctrine of papal primacy was further developed at the First Vatican Council, which declared that "in the disposition of God the Roman church holds the preeminence of ordinary power over all the other churches ''. This council also affirmed the dogma of papal infallibility, declaring that the infallibility of the Christian community extends to the pope himself, when he defines a doctrine concerning faith or morals to be held by the whole Church. These new dogmas, as well as the dogma of the Immaculate Conception promulgated in Ineffabilis Deus in 1854, are unequivocally rejected by the Eastern Church as heretical.
A major event of the Second Vatican Council (Vatican II) was the issuance by Pope Paul VI and Orthodox Patriarch Athenagoras I of Constantinople of the Catholic -- Orthodox Joint Declaration of 1965. At the same time, they lifted the mutual excommunications dating from the 11th century. The act did not result in restoration of communion.
The Eastern Catholic Churches, historically referred to as "uniate '' by the Orthodox, consider themselves to have reconciled the East and West Schism by having accepted the primacy of the Bishop of Rome while retaining some of the canonical rules and liturgical practices in line with the Eastern tradition, such as the Byzantine Rite that is prevalent in the Orthodox Churches. Some Eastern Orthodox charge that joining in this unity comes at the expense of ignoring critical doctrinal differences and past atrocities.
There have been periodic conflicts between the Orthodox and Eastern Catholics in Ukraine and Belarus, then under Polish rule, and later also in Transylvania (see the Romanian Greek Catholic Church United with Rome). Pressure and government - sponsored reprisals were used against Eastern Catholic Churches such as the Ukrainian Greek Catholic Church in the Russian Empire and later in the USSR. Since the late 1980s, the Moscow Patriarchate (the Russian Orthodox Church) has criticised the methods of restoration of the "uniate '' church structures in Ukraine, as well as what it called Catholic proselytism in Russia.
In 1993, a report written by the Joint International Commission for Theological Dialogue Between the Catholic Church and the Orthodox Church during its 7th plenary session at Balamand School of Theology in Lebanon stated: "Because of the way in which Catholics and Orthodox once again consider each other in their relationship to the mystery of the Church and discover each other once again as Sister Churches, this form of ' missionary apostolate ' described above, and which has been called ' uniatism ', can no longer be accepted either as a method to be followed nor as a model of the unity our Churches are seeking ''. At the same time, the document inter alia stated:
In February 2016, Pope Francis and Patriarch Kirill of the Russian Orthodox Church (ROC) had a meeting in Cuba and signed a joint declaration that stated inter alia: "It is our hope that our meeting may also contribute to reconciliation wherever tensions exist between Greek Catholics and Orthodox. It is today clear that the past method of ' uniatism ', understood as the union of one community to the other, separating it from its Church, is not the way to re-establish unity. Nonetheless, the ecclesial communities which emerged in these historical circumstances have the right to exist and to undertake all that is necessary to meet the spiritual needs of their faithful, while seeking to live in peace with their neighbours. Orthodox and Greek Catholics are in need of reconciliation and of mutually acceptable forms of co-existence. '' Meanwhile, in an interview published on the eve of the meeting in Cuba, Metropolitan Hilarion Alfeyev, the chairman of the Department of External Church Relations and a permanent member of the Holy Synod of the ROC, said that tensions between the Ukrainian Greek Catholic Church and the ROC 's Ukrainian Orthodox Church had recently been heightened, mainly due to the conflict in Ukraine. The declaration was sharply criticised by Sviatoslav Shevchuk, the Primate of the Ukrainian Greek Catholic Church, who said that his flock felt "betrayed '' by the Vatican.
Inspired by the spirit of Vatican II that adopted the Unitatis Redintegratio decree on ecumenism in 1964 as well as the change of heart toward Ecumenism on the part of the Moscow Patriarchate that had occurred in 1961, the Vatican and 14 universally recognised autocephalous Orthodox Churches established the Joint International Commission for Theological Dialogue Between the Catholic Church and the Orthodox Church that first met in Rhodes in 1980 and is an ongoing endeavour.
On a number of occasions, Pope John Paul II recited the Nicene Creed with patriarchs of the Eastern Orthodox Church in Greek according to the original text. Both he and his successor, Pope Benedict XVI, have recited the Nicene Creed jointly with Patriarchs Demetrius I and Bartholomew I in Greek without the Filioque clause, "according to the usage of the Byzantine Churches ''. This accords with the Roman Catholic Church 's practice of including the clause when reciting the Creed in Latin, but not when reciting it in Greek.
In June 1995, Patriarch Bartholomew I, of Constantinople, visited Vatican City for the first time, and joined in the historic inter-religious day of prayer for peace at Assisi. John Paul II and Bartholomew I explicitly stated their mutual "desire to relegate the excommunications of the past to oblivion and to set out on the way to re-establishing full communion ''.
In May 1999, John Paul II was the first pope since the Great Schism to visit an Eastern Orthodox country: Romania. Upon greeting John Paul II, the Romanian Patriarch Teoctist stated: "The second millennium of Christian history began with a painful wounding of the unity of the Church; the end of this millennium has seen a real commitment to restoring Christian unity. '' John Paul II visited other heavily Orthodox areas such as Ukraine, despite lack of welcome at times, and he said that healing the divisions between Western and Eastern Christianity was one of his fondest wishes.
In June 2004, Bartholomew I 's visit to Rome for the Feast of Saints Peter and Paul (29 June) afforded him the opportunity for another personal meeting with John Paul II, for conversations with the Pontifical Council for Promoting Christian Unity and for taking part in the celebration for the feast day in St. Peter 's Basilica.
The Patriarch 's partial participation in the Eucharistic liturgy at which the Pope presided followed the program of the past visits of Patriarch Dimitrios (1987) and Patriarch Bartholomew I himself: full participation in the Liturgy of the Word, joint proclamation by the Pope and by the Patriarch of the profession of faith according to the Nicene - Constantinopolitan Creed in Greek and as the conclusion, the final Blessing imparted by both the Pope and the Patriarch at the Altar of the Confessio. The Patriarch did not fully participate in the Liturgy of the Eucharist involving the consecration and distribution of the Eucharist itself.
Despite efforts on the part of Catholic Popes and Orthodox Patriarchs to heal the schism, only limited progress towards reconciliation has been made over the last half century. One stumbling block is the fact that the Orthodox and the Catholics have different perceptions of the nature of the divide. The official Catholic teaching is that the Orthodox are schismatic, meaning that there is nothing heretical about their theology, only their unwillingness to accept the supremacy of the Pope, which is presented in Catholic teaching as chiefly an ecclesiological issue, not so much a theological one. The Orthodox object to the Catholic doctrines of Purgatory, Substitutionary atonement, the Immaculate Conception, and papal supremacy, among others, as heretical doctrines. With respect to the primacy of the Pope, the two churches agree that the Pope, as Bishop of Rome, has primacy, although they continue to have different interpretations of what that primacy entails.
The Roman Catholic Church 's attitude was expressed by John Paul II in the image of the Church "breathing with her two lungs ''. He meant that there should be a combination of the more rational, juridical, organization - minded "Latin '' temperament with the intuitive, mystical and contemplative spirit found in the East.
In the Orthodox view, the Bishop of Rome (i.e. the Pope) would have universal primacy in a reunited Christendom, as primus inter pares without power of jurisdiction.
The Eastern Orthodox insist that the primacy is largely one of honor, the Pope being "first among equals '' (primus inter pares). The Catholic Church, on the other hand, insists on the doctrine of Supremacy. It is widely understood that, if there is to be reconciliation, both sides will have to compromise on this doctrine. Although some commentators have proposed ways in which such a compromise can be achieved, there is no official indication that such a compromise is being contemplated.
In his book Principles of Catholic Theology, Pope Benedict XVI (then Cardinal Ratzinger) assessed the range of "possibilities that are open to Christian ecumenism. '' He characterized the "maximum demand '' of the West as the recognition by the East of and submission to the "primacy of the bishop of Rome in the full scope of the definition of 1870... '' The "maximum demand '' of the East was described as a declaration by the West of the 1870 doctrine of papal primacy as erroneous, along with the "removal of the Filioque from the Creed and including the Marian dogmas of the nineteenth and twentieth centuries. '' Ratzinger asserted that "(n)one of the maximum solutions offers any real hope of unity. '' Ratzinger wrote that "Rome must not require more from the East than had been formulated and what was lived in the first millennium. '' He concluded that "Reunion could take place in this context if, on the one hand, the East would cease to oppose as heretical the developments that took place in the West in the second millennium and would accept the Catholic Church as legitimate and orthodox in the form she had acquired in the course of that development, while on the other hand, the West would recognize the Church of the East as orthodox in the form she has always had. ''
The declaration of Ravenna in 2007 re-asserted the belief that the bishop of Rome is indeed the protos, although future discussions are to be held on the concrete ecclesiological exercise of papal primacy.
Some scholars such as Jeffrey Finch assert that "the future of East -- West rapprochement appears to be overcoming the modern polemics of neo-scholasticism and neo-Palamism ''.
These doctrinal issues center around the Orthodox perception that Catholic theologians lack the actual experience of God called theoria and thereby fail to understand the importance of the heart as a noetic or intuitive faculty. It is what they consider to be the Catholic Church 's reliance on pagan metaphysical philosophy and rational methods such as scholasticism, rather than on intuitive experience of God (theoria), that causes the Orthodox to consider the Catholic Church heretical. Other points of doctrinal difference include a difference regarding human nature as well as differences regarding original sin, purgatory, and the nature of Hell.
One point of theological difference is embodied in the dispute regarding the inclusion of the Filioque in the Nicene Creed. In the view of the Roman Catholic Church, what it calls the legitimate complementarity of the expressions "from the Father '' and "from the Father and the Son '' does not, provided it does not become rigid, affect the identity of faith in the reality of the same mystery confessed. The Orthodox, on the other hand, view inclusion of the phrase to be almost heretical (see also the Trinity section).
More importantly, the Orthodox see the Filioque as just the tip of the iceberg and really just a symptom of a much more deeply rooted problem of theology, one so deeply rooted that they consider it to be heretical and even, by some characterizations, an inability to "see God '' and know God. This heresy is allegedly rooted in Frankish paganism, Arianism, Platonist and Aristotelian philosophy and Thomist rational and objective Scholasticism. In opposition to what they characterize as pagan, heretical and "godless '' foundations, the Orthodox rely on an intuitive and mystical knowledge and vision of God (theoria) based on hesychasm and noesis. Catholics accept as valid the Eastern Orthodox intuitive and mystical understanding of God and consider it complementary to the rational Western reflection.
Most Orthodox Churches, through economy, do not require baptism in the Orthodox Church for one who has been previously baptized in the Roman Catholic Church. Most Orthodox jurisdictions, based on that same principle of economy, allow a sacramental marriage between an Orthodox Christian and some non-Orthodox Christians. The Catholic Church allows its clergy to administer the sacraments of Penance, the Eucharist and Anointing of the Sick to members of the Eastern Orthodox Church, if these spontaneously ask for the sacraments and are properly disposed. It also allows Catholics who can not approach a Catholic minister to receive these three sacraments from clergy of the Eastern Orthodox Church, whenever necessity requires or a genuine spiritual advantage commends it, and provided the danger of error or indifferentism is avoided. Catholic canon law allows marriage between a Catholic and an Orthodox. The Orthodox Church will administer the sacraments to non-Orthodox Christians only in an emergency.
The Code of Canons of the Eastern Churches authorizes the local Catholic bishop to permit a Catholic priest, of whatever rite, to bless the marriage of Orthodox faithful who being unable without great difficulty to approach a priest of their own Church, ask for this spontaneously. In exceptional circumstances Catholics may, in the absence of an authorized priest, marry before witnesses. If a priest who is not authorized for the celebration of the marriage is available, he should be called in, although the marriage is valid even without his presence. The Code of Canons of the Eastern Churches specifies that, in those exceptional circumstances, even a "non-Catholic '' priest (and so not necessarily one belonging to an Eastern Church) may be called in.
The efforts of Orthodox patriarchs towards reconciliation with the Catholic Church have been strongly criticized by some elements of Eastern Orthodoxy, such as the Metropolitan of Kalavryta, Greece, in November 2008.
In 2010, Patriarch Bartholomew I issued an encyclical lauding the ongoing dialogue between the Orthodox Church and other Christian churches and criticizing those who are "unacceptably fanatical '' in challenging such dialogue. The encyclical lamented that the dialogues between the two churches were being criticized in "an unacceptably fanatical way '' by some who claim to be defenders of Orthodoxy despite the fact that these dialogues are being conducted "with the mutual agreement and participation of all local Orthodox Churches ''. The Patriarch warned that "such opponents raise themselves above episcopal synods and risk creating schisms ''. He further accused some critics of distorting reality to "deceive and arouse the faithful '' and of depicting theological dialogue not as a pan-Orthodox effort, but an effort of the Ecumenical Patriarchate alone. As an example, he pointed to "false rumors that union between the Roman Catholic and Orthodox Churches is imminent '' claiming that the disseminators of such rumors were fully aware that "the differences discussed in these theological dialogues remain numerous and require lengthy debate ''. The Patriarch re-emphasized that "union is not decided by theological commissions but by Church Synods ''.
Jaroslav Pelikan emphasizes that "while the East -- West schism stemmed largely from political and ecclesiastical discord, this discord also reflected basic theological differences ''. Pelikan further argues that the antagonists in the 11th century inappropriately exaggerated their theological differences, whereas modern historians tend to minimize them. Pelikan asserts that the documents from that era evidence the "depths of intellectual alienation that had developed between the two sections of Christendom. '' While the two sides were technically more guilty of schism than heresy, they often charged each other with allegations of heresy. Pelikan describes much of the dispute as dealing with "regional differences in usages and customs '', some of which were adiaphorous (i.e. neither right nor wrong). However, he goes on to say that while it was easy in principle to accept the existence of adiaphora, it was difficult in actual practice to distinguish customs which were innocuously adiaphoric from those that had doctrinal implications.
Philip Sherrard, an Orthodox theologian, asserts that the underlying cause of the East -- West schism was and continues to be "the clash of these two fundamentally irreconcilable ecclesiologies. '' Roger Haight characterizes the question of episcopal authority in the Church as "acute '', with the "relative standings of Rome and Constantinople a recurrent source of tension. '' Haight characterizes the difference in ecclesiologies as "the contrast between a pope with universal jurisdiction and a combination of patriarchal superstructure with an episcopal and synodal communion ecclesiology analogous to that found in Cyprian. ''
However, Nicholas Afanasiev has criticized both the Catholic and Orthodox churches for "subscribing to the universal ecclesiology of St. Cyprian of Carthage according to which only one true and universal church can exist. ''
There are several different ecclesiologies: "communion ecclesiology '', "eucharistic ecclesiology '', "baptismal ecclesiology '', "trinitarian ecclesiology '', "kerygmatic theology ''. Other ecclesiologies are the "hierarchical - institutional '' and the "organic - mystical '', and the "congregationalist ''.
The Eastern Churches maintained the idea that every local city - church with its bishop, presbyters, deacons and people celebrating the Eucharist constituted the whole Church. In this view called Eucharistic ecclesiology (or more recently holographic ecclesiology), every bishop is Saint Peter 's successor in his church ("the Church '') and the churches form what Eusebius called a common union of churches. This implied that all bishops were ontologically equal, although functionally particular bishops could be granted special privileges by other bishops and serve as metropolitans, archbishops or patriarchs. Within the Roman Empire, from the time of Constantine to the final extinction of the empire in 1453, universal ecclesiology, rather than eucharistic, "became the operative principle ''. The view prevailed that, "when the Roman Empire became Christian the perfect world order willed by God had been achieved: one universal empire was sovereign, and coterminous with it was the one universal church ''. Early on, the ecclesiology of the Roman Church was universal in nature, with the idea that the Church was a worldwide organism with a divinely (not functionally) appointed center: the Church / Bishop of Rome. These two views are still present in modern Eastern Orthodoxy and Roman Catholicism and can be seen as foundational causes for the schisms and Great Schism between East and West.
"The Orthodox Church does not accept the doctrine of Papal authority set forth in the Vatican Council of 1870, and taught today in the Roman Catholic Church. '' The Orthodox Church has always maintained the original position of collegiality of the bishops resulting in the structure of the church being closer to a confederacy. The Orthodox have synods where the highest authorities in each Church community are brought together, but unlike Roman Catholicism no central individual or figure has the absolute and infallible last word on church doctrine. In practice, this has sometimes led to divisions among Greek, Russian, Bulgarian and Ukrainian Orthodox churches, as no central authority can serve as a rallying point for various internal disputes.
Vatican II re-asserted the importance of collegiality to a degree that appears satisfying to most if not all ecclesial parties. Starting from the second half of the 20th century, eucharistic ecclesiology has been upheld by Roman Catholic theologians. Henri de Lubac wrote: "The Church, like the Eucharist, is a mystery of unity -- the same mystery, and one with inexhaustible riches. Both are the body of Christ -- the same body. '' Joseph Ratzinger called eucharistic ecclesiology "the real core of Vatican II 's teaching on the church ''. According to Ratzinger, the one church of God exists in no other way than in the various individual local congregations. In these the Eucharist is celebrated in union with the Church everywhere. Eucharistic ecclesiology led Vatican II to "affirm the theological significance of the local church. If each celebration of the Eucharist is a matter not only of Christ 's sacramental presence on the altar, but also of his ecclesial presence in the gathered community, then each eucharistic local church must be more than a subset of the universal church; it must be the body of Christ ' in that place '. ''
The ecclesiological dimension of the East -- West schism revolves around the authority of bishops within their dioceses and the lines of authority between bishops of different dioceses. It is common for Catholics to insist on the primacy of Roman and papal authority based on patristic writings and conciliar documents.
The Roman Catholic Church 's current official teachings about papal privilege and power that are unacceptable to the Eastern Orthodox churches are the dogma of the pope 's infallibility when speaking officially "from the chair of Peter (ex cathedra Petri) '' on matters of faith and morals to be held by the whole Church, so that such definitions are irreformable "of themselves, and not by the consent of the Church '' (ex sese et non ex consensu ecclesiae) and have a binding character for all (Catholic) Christians in the world; the pope 's direct episcopal jurisdiction over all (Catholic) Christians in the world; the pope 's authority to appoint (and so also to depose) the bishops of all (Catholic) Christian churches except in the territory of a patriarchate; and the affirmation that the legitimacy and authority of all (Catholic) Christian bishops in the world derive from their union with the Roman see and its bishop, the Supreme Pontiff, the unique Successor of Peter and Vicar of Christ on earth.
Principal among the ecclesiastical issues that separate the two churches is the meaning of papal primacy within any future unified church. The Orthodox insist that it should be a "primacy of honor '', as in the ancient church, and not a "primacy of authority '', whereas the Catholics see the pontiff 's role as requiring for its exercise power and authority the exact form of which is open to discussion with other Christians.
According to Orthodox belief, the test of catholicity is adherence to the authority of Scripture and then to the Holy Tradition of the church. It is not defined by adherence to any particular See. It is the position of the Orthodox Church that it has never accepted the pope as de jure leader of the entire church. All bishops are equal ' as Peter ', therefore every church under every bishop (consecrated in apostolic succession) is fully complete (the original meaning of catholic).
Referring to Ignatius of Antioch, Carlton says:
Contrary to popular opinion, the word catholic does not mean "universal ''; it means "whole, complete, lacking nothing. ''... Thus, to confess the Church to be catholic is to say that She possesses the fullness of the Christian faith. To say, however, that Orthodox and Rome constitute two lungs of the same Church is to deny that either Church separately is catholic in any meaningful sense of the term. This is not only contrary to the teaching of Orthodoxy, it is flatly contrary to the teaching of the Roman Catholic Church, which considered itself truly catholic.
The church is in the image of the Trinity and reflects the reality of the incarnation.
The body of Christ must always be equal with itself... The local church which manifests the body of Christ can not be subsumed into any larger organisation or collectivity which makes it more catholic and more in unity, for the simple reason that the principle of total catholicity and total unity is already intrinsic to it.
Any changes to the understanding of the church would reflect a change in the understanding of the Trinity.
From the perspective of the Catholic Church, the ecclesiological issues are the central issue, which is why it characterizes the split between the two churches as a schism. In its view, the Eastern Orthodox are very close to it in theology, and the Catholic Church does not consider the Orthodox beliefs to be heretical. However, from the perspective of Orthodox theologians, there are theological issues that run much deeper than just the theology around the primacy and / or supremacy of the Pope. In fact, unlike the Catholics, who do not generally consider the Orthodox heretical and speak instead about the Eastern "schism '', some prominent Orthodox theologians do consider the Catholic Church to be heretical on fundamental doctrinal issues of theology, such as the Filioque. These issues have a long history, as can be seen in the 11th - century works of the Orthodox theologian and saint Nikitas Stithatos.
In the Roman Catholic Church too, some writers can be found who speak pejoratively of the Eastern Orthodox Church and its theology, but these writers are marginal. The official view of the Catholic Church is that expressed in the decree Unitatis redintegratio of Vatican II:
In the study of revelation East and West have followed different methods, and have developed differently their understanding and confession of God 's truth. It is hardly surprising, then, if from time to time one tradition has come nearer to a full appreciation of some aspects of a mystery of revelation than the other, or has expressed it to better advantage. In such cases, these various theological expressions are to be considered often as mutually complementary rather than conflicting. Where the authentic theological traditions of the Eastern Church are concerned, we must recognize the admirable way in which they have their roots in Holy Scripture, and how they are nurtured and given expression in the life of the liturgy. They derive their strength too from the living tradition of the apostles and from the works of the Fathers and spiritual writers of the Eastern Churches. Thus they promote the right ordering of Christian life and, indeed, pave the way to a full vision of Christian truth.
Although the Western churches do not consider the Eastern and Western understanding of the Trinity to be radically different, Eastern theologians such as John S. Romanides and Michael Pomazansky argue that the Filioque clause is symptomatic of a fatal flaw in the Western understanding, which they attribute to the influence of Augustine and, by extension, to that of Thomas Aquinas.
Filioque, Latin for "and (from) the Son '', was added in Western Christianity to the Latin text of the Nicene - Constantinopolitan Creed, which also varies from the original Greek text in having the additional phrase Deum de Deo (God from God) and in using the singular "I believe '' (Latin, Credo, Greek Πιστεύω) instead of the original "We believe '' (Greek Πιστεύομεν), which Oriental Orthodoxy preserves. The Assyrian Church of the East, which is in communion neither with the Eastern Orthodox Church nor with Oriental Orthodoxy, also uses "We believe ''. Filioque states that the Holy Spirit proceeds from the Son as well as from the Father, a doctrine accepted by the Catholic Church, by Anglicanism and by Protestant churches in general. Christians of these groups generally include it when reciting the Nicene Creed. Nonetheless, these groups recognize that Filioque is not part of the original text established at the First Council of Constantinople in 381, and they do not demand that others too should use it when saying the Creed. Indeed, the Roman Catholic Church does not add the phrase corresponding to Filioque (καὶ τοῦ Υἱοῦ) to the Greek text of the Creed, even in the liturgy for Latin Rite Catholics.
At the 879 -- 880 Council of Constantinople the Eastern Orthodox Church anathematized the "Filioque '' phrase "as a novelty and augmentation of the Creed '', and in their 1848 encyclical the Eastern Patriarchs spoke of it as a heresy. It was qualified as such by some of the Eastern Orthodox Church 's saints, including Photios I of Constantinople, Mark of Ephesus, and Gregory Palamas, who have been called the Three Pillars of Orthodoxy.
The Eastern church believes that, by inserting the Filioque into the Creed unilaterally (without consulting or holding council with the East), the Western church broke communion with the East.
Orthodox theologians such as Vladimir Lossky criticize the focus of Western theology on God 's uncreated essence as misguided, alleging that it is a modalistic and therefore speculative expression of God, indicative of the Sabellian heresy. Orthodox theologian Michael Pomazansky argues that, in order for the Holy Spirit to proceed from the Father and the Son in the Creed, there would have to be two sources in the deity (double procession), whereas in the one God there can only be one source of divinity, which is the Father hypostasis of the Trinity, not God 's essence per se. In contrast, Bishop Kallistos Ware suggests that the problem is more in the area of semantics than of basic doctrinal differences.
Lossky, a noted modern Eastern Orthodox theologian, argues that the difference between East and West is due to the Roman Catholic Church 's use of pagan metaphysical philosophy (and scholasticism), rather than the actual experience of God called theoria, to validate the theological dogmas of Roman Catholic Christianity. For this reason, Lossky argues that Eastern Orthodox and Roman Catholics have become "different men ''. Other Eastern Orthodox theologians such as Romanides and Metropolitan Hierotheos of Nafpaktos have made similar pronouncements. According to the Orthodox teachings, theoria can be achieved through ascetic practices like hesychasm (see St John Climacus), which was condemned as a heresy by Barlaam of Seminara.
Orthodox theologians charge that, in contrast to Orthodox theology, western theology is based on philosophical discourse which reduces humanity and nature to cold mechanical concepts. Orthodox theologians argue that the mind (reason, rationality) is the focus of Western theology, whereas in Eastern theology, the mind must be put in the heart, so they are united into what is called nous, this unity as heart is the focus of Eastern Orthodox Christianity involving the unceasing Prayer of the heart.
In Orthodox theology, in the Eastern ascetic traditions, one of the goals of ascetic practice is to obtain sobriety of consciousness, or wakefulness (nepsis). For humankind this is reached in the healing of the whole person, called the soul or heart. When a person 's heart is reconciled with their mind, this is referred to as a healing of the nous or the "eye, focus of the heart or soul ''. Part of this process is the healing and reconciliation of humankind 's reason, called logos or dianoia, with the heart or soul. While mankind 's spirit and body are energies vivified by the soul, Orthodoxy teaches that man 's sin, suffering and sorrow are caused by his heart and mind being a duality and in conflict. According to Orthodox theology, lack of noetic understanding (sickness) can be neither circumvented nor satisfied by rational or discursive thought (i.e. systematization), and denying the needs of the human heart (a more Western expression would be the needs of the soul) causes various negative or destructive manifestations such as addiction, atheism and evil thoughts. A cleaned, healed or restored nous creates the condition of sobriety or nepsis of the mind.
Orthodox theologians assert that the theological division of East and West culminated in a direct theological conflict known as the Hesychasm controversy, during several councils at Constantinople (New Rome) between 1341 and 1351. They argue that this controversy highlighted the sharp contrast between what is embraced by the Roman Catholic Church as proper (or orthodox) theological dogma and how theology is validated, and what is considered valid theology by the Eastern Orthodox. The essence of the disagreement is that in the East a person can not be a genuine theologian, or teach knowledge of God, without having experienced God, which is defined as the vision of God (theoria). At the heart of the issue was the teaching of the Essence - Energies distinction (which states that while creation can never know God 's uncreated essence, it can know his uncreated energies) by Gregory Palamas.
The Eastern Orthodox do not accept Augustine 's teaching of original sin. His interpretation of ancestral sin is rejected in the East as well. Nor is Augustine 's teaching accepted in its totality in the West. The Roman Catholic Church rejects traducianism and affirms creationism. Its teaching on original sin is largely based on but not identical with that of Augustine, and is opposed to the interpretation of Augustine advanced by Martin Luther and John Calvin. Its teaching departs from Augustine 's ideas in some respects. The Eastern Church makes no use at all of Augustine. Another Orthodox view is expressed by Christos Yannaras, who described Augustine as "the fount of every distortion and alteration in the Church 's truth in the West ''.
What the Eastern Orthodox accept is that ancestral sin corrupted the existence (the bodies and environment) into which each person is born, and thus that we are born into a corrupted existence (by the ancestral sin of Adam and Eve) and that "original sin is hereditary. It did not remain only Adam and Eve 's. As life passes from them to all of their descendants, so does original sin. All of us participate in original sin because we are all descended from the same forefather, Adam. '' And the teaching of the Eastern Orthodox Church is that, as a result of Adam 's sin, "hereditary sin flowed to his posterity; so that everyone who is born after the flesh bears this burden, and experiences the fruits of it in this present world. ''
Similarly, what the Catholic Church holds is that the sin of Adam that we inherit, and for the remission of which even babies who have no personal sin are baptized, is called "sin '' only in an analogical sense, since it is not an act committed like the personal sin of Adam and Eve, but a fallen state contracted by the transmission of a human nature deprived of original holiness and justice.
Both East and West hold that each person is not called to atone for the actual sin committed by Adam and Eve.
According to the Western Church, "original sin does not have the character of a personal fault in any of Adam 's descendants '', and the Eastern Church teaches that "by these fruits and this burden we do not understand (actual) sin ''. The Orthodox and the Catholics believe that people inherit only the spiritual sickness (in which all suffer and sin) of Adam and Eve, caused by their ancestral sin (what has flowed to them), a sickness leaving them weakened in their powers, subject to ignorance, suffering and the domination of death, and inclined to sin.
The Catholic doctrine of the Immaculate Conception, which claims that God protected the Virgin Mary from original sin through no merit of her own, was dogmatically defined by Pope Pius IX in 1854. Instead, Orthodox theology proclaims that Mary was chosen to bear Christ, having first found favor of God by her purity and obedience.
Another point of theological contention between the western and eastern churches is the doctrine of purgatory (as was shown at the Second Council of Lyons and the Council of Ferrara -- Florence). It developed over time in western theology, according to which "all who die in God 's grace and friendship, but still imperfectly purified, are indeed assured of their eternal salvation; but after death they undergo purification, so as to achieve the holiness necessary to enter the joy of heaven. '' However, some eastern theologians, while agreeing that there is beyond death a state in which believers continue to be perfected and led to full divinization, consider that it is a state not of punishment but of growth; they hold that suffering can not purify sin, since they have a different view of sin and consider suffering a result of spiritual sickness. Western theology usually considers sin not only as a sickness that weakens and impedes, but also as something that merits punishment.
The Eastern Orthodox Church holds that "there is a state beyond death where believers continue to be perfected and led to full divinization ''. Although some Orthodox have described this intermediate state as purgatory, others distinguish it from aspects associated with it in the West: at the Council of Ferrara -- Florence, the Orthodox Bishop Mark of Ephesus argued that there are in it no purifying fires.
The traditional Orthodox teaching is that "those who reject Christ will face punishment. According to the Confession of Dositheus, persons go immediately to joy in Christ or to the torments of punishment ''.
In Orthodox doctrine there is no place without God. In eternity there is no hiding from God. In Catholic theology, God is present everywhere not only by his power but in himself. Hell is a state of self - selected separation from God.
Eastern theology considers the desire to sin to be the result of a spiritual sickness (caused by Adam and Eve 's pride), which needs to be cured. One such theologian gives his interpretation of Western theology as follows: "According to the holy Fathers of the Church, there is not an uncreated Paradise and a created Hell, as the Franco -- Latin tradition teaches ''. The eastern Church believes that hell or eternal damnation and heaven exist and are the same place, which is being with God, and that the very same divine love (God 's uncreated energies) which is a source of bliss and consolation for the righteous (because they love God, His love is heaven for them) is also a source of torment (or a "Lake of Fire '') for sinners (because they do n't love God, they will feel His love this way). The Western Church speaks of heaven and hell as states of existence rather than as places, while in Eastern Orthodoxy there is no hell per se; there is damnation or punishment in eternity for the rejection of God 's grace.
|
who got eliminated from super dancer 2 this week | Super Dancer - Wikipedia
Super Dancer is an Indian Hindi kids ' dance reality television series, which airs on Sony Entertainment Television and Sony Entertainment Television Asia. The winner of season 1 of this series is Ditya Bhande, and Bishal Sharma is the winner of Super Dancer Chapter 2. The series is produced by Ranjeet Thakur and Hemant Ruprell for their production house Frames Production.
The show Super Dancer - Dance Ka Kal aims to find a kid prodigy who has the potential to be the future of dance. The kids are aged between 4 and 13 years. They are not only required to have the 3Ds of dancing - Desire, Discipline and Determination - but also should be keen learners, adaptable to all dance styles and circumstances, and passionate dancers with a unique personality. This show is an ideal opportunity for every kid to hone their talent and dancing skills.
After the initial auditions and mega auditions, 12 Super Dancers are selected to compete for the title of Dance Ka Kal (future of dance). They are each paired with one choreographer (Super Guru) who has a unique style similar to theirs. These Super Gurus train them, choreograph acts for them and also perform with them. The Super Dancers perform on Saturdays and, along with their choreographers, on Sundays. The performances are voted on by the audience every week on the website or the SonyLiv app. On the basis of the number of votes, one kid is eliminated every week.
Five contestants - Laxman Kumbhar, Siddhanth Damedhar (both of whom were eliminated from the show earlier), Lakshay Sinha (who could n't perform earlier in the competition due to his injury), Tiyash Saha & Harsh Dhara (both standby contestants) - were called back on 30 October 2016 to gain a Power Card Entry into the competition. The judges chose Laxman Kumbhar & Siddhanth Damedhar, and they re-entered the competition.
Winner
Ditya Sagar Bhande from Mumbai won the first season of Super Dancer. Her choreographer was Ruel Dausan.
Three contestants - Vaishnavi Prajapati from Panipat with her Super Guru Manan, Akash Mitra from Patna with his Super Guru Rishikesh Jogdand, and Mishti Sinha from Ahmednagar with her Super Guru Palden Lama Mowroh - returned via Power Card Entry. With the top 9, voting started. For almost 6 weeks at a stretch, there was not a single elimination on the show. First, Arushi Saxena (known as the all - rounder of the show) left by normal elimination, giving the show its Super 8. Mishti Sinha then had to bid goodbye to the show due to a leg injury; doctors advised her to rest, and the show got its Super 7.
Muskan Sharma, termed the "robotic girl '' of the show, set a trend with her precise robotics. She pursued a distinctive and difficult dance form, winning many fans. She combined robotics with contemporary and salsa, which gave her a distinct look. Shilpa Shetty called her a trendsetter, as she inspired and emboldened many other girls to take up robotics. Her Super Guru Paul always tried to present a message in her dance. But, receiving fewer votes, she was eliminated at the Super 7 stage.
Akash Mitra was an endearing child on the show. He seemed to be Anurag Basu 's favorite, remained unfazed in all situations, and was termed "God Ka Favourite Bachcha ''. With his Super Guru Rishikesh, the 5 - year - old showcased his talent in many styles. He was often seen in a playful feud with his friend Vaishnavi. He was eliminated just before the top 5.
Vaishnavi is a talented 5 - year - old girl on the show who is easily amused. The judges were shocked when they came to know about her handicapped but willing mentor. She and her Super Guru Manan perform simple but effective acts on the stage without getting encumbered.
Moved by the plight of one of the contestants, Ritik Diwaker, Bollywood actor Varun Dhawan decided to sponsor the 11 - year - old boy 's education.
On the day of the grand finale, Ritik danced to a medley of songs, including "Bulleya '', "Dil Diyan Gallan '' and the title track of Dangal, and Varun was impressed. He had seen some of Ritik 's earlier acts too and was bowled over by his happy feet. When he learnt that Ritik 's father, Gaurishankar Diwaker, is unable to work as his left hand is nonfunctional, Varun decided to help the child as he did n't want his studies to suffer.
|
ranks of the british army in the revolutionary war | British army during the American Revolutionary war - wikipedia
The British Army during the American Revolutionary War served for eight years in campaigns fought around the globe. Defeat at the Siege of Yorktown to a combined Franco - US force ultimately led to the loss of the Thirteen Colonies in eastern North America, and the concluding Treaty of Paris deprived Britain of many of the gains achieved in the Seven Years ' War. However several victories elsewhere meant that much of the British Empire remained intact.
In 1775 the British Army was a volunteer force. The army had suffered from a lack of peacetime spending and ineffective recruitment in the decade since the Seven Years ' War, circumstances which had left it in a dilapidated state at the outbreak of war in North America. To offset this the British government quickly hired contingents of German mercenaries to serve as auxiliaries alongside the regular army units in campaigns from 1776. Limited army impressment was also introduced in England and Scotland to bolster recruitment in 1778; however, the practice proved too unpopular and was proscribed again in 1780.
The attrition of constant fighting, the inability of the Royal Navy to decisively defeat the French Navy, and the withdrawal of the majority of British forces from North America in 1778 ultimately led to the British army 's defeat. The surrender of Cornwallis 's army at Yorktown in 1781 allowed the Whig opposition to gain a majority in parliament, and British operations were brought to an end.
Britain had incurred a large national debt fighting the Seven Years ' War, during which the army 's establishment strength had been increased to an unprecedented size. With the advent of peace in 1763 the army was dramatically reduced to a peacetime home establishment of just over 11,000 men, with a further 10,000 for the Irish establishment and 10,000 for the colonies. This meant 20 regiments of infantry totaling just over 11,000 men were stationed in England, 21 regiments were stationed in Ireland, 18 regiments were stationed in the Americas, and 7 regiments were stationed in Gibraltar. Alongside this the army could call on 16 regiments of cavalry totaling 6,869 men, and 2,712 men in the artillery. This gave a theoretical strength of just over 45,000 men exclusive of the artillery. The British government deemed this troop strength to be inadequate to prosecute an insurrection in the Americas as well as defend the rest of its territories, so treaties with German states (mainly Hesse - Kassel and Brunswick) were negotiated for a further 18,000 men (half of whom were stationed in garrisons to release regular British units from other theaters). This measure brought the Army 's total establishment strength to around 55,000 men.
After the losses at the Battles of Saratoga and the outbreak of hostilities with France and Spain, the existing voluntary enlistment measures were judged to be insufficient. Between 1775 and 1781, the regular army increased from 48,000 to 121,000 men. In 1778 the army adopted some non-traditional recruiting measures to further augment its strength: a system of private subscription was established, whereby some 12 new regiments totaling 15,000 men were raised by individual towns and nobles. In the same year the government passed the first of two recruiting acts which allowed a limited form of impressment in parts of England and Scotland under strict conditions; however, the measure proved unpopular and both acts were repealed in May 1780, permanently discontinuing impressment in the army. The recruiting acts of 1778 and 1779 also provided greater incentives for voluntarily joining the regular army, including a bounty of £ 3 and the entitlement to discharge after three years unless the nation remained at war. Thousands of volunteer militia battalions were raised for home defense in Ireland and England, and some of the most competent of these were embodied into the regular army. The British Government took a further step by releasing criminals and debtors from prison on the condition that they joined the army. Three entire regiments were raised from this early release program.
In November 1778 the establishment was set at 121,000 men, of whom 24,000 were foreigners, along with 40,000 embodied militia. This was raised the next year to 104,000 men on the British establishment, 23,000 on the Irish establishment, 25,000 foreigners (the "Hessians ''), and 42,000 embodied militia, for a total force of about 194,000 men.
Although a large portion of the rank and file were lower class and the officers upper class, the army recruited from a variety of social backgrounds, both among the regular and officer ranks. According to Reid, the Georgian army through necessity drew its officers from a far wider base than its later Victorian counterpart and was much more open to promotion from the ranks. Officers were required to be literate, but there was no formal requirement on the level of education or their social standing, and most regimental officers did not come from the landed gentry, but from middle class private individuals in search of a career. Although the system of sale of commissions officially governed the selection and promotion of officers, in practice the system was considerably relaxed during wartime, with far more stringent requirements placed on promotion. Many British officers were professional soldiers rather than wealthy dilettantes and showed themselves ready to discard their drill manuals and use innovative methods and tactics.
The Commander - in - Chief, India formally held command over crown forces in the East Indies and the Commander - in - Chief, North America commanded crown forces in the Americas. However, the British Army had no formal command structure, so British commanders often worked on their own initiative during the war. The position of Commander - in - Chief of the Forces remained vacant until 1778 when it was given to Jeffery Amherst, 1st Baron Amherst who held it until the end of the war. However his role in advising the government on strategy was limited and Amherst found himself primarily occupied with the organisation of home forces to oppose the threatened invasion in 1779, and suppress the outbreak of severe anti-Catholic rioting in 1780.
The direction of the British war effort ultimately fell to the Secretary of State for the Colonies, George Germain, 1st Viscount Sackville. Despite holding no formal position in the army, he appointed or relieved generals, took care of provisions and supplies, and directed much of the strategic planning. While some historians argue Sackville carried out his role effectively, even brilliantly, others have argued he made several miscalculations and struggled to hold genuine authority over his subordinates in the army.
In 1776, there were 119 generals of various grades in the British Army. However, since generals never retired, perhaps a third of this number were too old or infirm to command in the field. Others were opposed to war against the colonists, or unwilling to serve for years in America. Sir William Howe, who was chosen to succeed Sir Thomas Gage as Commander in Chief in North America, was only 111th in seniority.
Gage and Howe had both served as light infantry commanders in America during the French and Indian War. However, Gage was blamed for underestimating the strength of republican sympathy and was relieved in 1776. Howe had the advantage of large numbers of reinforcements, and was the brother of Admiral Richard Howe, the Royal Navy 's commander in chief in America. The two brothers gained much success in 1776, but failed to destroy Washington 's Army. They also tried to initiate peace talks but these came to nothing.
In 1777, General John Burgoyne was allowed to mount an ambitious campaign southwards from Canada. After early success, he pushed ahead despite major supply difficulties, and was surrounded and forced to capitulate at Saratoga, an event which precipitated intervention by Britain 's European rivals. After Howe 's Philadelphia campaign in the same year failed to achieve decisive results, Howe was recalled and replaced by Sir Henry Clinton.
Clinton was regarded as one of the most studious and well - read experts on tactics and strategy. However, even before becoming commander in chief, he had been reluctant to succeed Howe. He took command when the widening of the war compelled him to relinquish troops to other theatres, and became embittered at the Government 's demands that he bring the war to a successful conclusion with fewer troops and resources than had been available to Howe. He repeatedly tried to resign, and quarrelled with the Navy 's commanders and his own subordinates.
While Clinton held New York, Lord Cornwallis conducted a largely separate campaign in the southern states. Cornwallis was one of the most aristocratic of the British generals who served in America, but had been dedicated to a military career since an early age, and insisted on sharing his soldiers ' hardships. After early victories, he was unable to destroy the American Continental armies opposing him or to raise substantial loyalist support. On Clinton 's orders, he tried to create a fortified enclave on the Chesapeake coast, but was cut off by a French fleet and forced to surrender at the Siege of Yorktown, which signalled the end of effective British attempts to retake America.
The final effective British commander in chief in America was Sir Guy Carleton, who had defended Quebec in 1775, but had been passed over in favour of Burgoyne in 1777 as a result of his perceived over-caution. As commander in chief, his main concern was to secure the safety of the many Loyalists and former slaves in the British enclave in New York.
Infantry formed the backbone of crown forces throughout the war. Two of the most heavily engaged infantry regiments, the 23rd and the 33rd, earned enduring reputations for their competence and professionalism in the field.
In the middle of the eighteenth century, the Army 's uniforms were highly elaborate, and manoeuvres were ponderous and slow, with "innumerable words of command. '' Experience of the conditions and terrain in North America during the French and Indian War prompted changes to its tactics and dress. In battle the redcoats usually formed in two ranks rather than three, to increase mobility and firepower. The Army further adapted this formation during the American Revolution by forming and fighting in looser ranks, a tactic that was known as "loose files and American scramble ''. Soldiers stood at a greater distance apart, and three ' orders ' were used to specify the distance to be expanded or contracted as necessary: "order '' (two intervals), "open order '' (four intervals), and "extended order '' (ten intervals). British infantry advanced at the ' trot ' and fought fluid battles primarily using the bayonet. Although this new formation increased the British army 's mobility and tactical flexibility, the abandonment of linear formation was later blamed by some British officers for defeats in the later stages of the war, such as the Battle of Cowpens, in which British troops engaged denser bodies of men deployed in successive lines.
The hired German regiments that joined Howe 's army in 1776 also adopted the two rank formation used by the British army, but retained the traditional close order system of fighting throughout the war.
In 1758 Thomas Gage (then a lieutenant colonel) had formed an experimental light infantry regiment known as the 80th Regiment of Light - Armed Foot, considered to be the first such unit to serve in the British army. Other officers, notably George Howe, the elder brother of William Howe, had adapted their regiments to serve as light infantry on their own initiative. On becoming commander - in - chief in North America in 1758, General Jeffery Amherst ordered every regiment to form light infantry companies from their ranks. The 80th Regiment was disbanded in 1764 and the other ad - hoc light infantry units were converted back to "line '' units, but infantry regiments retained their light companies until the mid-nineteenth century.
In 1771 - 72 the British army began implementing a new training scheme for light infantry companies. Much of the early training was found to be inadequate, with officers unsure how to use light companies. Many of the brightest young officers of light companies sought commissions elsewhere because being a "light - bob '' officer lacked social prestige. In 1772 General George Townshend, 1st Marquess Townshend wrote Instructions, and Training and Equipping of the new Light Companies, which was issued to regiments on the Irish establishment and offered a practical guide for training light companies, as well as guidance on tactics such as skirmishing in broken terrain when acting independently, in sections or in large groups. Townshend also introduced a new communication method for light infantry officers in command of loosely deployed, scattered troops: whistle signals rather than drums would indicate movements such as advance, retire, extend or contract. In 1774 William Howe wrote the Manual for Light Infantry Drill and formed an experimental light infantry battalion trained at Salisbury camp. This became the pattern for all regular light infantry serving in North America. Howe 's system differed in that it focused on the development of composite battalions of light infantry more suited to large - scale campaigning in North America, rather than on individual companies. On taking command in America, Howe ordered every regiment which had not already done so to form a company of light infantry. These men were generally hand - picked from the fittest and most proficient of the rank and file.
The light infantry companies of several regiments were usually combined in composite light infantry battalions. Similar composite battalions were often formed from the grenadier companies of line regiments. Grenadiers were historically chosen from the tallest soldiers, but as with light infantry companies, were often selected from among the most proficient soldiers in their parent units.
At the Battle of Vigie Point in 1778 a force of British infantry who were veterans of colonial fighting inflicted heavy casualties on a far larger force of regular French troops who advanced in columns. Clayton describes how "... the use of light infantry, well led by their officers and NCOs, was of key importance in advance as skirmishers fired on French columns from behind cover; when the French attempted to extend they were threatened with bayonet charge... and when the French advanced they fell back to prepare for further skirmishing and ambushes from all directions. '' Fortescue similarly describes the action: "Advancing in skirmish order and keeping themselves always under cover, the light companies maintained at close range the most destructive fire on the Heavy French columns... At last one of the enemy 's battalions fairly gave way and the light companies followed them to complete the rout with the bayonet ''.
Large numbers of scouts and skirmishers were also formed from loyalists and Native Americans. The renowned Robert Rogers formed the Queen 's Rangers, while his brother James Rogers led the King 's Rangers. Loyalist pioneer John Butler raised the provincial regiment known as Butler 's Rangers, who were heavily engaged in the Northern colonies, where they were accused of participating in massacres at Wyoming and Cherry Valley. The majority of Native Americans favoured the British cause, and Mohawk leader Joseph Brant commanded Iroquois and Loyalists in campaigns on the New York frontier. Colonel Thomas Brown led another group of King 's Rangers in the Southern colonies, defending East Florida from invasion, raiding the southern frontier and participating in the conquest of the southern colonies. Colonial Governor John Murray, 4th Earl of Dunmore raised a regiment composed entirely of freed slaves known as the Ethiopian Regiment, which served through the early skirmishes of the war.
The loyalist units were vital to the British primarily for their knowledge of local terrain. One of the most successful of these units was formed by an escaped slave, and veteran of the Ethiopian Regiment known as Colonel Tye, who led the so - called Black Brigade in numerous raids in New York and New Jersey, interrupting supply lines, capturing rebel officers, and killing suspected leaders. He died from wounds in 1780.
The standard uniform of the British army consisted of the traditional red coat with cocked hats, white breeches and black gaiters with leather knee caps. Hair was usually cut short or fixed in plaits at the top of the head. As the war progressed many line regiments replaced their cocked hats with slouch hats. The full "marching order '' a line infantryman was expected to carry on campaign was extensive, and British soldiers often dropped much of their equipment before battle. Soldiers were also issued with greatcoats to be worn in adverse conditions, which were often used as tents or blankets. Drummers usually wore colours reversed from those of their regiment; they carried the coat of arms of their colonel and wore mitre caps. Most German regiments wore dark blue coats, while cavalry and loyalists often wore green.
Grenadiers often wore bearskin headdress and usually carried cavalry sabres as a side arm. Light infantry were issued with short coats, without lace, with an ammunition box containing nine cartridges lined up in a row for easy access worn across the stomach rather than at the side. They did not use bayonets but carried naval boarding axes (which became known as tomahawks).
The most common infantry weapon was the Brown Bess used with a fixed bayonet. However, some of the light companies were issued with short - barrelled muskets or the Pattern 1776 Rifle. The British army also conducted limited experimental use of the breech - loading Ferguson Rifle, which proved too difficult to mass - produce to be used more extensively. Major Patrick Ferguson formed a small experimental company of riflemen armed with this weapon, but this was disbanded in 1778. In many instances, British forces relied on Jägers from among the German contingents to provide skirmishers armed with rifles.
British infantry regiments possessed two flags: the King 's Colour (the Union flag) and their regimental colour, which displayed the colour of the regiment 's facings. In 18th - and 19th - century warfare ' the colours ' often became a rallying point in the most bitter actions. Both regimental standards were highly regarded and a source of pride for each regiment. However, because of the tactical constraints in conducting the war and the adapted mode of fighting, it is likely that British regiments only used their colours for ceremonial purposes in America, particularly in the armies commanded by Howe and Cornwallis. In the early years of the war, however, the Hessians continued to carry their colours on campaign. Major - General Baron Friedrich Wilhelm von Lossberg wrote: "They (the British) have their colours with them only when quartered, while we carry them with us wherever the regiments go... the country is bad for fighting. Nothing worries me more than the colours, for the regiments can not stay together in an attack because of the many walls, swamps, and stone cliffs. The English can not lose their colours, for they do not carry them with them. '' During the Saratoga campaign Baroness Riedesel, the wife of a German officer, saved the colours of the Brunswick regiments by burning the staffs and hiding the flags in her mattress.
The harsh conditions of life in the army meant that discipline was severe. Crimes such as theft or desertion could result in hanging, and punishments such as lashings were administered publicly. Soldiers spent a great deal of time cleaning and preparing their clothing and equipment. Families were permitted to join soldiers in the field. Wives often washed, cooked, mended uniforms and served as nurses in time of battle or sickness. The army often suffered from poor discipline away from the battlefield; gambling and heavy drinking were common among all ranks. The distance between the colonies and the British Isles meant logistics were stretched to breaking point, with the army often running out of food and supplies in the field and forced to live off the land.
Training was rigorous; firing, bayonet drills, movements, physical exercise, marching and forming were all part of the daily regimen to prepare for campaigns.
During the course of the war the British army conducted large - scale mock battles at Warley and Coxheath camps in southern England. The primary motivation was preparation for the threatened invasion. By all accounts the camps were massive in scale, involving upwards of 18,000 men. One militia officer wrote to his friend in August 1778: "We are frequently marched out in considerable bodies to the heaths or commons adjacent, escorted by the artillery, where we go through various movements, manoeuvres and firings of a field of battle. In these expeditions, let me assure you, there is much fatigue, and no little danger... the most grand and beautiful imitations of action are daily presented to us; and believe me, the army in general are becoming greatly enamoured by war. '' The manoeuvres carried out at Warley camp were the subject of a painting by Philip James de Loutherbourg, known as Warley Camp: The Mock Attack, 1779. He also drew detailed illustrations of the uniforms of the light infantry and grenadiers present at the camp, which are considered some of the most accurate surviving illustrations of 18th - century British soldiers.
Cavalry played a smaller role in British armies than in other European armies of the same era. Britain possessed no armoured cuirassiers or heavy cavalry. British doctrine tended to favour the use of medium cavalry and light dragoons. The cavalry establishment consisted of three regiments of Household Cavalry, seven regiments of Dragoon Guards and six regiments of Light Dragoons. Several hundred officers and enlisted men of cavalry regiments which remained stationed in Britain volunteered for service in America and transferred to infantry regiments.
Because of the logistical limitations of campaigning in North America, cavalry played a limited role in the war. The transport of horses by ship was extremely difficult; most of the horses died during the long journey, and the ones that survived usually required several weeks to recuperate on landing. The British army primarily employed small numbers of light dragoons, who worked as scouts and were used extensively in irregular operations. One of the most successful of these units, the British Legion, combined light cavalry and light infantry and conducted raiding operations into enemy - held territory. The lack of cavalry had great tactical implications for how the war was fought: it meant that British forces could not fully exploit their victories when outmanoeuvring Continental armies at battles like Long Island and Brandywine. Without a large cavalry force to follow up the infantry, retreating American forces could often escape destruction.
Manpower problems at the outbreak of war led to the British government employing large numbers of German mercenaries, primarily recruited from Hesse - Cassel. Units were sent by Count William of Hesse - Hanau; Duke Charles I of Brunswick - Wolfenbüttel; Prince Frederick of Waldeck; Margrave Karl Alexander of Ansbach - Bayreuth; and Prince Frederick Augustus of Anhalt - Zerbst.
Approximately 9,000 Hessians arrived with Howe 's army in 1776 and served with British forces through the campaigns in New York and New Jersey. In all, 25,000 hired auxiliaries served with Britain in the various campaigns during the war.
The German units were found to be different in tactics and approach from the regular British troops. Many British officers regarded the German regiments as slow in mobility; therefore, British generals utilised them as heavy infantry. This was primarily because of the German officers ' reluctance to adopt loose formations. British Lieutenant William Hale commented on the limitations of the German tactical methods: "I believe them steady, but their slowness is of the greatest disadvantage in a country almost covered with woods, and against an enemy whose chief qualification is agility in running from fence to fence keeping up an irregular, but galling fire on troops who advance with the same pace as at their exercise... At Brandywine, when the first line formed, the Hessian Grenadiers were close to our rear, and began beating their march at the same time as us. From that minute we saw them no more until the action was over, and only one man of them was wounded, by a random shot which came over us. ''
The Hessians served in some capacity in most of the major battles of the war. Duke Karl I provided Great Britain with almost 4,000 foot soldiers and 350 dragoons under General Friedrich Adolf Riedesel. These soldiers were the majority of the German regulars under General John Burgoyne in the Saratoga campaign of 1777, and were generally referred to as "Brunswickers. '' The combined forces from Braunschweig and Hesse - Hanau accounted for nearly half of Burgoyne 's army.
The Jägers were greatly prized by British commanders; their skill in skirmishing and scouting meant they continued to serve in the Southern campaigns under Cornwallis until the end of the war.
Soldiers from Hanover also formed part of the garrisons at Gibraltar and Minorca, and two regiments participated in the Siege of Cuddalore.
Apart from mercenary troops, the Company army serving in India consisted of regular British troops alongside native Indian sepoys. Foreigners were also present in the regular British officer corps. The Swiss - born Major - General Augustine Prévost commanded the successful defense of Savannah in 1779. The former Jacobite officer Allan Maclean of Torloisk, who had previously held a commission in the Dutch service, was second in command during the successful defense of Quebec in 1775. Another Swiss - born officer, Frederick Haldimand, served as Governor of Quebec in the later stages of the war. Huguenots and exiled Corsicans also served in the ranks and among the officers.
"The rebels have done more in one night than my whole army would have done in a month. '' -- General Howe, March 5, 1776
British troops had been stationed in Boston since 1769 amid rising tensions between colonial subjects and the parliament in Great Britain. Fearing an impending insurrection, General Thomas Gage dispatched an expedition to remove gunpowder from the powder magazine in Massachusetts on 1 September 1774. The next year, on the night of April 18, 1775, General Gage sent a further 700 men to seize munitions stored by the colonial militia at Concord, resulting in the Battles of Lexington and Concord. The British troops stationed in Boston were inexperienced, and by the time the redcoats began the return march to Boston, several thousand militiamen had gathered along the road. A running battle ensued, and the British detachment suffered heavily before reaching Charlestown. The British army in Boston found itself under siege by thousands of colonial militia. On June 17, British forces now under the command of General William Howe attacked and seized the Charlestown peninsula in the Battle of Bunker Hill. Although Howe was successful in his objective, the British forces suffered heavy casualties in taking the position. Both sides remained at a stalemate until guns were placed on Dorchester Heights, at which point Howe 's position became untenable and the British abandoned Boston entirely.
After capturing Fort Ticonderoga, American forces under the command of General Richard Montgomery launched an invasion of British - controlled Canada. They besieged and captured Fort Saint - Jean, while another army moved on Montreal. However, they were defeated at the Battle of Quebec, and British forces under the command of General Guy Carleton launched a counter-invasion which drove the colonial forces from the province entirely and reached all the way to Lake Champlain, but fell short of recapturing Fort Ticonderoga.
"I can not too much commend Lord Cornwallis 's good services during this campaign, and particularly the ability and conduct he displayed in the pursuit of the enemy from Fort Lee to Trenton, a distance exceding eighty miles, in which he was well supported by the ardour of his corps, who cheerfully quitted their tents and heavy baggage as impediments to their march. '' -- General Howe, December 20, 1776
After withdrawing from Boston, Howe immediately began preparations to seize New York, which was considered the ' hinge ' of the colonies. In late August, 22,000 men (including 9,000 Hessians) were rapidly landed on Long Island using flat - bottomed boats; this would be the largest amphibious operation undertaken by the British army until the Normandy landings almost 200 years later. In the ensuing Battle of Long Island on August 27, 1776, the British outflanked the American positions, driving the Americans back to the Brooklyn Heights fortifications. General Howe, not wishing to risk the lives of his men in a bloody frontal assault, then began to lay siege works. The navy had failed to properly blockade the East River, which left an escape route open for Washington 's army; Washington fully exploited it, managing a nighttime retreat through his unguarded rear to Manhattan Island. British forces then fought a series of actions to consolidate control of Manhattan Island, culminating in the Battle of Fort Washington, which resulted in the capture of close to 3,000 Continental troops. Following the conquest of Manhattan, Howe ordered Charles Cornwallis to "clear the rebel troops from New Jersey without a major engagement, and to do it quickly before the weather changed. '' Cornwallis ' force drove Washington 's army entirely from New Jersey and across the Delaware River. However, in the pre-dawn hours of December 26, Washington crossed back into New Jersey and captured a garrison of Hessians at Trenton. Several days later, Washington outmaneuvered Cornwallis at Assunpink Creek and overwhelmed a British outpost at Princeton on January 3, 1777. Cornwallis rallied and again drove Washington away; however, the defeats showed that the British army had become too overstretched, and Howe abandoned most of his outposts in New Jersey.
"I fear it bears heavy on Burgoyne... If this campaign does not finish the war, I prophesy that there is an end of British dominion in America. '' -- General Henry Clinton, July, 1777
Following the failure of the New York and New Jersey campaign to bring about a decisive victory over the Americans, the British army adopted a radically new strategy. Two armies would invade from the north to capture Albany: one of 8,000 men (British and Germans) under the command of General John Burgoyne, and another of 1,000 men (British, German, Indian, Loyalist and Canadian) under Brigadier General Barry St. Leger, while a third army under the command of General Howe would advance from New York in support. Through poor co-ordination and unclear orders the plan failed. Howe believed that he could not support a northern army until the threat of Washington 's army had been dealt with, and moved on Philadelphia instead. The early stages of Burgoyne 's campaign met with success, capturing the forts Crown Point, Ticonderoga and Anne. However, part of his army was destroyed at Bennington. After winning a hard - fought battle at Freeman 's Farm, bought with heavy casualties, Burgoyne complained of the inexperience of his soldiers: his men were too impetuous and uncertain in their aim, and his troops remained in position to exchange volleys too long rather than switching to the bayonet. Following the battle he ordered the retraining of his army. Burgoyne did not want to lose the initiative and immediately prepared a second assault to puncture Gates ' army, scheduled for the following morning; however, his subordinate General Fraser advised him of the fatigued state of the British light infantry and grenadiers, and argued that a renewed assault following a further night 's rest would be carried out with greater vivacity. That night Burgoyne received word that Clinton would launch his own offensive. The news convinced Burgoyne to wait, believing that the American General Gates would be forced to commit part of his own force to oppose Clinton; however, Gates was being continually reinforced. Burgoyne launched a second attempt to break through the American lines early the following month, which failed at Bemis Heights with losses that his force could not sustain. Burgoyne was finally compelled to surrender after it had become clear he was surrounded. Burgoyne 's campaign tactics were greatly criticised: the composition of his force was disjointed, and his decision to overload his army with artillery (expecting a long siege) meant it could not advance rapidly enough through the difficult terrain, allowing the Americans too much time to gather an overwhelming force to oppose him. The defeat had far - reaching consequences, as the French (who had already been secretly supporting the colonists) decided to openly support the rebellion and eventually declared war on Britain in 1778.
"... I do not think that there exists a more select corps than that which General Howe has assembled here. I am too young and have seen too few different corps, to ask others to take my word; but old Hessian and old English officers who have served a long time, say that they have never seen such a corps in respect to quality... '' -- Captain Muenchhausen, June, 1777
While Burgoyne invaded from the North, Howe took an army of 15,000 men (including 3,500 Hessians) by sea to attack Philadelphia. Howe rapidly outflanked Washington at the Battle of Brandywine, but most of Washington 's army managed to escape destruction. After inconclusive skirmishing with Washington 's army at the Battle of the Clouds, a battalion of British light infantry were accused of committing a massacre in a surprise assault on a rebel camp at the Battle of Paoli. All remaining resistance to Howe was eliminated in this attack, and the rest of Howe 's army marched on the rebel capital unopposed. The capture of Philadelphia did not turn the war in Britain 's favour, and Burgoyne 's army was left isolated with only limited support from Sir Henry Clinton, who was responsible for defending New York. Howe remained garrisoned in Philadelphia with 9,000 troops. He came under heavy attack from Washington at the Battle of Germantown, but Washington was driven off. After an initial unsuccessful attempt, Howe eventually took the forts of Mifflin and Mercer. After probing Washington 's fortifications at the Battle of White Marsh, he returned to winter quarters. Howe resigned shortly afterwards, complaining that he had been inadequately supported. Command was given to Clinton who, after the French declaration of war, carried out orders to evacuate the British army from Philadelphia to New York. He did this with an overland march, fighting a large action at the Battle of Monmouth on the way.
In August 1778 a combined Franco - American attempt to drive British forces from Rhode Island failed. One year later an American expedition to drive British forces from Penobscot Bay also failed. In the same year Americans launched a successful expedition to drive Native Americans from the frontier of New York, and captured a British outpost in a nighttime raid. During this period the British army carried out a series of successful raiding operations, taking supplies, destroying military defenses, outposts, stores, munitions, barracks, shops and houses.
"Whenever the Rebel Army is said to have been cut to pieces it would be more consonant with truth to say that they have been dispersed, determined to join again... in the meantime they take oaths of allegiance, and live comfortably among us, to drain us of our monies, get acquainted with our numbers and learn our intentions. '' -- Brigadier General Charles O'Hara, March, 1781
The first major British operation in the Southern colonies occurred in 1776, when a force under General Henry Clinton unsuccessfully besieged the fort at Sullivan 's Island. In 1778 a British army of 3,000 men under Lieutenant Colonel Archibald Campbell successfully captured Savannah, beginning a campaign to bring the colony of Georgia under British control. A Franco - American attempt to retake Savannah in 1779 ended in failure. In 1780 the main British strategic focus turned to the south. British planners mistakenly believed a large base of loyalism existed in the southern colonies, and based plans on the flawed assumption that a large loyalist army could be raised to occupy the territories that had been pacified by regular British troops. In May 1780 an army of 11,000 men under the command of Henry Clinton and Charles Cornwallis captured Charleston, along with 5,000 men of the Continental army. Shortly afterwards Clinton returned to New York, leaving Cornwallis with a force of less than 4,000 men and instructions to secure control of the southern colonies. At first Cornwallis was successful, winning a lopsided victory at the Battle of Camden and sweeping most resistance aside. However, failing supplies and increasing partisan activity gradually wore down his occupying troops, and the destruction of a loyalist force under Major Ferguson at King 's Mountain all but ended any hopes of large - scale loyalist support. In January 1781 Tarleton 's cavalry force was destroyed at the Battle of Cowpens. Cornwallis then determined to destroy the Continental army under Nathanael Greene. Cornwallis invaded North Carolina and engaged in a pursuit over hundreds of miles that became known as the "Race to the Dan ''. Cornwallis 's ravaged army met Greene 's army at the Battle of Guilford Court House, and although Cornwallis was victorious he suffered heavy casualties. With little hope of reinforcements from Clinton, Cornwallis then decided to move out of North Carolina and invade Virginia. Meanwhile, Greene moved back into South Carolina and began attacking the British outposts there.
"If you can not relieve me very soon, you must prepare to hear the worst. '' -- General Charles Cornwallis, September 17, 1781.
In early 1781 the British army began conducting raids into Virginia. The former Continental army officer, Benedict Arnold, now a Brigadier of the British army, led a force with William Phillips raiding and destroying rebel supply bases. He later occupied Petersburg and fought a small action at Blandford.
On hearing the news that British forces were in Virginia, and believing that North Carolina could not be subdued unless its supply lines from Virginia were cut, Cornwallis decided to join forces with Phillips and Arnold. Cornwallis 's army fought a series of skirmishes with the rebel forces commanded by Lafayette before fortifying himself with his back to the sea, believing the Royal Navy could maintain supremacy over the Chesapeake Bay. He then sent requests to Clinton to be either resupplied or evacuated. The reinforcements took too long to arrive, and in September the French fleet successfully blockaded Cornwallis in Chesapeake Bay. Royal Navy Admiral Graves believed that the threat posed to New York was more critical and withdrew. Cornwallis then became surrounded by armies commanded by Washington and the French General Rochambeau. Outnumbered and with no avenue of relief or escape, Cornwallis was compelled to surrender his army.
In 1776, an American force captured the British island of Nassau. After the French entry into the war, numerous poorly defended British islands fell quickly. In December 1778 a force of veteran British troops under the command of General James Grant was landed on St. Lucia and successfully captured the high ground of the island. Three days later 9,000 French reinforcements were landed and attempted to assault the British position, but they were repulsed with heavy casualties. Despite this victory, numerous other British islands fell during the war. On 1 April 1779, Lord Germain instructed Grant to establish small garrisons throughout the West Indies; Grant believed this would be unwise and instead concentrated his defences to cover the major naval bases. He posted the 15th, 28th, and 55th Foot and 1,500 gunners at Saint Kitts. The 27th, 35th, and 49th Foot and 1,600 gunners defended Saint Lucia. Meanwhile, the royal dockyard at Antigua was held by an 800 - man garrison of the 40th and 60th Foot. Grant also reinforced the fleet with 925 soldiers. Although Britain lost other islands, his dispositions provided the basis for the British successes in the Caribbean during the final years of the war.
In 1778 British forces began attacking French enclaves in India, first capturing the French port of Pondicherry and then seizing the port of Mahé. The Mysorean ruler Hyder Ali, an important ally of France, declared war on Britain in 1780. Ali invaded the Carnatic with 80,000 men, laying siege to British forts, including Arcot. A British attempt to relieve the siege ended in disaster at Pollilur. Ali continued his sieges, taking fortresses, before another British force under General Eyre Coote defeated the Mysoreans at Porto Novo. Fighting continued until 1783, when the British captured Mangalore and the Treaty of Mangalore was signed, restoring both sides ' lands to the status quo ante bellum.
From 1779 the Governor of Spanish Louisiana Bernardo de Gálvez led a successful offensive to conquer British West Florida, culminating in the Siege of Pensacola in 1781.
Britain made two attempts to capture Spanish territory in Central America: in 1779 at the Battle of San Fernando de Omoa; and in 1780 in the San Juan Expedition. In both cases initial British military success was defeated by tropical diseases, with the 2,500 dead of the San Juan Expedition giving it the highest British death toll of the war.
Europe was the setting of three of the largest engagements of the entire war. The unsuccessful Franco - Spanish attempt to invade England, the unsuccessful attempt to capture Gibraltar, and the successful Franco - Spanish campaign to capture Minorca had by 1783 involved over 100,000 men and hundreds of guns and ships. In September 1782 the "Grand Assault '' on the besieged Gibraltar garrison took place, the largest single battle of the war, involving over 60,000 soldiers, sailors and marines. France also twice unsuccessfully attempted to capture the British channel island of Jersey, first in 1779 and again in 1781.
Following the Treaty of Paris, the British army began withdrawing from its remaining posts in the Thirteen Colonies. In mid-August 1783, General Guy Carleton began the evacuation of New York, informing the President of the Continental Congress that he was proceeding with the withdrawal of refugees, freed slaves and military personnel. More than 29,000 Loyalist refugees were evacuated from the city.
The British army was dramatically reduced again in peacetime. Morale and discipline became extremely poor, and troop levels fell. When the wars with France commenced again in 1793, its total strength stood at 40,000 men. In idleness the army again became riddled with corruption and inefficiency.
Many British officers returned from America with a belief in the superiority of the firearm and of formations adapted for a greater frontage of firepower. However, officers who had not served in America questioned whether the irregular and loose system of fighting which had become prevalent there was suitable for future campaigns against European powers. In 1788 the British army was reformed by General David Dundas, an officer who had not served in America. Dundas wrote many training manuals which were adopted by the army, the first of which was the Principles of Military Movements. He chose to ignore the light infantry and flank battalions the British army had come to rely on in North America. Instead, after witnessing Prussian army manoeuvres in Silesia in 1784, he pushed for drilled battalions of heavy infantry. He also pushed for uniformity in training, eliminating the ability of colonels to develop their own systems of training for their regiments. Charles Cornwallis, an experienced "American '' officer who witnessed the same manoeuvres in Prussia, wrote disparagingly: "their maneuvers were such as the worst general in England would be hooted at for practicing; two lines coming up within six yards of one another and firing until they had no ammunition left, nothing could be more ridiculous ''. The failure to formally absorb the tactical lessons of the American War of Independence contributed to the early difficulties experienced by the British army during the French Revolutionary Wars.
|
when is a medical student considered a doctor | Medical school - wikipedia
A medical school is a tertiary educational institution -- or part of such an institution -- that teaches medicine, and awards a professional degree for physicians and surgeons. Such medical degrees include the Bachelor of Medicine, Bachelor of Surgery (MBBS, MBChB, BMBS), Doctor of Medicine (MD), or Doctor of Osteopathic Medicine (DO). Many medical schools offer additional degrees, such as a Doctor of Philosophy, Master 's degree, a physician assistant program, or other post-secondary education.
Medical schools can also employ medical researchers and operate hospitals. Around the world, criteria, structure, teaching methodology, and nature of medical programs offered at medical schools vary considerably. Medical schools are often highly competitive, using standardized entrance examinations, as well as grade point average and leadership roles, to narrow the selection criteria for candidates. In most countries, the study of medicine is completed as an undergraduate degree not requiring prerequisite undergraduate coursework. However, an increasing number of places are emerging for graduate entrants who have completed an undergraduate degree including some required courses. In the United States and Canada, almost all medical degrees are second entry degrees, and require several years of previous study at the university level.
Medical degrees are awarded to medical students after the completion of their degree program, which typically lasts five or more years for the undergraduate model and four years for the graduate model. Many modern medical schools integrate clinical education with basic sciences from the beginning of the curriculum. More traditional curricula are usually divided into preclinical and clinical blocks. In preclinical sciences, students study subjects such as biochemistry, genetics, pharmacology, pathology, anatomy, physiology and medical microbiology, among others. Subsequent clinical rotations usually include internal medicine, general surgery, pediatrics, psychiatry, and obstetrics and gynecology, among others.
Although medical schools confer upon graduates a medical degree, a physician typically may not legally practice medicine until licensed by the local government authority. Licensing may also require passing a test, undergoing a criminal background check, checking references, paying a fee, and undergoing several years of postgraduate training. Medical schools are regulated by each country and appear in the World Directory of Medical Schools which was formed by the merger of the AVICENNA Directory for medicine and the FAIMER International Medical Education Directory.
By 2005 there were more than 100 medical schools across Africa, most of which had been established after 1970.
There are seven medical schools in Ghana: the University of Ghana Medical School in Accra, the KNUST School of Medical Sciences in Kumasi, the University for Development Studies School of Medicine in Tamale, the University of Cape Coast Medical School, the University of Allied Health Sciences in Ho, Volta Region, the Accra College of Medicine (the leading private medical school in Ghana), and the Family Health Medical School, another private medical school.
Basic medical education lasts 6 years in all the medical schools. Entry into these medical schools is highly competitive and is usually based on successful completion of the Senior High School Examinations. The University of Ghana Medical School has, however, introduced a graduate - entry medical program to admit students with mainly science - related degrees into a 4 - year medical school program.
Students graduating from any of these medical schools receive the MBChB degree and the title "Dr ''. After the first 3 years, students are awarded a BSc in Medical Science at the University of Ghana Medical School, and in Human Biology at the KNUST and UDS medical schools. The University of Ghana Medical School and the KNUST School of Medical Sciences in Kumasi use the traditional medical education model, while the University for Development Studies School of Medicine uses the problem - based learning model.
Medical graduates are then registered provisionally with the Medical and Dental Council (MDC) of Ghana as House Officers (Interns). Upon completion of the mandatory 2 - year housemanship, these medical doctors are permanently registered with the MDC and can practice as medical officers (general practitioners) anywhere in the country. The housemanship training is done only in hospitals accredited for that purpose by the Medical and Dental Council of Ghana.
Following permanent registration with the Medical and Dental Council, doctors can specialize in any of the various fields organized by either the West African College of Physicians and Surgeons or the Ghana College of Physicians and Surgeons.
Medical officers are also sometimes hired by the Ghana Health Service to work in the Districts / Rural areas as Primary Care Physicians.
In Kenya, medical school is a faculty of a university. Medical education lasts for 5 years after which the student graduates with an undergraduate (MBChB) degree. This is followed by a mandatory 12 - month full - time internship at an approved hospital after which one applies for registration with the Kenya Medical Practitioners and Dentists Board if they intend to practice medicine in the country. The first two years of medical school cover the basic medical (preclinical) sciences while the last four years are focused on the clinical sciences and internship.
There are no medical school entry examinations or interviews and admission is based on students ' performance in the high school exit examination (Kenya Certificate of Secondary Education - KCSE). Students who took the AS Level or the SAT can also apply but there is a very strict quota limiting the number of students that get accepted into public universities. This quota does not apply to private universities.
There are four established public medical schools, including those at the University of Nairobi and Moi University.
Both Nairobi and Moi Universities run postgraduate medical training programs lasting three years that lead to the award of the Master of Medicine (MMed) in the respective specialty.
Progress has been made by the Aga Khan University in Karachi, Pakistan and the Aga Khan University Hospital (AKUH) in Nairobi towards the establishment of a health sciences university in Kenya with an associated medical school. AKUH in Nairobi already offers postgraduate MMed programmes, which run over 4 years.
Completion of formal specialty training in Kenya is followed by two years of supervised clinical work before one can apply for recognition as a specialist, in their respective field, by the medical board.
There are several medical schools in Nigeria. Entrance into these schools is highly competitive. Candidates graduating from high school must attain high scores on the West African Examination Council 's (WAEC) Senior School Certificate Exam (SSCE / GCE) and high scores in four subjects (Physics, English, Chemistry, and Biology) in the University Matriculation Examination (UME). Students undergo rigorous training for 6 years, culminating in a Bachelor of Medicine and Bachelor of Surgery (MBBS / MBChB). The undergraduate program is six years, plus one year of work experience in government hospitals. After medical school, graduates are mandated to spend one year of housemanship (internship) and one year of community service before they are eligible for full registration.
There are eight medical schools in South Africa, each under the auspices of a public university. As the country is a former British colony, most of the institutions follow the British - based undergraduate method of instruction, admitting students directly from high school into a six - or occasionally five - year program. Some universities, such as the University of the Witwatersrand in Johannesburg and the University of Cape Town, have started offering post-graduate medical degrees that run concurrently with their undergraduate programs. In this instance, a student having completed an appropriate undergraduate degree with basic sciences can enter into a four - year postgraduate program.
South African medical schools award the MBChB degree, except the University of the Witwatersrand, which styles its degree MBBCh. Some universities allow students to earn an intercalated degree, completing a BSc (Medical) with an additional year of study after the second or third year of the MBChB. The University of Cape Town, in particular, has spearheaded a recent effort to increase the level of medical research training and exposure of medical students through an Intercalated Honours Programme, with the option to extend this to a PhD.
Following successful completion of study, all South African medical graduates must complete a two - year internship as well as a further year of community service in order to register with the Health Professions Council and practice as a doctor in the country.
Specialisation is usually a five - to seven - year training process (depending on the specialty), requiring registration as a medical registrar attached to an academic clinical department in a large teaching hospital, with appropriate examinations. The specialist qualification may be conferred as a Fellowship by the independent Colleges of Medicine of South Africa (CMSA), following British tradition, or as a Magisterial degree by the university (usually the M Med, Master of Medicine, degree). The medical schools and the CMSA also offer Higher Diplomas in many fields. Research degrees are the M. Med and Ph. D. or M.D., depending on the university.
Medical students from all over the world come to South Africa to gain practical experience in the country 's many teaching hospitals and rural clinics. The language of instruction is English, but a few indigenous languages are studied briefly. The University of the Free State has a parallel - medium policy, meaning all English classes are also presented in Afrikaans; therefore, students who choose to study in Afrikaans do so separately from the English class.
In Sudan, medical school is a faculty of a university. Medical school is usually 6 years, and by the end of the 6 years the student acquires a bachelor 's degree of Medicine and Surgery. After graduating, there is a mandatory one - year full - time internship at one of the university or government teaching hospitals, after which a license is issued.
During the first three years the curriculum is completed, and throughout the next three years it is repeated with practical training. Students with high grades are accepted for free in government universities. Students who score below the required grade have to pay tuition, and must still achieve a high grade. Students who take foreign examinations other than the Sudanese High School Examination, such as the IGCSE, the SAT, or the Saudi Arabian examination, are also accepted in universities.
In Tunisia, education is free for all Tunisian citizens and for foreigners who have scholarships. The oldest medical school is a faculty of the University of Tunis. There are four medicine faculties, situated in the major cities of Tunis, Sfax, Sousse and Monastir. Admission is bound to the success and score in the baccalaureate examination. The admission score threshold is very high, based on competition among all applicants throughout the nation. The medical school curriculum consists of five years. The first two years are medical theory, containing all basic sciences related to medicine, and the last three years consist of clinical issues related to all medical specialties. During these last three years, the student gets the status of "Externe ''. The student has to attend at the university hospital every day, rotating around all wards. Every period is followed by a clinical exam regarding the student 's knowledge in that particular specialty. After those five years, there are two years of internship, in which the student is a physician but under the supervision of the chief doctor; the student rotates over the major and most essential specialties during periods of four months each. After that, the student has the choice of either passing the residency national exam or extending his internship for another year, after which he gains the status of family physician. The residency program consists of four to five years in the specialty for which he qualifies, depending on his score in the national residency examination, under the rule that the highest score chooses first. Whether the student chooses to be a family doctor or a specialist, he has to write a doctoral thesis, which he defends before a jury, after which he gains his degree of Doctor of Medicine (MD).
As of April 2017, there are nine accredited medical schools in Uganda. Training leading to the award of the degree of Bachelor of Medicine and Bachelor of Surgery (MBChB) lasts five years, if there are no re-takes. After graduating, a year of internship in a hospital designated for that purpose, under the supervision of a specialist in that discipline is required before an unrestricted license to practice medicine and surgery is granted by the Uganda Medical and Dental Practitioners Council (UMDPC).
Postgraduate training is available, such as the degree of Master of Medicine (MMed), a three - year programme offered at Makerere University School of Medicine in several disciplines. Makerere University School of Public Health offers the degree of Master of Public Health (MPH) following a twenty - two (22) - month period of study, which includes field work.
In Zimbabwe there are three medical schools offering medical degrees. For undergraduates, these are the University of Zimbabwe College of Health Sciences (MBChB), the National University of Science and Technology (NUST) Medical School (MBBS) and Midlands State University (MSU) (MBChB). Only UZ offers postgraduate degrees in the medical faculty.
Training lasts five and a half years.
Internship is 2 years in duration, with the first year spent in medicine and surgery and the second year in pediatrics, anesthesia / psychiatry, and obstetrics and gynecology. Thereafter one can apply for the MMED at the university, which lasts 4 -- 5 years depending on the specialty. Currently no subspecialist education is available.
Medical degree programs in Argentina typically are six years long, with some universities opting for 7 - year programs. Each one of the 3000 medical students who graduate each year in Argentina is required before graduation to dedicate a minimum of 8 months to community service without pay, although in some provinces (especially around the more developed south) there are government - funded hospitals that pay for this work. Some universities have cultural exchange programmes that allow a medical student in their final year to serve their community time overseas.
Upon graduation, one of the following degrees is obtained, according to the university: Doctor of Medicine, or both Doctor of Medicine and Doctor of Surgery. Public universities usually confer both degrees, and private universities bestow only Doctor of Medicine. In daily practice, however, there is no substantial difference between what a Doctor of Medicine or a Doctor of Medicine and Doctor of Surgery are allowed to do. When the degree is obtained, a record is created for that new doctor in the index of the National Ministry of Education (Ministerio Nacional de Educación) and the physician is given their corresponding medical practitioner 's ID, which is a number that identifies him and his academic achievements. In addition, there is a provincial ID, i.e. a number to identify doctors in the province they practise medicine in.
Doctors wishing to pursue a speciality must take entrance exams at the public / private institution of their choice that offers them. It is easier for students in private Medical Schools to obtain a residency in a Private Hospital, especially when the university has its own hospital, as the university holds positions specifically for its graduates. Speciality courses last about two to five years, depending on the branch of medicine the physician has chosen. There is no legal limit for the number of specialities a doctor can learn, although most doctors choose to do one and then they sub-specialise for further job opportunities and less overall competition, along with higher wages.
In Argentina there are public and private medical schools; however, the prestige of the public institutions is undeniable, and the private institutions do not normally appear in international rankings. A person who can afford to attend a private university, quite expensive for the average Argentinian, will choose that option over public education because of the smaller groups of students in each class and because of the lack of strictness in course evaluation. By law, entrance into public institutions is open and tuition - free to all who have a high school diploma, and universities are expressly forbidden from restricting access with difficult entrance exams. Case in point: in 2016 the Universidad Nacional de La Plata was obligated by the governing bodies to stop requiring its students to write an entrance exam. As a result, that university experienced a major increase in the size of its student population. When it comes to educational quality, the Universidad de Buenos Aires, a public university, is widely recognised as the top medical school in the country.
In Bolivia, all medical schools are faculties within a university and offer a five - year M.D. equivalent. To acquire a license from the government to practice medicine, all students must also complete 1 year and 3 months of internship. This consists of 3 months each of surgery, internal medicine, gynecology, pediatrics and public health. At least one of the internships must be done in a rural area of the country. After getting the degree and license, a doctor may take a post-graduate residency in order to acquire a specialty.
The Brazilian medical schools follow the European model of a six - year curriculum, divided into three cycles of two years each. The first two years are called the basic cycle (ciclo básico). During this time students are instructed in the basic sciences (anatomy, physiology, pharmacology, immunology etc.), with activities integrated with the medical specialties, allowing the student an overview of their practical application. After its completion, students advance to the clinical cycle (ciclo clínico). At this stage contact with patients intensifies, and students work with tests and diagnostics, putting into practice what was learned in the first two years. The last two years are called the internship cycle (ciclo do internato). In this last stage students focus on clinical practice, through training in teaching hospitals and clinics. Teaching in this stage follows an axis of increasing complexity, enabling students to make decisions and participate effectively in patient care under the direct supervision of faculty and qualified physicians. The internship also develops the ethical and humanistic dimensions of care, leading the student to recognize the values and principles that guide the physician - patient relationship.
After six years of training, students graduate and are awarded the title of physician (Médico), allowing them to register with the Regional Council of Medicine (Conselho Regional de Medicina). The recent graduate may exercise the medical profession as a general practitioner and may apply to undertake postgraduate training. In 2012, the Regional Council of Medicine of São Paulo (Conselho Regional de Medicina do Estado de São Paulo) established that physicians graduating from that year onward must sit an examination to obtain professional registration; obtaining registration, however, is not conditional on passing the exam, only on the candidate 's presence and completion of the test. At the national level, a bill pending in the Senate would create the National Proficiency Examination in Medicine (Exame Nacional de Proficiência em Medicina), which would make the examination a prerequisite for practising the profession.
Physicians who want to join a specialization program must pass a new selection examination, considered as competitive as the one required to enter medical school. Residents work in health institutions under the guidance of medical professionals of high ethical and professional standing. The specialization programs are divided into two categories: direct access and prerequisite. The direct - access specialties are those in which the doctor can enroll without having any prior specialty; any physician can apply to the examinations for these specialties, regardless of length of training or prior experience. To apply for a prerequisite specialty, the doctor must already have completed a prior specialty. The programs may range from 2 to 6 years. The Federal Council of Medicine, the Brazilian Medical Association and the National Commission of Medical Residency currently recognize 53 residency programs in Brazil. Full completion confers the title of specialist.
In 2013, the Association of American Medical Colleges lists 17 accredited MD - granting medical schools in Canada.
In Canada, a medical school is a faculty or school of a university that offers a three - or four - year Doctor of Medicine (M.D. or M.D.C.M.) degree. Generally, medical students begin their studies after receiving a bachelor 's degree in another field, often one of the biological sciences. However, admittance can still be granted during the third or fourth year of an undergraduate degree. Minimum requirements for admission vary by region from two to four years of post-secondary study. The Association of Faculties of Medicine of Canada publishes a detailed guide to admission requirements of Canadian faculties of medicine (AFMC.ca) on a yearly basis.
Admission offers are made by individual medical schools, generally on the basis of a personal statement, undergraduate record (GPA), scores on the Medical College Admission Test (MCAT), and interviews. Volunteer work is often an important criterion considered by admission committees. All four medical schools in Quebec and two Ontario schools (the University of Ottawa and the Northern Ontario School of Medicine) do not require the MCAT. McMaster requires applicants to write the MCAT but considers only the verbal reasoning score, looking for a 6 or better.
The first half of the medical curriculum is dedicated mostly to teaching the basic sciences relevant to medicine. Teaching methods can include traditional lectures, problem - based learning, laboratory sessions, simulated patient sessions, and limited clinical experiences. The remainder of medical school is spent in clerkship. Clinical clerks participate in the day - to - day management of patients. They are supervised and taught during this clinical experience by residents and fully licensed staff physicians.
Students enter into the Canadian Resident Matching Service, commonly abbreviated as CaRMS in the fall of their final year. Students rank their preferences of hospitals and specialties. A computerized matching system determines placement for residency positions. ' Match Day ' usually occurs in March, a few months before graduation. The length of post-graduate training varies with choice of specialty.
During the final year of medical school, students complete part 1 of the Medical Council of Canada Qualifying Examination (MCCQE). Upon completion of the final year of medical school, students are awarded the degree of M.D. Students then begin training in the residency program designated to them by CaRMS. Part 2 of the MCCQE, an Objective Structured Clinical Examination, is taken following completion of twelve months of residency training. After both parts of the MCCQE are successfully completed, the resident becomes a Licentiate of the Medical Council of Canada. However, in order to practice independently, the resident must complete the residency program and take a board examination pertinent to his or her intended scope of practice. In the final year of residency training, residents take an exam administered by either the College of Family Physicians of Canada or the Royal College of Physicians and Surgeons of Canada, depending on whether they are seeking certification in family medicine or another specialty.
In 2011, the International Medical Education Directory listed 59 current medical schools in the Caribbean. 54 grant the MD degree, 3 grant the MBBS degree, and 2 grant either the MD or MBBS degree.
30 of the medical schools in the Caribbean are regional, which train students to practice in the country or region where the school is located. The remaining 29 Caribbean medical schools are known as offshore schools, which primarily train students from the United States and Canada who intend to return home for residency and clinical practice after graduation. At most offshore schools, basic sciences are completed in the Caribbean while clinical clerkships are completed at teaching hospitals in the United States.
Several agencies may also accredit Caribbean medical schools, as listed in the FAIMER Directory of Organizations that Recognize/Accredit Medical Schools (DORA). 25 of the 30 regional medical schools in the Caribbean are accredited, while 14 of the 29 offshore medical schools are accredited.
As of 2015, Curaçao has five medical schools, with a sixth medical university under construction; the majority are located in the city of Willemstad. As of 2016, the medical schools on the island provide education only in the basic medical sciences (BMS) toward the degree of Doctor of Medicine, and none offers other degrees such as the MBBS or PhD. After completing a school's basic medical science program in Curaçao, students must apply to take the USMLE Step examinations or the Canadian or UK board examinations. A large percentage of the students attending these schools come from North America, Africa, Europe, or Asia.
In Chile, there are 21 medical schools; the principal ones are the Pontificia Universidad Católica de Chile in Santiago, the Universidad de Chile, the Universidad de Concepción, and the Universidad de Santiago de Chile. Undergraduate studies span seven years, of which the last two are the internship, covering at least surgery, internal medicine, gynecology, and pediatrics. After obtaining the degree of Licentiate in Medicine (general medicine), the physician must pass a national medical knowledge examination, the EUNACOM ("Examen Único Nacional de Conocimientos de Medicina"), and may then either enter a specialty directly or first work in primary care in order to gain access to a residency.
In Colombia, there are 50 medical schools listed in the World Directory of Medical Schools, 27 of which have active programs and are currently registered and accredited as high-quality programs by the Colombian Ministry of Education. The main medical programs are offered by the Universidad Nacional de Colombia, Pontificia Universidad Javeriana, Universidad del Rosario, Universidad El Bosque, Universidad de los Andes, Universidad del Valle, Universidad de Antioquia, and Universidad de la Sabana. Most programs require between six and seven years of study, and all offer a Doctor of Medicine (MD) degree. In some cases the school also allows a second degree to be studied for at the same time (a choice made by the student, who typically must alternate semesters between the two degrees, most often in fields such as microbiology or biomedical engineering). For example, the Universidad de los Andes has a program whereby the medical student can graduate with both an MD and a Master of Business Administration (MBA) degree, or an MD and a master's degree in public health. Admission to medical school varies with the school, but usually depends on a combination of a general application to the university, an entrance exam, a personal statement or interview, and secondary (high) school performance, mostly as reflected in the ICFES score (the grade received on the state exam in the final year of secondary school).
In most medical programs, the first two years deal with basic scientific courses (cellular and molecular biology, chemistry, organic chemistry, mathematics, and physics), and the core medical sciences (anatomy, embryology, histology, physiology, and biochemistry). The following year may change in how it is organized in different schools, but is usually organ system - based pathophysiology and therapeutics (general and systems pathology, pharmacology, microbiology, parasitology, immunology, and medical genetics are also taught in this block). In the first two years, the programs also usually begin the courses in the epidemiology track (which may or may not include biostatistics), a clinical skills track (semiology and the clinical examination), a social medicine / public health track, and a medical ethics and communication skills track. Modes of training vary, but are usually based on lectures, simulations, standardized - patient sessions, problem - based learning sessions, seminars, and observational clinical experiences. By year three, most schools have begun the non-elective, clinical - rotation block with accompanying academic courses (these include but are not limited to internal medicine, pediatrics, general surgery, anaesthesiology, orthopaedics, gynaecology and obstetrics, emergency medicine, neurology, psychiatry, oncology, urology, physical medicine and rehabilitation, ophthalmology, and otorhinolaryngology). Elective rotations are usually introduced in the fourth or fifth year, though as in the case of the non-elective rotations, the hospitals the medical students may be placed in or apply to for a given rotation depend entirely on the medical schools. This is important in terms of the medical training, given the particular distinction of patients, pathologies, procedures, and skills seen and learned in private vs. public hospitals in Colombia. Most schools, however, have placements in both types of hospitals for many specialties.
The final year of medical school in Colombia is referred to as the internship year ("internado"), usually divided into two semesters. The first semester is made up of obligatory rotations that every student completes, though in different orders; the medical intern serves in five to seven different specialties, typically including internal medicine, paediatrics, general surgery, anaesthesiology, orthopaedics, gynaecology and obstetrics, and emergency medicine. The extent of the intern's responsibilities varies with the hospital, as does the level of supervision and teaching, but generally medical interns in Colombia take, write, and review clinical histories, answer and discuss referrals with their seniors, write daily progress notes for the patients under their charge, participate in service rounds, present and discuss patients at rounds, serve shifts, assist in surgical procedures, and assist in general administrative tasks. They are sometimes charged with ordering diagnostic testing, but under Colombian law they cannot prescribe medication, as they are not graduate physicians. These duties are, of course, completed in addition to their academic responsibilities. The second semester is made up of elective rotations, which can be at home or abroad, in the form of clerkships or observerships. A final graduation requirement is to sit a standardized exam specific to medicine, the State Exam for Quality in Higher Education ("Examen de Estado de Calidad de la Educación Superior" or ECAES, also known as SABER PRO), which tests, for example, knowledge in public health and primary care.
After graduation, the physician is required to register with the Colombian Ministry of Health in order to complete a year of obligatory social service ("servicio social obligatorio"), after which they qualify for a professional license to practice general medicine and may apply for a medical residency within Colombia. If, however, the graduate wishes to practice general medicine abroad or continue on to postgraduate studies, they can independently begin the appropriate application or equivalency process without doing the obligatory social service; in that case they are not licensed to practice medicine in Colombia, and if they later wish to do so, they must register with the Ministry of Health. N.B. If the graduate physician is accepted immediately into a Colombian residency in internal medicine, paediatrics, family medicine, gynecology and obstetrics, general surgery, or anaesthesiology, they are allowed to complete a six-month social service after their residency.
In contrast with most countries, residencies in Colombia are not paid positions: one applies for the program through the university offering the post, which charges tuition. However, on 9 May 2017, legislation was formally introduced in Congress that would seek to regulate payment for medical residents, regulate their tuition, and advocate for their vacation time and working hours. As in other countries, the length of residency training depends upon the specialty chosen; following its completion, the physician may choose to apply for a fellowship (subspecialty) at home or abroad, depending on the availability of the desired training programs, or practice in their specialty.
The Universidad de El Salvador (University of El Salvador) has an eight-year program for students who want to study medicine. The first six years are organized into two semesters each; the seventh year is a rotating internship through the major specialty areas in ten-week periods (psychiatry and public health share a period); and the eighth year is designated for social service in locations approved by the Ministry of Health (usually as attending physician in community health centers or non-profit organizations). Graduates receive the degree of MD and must register with the Public Health Superior Council (CSSP) to obtain the medical license and a registered national number that allows them to prescribe barbiturates and other controlled drugs. To pursue further studies (surgery, internal medicine, gynecology/obstetrics, pediatrics, psychiatry), students in the social service year or graduates of any Salvadorian university must apply independently for residency at the hospital of their choice; the preliminary selection process is based on the results of clinical knowledge tests, followed by psychiatric evaluations and interviews with the hospital's medical and administrative staff. The basic residencies mentioned above commonly last three years; in the last trimester of the third year, residents can apply for the position of chief of residents (one year) or continue as residents (three years) in a further specialty (for example: orthopedic surgery, urology, neurology, endocrinology, etc.). No further studies are offered to date; therefore, specialists seeking training or practice in a specific area (for example, a neurosurgeon seeking subspecialty training in endovascular neurosurgery, spine surgery, or pediatric neurosurgery) must pursue such studies in other countries and apply for those positions independently.
In Guyana, medical schools are accredited by the National Accreditation Council of Guyana. Medical programs range from four to six years. Students are taught the basic sciences portion of the program within the first two years of medical school. In the clinical sciences portion, students are introduced to the hospital setting, where they gain hands-on training from qualified physicians and staff at the various teaching hospitals across Guyana.
Students graduating from the University of Guyana are not required to sit board exams before practicing medicine in Guyana. Students graduating from the American International School of Medicine sit the USMLE, PLAB, or CAMC exams.
Medical schools in Haiti conduct training in French. The universities offering medical training in Haiti are the Université Notre Dame d'Haïti, Université Quisqueya, Université d'Etat d'Haïti and Université Lumière.
The Université Notre Dame d'Haïti (UNDH) is a private Catholic university established by the Episcopal Conference of Haiti. According to the UNDH website, "the UNDH is not just about academic degrees; it is mainly about forming a new type of Haitian, one whose culture includes the moral values of the Gospel, essential for the serious and honest people that the country needs today."
The other two private schools offering medical degrees are Université Quisqueya and Université Lumière. The Université d'Etat d'Haïti is a public school.
Attending medical school in Haiti may be less expensive than attending medical universities located in other parts of the world, but the impact of the country 's political unrest should be considered, as it affects the safety of both visitors and Haitians.
Duration of basic medical degree course, including practical training: 6 years
Title of degree awarded: Docteur en Médecine (Doctor of Medicine)
Medical registration / license to practice: Registration is obligatory with the Ministère de la Santé publique et de la Population, Palais des Ministères, Port - au - Prince. The license to practice medicine is granted to medical graduates who have completed 1 year of social service. Those who have qualified abroad must have their degree validated by the Faculty of Medicine in Haiti. Foreigners require special authorization to practice.
The system of medical education in Panama usually takes students from high school directly into medical school for a six-year course, typically with a two-year internship.
In 2012, the Association of American Medical Colleges and American Association of Colleges of Osteopathic Medicine listed 141 accredited M.D. - granting and 30 accredited D.O. - granting medical schools in the United States.
The Doctor of Medicine (MD) and Doctor of Osteopathic Medicine (DO) are both classified as professional doctorates.
Admission to medical school in the United States is based mainly on a GPA, MCAT score, admissions essay, interview, clinical work experience, and volunteering activities, along with research and leadership roles in an applicant 's history. While obtaining an undergraduate degree is not an explicit requirement for a few medical schools, virtually all admitted students have earned at least a bachelor 's degree. A few medical schools offer pre-admittance to students directly from high school by linking a joint 3 - year accelerated undergraduate degree and a standard 4 - year medical degree with certain undergraduate universities, sometimes referred to as a "7 - year program '', where the student receives a bachelor 's degree after their first year in medical school.
As undergraduates, students must complete a series of prerequisites, consisting of biology, physics, and chemistry (general chemistry and organic). Many medical schools have additional requirements including calculus, genetics, statistics, biochemistry, English, and / or humanities classes. In addition to meeting the pre-medical requirements, medical school applicants must take and report their scores on the MCAT, a standardized test that measures a student 's knowledge of the sciences and the English language. Some students apply for medical school following their third year of undergraduate education while others pursue advanced degrees or other careers prior to applying to medical school.
In the nineteenth century, there were over four hundred medical schools in the United States. By 1910, the number was reduced to one hundred and forty - eight medical schools and by 1930 the number totaled only seventy - six. Many early medical schools were criticized for not sufficiently preparing their students for medical professions, leading to the creation of the American Medical Association in 1847 for the purpose of self - regulation of the profession. Abraham Flexner (who in 1910 released the Flexner report with the Carnegie Foundation), the Rockefeller Foundation, and the AMA are credited with laying the groundwork for what is now known as the modern medical curriculum. The restriction of the supply of physicians that resulted from the Flexner Report has been criticized by classical economists as one of the principal factors in the increased prices relative to quality observed in medicine over the past 100 years.
The standard U.S. medical school curriculum is four years long. Traditionally, the first two years are composed mainly of classroom basic science education, while the final two years primarily include rotations in clinical settings where students learn patient care firsthand. Today, clinical education is spread across all four years, with the final year containing the most clinical rotation time.

The Centers for Medicare and Medicaid Services (CMS) of the U.S. Department of Health and Human Services (HHS) has published mandatory rules, binding on all inpatient and outpatient teaching settings, laying down guidelines for what medical students in the United States may do if they have not completed a clerkship or sub-internship. These rules apply when students are in the clinical setting in school, not when they are, for example, helping staff events or in other non-formal educational settings, even if they are helping provide certain clinical services along with nurses and supervising physicians, such as certain basic screening procedures. In the formal clinical setting in school, they may assist only with certain patient evaluation and management tasks, after the vital signs, chief complaint, and history of present illness have been discerned but prior to the physical examination: reviewing the patient's signs and symptoms in each body system, and then reviewing the patient's personal medical, genetic, family, educational/occupational, and psychosocial history. The student's supervising physician (or another physician with supervisory privileges if the original doctor is no longer available) must be in the room during the student's work and must conduct this same assessment of the patient before performing the actual physical examination; after finishing and conferring with the student, the physician reviews the student's notes and opinion, editing or correcting them if necessary, and also keeps his or her own professional notes, and both must then sign, date, and identify themselves on the student's notes and the medical record. Students may observe, but not perform, physical examinations, surgeries, endoscopic or laparoscopic procedures, radiological or nuclear medicine procedures, oncology sessions, and obstetrics. The patient must give consent for their presence and participation in his or her care, even at a teaching facility. Depending on the time they have completed in school, their familiarity with the area of medicine and the procedure, and the presence of their supervisor (and any others needed) in the room or nearby, they may be allowed to conduct certain very minor tests associated with the physical examination, such as simple venipuncture blood draws, electrocardiograms, and electroencephalograms, for learning and experience purposes, especially when there is no intern or resident available.
Upon successful completion of medical school, students are granted the title of Doctor of Medicine (M.D.) or Doctor of Osteopathic Medicine (D.O.). Residency is a supervised training period of three to seven years (usually incorporating the first-year internship) completed in a specific area of specialty. Physicians who sub-specialize or who desire more supervised experience may complete a fellowship, an additional one to four years of supervised training in their area of expertise.
Unlike those in many other countries, US medical students typically finance their education with personal debt. In 1992, the average debt of a medical doctor after residency was $25,000. For the class of 2009, the average debt of a medical student was $157,990, and 25.1 % of students had debt in excess of $200,000 (prior to residency). For the past decade the cost of attendance has increased by 5-6 % each year (roughly 1.6 to 2.1 times inflation).
Licensing of medical doctors in the United States is coordinated at the state level. Most states require that prospective licensees complete the following requirements:
The University of Montevideo in Uruguay is the oldest in Latin America; it is public and free, and co-governed by students, graduates, and teachers. The progress of the medical and biological sciences in the nineteenth century, the impact of the work of Claude Bernard (1813 -- 1878), Rudolf Virchow (1821 -- 1902), Robert Koch (1843 -- 1910), and Louis Pasteur (1822 -- 1895), and the prestige of the medical schools of France, Vienna, Berlin, and Edinburgh were stimuli for the creation of a medical school in the country. The basic medical school program lasts seven years. There is also a second, private medical school in the country, located in Punta del Este, Maldonado.
These are the universities with a medical school in Venezuela:
Historically, Australian medical schools have followed the British tradition by conferring the degrees of Bachelor of Medicine and Bachelor of Surgery (MBBS) on their graduates, whilst reserving the title of Doctor of Medicine (MD) for their research training degree, analogous to the PhD, or for their honorary doctorates. Although the majority of Australian MBBS degrees have been graduate programs since the 1990s, under the previous Australian Qualifications Framework (AQF) they remained categorised as Level 7 Bachelor degrees together with other undergraduate programs.
The latest version of the AQF includes the new category of Level 9 Master 's (Extended) degrees which permits the use of the term ' Doctor ' in the styling of the degree title of relevant professional programs. As a result, various Australian medical schools have replaced their MBBS degrees with the MD to resolve the previous anomalous nomenclature. With the introduction of the Master 's level MD, universities have also renamed their previous medical research doctorates. The University of Melbourne was the first to introduce the MD in 2011 as a basic medical degree, and has renamed its research degree to Doctor of Medical Science (DMedSc).
In Bangladesh, admission to medical colleges is organized by the Governing Body of the University of Dhaka. A single admission test is held for government and private colleges. These exams are highly competitive: the total number of applicants across the country is around 78 times the number of students accepted. Admission is based on the entrance examination as well as students' individual academic records.
The entrance examination carries a time limit of one hour. It consists of 100 marks of objective questions, distributed among several subjects: biology carries 30 marks, chemistry 25, physics 20, English 15, and general knowledge 10.
Additionally, students ' previous SSC (Secondary School Certificate) and HSC (Higher Secondary School Certificate) scores each carry up to 100 marks towards the overall examination result.
Students from English-medium backgrounds must prepare for the admission exam ahead of time, because the GCSE and A-Level exams do not cover parts of the Bangladeshi syllabus.
The undergraduate program consists of five years study, followed by a one - year internship. The degrees granted are Bachelor of Medicine and Bachelor of Surgery (M.B.B.S.). Further postgraduate qualifications may be obtained in the form of Diplomas or Degrees (MS or MD), M. Phil and FCPS (Fellowship of the College of Physicians and Surgeons).
The University of Dhaka launched a new BSc in "Radiology and Imaging Technology," admitting 30 students, with the entrance exam contributing towards the admission grade: for students who have passed the HSC, the entrance exam contributes 25 % of the mark, while for Diploma-holding students it contributes up to 75 %. The duration of the course is four years (plus 12 weeks for project submission). The course covers a variety of topics, including behavioural science, radiological ethics, imaging physics, and general procedure.
After six years of general medical education (a foundation year plus five years), all students graduate with a Bachelor of Medical Sciences (BMedSc) បរិញ្ញាប័ត្រ វិទ្យាសាស្រ្ត វេជ្ជសាស្ត្រ. This degree does not allow graduates to work independently as physicians, but those who wish may continue to master's degrees in other fields related to the medical sciences, such as public health, epidemiology, biomedical science, nutrition, etc.
Medical graduates who wish to be fully qualified as physicians or specialists must follow the rules below:
All medical graduates must complete a thesis defense and pass the National Exit Exam ប្រឡង ចេញ ថ្នាក់ ជាតិ ក្នុង វិស័យ សុខាភិបាល to become either GPs or medical or surgical specialists.
Hong Kong has only two comprehensive medical faculties, the Li Ka Shing Faculty of Medicine, University of Hong Kong and the Faculty of Medicine, Chinese University of Hong Kong, and they are the only two institutions offering medical and pharmacy programs. Other healthcare discipline programs (like nursing) are dispersed among other universities that do not host a medical faculty.
Prospective medical students enter either of the two faculties of medicine (at The University of Hong Kong and The Chinese University of Hong Kong) from high school. The medical program lasts 5 years for those admitted via the traditional Hong Kong Advanced Level Examination (HKALE), or 6 years for those admitted via the new-syllabus Hong Kong Diploma of Secondary Education Examination (HKDSE). International students who take examinations other than the two mentioned will be assessed by the schools to decide whether they will take the 5-year program or the 6-year one.
Competition for entry to the undergraduate medical programs is cut-throat, as the annual intake is limited to a quota of 210 at each school (420 in total), and candidates must attain excellent examination results and perform well at interview. The schools put great emphasis on students' language skills (both Chinese and English) and communication skills, as they will need to communicate with other health care professionals and with patients and their families in the future.
During their studies at the medical schools, students must also accumulate a required number of clinical practice hours before graduation.
The education leads to the degree of Bachelor of Medicine and Bachelor of Surgery (M.B., B.S. at HKU or M.B., Ch.B. at CUHK). After the 5- or 6-year degree, one year of internship follows in order to be eligible to practice in Hong Kong.
Both HKU and CUHK provide a prestigious bachelor of pharmacy course that is popular among local and overseas students. Students of most other health care disciplines have a study duration of 4 years, except nursing programs which require 5 years.
In India, admission to medical colleges is organized both by the central government's CBSE and by the state governments through tests known as entrance examinations. Students who have successfully completed their 10+2 education (higher secondary school; marks in Physics, Chemistry, and Biology are considered, and PCB is mandatory) can appear for the tests the same year.
The All-India Pre-Medical/Dental Test, which fills 15 % of the total MBBS seats in India and is conducted by the CBSE (Central Board for Secondary Education) in April/May, admits only about 2,500 students out of over 600,000 applicants. The Supreme Court of India has mandated entrance examinations based on multiple-choice questions, with negative marking for wrong answers and a merit requirement above 50 %, for selection into MBBS as well as higher medical education. The entrance exams are highly competitive.
The undergraduate program consists of three "professionals" spanning nine semesters, followed by a one-year rotating internship (housemanship). The degree granted is the Bachelor of Medicine and Bachelor of Surgery (M.B.B.S.), taking five and a half years in total.
The MBBS degree is divided into three professionals, each ending with a professional exam conducted by the university (a single university may have dozens of medical colleges offering various graduate, post-graduate, and post-doctoral degrees); after clearing it, the student moves on to the next professional. Each professional exam consists of a theory exam and a practical exam conducted not only by the same college but also by external examiners. The exams are tough, and many students are unable to clear them, thereby prolonging their degree time. The first professional lasts one year and includes the preclinical subjects: anatomy, physiology, and biochemistry. The second professional lasts one and a half years and covers pathology, pharmacology, microbiology (including immunology), and forensic medicine; clinical exposure starts in the second professional. The third professional is divided into two parts: part 1 consists of ophthalmology, ENT, and PSM (preventive and social medicine), and part 2 consists of general medicine (including dermatology and psychiatry as short subjects), general surgery (including radiology, anaesthesiology, and orthopaedics as short subjects), pediatrics, and gynaecology and obstetrics. This is followed by one year of compulsory internship (rotating house-surgeonship). After internship, the degree of MBBS is awarded by the respective university. Some states have made rural service compulsory for a certain period of time after the MBBS.
Selection for higher medical education is through entrance examinations, as mandated by the Supreme Court of India. Further postgraduate qualifications may be obtained as a post-graduate diploma (two years of residency) or a doctoral degree (MS: Master of Surgery, or MD; three years of residency) under the aegis of the Medical Council of India. 50 % of all MD/MS seats in India are filled through the All-India Post-Graduate Medical Entrance Examination, conducted by AIIMS (All-India Institute of Medical Sciences) under the supervision of the Directorate General of Health Services. A thesis or dissertation must be submitted to and cleared by the university, along with written and clinical examinations, to obtain the MD/MS degree. A further sub-speciality post-doctoral qualification (DM: Doctorate of Medicine, or MCh: Magister of Chirurgery), consisting of three years of residency followed by university examinations, may also be obtained.
The PG (post-graduate) qualification, equivalent to the M.D./M.S., consists of a two- or three-year residency after the MBBS. A PG diploma may also be obtained through the National Board of Examinations (NBE), which also offers three-year residencies for sub-specialisation. All degrees awarded by the NBE are called DNB (Diplomate of National Board). DNBs are awarded only after clearance of a thesis or dissertation and examinations; DNBs equivalent to the DM/MCh likewise require passing examinations.
In Indonesia, high school graduates who want to enroll in public medical schools must have their names submitted by their high school faculty through the "SNMPTN Undangan" program, arranged by the Directorate General of Higher Education, Ministry of National Education. Depending on the high school's accreditation, only the top 10-15 % of the class will be considered for admission. Fewer places are available through entrance exams conducted autonomously by each university. These exams are highly competitive for medicine, especially at prestigious institutions such as the University of Indonesia in Jakarta, Airlangga University in Surabaya, and Gadjah Mada University in Yogyakarta. At private medical schools, almost all places are offered through independently run admission tests.
The standard Indonesian medical school curriculum is six years long: a four-year undergraduate program composed mainly of classroom education, followed by a two-year professional program consisting primarily of rotations in clinical settings where students learn patient care firsthand. Students who pass the undergraduate program receive the title "S.Ked" (Bachelor of Medicine); those who finish the professional program and pass the national examination arranged by the IDI (Indonesian Medical Association) become general physicians and receive the title "dr." (doctor).
Upon graduation, a physician planning to become a specialist in a specific field of medicine must complete a residency, a supervised training period of three to four years. A physician who sub-specializes or who desires more supervised experience may complete a fellowship, an additional one to three years of supervised training in his or her area of expertise.
General medical education in Iran takes 7 to 7.5 years. Students enter the university directly after high school. They study the basic medical sciences (such as anatomy, physiology, biochemistry, histology, biophysics, and embryology) for 2.5 years, at the end of which they must pass a "basic science" exam. Those who pass move on to study the physiopathology of the different organ systems over the next 1.5 years; this organ-based learning approach emphasizes critical thinking and clinical application. Students then enter clinics and educational hospitals for two years, during which they also learn practical skills such as history taking and physical examination. They must then pass the "pre-internship" exam to enter the final 1.5 years of education, in which medical students function as interns, participating in all aspects of patient care and taking night calls. At the end of these 7.5 years, students are awarded an M.D. degree and may continue their education through residencies and fellowships.
There are five university medical schools in Israel: the Technion in Haifa, Ben Gurion University in Be'er Sheva, Tel Aviv University, the Hebrew University in Jerusalem and the Medical school of the Bar - Ilan University in Ramat Gan. These all follow the European 6 - year model except Bar - Ilan University which has a four - year program similar to the US system.
The Technion Medical School, Ben Gurion University, and Tel Aviv University Sackler Faculty of Medicine offer 4 - year MD programs for American Bachelor 's graduates who have taken the MCAT, interested in completing rigorous medical education in Israel before returning to the US or Canada.
The entrance requirements of the various schools of medicine are very strict. Israeli students require a high school Baccalaureate average above 100 and a psychometric examination score over 700. The demand for medical education is strong and growing, and there is a shortage of doctors in Israel.
The degree of Doctor of Medicine (MD) is legally considered equivalent to a master's degree within the Israeli educational system.
In Japan, medical schools are faculties of universities, and their programs are undergraduate programs that generally last six years. Admission is based on an exam taken at the end of high school and an entrance exam administered by the university itself, the latter being the most competitive.
Medical students study the liberal arts and sciences, including physics, mathematics, chemistry, and foreign languages, for the first one to two years, followed by two years of basic medicine (anatomy, physiology, pharmacology, immunology), clinical medicine, public health, and forensics.
Medical students train in the university hospital for the last two years; clinical training is part of the curriculum. Upon completion of the graduation examination, students are awarded an M.D. Medical graduates are titled Doctor, as are Ph.D. holders, and MD/PhD programs enable Doctors of Medicine to become Ph.D. holders as well.
Finally, medical students take the National Medical License examination; those who pass become physicians and are registered with the Ministry of Health, Labour and Welfare. The scope of this exam encompasses every aspect of medicine.
The Bachelor of Medicine and Surgery (MBBS) degree is awarded in Jordan after completion of six years of study, comprising three years of medical sciences and three clinical years. Currently, four state-supported universities include a medical school and grant the degree:
In Kyrgyzstan, the government-run Kyrgyz State Medical Academy offers a six-year undergraduate (bachelor's degree) program, whereas the other, mostly private institutions, such as the International School of Medicine at the International University of Kyrgyzstan, offer a five-year medical program, with a requisite of English knowledge, that is recognized by the World Health Organization, the General Medical Council, and UNESCO. The International School of Medicine is also partnered with the University of South Florida School of Medicine, the University of Heidelberg (Germany), the Novosibirsk Medical University (Russia), and the University of Sharjah (UAE).
Other medical schools in Kyrgyzstan include the five-year MD/MBBS undergraduate program at the International University of Science and Business (Mezhdunarodnyy Universitet Nauki i Biznesa), the Asian Medical Institute, and the Medical Institute of Osh State University, among others.
In Lebanon, two models of medical education are followed: the American system (4 years) and the European system (6 years). Programs are offered in English and French. Admission to the American system requires a candidate to complete a bachelor's degree along with specific pre-medical courses during the undergraduate years and to sit the MCAT examination. European programs usually require a candidate to complete one year of general science followed by a selection exam at the end of that year.
Schools following the American system (M.D. degree) are:
The language of instruction in all three is English.
Schools following the European system (MBBS degree) are:
In Malaysia, getting into medical school is regarded as difficult, due to high fees and a rigorous selection process. Some new medical schools offer a foundation-in-medicine course before admission into a full-time medical programme. Most government medical schools, and some private ones, offer the M.D.; the others mostly offer MBBS degrees.
There are five medical institutions in Myanmar: UM 1, UM 2, DSMA, UM Mdy, and UM Mgy. Myanmar's medical schools are government-funded and require Myanmar citizenship for eligibility; no private medical school exists at this moment. Admission to medical colleges is organized under the Department of Health Science, a branch of the Ministry of Health of Myanmar. A student can join one of the five medical universities of Myanmar by earning the highest scores in the science combination of the matriculation examination, which is highly competitive. Entrance is based solely on this examination; academic records have very little influence on an application. The undergraduate program is five years, plus one year of work experience in government hospitals. After medical school, Myanmar medical graduates are under contract to spend one year of internship and three years of tenure in rural areas before they are eligible for most residency positions. The degree granted is the Bachelor of Medicine and Bachelor of Surgery (M.B.B.S.). Further postgraduate qualifications may be obtained as a degree (M.Med.Sc) or (Dr.Med.Sc).
In Nepal, medical studies start at undergraduate level. As of 2016, there are twenty institutions recognised by the Nepal Medical Council. There are four main medical bodies in Nepal:
New Zealand medical programs are undergraduate-entry programs of six years' duration. Students are considered for acceptance only after a year of undergraduate basic sciences or, as an alternative, following the completion of a bachelor's degree. There are two main medical schools in New Zealand: the University of Auckland and the University of Otago. Each of these has subsidiary medical schools, such as Otago's Wellington School of Medicine and Health Sciences and Auckland's Waikato Clinical School.
The first year of the medical degree is the basic sciences year, which comprises study in chemistry, biology, physics, and biochemistry as well as population health and behavioural sciences. The following two years are spent studying human organ systems and pathological processes in more detail as well as professional and communication development. Toward the end of the third year, students begin direct contact with patients in hospital settings.
The clinical years begin fully at the beginning of year 4, where students rotate through various areas of general clinical medicine with rotation times varying from between two and six weeks. Year 5 continues this pattern, focusing more on specialized areas of medicine and surgery. Final medical school exams (exit exams) are actually held at the end of year 5, which is different from most other countries, where final exams are held near the very end of the medical degree. Final exams must be passed before the student is allowed to enter year 6.
The final year (Year 6) of medical school is known as the "Trainee Intern '' year, wherein a student is known as a "Trainee Intern '' (commonly referred to in the hospitals as a "T.I. ''). Trainee interns repeat most rotations undertaken in years 4 and 5 but at a higher level of involvement and responsibility for patient care. Trainee interns receive a stipend grant from the New Zealand government (not applicable for international students). At the current time, this is $ NZ 26,756 / year (about $ US 18,500). Trainee interns have responsibility under supervision for the care of about one - third the patient workload of a junior doctor. However, all prescriptions and most other orders (e.g., radiology requests and charting of IV fluids) made by trainee interns must be countersigned by a registered doctor.
New Zealand medical schools currently award the degrees of Bachelor of Medicine and Bachelor of Surgery (MBChB).
Upon completion of the 6th year, students go on to become "House Officers," also known as "House Surgeons," for 1-2 years, rotating through specialties in the first year and then beginning to narrow down their preferred specialty in the second year. After two years of house officer work, they apply for a training scheme and start to train towards that specialty.
In Pakistan a medical school is more often referred to as a medical college. A medical college is affiliated with a university as a department, though there are several medical universities and medical institutes with their own medical colleges. All medical colleges and universities are regulated by the respective provincial department of health, and must additionally be recognized, after meeting a set of criteria, by a central regulatory authority called the Pakistan Medical and Dental Council (PMDC) in Islamabad. There are almost equal numbers of government and private medical colleges and universities, together exceeding 50. Admission to a government medical college is highly competitive; entrance is based on merit under the guidelines of the PMDC. Both academic performance at the college level (high school, grades 11-12) and an entrance test like the MCAT are taken into consideration for eligibility to enter most of the medical colleges. After successfully completing five years of academic and clinical training in the medical college and affiliated teaching hospitals, graduates are awarded a Bachelor of Medicine and Bachelor of Surgery (MBBS) degree and are then eligible to apply for a medical license from the PMDC. A house job of one year's duration in a teaching hospital is mandatory after completing the five years of training.
Medical education is normally a five-year bachelor's degree, including a one-year internship (a clinical rotation during which students are actively involved in patient care) before the final degree is awarded. Clinical specialization usually involves a two- or three-year master's degree. Acceptance is based on the national entrance examination used for all universities. A few colleges teach in English and accept foreign medical students, and some universities have increased their course duration to six years.
The degree conferred is known as Bachelor of Clinical Medicine (BCM).
The Dominicans, under the Spanish Government, established the oldest medical school in the Philippines in 1871: the Faculty of Medicine and Surgery of the Pontifical and Royal University of Santo Tomas in Intramuros, Manila (at that time joined with the University of Santo Tomas Faculty of Pharmacy, also considered the oldest pharmacy school in the Philippines).
Medical education in the Philippines became widespread under the American administration. The Americans, led by the insular government 's Secretary of the Interior, Dean Worcester, built the University of the Philippines College of Medicine and Surgery in 1905. By 1909, nursing instruction was also begun at the Philippine Normal School.
At present there are a number of medical schools in the Philippines, notable examples include the University of the Philippines College of Medicine, Our Lady of Fatima University, Far Eastern University -- Nicanor Reyes Medical Foundation, Saint Louis University International School of Medicine, De La Salle Health Sciences Institute, University of Santo Tomas Faculty of Medicine and Surgery, Pamantasan ng Lungsod ng Maynila, UERMMMC College of Medicine, St. Luke 's College of Medicine -- William H. Quasha Memorial, Cebu Doctors ' University, Cebu Institute of Medicine, Mindanao State University College of Medicine, Cagayan State University College of Medicine in Tuguegarao, Southwestern University - Matias H. Aznar Memorial College of Medicine Inc., West Visayas State University in Iloilo City, University of St. La Salle College of Medicine in Bacolod City, Davao Medical School Foundation in Davao City, Xavier University -- Ateneo de Cagayan, Dr. Jose P. Rizal School of Medicine in Cagayan de Oro, Ago medical educational center AMEC - BCCM in Legazpi, Bicol and University of Northern Philippines in Vigan.
In 2007, the Ateneo School of Medicine and Public Health was established. It is the first medical school in the country to offer a double degree program leading to the degrees Doctor of Medicine and Masters in Business Administration.
Any college graduate may apply for medical school given that they satisfy the requirements set by the institutions. There is also a test known as the National Medical Admission Test or NMAT. Scores are given on a percentile basis and a high ranking is a must to enter the top medical schools in the country.
In most institutions, medical education lasts for four years. Basic subjects are taken up in the first and second years, while clinical sciences are studied in the second and third years. In their fourth year, students rotate in the various hospital departments, spending up to two months each in the fields of internal medicine, surgery, obstetrics and gynecology, and pediatrics, and several weeks in the other specialties. After this, students graduate with a Doctorate in Medicine and apply for postgraduate internship (PGI) in an accredited hospital of their choice. After PGI, the student is eligible to take the Medical Licensure Examination. Passing the examinations confers the right to practice medicine as well as to apply in a Residency Training Program.
Medical education in the Republic of China (Taiwan) is usually 7 years in duration (6 years of study plus 1 year of internship), starting right after high school. The first 2 years of the 7-year system are composed of basic science and liberal arts courses. Classes on the doctor-patient relationship are emphasized, and most schools require a compulsory number of volunteer hours. The clinical sciences are compressed into a two-year program in the 3rd and 4th years. The duration of clerkships and internships varies from school to school, but all end in the 7th year. Taiwan's medical education began in 1897 and is now over 100 years old. Students graduate with a Doctor of Medicine (MD) degree. Starting from 2013, incoming students follow a 6+2-year curriculum, in which the first 6 years are oriented similarly as before and the last two years are post-graduate years; this change aims to increase the primary care capabilities of medical school graduates.
In Saudi Arabia medical education is free for all Saudi citizens. A medical student must pass an entrance examination and complete a one-year pre-medical course covering basic medical subjects including biology, organic chemistry, inorganic chemistry, physics, medical biostatistics, and English for medical use; this year is commonly considered the most challenging to pass. Medical schools award an MBBS (Bachelor of Medicine, Bachelor of Surgery) degree after the pre-medical year, five medical years, and one training year. By 2010 there were 24 medical schools in Saudi Arabia, 21 nonprofit and three private; the most recently opened is Sulaiman AlRajhi Colleges, in partnership with Maastricht University in the Netherlands.
Currently, there are 41 medical schools in South Korea. Medical programs in South Korea used to be direct-entry programs, as in the UK, taking six years to complete. Most universities then transitioned from direct entry to a 4+4-year system, such as those found in the United States and Canada. More recently, about half of the universities converted back to the six-year direct-entry program by 2015, and almost all had done so by 2017.
There are eight medical schools in Sri Lanka that teach evidence based (sometimes called "western '') medicine. The oldest medical school is the Faculty of Medicine, University of Colombo, established as Ceylon Medical School in 1870. There are medical faculties in Peradeniya, Kelaniya, Sri Jayawardanepura, Galle, Batticaloa, Jaffna and Rajarata as well.
Kelaniya Medical Faculty initially started as the North Colombo Medical College (NCMC), a private medical institution. It was one of the earliest private higher educational institutions (1980). Heavy resistance by the medical professionals, university students and other professionals led to its nationalization and to its renaming as the Kelaniya Medical Faculty.
The Faculty of Health-Care Sciences, an entity of the Eastern University, Sri Lanka, offers the MBBS together with other paramedical courses.
The Open International University for Complementary Medicines (OIUCM), established under the World Health Organization, teaches various fields of medicine and a related program in environmental sciences, despite basic problems with its training programme.
Postgraduate Institute of Medicine (PGIM) is the only institution that provides specialist training of medical doctors.
The Institute of Indigenous Medicine of the University of Colombo, the Gampaha Wickramarachchi Ayurvedhic Medicine Institute of the University of Kelaniya, and the Faculty of Siddha Medicine, University of Jaffna teach Ayurveda, Unani, and Siddha medicine.
The first medical school in Thailand was established in 1890 at Siriraj Hospital, now the Faculty of Medicine Siriraj Hospital, Mahidol University. At present, 22 medical programs are offered nationwide. Most Thai medical schools are government-funded and require Thai citizenship for eligibility; only one private medical school exists at the moment. Some Thais choose to attend the private medical school or a medical school in a foreign country due to the relatively few openings and the high college entrance examination scores required for enrollment in public medical schools.
Thai medical education follows a six-year system: one year of basic sciences, two years of pre-clinical training, and three years of clinical training. Upon graduation, all medical students must pass national medical licensing examinations and a university-based comprehensive test. After medical school, newly graduated doctors are under contract to spend one year of internship and two years of tenure in rural areas before they are eligible for other residency positions or specialized training.
Students receive the Doctor of Medicine (MD) degree, which is considered equivalent to a master's degree in Thailand.
There are four Medical Schools (Fakultete te Mjeksise) in Albania:
These medical schools are usually affiliated with regional hospitals. The course of study lasts 6 years, and students are conferred the degree of Doctor of Medicine (M.D.) upon graduation.
There are 4 Medical Schools (Medical Universities) in Belarus:
There are five Medical Schools (Medicinski Fakultet) in Bosnia and Herzegovina:
These medical schools are usually affiliated with regional hospitals.
The course of study lasts 6 years (12 semesters), and students are conferred the degree of Doctor of Medicine (M.D.) upon graduation.
Entry to medical schools in Bosnia and Herzegovina is very competitive due to the limited number of places imposed by the government quota. Students are required to hold a secondary school leaving diploma, from either a Gimnazija (gymnasium) or a medical secondary school (matura / svedocanstvo / svjedodzba).
The entrance examination is usually held in June or July. A combined score of the secondary school diploma assessment (on a scale of 1-5, with 2 the minimum passing grade and 5 the maximum) and the entrance examination is taken into consideration. Usually, a grade of 5 in chemistry, biology, mathematics, and physics is required for entry to medicine.
The course structure is traditional and subject-based, divided into a pre-clinical part (years 1-3) and a clinical part (years 3-6).
Practical examinations are held throughout the degree (anatomy, biochemistry, pathology, and physiology practicals, etc.). Dissection is part of all medical curricula in the medical schools of Bosnia and Herzegovina.
In Bulgaria, a medical school is a type of college or a faculty of a university. The medium of instruction is officially Bulgarian, and a six - month to one - year course in the Bulgarian language is required prior to admittance to the medical program. For European candidates, an exam in Biology and Chemistry in Bulgarian is also required. While a number of Bulgarian medical schools have now started offering medical programmes in English, Bulgarian is still required during the clinical years.
Students join medical school after completing high school. Admission offers are made by individual medical schools. Bulgarian applicants have to pass entrance examinations in Biology and Chemistry. The competitive result of every candidate is based on their marks in these exams plus their secondary - school certificate marks in the same subjects. The applicants with the highest results are admitted.
The course of study is offered as a six - year program. The first 2 years are pre-clinical, the next 3 years are clinical training and the sixth year is the internship year, during which students work under supervision at the hospitals. During the sixth year, students have to appear for ' state exams ' in the 5 major subjects of Internal Medicine, Surgery, Gynaecology and Obstetrics, Social Medicine, and Pediatrics. Upon successful completion of the six years of study and the state exams the degree of ' Physician ' is conferred.
For specialization, graduates have to appear for written tests and interviews to obtain a place in a specialization program. For specialization in general medicine, general practice lasts three years, cardiology lasts four years, internal medicine lasts five years, and general surgery lasts five years.
In Croatia, four of the seven universities offer a medical degree: the University of Zagreb (which offers medical studies in English), the University of Rijeka, the University of Split (which also offers medical studies in English), and the University of Osijek. The medical schools are faculties of these four universities. Medical students enroll in medical school after finishing secondary education, typically a Gymnasium, a four - year nursing school, or any other high school lasting four years. During the application process, their high school grades, the grades of their matriculation exam at the end of high school (Matura), and the score on the obligatory admission exam are taken into account, and the best students are enrolled.
The course of study lasts 6 years or 12 semesters. During the first 3 years, students are engaged in pre-clinical courses (Anatomy, Histology, Chemistry, Physics, Cell Biology, Genetics, Physiology, Biochemistry, Immunology, Pathologic Physiology and Anatomy, Pharmacology, Microbiology, etc.). Contact with patients begins in the third year. The remaining 3 years are composed of rotations at various departments, such as Internal Medicine, Neurology, Radiology, Dermatology, Psychiatry, Surgery, Pediatrics, Gynecology and Obstetrics, Anesthesiology, and others. During each academic year, students also enroll in two or three elective courses. Students take an exam after each course and rotation, about 60 exams in total over the degree. At the end, students must pass a final multiple - choice exam comprising questions about the clinical courses, after which they gain the title of Doctor of Medicine (MD), which they put after their name. New doctors must then complete a one - year, supervised, paid internship in a hospital of their choice, after which they take the state (license) examination, an eight - part oral examination covering the eight most important clinical branches. After that, doctors are eligible to practice medicine as general practitioners. Residencies are offered at various hospitals throughout Croatia, in numerous medical specialties.
Medical study in the Czech Republic has a long tradition dating from the 14th century: the first medical school was started at the First Faculty of Medicine, Charles University in Prague in 1348, making it the 11th oldest in the world and highly prestigious. Students from all over the world are attracted to study medicine in the Czech Republic because of the high standard of education. Most Czech universities offer a 6 - year General Medicine program taught in Czech, and separately in English for international students.
Admission to medical studies in the Czech Republic is based on performance in the high school diploma (Biology, Chemistry and Physics), English proficiency, and performance in the entrance exams. Entrance examinations are conducted at the university and by some representative offices abroad. The entrance exams are competitive, with students from all over the world vying to secure a place. After the entrance exams, successful candidates are further assessed in interviews.
Most of the international students studying medicine in the Czech Republic come from the USA, Canada, the UK, Norway, Sweden, Germany, Israel, Malaysia and the Middle East.
Most faculties of medicine in the Czech Republic have been approved by the U.S. Department of Education for participation in Federal Student Financial Aid Programs and are listed in the Directory of Postsecondary Institutions published by the U.S. Department of Education. The qualifications are also approved in Canada by the Canadian Ministry of Education and Training, and in the UK by the General Medical Council. Most Czech medical schools are globally recognised and carry a good reputation.
There are nine public, government - owned medical schools in the Czech Republic:
There is one military medical school, Faculty of Military Health Sciences, University of Defence.
In Denmark, basic medical education is given in four universities: University of Copenhagen, Aarhus University, University of Southern Denmark and Aalborg University. The duration of basic medical education is six years and the course leads to the degree of Candidate of Medicine (M.D.) after swearing the Hippocratic Oath upon graduation.
Medical school is usually followed by a one - year residency called clinical basic education (Danish: klinisk basisuddannelse, or KBU), which upon completion grants the right to practice medicine without supervision.
In Finland, basic medical education is given in five universities: Helsinki, Kuopio, Oulu, Tampere and Turku. Admission is regulated by an entrance examination. Studies involve an initial two - year preclinical period of mainly theoretical courses in anatomy, biochemistry, pharmacology etc. However, students have contact with patients from the beginning of their studies. The preclinical period is followed by a four - year clinical period, when students participate in the work of various hospitals and health care centres, learning necessary medical skills. Some Finnish universities have integrated clinical and preclinical subjects along the six - year course, diverging from the traditional program. A problem - based learning method is widely used, and inclusion of clinical cases in various courses and preclinical subjects is becoming common. All medical schools have research programs for students who wish to undertake scientific work. The duration of basic medical education is six years and the course leads to the degree of Licentiate of Medicine.
Medical studies in France are organized as follows:
Right after graduating from high school with a Baccalauréat, any student can register at a university of medicine (there are about 30 of them throughout the country). At the end of the first year, an internal ranking examination takes place in each of these universities in order to implement the numerus clausus. The first year consists mainly of theoretical classes such as biophysics and biochemistry, anatomy, ethics and histology. Passing the first year is commonly considered challenging and requires hard, continuous work. Each student may attempt it only twice. For example, the Université René Descartes admits about 2000 students into the first year and only 300 after the numerus clausus.
The second and third years are mainly theoretical, although the teaching is often accompanied by placements in the field (e.g. internships as nurses or in the emergency room, depending on the university).
During the 4th, 5th and 6th years, medical students get a special status called ' Externe ' (in some universities, such as Pierre et Marie Curie, the ' Externe ' status is given starting in the 3rd year). They work as interns every morning at the hospital, plus a few night shifts a month, and study in the afternoon. Each internship lasts between 3 and 4 months and takes place in a different department. Medical students get 5 weeks off a year.
At the end of the sixth year, students must pass a national ranking exam that determines their specialty: the top - ranked student chooses first, then the second, and so on. Students usually work very hard during the 5th and 6th years in order to prepare properly for the national ranking exam. During these years, actual practice at the hospital and some theoretical courses are meant to balance the training. Externs ' average wage stands between 100 and 300 euros a month.
After the ranking exam, students begin residency in the specialty they were able to pick. That is also the point from which they start getting paid.
Towards the end of the medical program, French medical students are given more responsibilities and are required to defend a thesis. At the conclusion of the thesis defense, they receive the State Diploma of Doctor of Medicine (MD; "Diplôme d'Etat de Docteur en Médecine ''). Those in specialty training also receive a Diploma of Specialized Studies (DES; "Diplôme d'Etudes Spécialisées '') to mark their specialty. Some students may additionally receive a Diploma of Specialized Complementary Studies (DESC; "Diplôme d'Etudes Spécialisées Complémentaires '').
In Germany, admission to medical schools is currently administered jointly by the Stiftung für Hochschulzulassung (SfH), a centralized federal organization, and the universities themselves. The most important criterion for admission is the Numerus clausus, the final GPA scored by the applicant on the Abitur (highest secondary school diploma). However, as medical schools have recently gained more influence over applicant selection, additional criteria are being used to select students for admission. These criteria vary among medical faculties, but the final Abitur GPA is always a core indicator and strongly influences admission. Admission remains highly competitive. A very small number of slots per semester are reserved for selected applicants who already hold a university degree (Zweitstudium) and for medical officer candidates (Sanitätsoffizieranwärter).
The first two years of medical school consist of the so - called pre-clinical classes. During this time, the students are instructed in the basic sciences (e.g. physics, chemistry, biology, anatomy, physiology, biochemistry, etc.) and must pass a federal medical exam (Erster Abschnitt der ärztlichen Prüfung), administered nationally. Upon completion, the students advance to the clinical stage, where they receive three years of training and education in the clinical subjects (e.g., internal medicine, surgery, obstetrics and gynecology, pediatrics, pharmacology, pathology, etc.). After these three years, they have to pass the second federal medical exam (Zweiter Abschnitt der ärztlichen Prüfung) before continuing with the sixth and final year. The last year of medical school consists of the so - called "practical year '' (Praktisches Jahr, PJ). Students are required to complete three four - month clerkships, two of them in a hospital (internal medicine and surgery) as well as one elective, which can be in any of the other clinical subjects (e.g. family medicine, anesthesiology, neurology, pediatrics, radiology, etc.).
After at least six years of medical school, students graduate with a final federal medical exam (Dritter Abschnitt der ärztlichen Prüfung). Graduates receive the license to practice medicine or dentistry and the professional title of physician (Arzt) or dentist (Zahnarzt). The academic degrees Doctor of Medicine (Dr. med.) and Doctor of Dental Medicine (Dr. med. dent.) are awarded only if the graduate has, in addition, successfully completed a scientific study and dissertation. It is a doctoral degree and therefore different from the MD or DDS degrees in the U.S., which as professional degrees are awarded after passing the final exams and do not require additional scientific work. Many medical students opt to work on their thesis during medical school, but only a fraction of them manage to finish the dissertation process during their studies. The requirements for the Dr. med. degree are generally not as demanding as those for a doctorate in the natural sciences (Dr. rer. nat.); many critics therefore advocate adopting a system similar to that of the Anglo - Saxon countries, with an MD as a professional degree and a PhD indicating additional scientific qualification. Physicians who wish to open a doctor 's office are required to complete a residency in order to fulfill the federal requirements of becoming a Facharzt (a specialist in a certain field of medicine such as internal medicine, surgery, or pediatrics). Oral and maxillofacial surgeons must complete both courses of study, medicine and dentistry, and afterwards specialize for another five years.
There are 36 medical faculties in Germany.
There are seven medical schools in Greece, the most prominent of which is the University of Athens Medical School. The others are in Patras, Thessaloniki, Ioannina, Larissa, Heraklion, and Alexandroupoli. The duration of medical studies in Greece is 6 years.
Hungary has four medical schools, in Budapest, Debrecen, Pécs and Szeged. Medical school takes six years to complete, of which the last year is a practical year. Students receive the degree dr. med. univ. or dr. for short, equivalent to the M.D. degree upon graduation. All Hungarian medical schools have programs fully taught in English.
In Iceland, admission to medical school requires passing an organized test, controlled by the University of Iceland, which anyone with a gymnasium degree can take. Only the top 48 scores on the exam are granted admission each year. Medical school in Iceland takes 6 years to complete. Students receive a cand. med. degree upon graduation. Following this, Icelandic regulations require 12 months of clinical internship before granting a full medical license. This internship consists of internal medicine (4 months), surgery (2 months), family medicine (3 months) and a three - month elective period. Upon receiving a license to practice, a physician can start specialist training, in Iceland or abroad.
There are six medical schools in Ireland: at Trinity College Dublin, the Royal College of Surgeons in Ireland, University College Dublin, University College Cork, the University of Limerick and the National University of Ireland, Galway (the National University of Ireland is the degree - awarding institution for all except the University of Limerick and Trinity College). Training lasts four, five or six years, with the last two years spent in the affiliated teaching hospitals (UCD - St. Vincent 's University Hospital, Mater Misericordiae University Hospital; Trinity - St. James 's Hospital, Adelaide and Meath Hospital incorporating the National Children 's Hospital; UCC - Cork University Hospital; RCSI - Beaumont Hospital, Connolly Hospital, Waterford Regional Hospital). For programmes that are six years in length, entry is based on secondary school qualifications; programmes that are four years in length require a previous university degree. The Royal College of Surgeons in Ireland and the University of Limerick were the first medical institutions in Ireland to offer four - year Graduate Entry Medicine; this is now also offered at University College Dublin and University College Cork. The National University of Ireland, Galway also launched a graduate entry programme in 2010.
Medical education is regulated by the Irish Medical Council, the statutory body that is also responsible for maintaining a register of medical practitioners. After graduation with the degrees of BM BS (Bachelor of Medicine and Bachelor of Surgery) or MB BCh BAO (Medicinae Baccalaureus, Baccalaureus in Chirurgia, Baccalaureus in Arte Obstetricia), a doctor is required to spend one year as an intern under supervision before full registration is permitted. Graduates of the Royal College of Surgeons in Ireland also receive the traditional "Licentiate of the Royal Colleges of Surgeons and Physicians in Ireland '' (LRCP&SI), which was awarded before the Royal College of Surgeons in Ireland became an Affiliate of the National University of Ireland and was thus allowed to grant degrees, under the Medical Practitioners Act (1978).
In Italy, the content of the medical school admission test is decided each year by the Ministry of Education, Universities and Research (MIUR) and consists of eighty questions divided into five categories: logic and "general education '' ("cultura generale ''), mathematics, physics, chemistry, and biology. Results are expressed in a national ranking.
As a general rule, all state - run medical schools in the country administer it on the same day, whereas all privately run medical schools administer it on another day, so that a candidate may take the test once for state - run schools and once for a private school of his or her choice, but no more.
Some universities in Italy provide an international degree course in medicine taught entirely in English for both Italian and non-Italian students. A number of these medical schools are at public universities, and have relatively low tuition fees compared to the English - speaking world, because the cost of the medical education is subsidized by the state for both Italian and non-Italian students. These public medical schools include the International Medical School at the University of Milan, the University of Pavia, Rome "La Sapienza '', Rome "Tor Vergata '', Naples Federico II, the Second University of Naples, and the University of Bari. These universities require applicants to rank highly on the International Medical Admissions Test. Italy also has private or parochial, more expensive English - language medical schools such as Vita - Salute San Raffaele University and Humanitas University in Milan, and at the Università Cattolica del Sacro Cuore Rome campus.
Medicine is one of the university faculties implementing a numerus clausus ("numero chiuso ''): the overall number of medical students admitted every year is constant, as each medical school is assigned a maximum number of new admissions per year by MIUR.
Medical school lasts 6 years (12 semesters). Traditionally, the first three years are devoted to "biological '' subjects (physics, chemistry, biology, biochemistry, genetics, anatomy, physiology, immunology, pathophysiology, microbiology, and usually English language courses), whereas the later three years are devoted to "clinical '' subjects. However, most schools are increasingly devoting the second semester of the third year to clinical subjects and earlier patient contact. In most schools, there are about 36 exams over the 6 - year cycle, as well as a number of compulsory rotations and elective activities.
At the end of the cycle, students have to discuss a final thesis before a board of professors; the subject of this thesis may be a review of academic literature or an experimental work, and usually takes more than a year to complete, with most students beginning an internato (internship) in the subject of their choice in their fifth or sixth year. The title awarded at the end of the discussion ceremony is that of "Dottore Magistrale '', styled in English as a Doctor of Medicine, which in accordance with the Bologna process is comparable with a master 's degree qualification or a US MD.
After graduating, new doctors must complete a three - month, unpaid, supervised tirocinio post-lauream ("post-degree placement '') consisting of two months in their university hospital (one month in a medical service and one in a surgical service) as well as one month shadowing a general practitioner. After obtaining a statement of successful completion of each month from their supervisors, new doctors take the esame di stato ("state exam '') to obtain a full license to practise medicine. They must then register with one of the branches of the Ordine dei Medici ("Order of Physicians ''), which are based in each of the Provinces of Italy.
Registration makes new doctors legally able to practice medicine without supervision. They must then choose between various career paths, each usually requiring a specific admission exam: most either train as general practitioners (a 3 - year course run by each Region, including both general practice and rotations at non-university hospitals) or enter a Scuola di Specializzazione ("specialty school ''), a 5 - or 6 - year course at a university hospital.
Lithuania has two medical schools, in Kaunas (LSMU, http://lsmuni.lt/) and Vilnius. Studies last six years, of which the last year is a practical year. Both Lithuanian medical schools offer programs in English. Since 1990, LSMU has been the alma mater of many international students, and 550 full - time foreign students from 42 countries (mainly Israel, Germany, Finland, Norway, Spain, Sweden, Lebanon, Poland, India, South Korea, Ireland and the United Kingdom) are currently enrolled there.
The Lithuanian University of Health Sciences (LSMU) is the largest university - type school in Lithuania preparing health specialists. It unites the Veterinary Academy and the Medical Academy. LSMU 's traditions of study and scientific work go back to the Faculty of Medicine at Vytautas Magnus University, which was later turned into the Kaunas Institute of Medicine.
LSMU collaborates with more than 140 European, American and Asian universities for study and research purposes.
The university is a member of numerous international organizations, such as the European University Association (EUA), Association of Schools of Public Health in The European Region (ASPHER), Association of Medical Schools in Europe (AMSE), Association for Medical Education in Europe (AMEE), Organisation for PhD Education in Biomedicine and Health Sciences in the European System (ORPHEUS), European Association of Establishments for Veterinary Education (EAEVE), World Veterinary Association, and more.
LSMU is also a member of the World Health Organization (WHO), where it fulfils the role of a collaboration centre for research and training in epidemiology, as well as for the prevention of cardiovascular and other chronic non-communicable diseases.
The study programmes at LSMU meet university education standards applied in EU countries.
In the Netherlands and Belgium, medical students receive 6 years of university education prior to their graduation.
In the Netherlands, students used to receive four years of preclinical training, followed by two years of clinical training (co-assistentschappen, or co-schappen for short) in hospitals. For a number of medical schools, however, this has recently changed to three years of preclinical training followed by three years of clinical training. At at least one medical faculty, that of Utrecht University, clinical training begins as early as the third year of medical school. After 6 years, students graduate as basisartsen (comparable to Doctors of Medicine). As a result of the Bologna process, medical students in the Netherlands now receive a bachelor 's degree after three years in medical school and a master 's degree upon graduation. Prospective students can apply for medical education directly after finishing the highest level of secondary school, vwo; previous undergraduate education is not a precondition for admittance.
The Belgian medical education is much more based on theoretical knowledge than the Dutch system. In the first 3 years, which are very theoretical and lead to a university bachelor degree, general scientific courses are taken such as chemistry, biophysics, physiology, biostatistics, anatomy, virology, etc. To enter the bachelor course in Flanders, prospective students have to pass an exam, as a result of the numerus clausus. In the French - speaking part of Belgium, only the best students that pass the first year of the bachelor course in medicine are admitted to the second and third year.
After the bachelor courses, students are allowed to enter the ' master in medicine ' courses, which consist of 4 years of theoretical and clinical study. In general, the first 2 master years are very theoretical and teach the students in human pathology, diseases, pharmacology. The third year is a year full of internships in a wide range of specialities in different clinics. The seventh, final year serves as a kind of ' pre-specialization ' year in which the students are specifically trained in the specialty they wish to pursue after medical school. This contrasts with the Dutch approach, in which graduates are literally ' basic doctors ' (basisartsen) who have yet to decide on a specialty.
Medical education in Norway begins with a six - to six - and - a-half - year undergraduate university program. Admission requires a very high GPA from secondary school -- medicine consistently ranks as the most difficult university programme to gain admission to in Norway. Furthermore, certain high school subjects (chemistry, mathematics and physics) are required for admission. Upon completion, students are awarded the candidatus / candidata medicinae (cand. med.) degree (corresponding to, e.g., an MD in the USA) and a medical license. Those completing a research programme (Forskerlinje) have this added to their degree. Following this, a minimum of 18 months of internship (turnustjeneste) is required before applying for specialist training in Norway. The internship consists of 6 months of internal medicine, 6 months of surgery and 6 months of family medicine. There are currently 43 recognized medical specialties in Norway.
In Romania, medical school is a department of a medical university, which typically includes Dentistry and Pharmacy departments as well. The name facultate is used for departments in traditional universities too, but the Medicine departments distinguish themselves by the length of studies (6 years), which grants graduates a status equivalent to that of a Master of Science. The Medicine departments are also marked by reduced flexibility: in theory, a student in a regular university can take courses from different departments, like Chemistry and Geography (although this usually does not happen, majors being clearly defined), while the medical universities do not offer their students any extra courses, due to their specialization. Admission to a medical faculty is usually granted by passing a Human Biology, Organic Chemistry and / or Physics test. The program lasts 6 years, with the first 2 years being preclinical and the last 4 years mostly clinical. After these six years, one has to take the national licence exam (which consists of mostly clinically oriented questions, though some questions also deal with basic sciences) and write a thesis in any field he / she studied. The final award is Doctor - Medic (titlu onorific) (shortened Dr.), which is not an academic degree (similar to Germany). All graduates have to go through residency and specialization exams after that in order to practice, although older graduates had different requirements and training (e.g., clinical rotations similar to a sub-internship) and might still be able to practice Family Medicine / General Medicine.
Medical schools in Russia offer a 6 - year curriculum leading to the award of the Doctor of Medicine (MD) "Physician '' degree. Russian medical authorities have been reluctant to agree to inclusion in FAIMER 's International Medical Education Directory (IMED), and FAIMER can not include medical schools without cooperation from Russia; for example, the Orel State University Medical Institute is not included in this list.
Medical education in Sweden begins with a five - and - a-half - year undergraduate university program leading to the degree "Master of Science in Medicine '' (Swedish: Läkarexamen). Following this, the National Board of Health and Welfare requires a minimum of 18 months of clinical internship (Swedish: Allmäntjänstgöring) before granting a medical license to be fully qualified as Medical Doctor (MD).
This internship consists of surgery (3 -- 6 months), internal medicine (3 -- 6 months), psychiatry (three months) and family medicine (six months). Upon receiving a license to practice, a physician is able to apply for a post to start specialist training. There are currently 52 recognized medical specialties in Sweden. The specialist training has a duration of minimum five years, which upon completion grants formal qualification as a specialist.
There are five universities granting medical degrees in Switzerland (plus the University of Fribourg that provides the bachelor but not the master in medicine) and five university hospitals:
All high school graduates who wish to pursue further education are required to take an MCQ exam. The exam covers most of the high school and secondary school curricula.
A student who scores high enough gets a place in a faculty of his / her choice. Entrance to medical schools is extremely competitive; only the very top - scoring students are accepted.
Medical education takes six years, the first three being pre-clinical years and the latter three clinical years. Right after graduation, graduates can either work as GPs or take another exam called the TUS (Medical Specialization Examination) in order to do a residency in a particular department of a particular hospital.
Most of the medical schools in Turkey are state schools, but the number of private schools is on the rise. The MCQ exam (YGS and LYS) scores required for acceptance to private medical schools are lower than those for their public counterparts. The language of instruction is generally Turkish, but a few universities also offer programs with English as the language of instruction. This makes Turkey a popular place to study medicine for students from nearby areas such as the Balkans, the Middle East, and to a lesser extent North Africa.
Medical degrees in Ukraine were traditionally offered only in institutions called medical universities, which are separate from classical universities. However, some medical schools are now associated with classical universities. These include:
Under the UK code for higher education, first degrees in medicine comprise an integrated programme of study and professional practice spanning several levels. The final outcomes of the qualifications typically meet the expectations of the descriptor for a higher education qualification at level 7 (the UK master 's degree). For historical reasons, these degrees may retain the titles "Bachelor of Medicine, Bachelor of Surgery '', abbreviated to MBChB or MBBS.
There are currently 32 institutions that offer medical degrees in the United Kingdom. Completion of a medical degree in the UK results in the award of the degrees of Bachelor of Medicine and Bachelor of Surgery. Admission requirements vary between the schools; most insist on solid A-Levels / Highers, good performance in an aptitude test such as the UKCAT, the BMAT or the GAMSAT, and usually an interview. As of 2008, the UK has approximately 8000 places for medical students.
Methods of education range from courses that offer a problem - based learning approach (alongside lectures etc.) to those with a more traditional pre-clinical / clinical structure; still others combine several approaches into an integrated course.
Following qualification, UK doctors enter a generalised two - year, competency - based "foundation programme '', gaining full GMC (General Medical Council) registration at the end of foundation year one, and applying for specialist training (in medicine, surgery, general practice etc.) after foundation year two.
Many medical schools offer intercalated degree programmes to allow students to focus on an area of research outside their medical degree for a year.
Some medical schools offer graduate entry programmes, which are four years long. The name refers to the fact that students on these courses already have a degree in another subject (i.e. they are graduates). Because of the shorter length of the course, the timetable of these degrees is more intense and the holidays are shorter than on the 5 - year course. In terms of entrance requirements, the 4 - year degree restricts entry to those who already hold a first degree and have previously worked in an area of healthcare. At some medical schools the first degree need not be a BSc, whereas other medical schools specify that the prior degree must be in a science subject. Competition for these courses is fierce, with students having to sit an entrance exam before being considered for an interview.
Medical schools typically admit more students into undergraduate programmes than into graduate entry programmes.
A person accepted into a medical school and enrolled in an educational program in medicine, with the goal of becoming a medical doctor, is referred to as a medical student. Medical students are generally considered to be at the earliest stage of the medical career pathway. In some locations they are required to be registered with a government body.
Medical students typically engage in both basic science and practical clinical coursework during their tenure in medical school. Course structure and length vary greatly among countries (see above).
Upon completion of medical school in the United States, students transition into residency programs through the National Resident Match Program (NRMP). Each year, approximately 16,000 US medical school students participate in the residency match. An additional 18,000 independent applicants -- former graduates of U.S. medical schools, U.S. osteopathic medical schools, U.S. podiatry students, Canadian students, and graduates of foreign medical schools -- compete for the approximately 25,000 available residency positions.
Medical students, perhaps being vulnerable because of their relatively low status in health care settings, commonly experience verbal abuse, humiliation and harassment (nonsexual or sexual). Discrimination based on gender and race is less common.
A meta - analysis published in the American journal JAMA suggested depressive symptoms in 24 % to 29 % of all medical students and 25 % to 33 % of all resident physicians. Burnout in medical students also appears to be associated with an increased likelihood of subsequent suicidal ideation.
A US study estimated that approximately 14 % of medical students have symptoms of moderate to severe depression, and roughly 5 % have suicidal thoughts at some point during training. Internationally, depression and distress in medical school are widely studied and have gained more attention over the years. A recent study among German medical students at international universities found a significantly elevated risk of depressive symptoms, 2.4 times higher than in the average population; 23.5 % of these German medical students showed clinically relevant depressive symptoms. In a South Korean study, 40 % of medical students appeared to have depression. Medical students with more severe depression may also be less likely to seek treatment, largely for fear that faculty members would view them as unable to handle their responsibilities. Students who feel that they lack a social support system are 10 times more likely to be depressed than students who consider themselves to have good social support.
Approximately 10 % experience suicidal ideation during medical school.
Lemon and Stone hypothesised, in what has become termed the ' Lemon - Stone hypothesis ', that the prevalence of medical students from lower socioeconomic backgrounds increases during times of national economic adversity. Their hypothesis was a formulation of Becker and Maiman 's health belief model and adaptation theory. It has to some extent been supported by a series of surveys.
how did the symphony change throughout the classical era | Classical period (music) - wikipedia
The dates of the Classical period in Western music are generally accepted as being between about the year 1730 and the year 1820. However, the term classical music is often used in a colloquial sense as a synonym for Western art music which describes a variety of Western musical styles from the Middle Ages to the present, and especially from the seventeenth century to the nineteenth. This article is about the specific period in most of the 18th century to the early 19th century, though overlapping with the Baroque and Romantic periods.
The Classical period falls between the Baroque and the Romantic periods. Classical music has a lighter, clearer texture than Baroque music and is less complex. It is mainly homophonic, using a clear melody line over a subordinate chordal accompaniment, but counterpoint was by no means forgotten, especially later in the period. It also makes use of style galant which emphasized light elegance in place of the Baroque 's dignified seriousness and impressive grandeur. Variety and contrast within a piece became more pronounced than before and the orchestra increased in size, range, and power.
The harpsichord was replaced as the main keyboard instrument by the piano (or fortepiano). Unlike the harpsichord, which plucked strings with quills, pianos strike the strings with leather - covered hammers when the keys are pressed, which enables the performer to play louder or softer and play with more expression; in contrast, the force with which a performer plays the harpsichord keys does not change the sound. Instrumental music was considered important by Classical period composers. The main kinds of instrumental music were the sonata, trio, string quartet, symphony (performed by an orchestra) and the solo concerto, which featured a virtuoso solo performer playing a solo work for violin, piano, flute, or another instrument, accompanied by an orchestra. Vocal music, such as songs for a singer and piano (notably the work of Schubert), choral works, and opera (a staged dramatic work for singers and orchestra) were also important during this period.
The best - known composers from this period are Joseph Haydn, Wolfgang Amadeus Mozart, Ludwig van Beethoven, and Franz Schubert; other notable names include Luigi Boccherini, Muzio Clementi, Antonio Salieri, Leopold Mozart, Johann Christian Bach, Carl Philipp Emanuel Bach, and Christoph Willibald Gluck. Ludwig van Beethoven is regarded either as a Romantic composer or a Classical period composer who was part of the transition to the Romantic era. Franz Schubert is also a transitional figure, as were Johann Nepomuk Hummel, Luigi Cherubini, Gaspare Spontini, Gioachino Rossini, and Carl Maria von Weber. The period is sometimes referred to as the era of Viennese Classic or Classicism (German: Wiener Klassik), since Gluck, Mozart, Haydn, Salieri, and Beethoven all worked in Vienna and Schubert was born there.
In the middle of the 18th century, Europe began to move toward a new style in architecture, literature, and the arts, generally known as Classicism. This style sought to emulate the ideals of Classical antiquity, especially those of Classical Greece. Classical music was still tightly linked to aristocratic Court culture and supported by absolute monarchies. Classical music used formality and emphasis on order and hierarchy, and a "clearer '', "cleaner '' style that used clearer divisions between parts (notably a clear, single melody accompanied by chords), brighter contrasts and "tone colors '' (achieved by the use of dynamic changes and modulations to more keys). In contrast with the richly layered music of the Baroque era, Classical music moved towards simplicity rather than complexity. In addition, the typical size of orchestras began to increase, giving orchestras a more powerful sound.
The remarkable development of ideas in "natural philosophy '' had already established itself in the public consciousness. In particular, Newton 's physics was taken as a paradigm: structures should be well - founded in axioms and be both well - articulated and orderly. This taste for structural clarity began to affect music, which moved away from the layered polyphony of the Baroque period toward a style known as homophony, in which the melody is played over a subordinate harmony. This move meant that chords became a much more prevalent feature of music, even if they interrupted the melodic smoothness of a single part. As a result, the tonal structure of a piece of music became more audible.
The new style was also encouraged by changes in the economic order and social structure. As the 18th century progressed, the nobility became the primary patrons of instrumental music, while public taste increasingly preferred lighter comic operas. This led to changes in the way music was performed, the most crucial of which was the move to standard instrumental groups and the reduction in the importance of the continuo -- the rhythmic and harmonic groundwork of a piece of music, typically played by a keyboard (harpsichord or organ) and usually accompanied by a varied group of bass instruments, including cello, double bass, bass viol, and theorbo. One way to trace the decline of the continuo and its figured chords is to examine the disappearance of the term obbligato, meaning a mandatory instrumental part in a work of chamber music. In Baroque compositions, additional instruments could be added to the continuo group according to the group or leader 's preference; in Classical compositions, all parts were specifically noted, though not always notated, so the term "obbligato '' became redundant. By 1800, basso continuo was practically extinct, except for the occasional use of a pipe organ continuo part in a religious Mass in the early 1800s.
Economic changes also had the effect of altering the balance of availability and quality of musicians. While in the late Baroque, a major composer would have the entire musical resources of a town to draw on, the musical forces available at an aristocratic hunting lodge or small court were smaller and more fixed in their level of ability. This was a spur to having simpler parts for ensemble musicians to play, and in the case of a resident virtuoso group, a spur to writing spectacular, idiomatic parts for certain instruments, as in the case of the Mannheim orchestra, or virtuoso solo parts for particularly skilled violinists or flautists. In addition, the appetite by audiences for a continual supply of new music carried over from the Baroque. This meant that works had to be performable with, at best, one or two rehearsals. Indeed, even after 1790 Mozart writes about "the rehearsal '', with the implication that his concerts would have only one rehearsal.
Since polyphonic textures with interweaving melodic lines were no longer the main focus of music (excluding in the development section), a single melodic line with accompaniment became the main texture. In the Classical era, there was greater emphasis on notating that line for dynamics and phrasing. This contrasts with the Baroque era, when melodies were typically written with no dynamics, phrasing marks or ornaments, as it was assumed that the performer would improvise these elements on the spot. In the Classical era, it became more common for composers to indicate where they wanted performers to play ornaments such as trills or turns. The simplification of texture made such instrumental detail more important, and also made the use of characteristic rhythms, such as attention - getting opening fanfares, the funeral march rhythm, or the minuet genre, more important in establishing and unifying the tone of a single movement.
Forms such as the concerto and sonata were more heavily defined and given more specific rules, whereas the symphony was created in this period (its creation is popularly attributed to Joseph Haydn). The concerto grosso (a concerto for more than one musician), a very popular form in the Baroque era, began to be replaced by the solo concerto (a concerto featuring only one soloist accompanied by orchestra). Given that Classical concertos had a single soloist, composers began to place more importance on the soloist 's ability to show off virtuoso skills, with challenging, fast scale and arpeggio runs. There were, of course, some concerti grossi that remained, the most famous being Mozart 's Sinfonia Concertante for Violin and Viola in E flat Major.
Classical music has a lighter, clearer texture than Baroque music and is less complex. It is mainly homophonic -- a clear melody above a subordinate chordal accompaniment. Counterpoint was by no means forgotten, especially later in the period, and composers still used counterpoint in religious pieces, such as Masses. Classical music also makes use of style galant, which contrasted with the heavy strictures of the Baroque style. Galant style emphasized light elegance in place of the Baroque 's dignified seriousness and impressive grandeur.
Variety and contrast within a piece became more pronounced than before. Composers used a variety of keys, melodies, rhythms and dynamics. Classical pieces frequently utilize dynamic changes such as crescendo (an instruction to gradually get louder), diminuendo (an instruction to gradually get softer) and sforzando (a sudden strong, loud attack). Classical pieces had frequent changes of dynamics, mood and timbre, in contrast to Baroque music. Melodies tended to be shorter than those of Baroque music, with clear - cut phrases and distinct cadences. The orchestra increased in size and range; the harpsichord or pipe organ basso continuo role in orchestra gradually fell out of use between 1750 and 1800. As well, the woodwinds became a self - contained section, consisting of clarinets, oboes, flutes and bassoons. As a solo instrument, the harpsichord was replaced by the piano (or fortepiano, the first type of piano which was invented ca. 1700). Early piano music was light in texture, often with Alberti bass accompaniment, which used arpeggios in the left hand to state the harmonies. Over the Classical period, the pieces became richer, more sonorous and more powerful.
While vocal music such as comic opera was popular, great importance was given to instrumental music. The main kinds of instrumental music were the sonata, trio, string quartet, symphony, concerto (usually for a virtuoso solo instrument accompanied by orchestra), and light pieces such as serenades and divertimentos. Sonata form developed and became the most important form. It was used to build up the first movement of most large - scale works in symphonies and string quartets. Sonata form was also used in other movements and in single, standalone pieces such as overtures.
At first the new style took over Baroque forms -- the ternary da capo aria and the sinfonia and concerto -- but composed with simpler parts, more notated ornamentation, rather than the improvised ornaments that were common in the Baroque era, and more emphatic division of pieces into sections. However, over time, the new aesthetic caused radical changes in how pieces were put together, and the basic formal layouts changed. Composers from this period sought dramatic effects, striking melodies, and clearer textures. One of the big textural changes was a shift away from the complex, dense polyphonic style of the Baroque, in which multiple interweaving melodic lines were played simultaneously, and towards homophony, a lighter texture which uses a clear single melody line accompanied by chords.
The Italian composer Domenico Scarlatti was an important figure in the transition from Baroque to Classical style. His unique compositional style is strongly related to that of the early Classical period. He is best known for composing more than five hundred one - movement keyboard sonatas. In Spain, Antonio Soler also produced valuable keyboard sonatas, more varied in form than those of Scarlatti, with some pieces in three or four movements.
Baroque music generally uses many harmonic fantasies and polyphonic sections that focus less on the structure of the musical piece, with less emphasis on clear musical phrases. In the Classical period, the harmonies became simpler; however, the structure of the piece -- its phrases and small melodic or rhythmic motives -- became much more important than in the Baroque period.
Another important break with the past was the radical overhaul of opera by Christoph Willibald Gluck, who cut away a great deal of the layering and improvisational ornaments and focused on the points of modulation and transition. By making these moments where the harmony changes more of a focus, he enabled powerful dramatic shifts in the emotional color of the music. To highlight these transitions, he used changes in instrumentation (orchestration), melody, and mode. Among the most successful composers of his time, Gluck spawned many emulators, one of whom was Antonio Salieri. Their emphasis on accessibility brought huge successes in opera, and in other vocal music such as songs, oratorios, and choruses. These were considered the most important kinds of music for performance and hence enjoyed greatest public success.
The phase between the Baroque and the rise of the Classical, with its broad mixture of competing ideas and attempts to unify the different demands of taste, economics and "worldview '', goes by many names. It is sometimes called Galant, Rococo, or pre-Classical, or at other times early Classical. It is a period where some composers still working in the Baroque style flourish, though sometimes thought of as being more of the past than the present -- Bach, Handel, and Telemann all composed well beyond the point at which the homophonic style is clearly in the ascendant. Musical culture was caught at a crossroads: the masters of the older style had the technique, but the public hungered for the new. This is one of the reasons C.P.E. Bach was held in such high regard: he understood the older forms quite well and knew how to present them in new garb, with an enhanced variety of form.
By the late 1750s there were flourishing centers of the new style in Italy, Vienna, Mannheim, and Paris; dozens of symphonies were composed and there were bands of players associated with musical theatres. Opera or other vocal music accompanied by orchestra was the feature of most musical events, with concertos and symphonies (arising from the overture) serving as instrumental interludes and introductions for operas and church services. Over the course of the Classical period, symphonies and concertos developed and were presented independently of vocal music.
The "normal '' orchestra ensemble -- a body of strings supplemented by winds -- and movements of particular rhythmic character were established by the late 1750s in Vienna. However, the length and weight of pieces was still set with some Baroque characteristics: individual movements still focused on one "affect '' (musical mood) or had only one sharply contrasting middle section, and their length was not significantly greater than Baroque movements. There was not yet a clearly enunciated theory of how to compose in the new style. It was a moment ripe for a breakthrough.
Many consider this breakthrough to have been made by C.P.E. Bach, Gluck, and several others. Indeed, C.P.E. Bach and Gluck are often considered founders of the Classical style. The first great master of the style was the composer Joseph Haydn. In the late 1750s he began composing symphonies, and by 1761 he had composed a triptych (Morning, Noon, and Evening) solidly in the contemporary mode. As a vice-Kapellmeister and later Kapellmeister, his output expanded: he composed over forty symphonies in the 1760s alone. And while his fame grew, as his orchestra was expanded and his compositions were copied and disseminated, his voice was only one among many.
While some scholars suggest that Haydn was overshadowed by Mozart and Beethoven, it would be difficult to overstate Haydn 's centrality to the new style, and therefore to the future of Western art music as a whole. At the time, before the pre-eminence of Mozart or Beethoven, and with Johann Sebastian Bach known primarily to connoisseurs of keyboard music, Haydn reached a place in music that set him above all other composers except perhaps the Baroque era 's George Frideric Handel. Haydn took existing ideas, and radically altered how they functioned -- earning him the titles "father of the symphony '' and "father of the string quartet ''.
One of the forces that worked as an impetus for his pressing forward was the first stirring of what would later be called Romanticism -- the Sturm und Drang, or "storm and stress '' phase in the arts, a short period where obvious and dramatic emotionalism was a stylistic preference. Haydn accordingly wanted more dramatic contrast and more emotionally appealing melodies, with sharpened character and individuality in his pieces. This period faded away in music and literature: however, it influenced what came afterward and would eventually be a component of aesthetic taste in later decades.
The Farewell Symphony, No. 45 in F ♯ Minor, exemplifies Haydn 's integration of the differing demands of the new style, with surprising sharp turns and a long slow adagio to end the work. In 1772, Haydn completed his Opus 20 set of six string quartets, in which he deployed the polyphonic techniques he had gathered from the previous Baroque era to provide structural coherence capable of holding together his melodic ideas. For some, this marks the beginning of the "mature '' Classical style, in which the period of reaction against late Baroque complexity yielded to a period of integration of Baroque and Classical elements.
Haydn, having worked for over a decade as the music director for a prince, had far more resources and scope for composing than most other composers. His position also gave him the ability to shape the forces that would play his music, as he could select skilled musicians. This opportunity was not wasted, as Haydn, beginning quite early on his career, sought to press forward the technique of building and developing ideas in his music. His next important breakthrough was in the Opus 33 string quartets (1781), in which the melodic and the harmonic roles segue among the instruments: it is often momentarily unclear what is melody and what is harmony. This changes the way the ensemble works its way between dramatic moments of transition and climactic sections: the music flows smoothly and without obvious interruption. He then took this integrated style and began applying it to orchestral and vocal music.
Haydn 's gift to music was a way of composing, a way of structuring works, which was at the same time in accord with the governing aesthetic of the new style. However, a younger contemporary, Wolfgang Amadeus Mozart, brought his genius to Haydn 's ideas and applied them to two of the major genres of the day: opera, and the virtuoso concerto. Whereas Haydn spent much of his working life as a court composer, Mozart wanted public success in the concert life of cities, playing for the general public. This meant he needed to write operas and write and perform virtuoso pieces. Haydn was not a virtuoso at the international touring level; nor was he seeking to create operatic works that could play for many nights in front of a large audience. Mozart wanted to achieve both. Moreover, Mozart also had a taste for more chromatic chords (and greater contrasts in harmonic language generally), a greater love for creating a welter of melodies in a single work, and a more Italianate sensibility in music as a whole. He found, in Haydn 's music and later in his study of the polyphony of J.S. Bach, the means to discipline and enrich his artistic gifts.
Mozart rapidly came to the attention of Haydn, who hailed the new composer, studied his works, and considered the younger man his only true peer in music. In Mozart, Haydn found a greater range of instrumentation, dramatic effect and melodic resource. The learning relationship moved in both directions. Mozart also had a great respect for the older, more experienced composer, and sought to learn from him.
Mozart 's arrival in Vienna in 1781 brought an acceleration in the development of the Classical style. There, Mozart absorbed the fusion of Italianate brilliance and Germanic cohesiveness that had been brewing for the previous 20 years. His own taste for flashy brilliances, rhythmically complex melodies and figures, long cantilena melodies, and virtuoso flourishes was merged with an appreciation for formal coherence and internal connectedness. It is at this point that war and economic inflation halted a trend to larger orchestras and forced the disbanding or reduction of many theater orchestras. This pressed the Classical style inwards: toward seeking greater ensemble and technical challenges -- for example, scattering the melody across woodwinds, or using a melody harmonized in thirds. This process placed a premium on small ensemble music, called chamber music. It also led to a trend for more public performance, giving a further boost to the string quartet and other small ensemble groupings.
It was during this decade that public taste began, increasingly, to recognize that Haydn and Mozart had reached a high standard of composition. By the time Mozart arrived at age 25, in 1781, the dominant styles of Vienna were recognizably connected to the emergence in the 1750s of the early Classical style. By the end of the 1780s, changes in performance practice, the relative standing of instrumental and vocal music, technical demands on musicians, and stylistic unity had become established in the composers who imitated Mozart and Haydn. During this decade Mozart composed his most famous operas, his six late symphonies that helped to redefine the genre, and a string of piano concerti that still stand at the pinnacle of these forms.
One composer who was influential in spreading the more serious style that Mozart and Haydn had formed is Muzio Clementi, a gifted virtuoso pianist who tied with Mozart in a musical "duel '' before the emperor, in which they each improvised on the piano and performed their compositions. Clementi 's sonatas for the piano circulated widely, and he became the most successful composer in London during the 1780s. Also in London at this time was Jan Ladislav Dussek, who, like Clementi, encouraged piano makers to extend the range and other features of their instruments, and then fully exploited the newly opened up possibilities. The importance of London in the Classical period is often overlooked, but it served as the home to Broadwood 's factory for piano manufacturing and as the base for composers who, while less notable than the "Vienna School '', had a decisive influence on what came later. They were composers of many fine works, notable in their own right. London 's taste for virtuosity may well have encouraged the complex passage work and extended statements on tonic and dominant.
When Haydn and Mozart began composing, symphonies were played as single movements -- before, between, or as interludes within other works -- and many of them lasted only ten or twelve minutes; instrumental groups had varying standards of playing, and the continuo was a central part of music - making.
In the intervening years, the social world of music had seen dramatic changes. International publication and touring had grown explosively, and concert societies formed. Notation became more specific, more descriptive -- and schematics for works had been simplified (yet became more varied in their exact working out). In 1790, just before Mozart 's death, with his reputation spreading rapidly, Haydn was poised for a series of successes, notably his late oratorios and "London '' symphonies. Composers in Paris, Rome, and all over Germany turned to Haydn and Mozart for their ideas on form.
The time was again ripe for a dramatic shift. In the 1790s, a new generation of composers, born around 1770, emerged. While they had grown up with the earlier styles, they heard in the recent works of Haydn and Mozart a vehicle for greater expression. In 1788 Luigi Cherubini settled in Paris and in 1791 composed Lodoiska, an opera that raised him to fame. Its style is clearly reflective of the mature Haydn and Mozart, and its instrumentation gave it a weight that had not yet been felt in the grand opera. His contemporary Étienne Méhul extended instrumental effects with his 1790 opera Euphrosine et Coradin, from which followed a series of successes. The final push towards change came from Gaspare Spontini, who was deeply admired by future romantic composers such as Weber, Berlioz and Wagner. The innovative harmonic language of his operas, their refined instrumentation and their "enchained '' closed numbers (a structural pattern which was later adopted by Weber in Euryanthe and from him handed down, through Marschner, to Wagner), formed the basis from which French and German romantic opera had its beginnings.
The most fateful of the new generation was Ludwig van Beethoven, who launched his numbered works in 1794 with a set of three piano trios, which remain in the repertoire. Somewhat younger than the others, though equally accomplished because of his youthful study under Mozart and his native virtuosity, was Johann Nepomuk Hummel. Hummel studied under Haydn as well; he was a friend to Beethoven and Franz Schubert. He concentrated more on the piano than any other instrument, and his time in London in 1791 and 1792 generated the composition and publication in 1793 of three piano sonatas, opus 2, which idiomatically used Mozart 's techniques of avoiding the expected cadence, and Clementi 's sometimes modally uncertain virtuoso figuration. Taken together, these composers can be seen as the vanguard of a broad change in style and the center of music. They studied one another 's works, copied one another 's gestures in music, and on occasion behaved like quarrelsome rivals.
The crucial differences with the previous wave can be seen in the downward shift in melodies, increasing durations of movements, the acceptance of Mozart and Haydn as paradigmatic, the greater use of keyboard resources, the shift from "vocal '' writing to "pianistic '' writing, the growing pull of the minor and of modal ambiguity, and the increasing importance of varying accompanying figures to bring "texture '' forward as an element in music. In short, the late Classical was seeking a music that was internally more complex. The growth of concert societies and amateur orchestras, marking the importance of music as part of middle - class life, contributed to a booming market for pianos, piano music, and virtuosi to serve as exemplars. Hummel, Beethoven, and Clementi were all renowned for their improvising.
Direct influence of the Baroque continued to fade: the figured bass grew less prominent as a means of holding performance together, and the performance practices of the mid-18th century continued to die out. However, at the same time, complete editions of Baroque masters began to become available, and the influence of Baroque style continued to grow, particularly in the ever more expansive use of brass. Another feature of the period is the growing number of performances where the composer was not present. This led to increased detail and specificity in notation; for example, there were fewer "optional '' parts that stood separately from the main score.
The force of these shifts became apparent with Beethoven 's 3rd Symphony, given the name Eroica, which is Italian for "heroic '', by the composer. As with Stravinsky 's The Rite of Spring, it may not have been the first in all of its innovations, but its aggressive use of every part of the Classical style set it apart from its contemporary works: in length, ambition, and harmonic resources as well.
The First Viennese School is a name mostly used to refer to three composers of the Classical period in late - 18th - century Vienna: Haydn, Mozart, and Beethoven. Franz Schubert is occasionally added to the list.
In German - speaking countries, the term Wiener Klassik (lit. Viennese classical era / art) is used. That term is often more broadly applied to the Classical era in music as a whole, as a means to distinguish it from other periods that are colloquially referred to as classical, namely Baroque and Romantic music.
The term "Viennese School '' was first used by Austrian musicologist Raphael Georg Kiesewetter in 1834, although he only counted Haydn and Mozart as members of the school. Other writers followed suit, and eventually Beethoven was added to the list. The designation "first '' is added today to avoid confusion with the Second Viennese School.
Whilst, Schubert apart, these composers certainly knew each other (with Haydn and Mozart even being occasional chamber - music partners), there is no sense in which they were engaged in a collaborative effort in the sense that one would associate with 20th - century schools such as the Second Viennese School, or Les Six. Nor is there any significant sense in which one composer was "schooled '' by another (in the way that Berg and Webern were taught by Schoenberg), though it is true that Beethoven for a time received lessons from Haydn.
Attempts to extend the First Viennese School to include such later figures as Anton Bruckner, Johannes Brahms, and Gustav Mahler are merely journalistic, and never encountered in academic musicology.
Musical eras and their prevalent styles, forms and instruments seldom disappear at once; instead, features are replaced over time, until the old approach is simply felt as "old - fashioned ''. The Classical style did not "die '' suddenly; rather, it was gradually phased out under the weight of changes. To give just one example, while it is generally stated that the Classical era stopped using the harpsichord in orchestras, this did not happen all of a sudden at the start of the Classical era in 1750. Rather, orchestras slowly stopped using the harpsichord to play basso continuo until the practice was discontinued by the end of the 1700s.
One crucial change was the shift towards harmonies centering on "flatward '' keys: shifts in the subdominant direction. In the Classical style, major key was far more common than minor, chromaticism being moderated through the use of "sharpward '' modulation (e.g., a piece in C major modulating to G major, D major, or A major, all of which are keys with more sharps). As well, sections in the minor mode were often used for contrast. Beginning with Mozart and Clementi, there began a creeping colonization of the subdominant region (the ii or IV chord, which in the key of C major would be the keys of d minor or F major). With Schubert, subdominant modulations flourished after being introduced in contexts in which earlier composers would have confined themselves to dominant shifts (modulations to the dominant chord, e.g., in the key of C major, modulating to G major). This introduced darker colors to music, strengthened the minor mode, and made structure harder to maintain. Beethoven contributed to this by his increasing use of the fourth as a consonance, and modal ambiguity -- for example, the opening of the Symphony No. 9 in D minor.
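To make the "sharpward '' and "flatward '' vocabulary concrete, the circle of fifths can be read as a number line: each step toward the dominant adds one sharp to the key signature, and each step toward the subdominant adds one flat. The sketch below is only an illustration of that bookkeeping, not taken from the source; the CIRCLE list and the accidentals helper are names invented here.

```python
# Circle of fifths for major keys, from 7 flats (Cb) to 7 sharps (C#).
CIRCLE = ["Cb", "Gb", "Db", "Ab", "Eb", "Bb", "F",
          "C", "G", "D", "A", "E", "B", "F#", "C#"]

def accidentals(major_key: str) -> int:
    """Signed count of accidentals in a major key: positive = sharps, negative = flats."""
    return CIRCLE.index(major_key) - CIRCLE.index("C")

# Modulations named in the text: sharpward (G, D, A) versus flatward (F).
for key in ["G", "D", "A", "F"]:
    delta = accidentals(key)
    direction = "sharpward" if delta > 0 else "flatward"
    print(f"C major -> {key} major: {delta:+d} ({direction})")
```

Run as written, this prints +1, +2, and +3 for G, D, and A -- the "sharpward '' modulations described above -- and -1 for F, the subdominant, "flatward '' direction that Mozart, Clementi, and later Schubert increasingly explored.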
Franz Schubert, Carl Maria von Weber, and John Field are among the most prominent in this generation of "Proto - Romantics '', along with the young Felix Mendelssohn. Their sense of form was strongly influenced by the Classical style. While they were not yet "learned '' composers (imitating rules which were codified by others), they directly responded to works by Beethoven, Mozart, Clementi, and others, as they encountered them. The instrumental forces at their disposal in orchestras were also quite "Classical '' in number and variety, permitting similarity with Classical works.
However, the forces destined to end the hold of the Classical style gathered strength in the works of many of the above composers, particularly Beethoven. The most commonly cited one is harmonic innovation. Also important is the increasing focus on having a continuous and rhythmically uniform accompanying figuration: Beethoven 's Moonlight Sonata was the model for hundreds of later pieces -- where the shifting movement of a rhythmic figure provides much of the drama and interest of the work, while a melody drifts above it. Greater knowledge of works, greater instrumental expertise, increasing variety of instruments, the growth of concert societies, and the unstoppable domination of the increasingly more powerful piano (which was given a bolder, louder tone by technological developments such as the use of steel strings, heavy cast - iron frames and sympathetically vibrating strings) all created a huge audience for sophisticated music. All of these trends contributed to the shift to the "Romantic '' style.
Drawing the line between these two styles is very difficult: some sections of Mozart 's later works, taken alone, are indistinguishable in harmony and orchestration from music written 80 years later -- and some composers continue to write in normative Classical styles into the early 20th century. Even before Beethoven 's death, composers such as Louis Spohr were self - described Romantics, incorporating, for example, more extravagant chromaticism in their works (e.g., using chromatic harmonies in a piece 's chord progression). Conversely, works such as Schubert 's Symphony No. 5, written during the chronological dawn of the Romantic era, exhibit a deliberately anachronistic artistic paradigm, harking back to the compositional style of several decades before.
However, Vienna 's fall as the most important musical center for orchestral composition during the late 1820s, precipitated by the deaths of Beethoven and Schubert, marked the Classical style 's final eclipse -- and the end of its continuous organic development of one composer learning in close proximity to others. Franz Liszt and Frédéric Chopin visited Vienna when they were young, but they then moved on to other cities. Composers such as Carl Czerny, while deeply influenced by Beethoven, also searched for new ideas and new forms to contain the larger world of musical expression and performance in which they lived.
Renewed interest in the formal balance and restraint of 18th century classical music led in the early 20th century to the development of so - called Neoclassical style, which numbered Stravinsky and Prokofiev among its proponents, at least at certain times in their careers.
The Baroque guitar, with four or five sets of double strings or "courses '' and elaborately decorated soundhole, was a very different instrument from the early classical guitar, which more closely resembles the modern instrument with the standard six strings. Judging by the number of instructional manuals published for the instrument -- over three hundred texts were published by over two hundred authors between 1760 and 1860 -- the classical period marked a golden age for the guitar.
In the Baroque era, there was more variety in the bowed stringed instruments used in ensembles, with instruments such as the viola d'amore and a range of fretted viols being used, ranging from small viols to large bass viols. In the Classical period, the string section of the orchestra was standardized as just four instruments: the violin (divided into first and second parts), the viola, the cello, and the double bass.
In the Baroque era, the double bass players were not usually given a separate part; instead, they typically played the same basso continuo bassline that the cellos and other low - pitched instruments (e.g., theorbo, serpent wind instrument, viols) played, albeit an octave below the cellos, because the double bass is a transposing instrument that sounds one octave lower than it is written. In the Classical era, some composers continued to write only one bass part for their symphony, labeled "bassi ''; this bass part was played by cellists and double bassists. Even though the cellos and basses played from the same music, the basses were one octave below the cellos. During the Classical era, some composers began to give the double basses their own part which was different from the cello part.
|
spirit stallion of the cimarron full movie 2 | Spirit: Stallion of the Cimarron - Wikipedia
Spirit: Stallion of the Cimarron is a 2002 American animated western - drama film, produced by DreamWorks Animation and distributed by DreamWorks Pictures. The film was written by John Fusco and directed by Kelly Asbury and Lorna Cook, the latter in her directorial debut, and was nominated for the Academy Award for Best Animated Feature. In contrast to the way animals are portrayed in an anthropomorphic style in other animated features, Spirit and his fellow horses communicate with each other through sounds and body language. Spirit 's thoughts are narrated by his voice actor Matt Damon, but otherwise, he has no dialogue. Spirit: Stallion of the Cimarron was released in theaters on May 24, 2002, and earned $122.6 million on an $80 million budget.
In the 19th - century American West, a young Kiger Mustang colt, Spirit, is born to a herd of horses. Spirit soon grows into a stallion and assumes the role of leader of the herd, whose duty is to keep the herd safe. Spirit is a courageous leader but has great curiosity. Spotting a strange light one night not far from his herd, the stallion is intrigued and investigates. He finds restrained, docile horses and their human wranglers sleeping around a campfire. They wake up, and seeing him as a magnificent specimen, chase and capture him, taking him to a US cavalry post.
At this time, the US army is fighting the Indian Wars and taking over the soon - to - be western United States. Frightened and confused, Spirit finds that he and the other horses are held captive. Then, he encounters "The Colonel '', who decides to have the mustang tamed, refusing to believe the idea of Spirit being too stubborn, but Spirit manages to fight off all attempts to tame him. To weaken Spirit, The Colonel orders him tied to a post for three days with no food or water. Meanwhile, a Lakota Native American named Little Creek is also brought into the fort and held captive. Spirit is later supposedly broken in by the Colonel, who proclaims that any wild horse can be tamed. However, Spirit gets a second wind and finally throws him off. The Colonel gets frustrated and tries to shoot him, but Little Creek (who frees himself from his bonds with a knife) saves Spirit from being shot. The two of them, along with other horses, escape the post. Little Creek 's mare, Rain, meets them along with other natives who capture Spirit.
After returning to the Lakota village, Little Creek tries to tame Spirit with kindness, but Spirit refuses to be ridden. Little Creek ties Spirit and Rain together and, when he tries to leave, she insists on staying, then shows him her world. Spirit begins to warm up to Little Creek and falls in love with the mare. At the end of their time together, Little Creek tries again to ride him, but Spirit is still unwilling. He then decides that Spirit will never be tamed and frees him. As Spirit asks Rain to come with him to his herd, a cavalry regiment led by the Colonel attacks the village. During the vicious battle, the Colonel tries to shoot Little Creek, but Spirit runs into the Colonel and his horse, deflecting the shot and saving Little Creek 's life. However, Rain is shot by the Colonel, knocking her into the river. Spirit dives into the river to try to rescue Rain but is unsuccessful and they both plummet over a waterfall. Spirit finds Rain dying from her injuries and stays by her side until the army captures him. Watching Spirit being pulled away, Little Creek arrives, vowing to free him to satisfy his life - debt and follows the men after tending to Rain.
Spirit is loaded onto a train and taken to a work site on the Transcontinental Railroad, where he is put to work pulling a steam locomotive. Realizing that the track will infringe on his homeland, Spirit breaks free from the sledge and breaks the chains holding the other horses. They escape, and the locomotive falls off its sledge and rolls down the hill back to the work site, causing an explosion. Little Creek appears in time and saves Spirit from the ensuing wildfire.
The next morning, the Colonel and his men find Spirit and Little Creek, and a chase ensues through the Grand Canyon. Eventually, they are trapped by a gorge. Little Creek gives up, but Spirit manages to successfully leap across the canyon. Spirit 's move amazes the Colonel; he humbly accepts defeat, stops his men from shooting the two, and allows Spirit and Little Creek to leave. Spirit returns to the rebuilt Lakota village with Little Creek and finds Rain nursed back to health. Little Creek decides to name Spirit the "Spirit - Who - Could - Not - Be-Broken '' and sets him and Rain free. The two horses return to Spirit 's homeland, eventually reuniting with Spirit 's herd.
Writer John Fusco, best known for his work in the Western and Native American genres (such as the films Young Guns and Young Guns II), was hired by DreamWorks to create an original screenplay based on an idea by Jeffrey Katzenberg. Fusco began by writing and submitting a novel to the studio and then adapted his own work into screenplay format. He remained on the project as the main writer over the course of four years, working closely with Katzenberg, the directors, and artists.
Spirit: Stallion of the Cimarron was made over the course of four years using a conscious blend of traditional hand - drawn animation and computer animation in a technique the film 's creators dubbed "tradigital animation. '' DreamWorks SKG purchased a horse as the model for Spirit and brought the horse to the animation studio in Glendale, California for the animators to study. In the sound department, recordings of real horses were used for the sounds of the many horse characters ' hoof beats as well as their vocalizations. None of the animal characters in the film speak English beyond occasional reflective narration from the protagonist mustang (voice of Matt Damon). Many of the animators who worked on Spirit also worked on Shrek 2, and their influence can be seen in the horses in that film, such as Prince Charming 's horse from the opening sequence and Donkey 's horse form. Makers of the film took a trip to the western United States to view scenic places that they could use as inspiration for locations in the film. The homeland of the mustangs and Lakotas is based on Glacier National Park, Yellowstone National Park, Yosemite National Park, and the Grand Teton mountain range. The cavalry outpost appears to be located at Monument Valley. The canyon of the climax looks like Bryce Canyon and the Grand Canyon.
The instrumental score was composed by Hans Zimmer with songs by Bryan Adams in both the English and French versions of the album. The opening theme song for the film, "Here I Am '' was written by Bryan Adams, Gretchen Peters, and Hans Zimmer. It was produced by Jimmy Jam and Terry Lewis. Another song, not included in the film itself (although it can be heard in the ending credits), is "Do n't Let Go '', which is sung by Bryan Adams with Sarah McLachlan on harmonies and piano. It was written by Bryan Adams, Gavin Greenaway, Robert John "Mutt '' Lange, and Gretchen Peters. Many of the songs and arrangements were set in the American West, with themes based on love, landscapes, brotherhood, struggles, and journeys. Garth Brooks was originally supposed to write and record songs for the film but the deal fell through. The Italian versions of the songs were sung by Zucchero. The Spanish versions of the tracks on the album were sung by Erik Rubín (Hispanic America) and Raúl (Spain). The Brazilian version of the movie soundtrack was sung in Portuguese by Paulo Ricardo. The Norwegian versions of the songs were sung by Vegard Ylvisåker of the Norwegian comedy duo Ylvis.
Based on 126 reviews collected by review aggregator Rotten Tomatoes, Spirit: Stallion of the Cimarron has an overall approval rating of 69 % and a weighted average score of 6.4 / 10. The site 's critical consensus reads: "A visually stunning film that may be too predictable and politically correct for adults, but should serve children well. '' Review aggregator Metacritic gave the film a score of 52 based on 29 reviews, indicating "mixed or average reviews ''. Critic Roger Ebert said in his review of the film, "Uncluttered by comic supporting characters and cute sidekicks, Spirit is more pure and direct than most of the stories we see in animation -- a fable I suspect younger viewers will strongly identify with. '' Leonard Maltin of Hot Ticket called it "one of the most beautiful and exciting animated features ever made ''. Clay Smith of Access Hollywood considered the film "An Instant Classic ''. Jason Solomons described the film as "a crudely drawn DreamWorks animation about a horse that saves the West by bucking a US Army General ''. USA Today 's Claudia Puig gave it 3 stars out of 4, writing that the filmmakers ' "most significant achievement is fashioning a movie that will touch the hearts of both children and adults, as well as bring audiences to the edge of their seats. '' Dave Kehr of the New York Times criticized the way in which the film portrayed Spirit and Little Creek as "pure cliches '' and suggested that the film could have benefited from a comic relief character. The film was screened out of competition at the 2002 Cannes Film Festival. Rain received an honorary registration certificate from the American Paint Horse Association (APHA), which has registered more than 670,000 American Paint Horses to date. She is the first animated horse to be registered by this organization.
When the film opened on Memorial Day Weekend 2002, the film earned $17,770,036 on the Friday - Sunday period, and $23,213,736 through the four - day weekend for a $6,998 average from 3,317 theaters. The film overall opened in fourth place behind Star Wars: Episode II -- Attack of the Clones, Spider - Man, and Insomnia. In its second weekend, the film retreated 36 % to $11,303,814 for a $3,362 average from expanding to 3,362 theaters and finishing in fifth place for the weekend. In its third weekend, the film decreased 18 % to $9,303,808 for a $2,767 average from 3,362 theaters. The film closed on September 12, 2002, after earning $73,280,117 in the United States and Canada with an additional $49,283,422 overseas for a worldwide total of $122,563,539, against an $80 million budget.
Spirit: Stallion of the Cimarron was released on VHS and DVD on November 19, 2002. It was re-released on DVD on May 18, 2010. The film was released on Blu - ray in Germany on April 3, 2014, in Australia on April 4 and in the United States and Canada on May 13, 2014. The film was re-issued by 20th Century Fox Home Entertainment on Blu - ray and DVD on October 19, 2014. It includes a movie ticket to Penguins of Madagascar.
Two video games based on the film were released on October 28, 2002, by THQ: the PC game Spirit: Stallion of the Cimarron -- Forever Free and the Game Boy Advance game Spirit: Stallion of the Cimarron -- Search For Homeland.
A computer - animated television series based on the film, titled Spirit Riding Free, premiered on Netflix on May 5, 2017. The series follows the daring adventures that unfold when Spirit, the offspring of the original, meets a girl named Lucky, whose courage matches his own.
|
who is long term prime minister of india | List of Prime Ministers of India by longevity - wikipedia
This is a list of Indian Prime Ministers by longevity. Where the person in question is still living, the longevity is calculated up to 6 January 2018.
Two measures of the longevity are given - this is to allow for the differing number of leap days occurring within the life of each Prime Minister. The first column is the number of days between date of birth and date of death, allowing for leap days; the second column breaks this number down into years and days, with the years being the number of whole years the Prime Minister lived, and the days being the remaining number of days after his / her last birthday.
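As a sketch of how these two measures can be computed (an illustration, not part of the source article; the function name and example dates are chosen here), Python 's datetime arithmetic counts leap days automatically for the first measure, and the second follows from the last birthday:

```python
from datetime import date

def longevity(born: date, until: date) -> tuple[int, int, int]:
    """Return (total_days, whole_years, remaining_days) between two dates.

    Subtracting dates counts leap days automatically. Whole years are the
    years lived through the last birthday; remaining days come after it.
    (A 29 February birthday would need special handling, omitted here.)
    """
    total_days = (until - born).days
    years = until.year - born.year
    if (until.month, until.day) < (born.month, born.day):
        years -= 1  # the birthday has not yet occurred in the final year
    last_birthday = born.replace(year=born.year + years)
    return total_days, years, (until - last_birthday).days

# Example: a Prime Minister born 17 September 1950, measured to 6 January 2018.
print(longevity(date(1950, 9, 17), date(2018, 1, 6)))  # (24583, 67, 111)
```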
If a Prime Minister served more than one non-consecutive term, the dates listed below are for the beginning of their first term, and the end of their final term.
The median age at which a Prime Minister first takes office is roughly 64 years and 8 months, which falls between Gulzarilal Nanda and Chandra Shekhar. The youngest person to become Prime Minister was Rajiv Gandhi, who became Prime Minister at the age of 40 years, 72 days. The oldest person to become Prime Minister was Morarji Desai, who became Prime Minister at the age of 81 years, 23 days.
The oldest living Prime Minister is Atal Bihari Vajpayee, born 25 December 1924 (aged 93 years, 12 days). The youngest living Prime Minister is the incumbent Narendra Modi, born 17 September 1950 (aged 67 years, 111 days).
The longest lived Prime Minister was Gulzarilal Nanda, who lived to the age of 99 years, 195 days. Morarji Desai was the second - longest lived Prime Minister, and the longest lived elected Prime Minister (Nanda, a cabinet minister, served as acting Prime Minister when both Jawaharlal Nehru and Lal Bahadur Shastri died in office). Desai lived to the age of 99 years, 41 days, only 154 days short of matching Nanda. The shortest lived Prime Minister was Rajiv Gandhi, who was assassinated at the age of 46 years, 274 days.
Narendra Modi (17 September 1950) is the first Prime Minister of India to be born after the Independence of India. All other former Prime Ministers were born before the Independence of India. Before him, Rajiv Gandhi was the latest - born Prime Minister (20 August 1944), but he was assassinated in 1991.
|
rock n roll is king electric light orchestra | Rock ' n ' Roll Is King - Wikipedia
"Rock ' n ' Roll Is King '' is a song written and performed by Electric Light Orchestra (ELO) released as a single from the 1983 album Secret Messages. With this song the band returned to their rock roots. It features a violin solo by Mik Kaminski.
The song went through many changes during recording and at one point was going to be called "Motor Factory '' with a completely different set of lyrics. The single proved to be ELO 's last UK top twenty hit single, and reached no. 19 in the US in August 1983.
"I sang on quite a few tracks; I sang on Rock ' N ' Roll Is King. I played on that one, but it was n't called that, it was something about something about working at Austin Longbridge! It was full of car plant sounds, you could hear it going clank, clank, clank, like somebody hitting a lathe with a hammer, and Jeff went away and made it into Rock ' n ' Roll Is King, wiped off everything we 'd done, no, there was still some backing left in there, It was much better how he finished it off than it was before. '' Dave Morgan (4 March 1999 -- King Of The Universe # 8)
|
air movement within a high pressure system generally sinks | High - pressure area - wikipedia
A high - pressure area, high or anticyclone is a region where the atmospheric pressure at the surface of the planet is greater than its surrounding environment.
Winds within high - pressure areas flow outward from the higher pressure areas near their centers towards the lower pressure areas further from their centers. Gravity adds to the forces causing this general movement, because the higher pressure compresses the column of air near the center of the area into greater density -- and so greater weight compared to lower pressure, lower density, and lower weight of the air outside the center.
However, because the planet is rotating underneath the atmosphere, and frictional forces arise as the planetary surface drags some atmosphere with it, the air flow from center to periphery is not direct, but is twisted due to the Coriolis effect, or the merely apparent force that arises when the observer is in a rotating frame of reference. Viewed from above, this twist in wind direction is in the opposite direction to the rotation of the planet.
The strongest high - pressure areas are associated with cold air masses which push away out of polar regions during the winter when there is less sun to warm neighboring regions. These Highs change character and weaken once they move further over relatively warmer water bodies.
Somewhat weaker but more common are high - pressure areas caused by atmospheric subsidence, that is, areas where large masses of cooler drier air descend from an elevation of 8 to 15 km after the lower temperatures have precipitated out the water vapor.
Many of the features of Highs may be understood in context of middle - or meso - scale and relatively enduring dynamics of a planet 's atmospheric circulation. For example, massive atmospheric subsidences occur as part of the descending branches of Ferrel cells and Hadley cells. Hadley cells help form the subtropical ridge, which steers tropical waves and tropical cyclones across the ocean and is strongest during the summer. The subtropical ridge also helps form most of the world 's deserts.
On English - language weather maps, high - pressure centers are identified by the letter H. Weather maps in other languages may use different letters or symbols.
The direction of wind flow around an atmospheric high - pressure area and a low - pressure area, as seen from above, depends on the hemisphere. High - pressure systems rotate clockwise in the northern hemisphere and counterclockwise in the southern hemisphere; low - pressure systems rotate in the opposite sense in each hemisphere.
The scientific terms in English used to describe the weather systems generated by highs and lows were introduced in the mid-1800s, mostly by the British. The scientific theories which explain the general phenomena originated about two centuries earlier.
The term Cyclone was coined by Henry Piddington of the British East India Company to describe the devastating storm of December 1789 in Coringa, India. A cyclone forms around a low - pressure area. Anticyclone, the term for the kind of weather around a high - pressure area, was coined in 1877 by Francis Galton to indicate an area whose winds revolved in the opposite direction of a cyclone. In British English, the opposite direction of clockwise is referred to as anticlockwise, making the label anticyclones a logical extension.
A simple rule is that for high - pressure areas, where generally air flows from the center outward, the Coriolis force imparted by the Earth 's rotation to the air circulation is in the opposite direction of the Earth 's apparent rotation as viewed from above the hemisphere 's pole. So, both the Earth and the winds around a low - pressure area rotate counter-clockwise in the northern hemisphere, and clockwise in the southern. The opposite occurs in the case of a high. These results derive from the Coriolis effect; that article explains the physics in detail, and provides an animation of a model to aid understanding.
High - pressure systems form due to downward motion through the troposphere, the atmospheric layer where weather occurs. Preferred areas within a synoptic flow pattern in higher levels of the troposphere are beneath the western side of troughs.
On weather maps, these areas show converging winds (isotachs), also known as confluence, near or above the level of non-divergence, which is near the 500 hPa pressure surface, about midway up through the troposphere, where the pressure is about half the atmospheric pressure at the surface.
High - pressure systems are alternatively referred to as anticyclones. On English - language weather maps, high - pressure centers are identified by the letter H in English, within the isobar with the highest pressure value. On constant pressure upper level charts, it is located within the highest height line contour.
Highs are frequently associated with light winds at the surface and subsidence through the lower portion of the troposphere. In general, subsidence will dry out an air mass by adiabatic, or compressional, heating. Thus, high pressure typically brings clear skies. During the day, since no clouds are present to reflect sunlight, there is more incoming shortwave solar radiation and temperatures rise. At night, the absence of clouds means that outgoing longwave radiation (i.e. heat energy from the surface) is not absorbed, giving cooler diurnal low temperatures in all seasons. When surface winds become light, the subsidence produced directly under a high - pressure system can lead to a buildup of particulates in urban areas under the ridge, leading to widespread haze. If the low level relative humidity rises towards 100 percent overnight, fog can form.
Strong, vertically shallow high - pressure systems moving from higher latitudes to lower latitudes in the northern hemisphere are associated with continental arctic air masses. Once arctic air moves over an unfrozen ocean, the air mass modifies greatly over the warmer water and takes on the character of a maritime air mass, which reduces the strength of the high - pressure system. When extremely cold air moves over relatively warm oceans, polar lows can develop. However, warm and moist (or maritime tropical) air masses that move poleward from tropical sources are slower to modify than arctic air masses.
In terms of climatology, high pressure forms at the horse latitudes, between the latitudes of 20 and 40 degrees from the equator, as a result of air that has been uplifted at the equator. As the hot air rises it cools, losing moisture; it is then transported poleward, where it descends, creating the high - pressure area. This is part of the Hadley cell circulation and is known as the subtropical ridge or subtropical high, and is strongest in the summer. The subtropical ridge is a warm core high - pressure system, meaning it strengthens with height. Many of the world 's deserts are caused by these climatological high - pressure systems.
Some climatological high - pressure areas acquire regionally based names. The land - based Siberian High often remains quasi-stationary for more than a month during the most frigid time of the year, making it unique in that regard. It is also a bit larger and more persistent than its counterpart in North America. Surface winds accelerating down valleys along the western Pacific Ocean coastline produce the winter monsoon. Arctic high - pressure systems such as the Siberian High are cold core, meaning that they weaken with height. The influence of the Azores High, also known as the Bermuda High, brings fair weather over much of the North Atlantic Ocean and mid to late summer heat waves in western Europe. Along its southerly periphery, the clockwise circulation often impels easterly waves, and tropical cyclones that develop from them, across the ocean towards landmasses in the western portion of ocean basins during the hurricane season. The highest barometric pressure ever recorded on Earth was 1,085.7 hectopascals (32.06 inHg), measured in Tosontsengel, Mongolia on 19 December 2001.
Wind flows from areas of high pressure to areas of low pressure. This is due to density differences between the two air masses. Since stronger high - pressure systems contain cooler or drier air, the air mass is more dense and flows towards areas that are warm or moist, which are in the vicinity of low pressure areas in advance of their associated cold fronts. The stronger the pressure difference, or pressure gradient, between a high - pressure system and a low - pressure system, the stronger the wind. The Coriolis force caused by the Earth 's rotation is what gives winds within high - pressure systems their clockwise circulation in the northern hemisphere (as the wind moves outward and is deflected right from the center of high pressure) and counterclockwise circulation in the southern hemisphere (as the wind moves outward and is deflected left from the center of high pressure). Friction with land slows down the wind flowing out of high - pressure systems and causes the wind to flow more outward, across the isobars, than it otherwise would; the idealized frictionless flow parallel to the isobars is known as the geostrophic wind.
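As a rough numerical illustration of the pressure - gradient relationship (a sketch with assumed typical mid-latitude values, not taken from the source; the function name is invented here), the idealized geostrophic wind speed is V = ΔP / (ρ f d), where ρ is air density, f = 2Ω sin(latitude) is the Coriolis parameter, and ΔP is the pressure difference over a distance d:

```python
import math

def geostrophic_wind(dp_pa: float, distance_m: float,
                     latitude_deg: float, rho: float = 1.2) -> float:
    """Idealized geostrophic wind speed (m/s) for a given pressure gradient.

    V = dP / (rho * f * d), with f = 2 * omega * sin(latitude).
    Surface friction makes real winds slower and more cross-isobar.
    """
    omega = 7.292e-5  # Earth's rotation rate, rad/s
    f = 2 * omega * math.sin(math.radians(latitude_deg))
    return dp_pa / (rho * f * distance_m)

# A 4 hPa difference over 500 km at 45 degrees latitude, then double it:
print(geostrophic_wind(400.0, 5.0e5, 45.0))  # about 6.5 m/s
print(geostrophic_wind(800.0, 5.0e5, 45.0))  # about 12.9 m/s
```

Doubling the pressure difference doubles the computed wind speed, matching the statement above that a stronger pressure gradient gives a stronger wind.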
|
which of the following is a critique of cco approaches | Constitutive Role of communication in organizations - wikipedia
The focal point of the communicative constitution of organizations is that "organization is an effect of communication not its predecessor ''. This approach, also referred to as the CCO perspective, posits that "elements of communication, rather than being fixed in advance, are reflexively constituted within the act of communication itself ''.
The model of communication as constitutive of organizations has origins in the linguistic approach to organizational communication taken in the 1980s. Theorists such as Karl E. Weick were among the first to posit that organizations were not static but inherently constituted by a dynamic process of communicating.
The notion of a communicative constitution of organization comprises three schools of thought: (1) the Montréal School, (2) McPhee 's Four Flows, based on Giddens 's Structuration Theory, and (3) Luhmann 's Theory of Social Systems.
In their seminal 2000 article The Communicative Constitution of Organizations: A Framework for Explanation, which was republished in 2009, Robert D. McPhee and Pamela Zaug distinguish four types of communicative flows that generate a social structure through interaction. The flows, though distinct, can affect one another in the model and lead to multi-way conversation or texts, typically involving reproduction of, as well as resistance to, the rules and resources of the organization.
Reflexive self - structuring separates organizations from other groupings such as a crowd or mob. The self - structuring process is deliberately carried out through communication among role - holders and groups. Communication regarding self - structuring is recursive and dialogic in nature. It concerns the control, design, and documentation of an organization 's relations, norms, processes, and entities. Communication of formal structure predetermines work routines rather than allowing them to emerge and controls the collaboration and membership - negotiation processes. Physical examples of organizational self - structuring include a charter, organizational chart, and policy manual.
Organizational self - structuring is a political, subjective process that can be affected by systems, individuals, interests, and traditions in which it takes place. It is not necessarily free of error or ambiguity. To constitute an organization, the communication must imply the formation and governance of a differentiated whole with its own reflexive response cycle and mechanisms.
Organizations are necessarily composed of, yet are distinct from, individual members. Because humans are not inherently members of organizations, negotiatory communication must occur to incorporate them. Membership negotiation links an organization to its members by establishing and maintaining relationships. Practices in membership negotiation include job recruitment and socialization. In recruitment, potential members are evaluated, both parties must agree to a relationship, and the member must be incorporated into the structure of the organization. The negotiation process can be influenced by powers including prior existence and supervision, and all parties involved may redefine themselves to fit expectations. Among higher status members, power - claiming and spokesmanship are examples of negotiation processes to gain resources of an organization.
Activity coordination is a result of the fact that organizations inherently have at least one purpose to which the members ' activity is contributing. Often an organization 's self - structuring defines the division of labor, work flow sequences, policies, etc. that set the course for activity coordination. The structure is reflexively changing and may not be complete, relevant, fully understood, or free of problems. Therefore, a necessity of communication arises among members to amend and adjust the work process. Activity coordination can include adjusting the work process and resolving immediate or unforeseen practical problems.
Activity coordination operates on the assumption that members are working in an interdependent social unit beyond the work tasks themselves. It incorporates any processes and attitudes and therefore includes coordination for members to not complete work or to seek power over one another. The work of Dr. Henry Mintzberg exemplifies activity coordination in the mechanism of mutual adjustment in his theory of organizational forms. In this example, co-workers informally coordinate work arounds for issues on the job.
Institutional positioning links the organization to the environment outside the organization at a macro level. Examples of entities outside the organization include suppliers, customers, and competitors. Communication outside the organization negotiates terms of recognition of the organization 's existence and place in what is called "identity negotiation '' or "positioning ''. Often the communicators of this message are individuals who concurrently negotiate their own relationships but messages can come from the greater organization as a whole.
Though there is not one configuration that an organization must embody, in order to be considered by peer institutions, the minimum process involves negotiating inclusion in the environment. Organizations must establish and maintain a presence, image, status, and a two - way communication channel with partners. Objects such as organizational charts can assert a particular image and demonstrate legitimacy. Organizations which are marginalized due to their lack of institutional positioning include startup companies and illegal groups such as the Mafia. Generally, the more secure an organization, the stronger relationships and control over uncertainty and resources it has in its environment. Pre-existing institutional (corporations, agencies), political, legal, cultural, etc. structures allow for easier constitution of complex organizations.
One of the most distinctive stances of the Montreal School approach, birthed in the Université de Montréal by James Taylor, François Cooren (see particularly Cooren, 2004), and Bruno Latour amongst others, is that texts have agency. Texts do something to humans that is not reducible to certain human interactions and human actions.
The Montreal flavor of CCO is exemplified by Taylor et al. (1996) and the volume edited by Cooren, Taylor, and Van Every (2006). The Montréal school foregrounds the process of coorientation, or the orientation of two individuals to one another, and the object of conversation. Cooren, Kuhn, Cornelissen, and Clark (2011) suggest that coorientation occurs when individuals focus on each other and the multitudes of agencies within the organizational environment. Cooren et al. (2011), the current leading voice in the Montréal school, suggests that Greimas 's language theory (described by Cooren & Taylor, 2006 as almost incomprehensible) and Latour 's (1995, 2005) Actor Network Theory are the basis of the Montréal school 's thinking. Taylor and Van Every (2001) rely on Austin 's (1962) and Searle 's (1975) Speech Act Theory.
Two central terms to the Montréal school are derived from Austin 's work on language: text and conversation. The text represents big ' D ' Discourse in the organization, or the way people talk, while conversation represents the messages exchanged between two parties that solidify into text. In this way, Taylor et al. (1996) claim that organizations are not real in the material sense; instead, organizations are a culmination of conversations and texts. Further, Taylor et al. (1996) suggest that organizations always speak through an agent. Over the course of time, distanciation -- the solidification of various texts -- occurs, leading to what laypeople refer to as the organization.
Taylor et al. (1996) propose several degrees of separation between text and conversation. First, text is translated into action through the ability of communication to carry intention. Second, conversation turns into a narrative representation as interlocutors agree on meaning. Third, text is translated into a (semi) permanent medium; for example, we write down regulations in an employee handbook. Such a medium permits storage of texts to help them become conversation. Fourth, these media specialize the language as professionalism. Fifth, physical and material structures are created by the organization to perpetuate conversation. Finally, publication, dissemination, diffusion, and other forms of broadcast are employed to convey the message created by members of the organization. Through this CCO process, the social arrangements of the workplace become codified. The Montréal school 's proponents contend that the essence of organizing is captured in the submission, imbrication, and embeddedness of text and conversation (Schoeneborn et al., 2014).
Several other key terms are associated with the Montréal school: coorientation (Taylor, 2006), plenum of agencies (Cooren, 2006), closure (Cooren & Fairhurst, 2004), hybridity (Castor & Cooren, 2006), imbrication (Taylor, 2011), and most recently ventriloquism (Cooren et al. 2013). Coorientation, as described above, is an A-B-X relationship between two actors and an object; the object can be psychological, physical, or social. A plenum of agencies refers to the potential of both human and non-human actants (a term borrowed from Actor Network Theory; Latour, 1995) to interact within the organizational environment. Closure is the punctuation of conversations to provide deeper understanding by interlocutors. Hybridity refers to human and nonhuman actants working together to co-orient a claim. Imbrication refers to the emerging structures created by discourse in the organization over time that become an unquestioned part of what we call the organization. Finally, ventriloquism is the study of how interactants (both human and non-human) position and are positioned by the need to act via different values, principles, interests, norms, experiences, and other structures.
Luhmann 's systems theory focuses on three topics, which are interconnected in his entire work.
The core element of Luhmann 's theory is communication. Social systems are systems of communication, and society is the most encompassing social system. Being the social system that comprises all (and only) communication, today 's society is a world society. A system is defined by a boundary between itself and its environment, dividing it from an infinitely complex, or (colloquially) chaotic, exterior. The interior of the system is thus a zone of reduced complexity: Communication within a system operates by selecting only a limited amount of all information available outside. This process is also called "reduction of complexity ''. The criterion according to which information is selected and processed is meaning (in German, Sinn). Both social systems and psychical or personal systems (see below for an explanation of this distinction) operate by processing meaning.
The third strand of CCO was first acknowledged by Taylor (1995) but has only recently been included as a strand of CCO theorizing (Cooren et al., 2011; Schoeneborn et al., 2014). Schoeneborn (2011) has been the most published advocate of the Luhmannian perspective. Luhmann (1995) claims that individuals do not create meaning, instead all meaning comes from social systems (Cooren et al., 2011). Perhaps this is why Luhmann 's general system perspective has only recently been considered a part of the CCO body of scholarship. Luhmann (1995) takes care to define communication as a tripartite conceptualization of interactive forces. Specifically, Seidl (2014) explains that Luhmann suggests communication is an amalgam of information, utterance, and understanding. Whereas information is what is contained in a message, utterance is how the communication is conducted, and understanding "refers to the distinction between information and utterance '' (p. 290). Luhmann 's perspective is a radical departure from traditional communication scholarship. Putnam and Fairhurst (2015) explain that the Luhmannian perspective is wholly communicative; that is, meaning is complete up to utterances in a given communicative interaction.
Luhmann 's (1995) perspective gives less value to human agency in favor of a social agentic perspective. For this reason, Seidl claims that CCO research using Luhmann 's version should focus on communication, not on actors. Human agency is minimized by this perspective. Psychic systems (i.e., the mind) interact with social systems (i.e., an organic conglomerate of multiple psychic systems) and human actors are not relevant to the constitution of organizations (Schoeneborn et al., 2014). Recently, both Schoeneborn and Seidl have been employing the Luhmannian version of CCO to advance organization science (e.g., Schoeneborn & Scherer, 2010; Seidl & Becker, 2005).
|
when did when i'm gone come out | When I 'm Gone (Eminem song) - wikipedia
"When I 'm Gone '' is a song by American rapper Eminem from his first greatest hits compilation album Curtain Call: The Hits (2005). It was released on December 6, 2005, the same day as the album was released, as the lead single.
"When I 'm Gone '' received mostly positive reviews from music critics. The song charted at number eight on the United States Billboard Hot 100, number 22 on the Hot Rap Tracks chart, number four on the UK Singles Chart and on the Australian ARIA Singles Chart, the latter being the only country where it peaked at number one.
"When I 'm Gone '' was written in 2003 and released in 2005 by Shady Records, on the album Curtain Call: The Hits. It was thought at the time that this album would mark the beginning of an extended musical hiatus for Eminem, who had said "I 'm at a point in my life right now where I feel like I do n't know where my career is going... This is the reason that we called it Curtain Call, because this could be the final thing. We do n't know. ''
The song 's lyrics pertain to Eminem 's life as a celebrity musician interfering with his family. Eminem uses imagery to evoke a feeling of sadness about his family falling apart.
The music video for the song was directed by Anthony Mandler. Tarick Salmaci, from the TV show The Contender, makes an appearance in the video with his wife.
AllMusic wrote a mixed to positive overview: "There 's the closing "When I 'm Gone, '' a sentimental chapter in the Eminem domestic psychodrama that bears the unmistakable suggestion that Em is going away for a while. While it 's not up to the standard of "Mockingbird, '' it is more fully realized than the two other new cuts here... '' Pitchfork considered: ""When I 'm Gone '' is a desolate placeholder -- a lesser version of Eminem songs that already piss me off. "Gone '' is the worst offender: yet another love letter from Em to daughter Hailie, it, like Encore 's "Mockingbird '', is heavy - handed and saccharine. ''
IGN also panned the single: "The latter, however, is another prime example of the beat dragging Eminem down. The sluggish, lumbering bump causes his lyrics to stumble and sound forced in their delivery. The heartfelt storyline would have been better served drenched in some warm, fuzzy, but ultimately mournful soul instead. '' NME called this song disappointing. Sputnik Music wrote a mixed review: "Eminem gives a powerful vocal performance, with some rather touching lyrics and a nice chorus. However, much like much of his previous work on Encore, it gets bogged down in a repetitive and boring beat. '' Rolling Stone agreed.
The music video starts with Eminem rapping on a podium about his life at a small gathering. He 's at an open house where he completely ignores his daughter 's creations but before he can see the error of his actions, it is too late. As he watches her walk away, he raps "Then turn right around in that song and tell her you love her / And put hands on her mother, who 's a spitting image of her '', feeling remorse for writing songs about being violent towards Kim, who is Hailie 's mother, and wonders how he could be a good parent.
Then, he is confronted by his daughter piling boxes in front of the door to prevent him from leaving. He convinces his daughter to let him go, but she insists on giving him a necklace with her picture. He then finds himself in a hotel room and then on a stage where he raps. At the end of his performance on the stage, he finds his daughter has followed him and tells him of the divorce.
Afterwards, Eminem goes backstage as the "bad '' Slim Shady and looks in a mirror. He then punches it as a way of "killing '' Slim Shady. He then finds himself back at home playing with his daughters on the swing. The video ends with him being applauded at the podium.
The video features his daughter Hailie Jade Mathers and an actress as his ex-wife Kim Mathers, from whom Eminem was divorced when the song was released. Kim 's daughter Whitney Laine Scott, whom Eminem thinks of as his own daughter, is also featured as herself towards the end of the video. She is the younger girl on the swing, at the moment in the lyrics when he says "Hailie just smiles and winks at her little sister ''.
|
places in the us that have spanish names | List of place names of Spanish origin in the United States - wikipedia
As a consequence of former Spanish and, later, Mexican sovereignty over lands that are now part of the United States, there are many places in the country, mostly in the southwest, with names of Spanish origin. Florida and Louisiana also were at times under Spanish control. There are also several places in the United States with Spanish names as a result of other factors. Some of these names preserve archaic spellings.
Not all Spanish place name etymologies in the United States originate from the Spanish colonial period or from the Spanish language. Spanish - sounding place names are classified into four categories:
This is not an exhaustive list.
|
which case was the supreme court’s first ruling on the establishment clause answers | Everson v. Board of education - wikipedia
(1) The Establishment Clause of the First Amendment is incorporated against the states through the Due Process Clause of the Fourteenth Amendment.
Everson v. Board of Education, 330 U.S. 1 (1947) was a landmark decision of the United States Supreme Court which applied the Establishment Clause in the country 's Bill of Rights to State law. Prior to this decision the First Amendment words, "Congress shall make no law respecting an establishment of religion '' imposed limits only on the federal government, while many states continued to grant certain religious denominations legislative or effective privileges. This was the first Supreme Court case incorporating the Establishment Clause of the First Amendment as binding upon the states through the Due Process Clause of the Fourteenth Amendment. The decision in Everson marked a turning point in the interpretation and application of disestablishment law in the modern era.
The case was brought by a New Jersey taxpayer against a tax - funded school district that provided reimbursement to parents of both public and private school students who took the public transportation system to school. The taxpayer contended that reimbursement given for children attending private religious schools violated the constitutional prohibition against state support of religion, and that the use of taxpayer funds to do so violated the constitution 's Due Process Clause. The Justices were split over the question whether the New Jersey policy constituted support of religion, with the majority concluding these reimbursements were "separate and so indisputably marked off from the religious function '' that they did not violate the constitution. Both affirming and dissenting Justices, however, were emphatic that the Constitution required a sharp separation between government and religion, and their strongly worded opinions paved the way to a series of later court decisions that, taken together, brought about profound changes in legislation, public education, and other policies involving matters of religion. Both Justice Hugo Black 's majority opinion and Justice Wiley Rutledge 's dissenting opinion defined the First Amendment religious clause in terms of a "wall of separation between church and state ''.
After the repeal of a former ban, a New Jersey law authorized payment by local school boards of the costs of transportation to and from schools -- including private schools. Of the private schools that benefited from this policy, 96 % were parochial Catholic schools. Arch R. Everson, a taxpayer in Ewing Township, filed a lawsuit alleging that this indirect aid to religion through the mechanism of reimbursing parents and students for costs incurred as a result of attending religious schools violated both the New Jersey state constitution and the First Amendment. After a loss in the New Jersey Court of Errors and Appeals, then the state 's highest court, Everson appealed to the U.S. Supreme Court on purely federal constitutional grounds.
The 5 - 4 decision was handed down on February 10, 1947, and was based upon the writing of James Madison (Memorial and Remonstrance Against Religious Assessments) and Thomas Jefferson (Virginia Statute for Religious Freedom). The Court, through Justice Hugo Black, ruled that the state bill was constitutionally permissible because the reimbursements were offered to all students regardless of religion and because the payments were made to parents and not any religious institution. Perhaps as important as the actual outcome, though, was the interpretation given by the entire Court to the Establishment Clause. It reflected a broad interpretation of the Clause that was to guide the Court 's decisions for decades to come. Black 's language was sweeping:
The ' establishment of religion ' clause of the First Amendment means at least this: Neither a state nor the Federal Government can set up a church. Neither can pass laws which aid one religion, aid all religions or prefer one religion over another. Neither can force nor influence a person to go to or to remain away from church against his will or force him to profess a belief or disbelief in any religion. No person can be punished for entertaining or professing religious beliefs or disbeliefs, for church attendance or non-attendance. No tax in any amount, large or small, can be levied to support any religious activities or institutions, whatever they may be called, or whatever form they may adopt to teach or practice religion. Neither a state nor the Federal Government can, openly or secretly, participate in the affairs of any religious organizations or groups and vice versa. In the words of Jefferson, the clause against establishment of religion by law was intended to erect ' a wall of separation between Church and State. ' 330 U.S. 1, 15 - 16.
Justice Jackson wrote a dissenting opinion in which he was joined by Justice Frankfurter. Justice Rutledge wrote another dissenting opinion in which he was joined by Justices Frankfurter, Jackson and Burton. The four dissenters agreed with Justice Black 's definition of the Establishment Clause, but protested that the principles he laid down ought logically to lead to the invalidation of the challenged law.
In his written dissent, Justice Wiley Rutledge argued that:
The funds used here were raised by taxation. The Court does not dispute nor could it that their use does in fact give aid and encouragement to religious instruction. It only concludes that this aid is not ' support ' in law. But Madison and Jefferson were concerned with aid and support in fact not as a legal conclusion ' entangled in precedents. ' Here parents pay money to send their children to parochial schools and funds raised by taxation are used to reimburse them. This not only helps the children to get to school and the parents to send them. It aids them in a substantial way to get the very thing which they are sent to the particular school to secure, namely, religious training and teaching. 330 U.S. 1, 45.
In its first hundred years, the United States Supreme Court interpreted the Constitution 's Bill of Rights as a limit on federal government and considered the states bound only by those rights granted to its citizens by their own state constitutions. Because the federal laws during this period were remote influences at most on the personal affairs of its citizens, minimal attention was paid by the Court to how those provisions in the federal Bill of Rights were to be interpreted.
Following the passage of the Thirteenth to Fifteenth Amendments to the Constitution at the end of the Civil War, the Supreme Court would hear hundreds of cases involving conflicts over the constitutionality of laws passed by the states. The decisions in these cases were often criticized as resulting more from the biases of the individual Justices than the applicable rule of law or constitutional duty to protect individual rights. But, by the 1930s, the Court began consistently reasoning that the Fourteenth Amendment guaranteed citizens First Amendment protections from even state and local governments, a process known as incorporation.
The 1940 decision in Cantwell v. Connecticut was the first Supreme Court decision to apply the First Amendment 's religious protections to the states, that case focusing on the so - called Free Exercise Clause. The Everson decision followed in 1947, the first to incorporate the Establishment Clause. Numerous state cases followed disentangling the church from public schools, most notably the 1951 New Mexico case of Zellers v. Huff.
Similar First Amendment cases have flooded the courts in the decades following Everson. Having invoked Thomas Jefferson 's metaphor of the wall of separation in the Everson decision, lawmakers and courts have struggled with how to balance governments ' dual duty to satisfy both the nonestablishment clause and the free exercise clause contained in the language of the amendment. The majority and dissenting Justices in Everson split over this very question, with Rutledge in the minority insisting that the Constitution forbids "every form of public aid or support for religion ''.
|
the width of a dna double helix is constant | Nucleic acid double helix - wikipedia
In molecular biology, the term double helix refers to the structure formed by double - stranded molecules of nucleic acids such as DNA. The double helical structure of a nucleic acid complex arises as a consequence of its secondary structure, and is a fundamental component in determining its tertiary structure. The term entered popular culture with the publication in 1968 of The Double Helix: A Personal Account of the Discovery of the Structure of DNA by James Watson.
The DNA double helix is a polymer of nucleic acid, held together by nucleotides whose bases pair together. In B - DNA, the most common double helical structure found in nature, the double helix is right - handed with about 10 -- 10.5 base pairs per turn. The double helix structure of DNA contains a major groove and a minor groove. In B - DNA the major groove is wider than the minor groove. Given the difference in widths of the major groove and minor groove, many proteins which bind to B - DNA do so through the wider major groove.
The double - helix model of DNA structure was first published in the journal Nature by James Watson and Francis Crick in 1953, (X, Y, Z coordinates in 1954) based upon the crucial X-ray diffraction image of DNA labeled as "Photo 51 '', from Rosalind Franklin in 1952, followed by her more clarified DNA image with Raymond Gosling, Maurice Wilkins, Alexander Stokes, and Herbert Wilson, and base - pairing chemical and biochemical information by Erwin Chargaff. The prior model was triple - stranded DNA.
The realization that the structure of DNA is that of a double - helix elucidated the mechanism of base pairing by which genetic information is stored and copied in living organisms and is widely considered one of the most important scientific discoveries of the 20th century. Crick, Wilkins, and Watson each received one third of the 1962 Nobel Prize in Physiology or Medicine for their contributions to the discovery. (Franklin, whose breakthrough X-ray diffraction data was used to formulate the DNA structure, died in 1958, and thus was ineligible to be nominated for a Nobel Prize.)
Hybridization is the process of complementary base pairs binding to form a double helix. Melting is the process by which the interactions between the strands of the double helix are broken, separating the two nucleic acid strands. These bonds are weak, easily separated by gentle heating, enzymes, or mechanical force. Melting occurs preferentially at certain points in the nucleic acid. T and A rich regions are more easily melted than C and G rich regions. Some base steps (pairs) are also susceptible to DNA melting, such as TA and TG. These mechanical features are reflected by the use of sequences such as TATA at the start of many genes to assist RNA polymerase in melting the DNA for transcription.
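The qualitative point that A / T - rich regions melt more easily than G / C - rich ones can be made concrete with a classic rule of thumb. The sketch below uses the Wallace rule (Tm ≈ 2 (A + T) + 4 (G + C) degrees C for short oligonucleotides), which is a standard approximation not taken from this article, and the example sequences are hypothetical:

import collections

def wallace_tm(seq: str) -> int:
    """Rough melting-temperature estimate (degrees C) for a short oligo.
    Wallace rule: 2 degrees per weaker A/T pair, 4 degrees per G/C pair."""
    counts = collections.Counter(seq.upper())
    at = counts["A"] + counts["T"]
    gc = counts["G"] + counts["C"]
    return 2 * at + 4 * gc

print(wallace_tm("TATATATATA"))  # 20 -- an A/T-rich stretch melts easily
print(wallace_tm("GCGCGCGCGC"))  # 40 -- same length, but far more stable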
Strand separation by gentle heating, as used in polymerase chain reaction (PCR), is simple, providing the molecules have fewer than about 10,000 base pairs (10 kilobase pairs, or 10 kbp). The intertwining of the DNA strands makes long segments difficult to separate. The cell avoids this problem by allowing its DNA - melting enzymes (helicases) to work concurrently with topoisomerases, which can chemically cleave the phosphate backbone of one of the strands so that it can swivel around the other. Helicases unwind the strands to facilitate the advance of sequence - reading enzymes such as DNA polymerase.
The geometry of a base, or base pair step can be characterized by 6 coordinates: shift, slide, rise, tilt, roll, and twist. These values precisely define the location and orientation in space of every base or base pair in a nucleic acid molecule relative to its predecessor along the axis of the helix. Together, they characterize the helical structure of the molecule. In regions of DNA or RNA where the normal structure is disrupted, the change in these values can be used to describe such disruption.
For each base pair, considered relative to its predecessor, the step geometry is described by these six parameters: shift and slide (translations within the base - pair plane), rise (translation along the helix axis), and tilt, roll, and twist (rotations about those same three axes).
Rise and twist determine the handedness and pitch of the helix. The other coordinates, by contrast, can be zero. Slide and shift are typically small in B - DNA, but are substantial in A - and Z - DNA. Roll and tilt make successive base pairs less parallel, and are typically small.
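As a numerical illustration of how rise and twist alone fix the pitch of an idealized helix, here is a minimal sketch. The values used (3.4 Å rise, 36 degree twist, all other coordinates zero) are the textbook idealization of B - DNA rather than measured step parameters, and the single - frame composition below is a simplification of the mid - step - frame scheme used by real analysis software:

import numpy as np
from dataclasses import dataclass

@dataclass
class BasePairStep:
    shift: float  # translation along x (Angstroms)
    slide: float  # translation along y (Angstroms)
    rise: float   # translation along the helix (z) axis (Angstroms)
    tilt: float   # rotation about x (degrees)
    roll: float   # rotation about y (degrees)
    twist: float  # rotation about the helix (z) axis (degrees)

def step_transform(s: BasePairStep) -> np.ndarray:
    """4x4 homogeneous transform carrying one base-pair frame to the next.
    Rotations applied tilt -> roll -> twist; a simplified sketch only."""
    ax, ay, az = np.radians([s.tilt, s.roll, s.twist])
    rx = np.array([[1, 0, 0], [0, np.cos(ax), -np.sin(ax)], [0, np.sin(ax), np.cos(ax)]])
    ry = np.array([[np.cos(ay), 0, np.sin(ay)], [0, 1, 0], [-np.sin(ay), 0, np.cos(ay)]])
    rz = np.array([[np.cos(az), -np.sin(az), 0], [np.sin(az), np.cos(az), 0], [0, 0, 1]])
    m = np.eye(4)
    m[:3, :3] = rz @ ry @ rx
    m[:3, 3] = [s.shift, s.slide, s.rise]
    return m

# Idealized B-DNA step: pure rise and twist, every other coordinate zero.
ideal_b = BasePairStep(0.0, 0.0, 3.4, 0.0, 0.0, 36.0)
frame = np.eye(4)
for _ in range(10):               # ten 36-degree steps = one full turn
    frame = frame @ step_transform(ideal_b)
print(np.round(frame[:3, 3], 2))  # [0. 0. 34.]: ~34 A of pitch per turn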
Note that "tilt '' has often been used differently in the scientific literature, referring to the deviation of the first, inter-strand base - pair axis from perpendicularity to the helix axis. This corresponds to slide between a succession of base pairs, and in helix - based coordinates is properly termed "inclination ''.
At least three DNA conformations are believed to be found in nature, A-DNA, B - DNA, and Z - DNA. The B form described by James Watson and Francis Crick is believed to predominate in cells. It is 23.7 Å wide and extends 34 Å per 10 bp of sequence. The double helix makes one complete turn about its axis every 10.4 -- 10.5 base pairs in solution. This frequency of twist (termed the helical pitch) depends largely on stacking forces that each base exerts on its neighbours in the chain. The absolute configuration of the bases determines the direction of the helical curve for a given conformation.
A-DNA and Z - DNA differ significantly in their geometry and dimensions to B - DNA, although they still form helical structures. It was long thought that the A form only occurs in dehydrated samples of DNA in the laboratory, such as those used in crystallographic experiments, and in hybrid pairings of DNA and RNA strands, but DNA dehydration does occur in vivo, and A-DNA is now known to have biological functions. Segments of DNA that cells have methylated for regulatory purposes may adopt the Z geometry, in which the strands turn about the helical axis the opposite way to A-DNA and B - DNA. There is also evidence of protein - DNA complexes forming Z - DNA structures.
Other conformations are possible; A-DNA, B - DNA, C - DNA, E-DNA, L - DNA (the enantiomeric form of D - DNA), P - DNA, S - DNA, Z - DNA, etc. have been described so far. In fact, only the letters F, Q, U, V, and Y are now available to describe any new DNA structure that may appear in the future. However, most of these forms have been created synthetically and have not been observed in naturally occurring biological systems. There are also triple - stranded DNA forms and quadruplex forms such as the G - quadruplex.
Twin helical strands form the DNA backbone. Another double helix may be found by tracing the spaces, or grooves, between the strands. These voids are adjacent to the base pairs and may provide a binding site. As the strands are not directly opposite each other, the grooves are unequally sized. One groove, the major groove, is 22 Å wide and the other, the minor groove, is 12 Å wide. The narrowness of the minor groove means that the edges of the bases are more accessible in the major groove. As a result, proteins like transcription factors that can bind to specific sequences in double - stranded DNA usually make contacts to the sides of the bases exposed in the major groove. This situation varies in unusual conformations of DNA within the cell (see below), but the major and minor grooves are always named to reflect the differences in size that would be seen if the DNA is twisted back into the ordinary B form.
Alternative non-helical models were briefly considered in the late 1970s as a potential solution to problems in DNA replication in plasmids and chromatin. However, the models were set aside in favor of the double - helical model due to subsequent experimental advances such as X-ray crystallography of DNA duplexes and later the nucleosome core particle, and the discovery of topoisomerases. Also, the non-double - helical models are not currently accepted by the mainstream scientific community.
Single - stranded nucleic acids (ssDNA) do not adopt a helical formation, and are described by models such as the random coil or worm - like chain.
DNA is a relatively rigid polymer, typically modelled as a worm - like chain. It has three significant degrees of freedom: bending, twisting, and compression, each of which imposes certain limits on what is possible with DNA within a cell. Twisting (torsional stiffness) is important for the circularisation of DNA and the orientation of DNA - bound proteins relative to each other, and bending (axial stiffness) is important for DNA wrapping and circularisation and protein interactions. Compression - extension is relatively unimportant in the absence of high tension.
DNA in solution does not take a rigid structure but is continually changing conformation due to thermal vibration and collisions with water molecules, which makes classical measures of rigidity impossible to apply. Hence, the bending stiffness of DNA is measured by the persistence length, defined as the length of DNA over which the time - averaged orientation of the polymer becomes uncorrelated by a factor of e.
This value may be directly measured using an atomic force microscope to directly image DNA molecules of various lengths. In an aqueous solution, the average persistence length is 46 -- 50 nm or 140 -- 150 base pairs (the diameter of DNA is 2 nm), although it can vary significantly. This makes DNA a moderately stiff molecule.
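To make the definition above concrete: for a worm - like chain the average tangent - tangent correlation decays as cos theta (s) = exp (- s / P), falling to 1 / e after one persistence length. The short sketch below plugs in the figures quoted in the text (P of about 50 nm, roughly 0.34 nm of contour length per base pair); the example lengths are illustrative only:

import math

P_NM = 50.0        # persistence length quoted above (nm)
NM_PER_BP = 0.34   # contour length per base pair (nm)

def tangent_correlation(s_nm: float, p_nm: float = P_NM) -> float:
    """Worm-like-chain expectation <cos theta> for tangents s_nm apart."""
    return math.exp(-s_nm / p_nm)

for bp in (10, 147, 500, 3000):
    s = bp * NM_PER_BP
    print(f"{bp:5d} bp ({s:7.1f} nm): <cos theta> = {tangent_correlation(s):.3f}")
# 10 bp stays almost perfectly straight (0.934); ~147 bp, one persistence
# length, decays to 1/e (0.368); by a few thousand bp the orientation is
# effectively uncorrelated, which is why long DNA behaves as a random coil.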
The persistence length of a section of DNA is somewhat dependent on its sequence, and this can cause significant variation. The variation is largely due to base stacking energies and the residues which extend into the minor and major grooves.
The entropic flexibility of DNA is remarkably consistent with standard polymer physics models, such as the Kratky - Porod worm - like chain model. Consistent with the worm - like chain model is the observation that bending DNA is also described by Hooke 's law at very small (sub-piconewton) forces. However, for DNA segments less than the persistence length, the bending force is approximately constant and behaviour deviates from the worm - like chain predictions.
This effect results in unusual ease in circularising small DNA molecules and a higher probability of finding highly bent sections of DNA.
DNA molecules often have a preferred direction to bend, i.e., anisotropic bending. This is, again, due to the properties of the bases which make up the DNA sequence - a random sequence will have no preferred bend direction, i.e., isotropic bending.
Preferred DNA bend direction is determined by the stability of stacking each base on top of the next. If unstable base stacking steps are always found on one side of the DNA helix then the DNA will preferentially bend away from that direction. As the bend angle increases, steric hindrance and the ability to roll the residues relative to each other also play a role, especially in the minor groove. A and T residues will preferentially be found in the minor grooves on the inside of bends. This effect is particularly seen in DNA - protein binding where tight DNA bending is induced, such as in nucleosome particles. See base step distortions above.
DNA molecules with exceptional bending preference can become intrinsically bent. This was first observed in trypanosomatid kinetoplast DNA. Typical sequences which cause this contain stretches of 4 - 6 T and A residues separated by G and C rich sections which keep the A and T residues in phase with the minor groove on one side of the molecule.
The intrinsically bent structure is induced by the ' propeller twist ' of base pairs relative to each other, allowing unusual bifurcated hydrogen bonds between base steps. At higher temperatures this structure is denatured, and so the intrinsic bend is lost.
All DNA which bends anisotropically has, on average, a longer persistence length and greater axial stiffness. This increased rigidity is required to prevent random bending which would make the molecule act isotropically.
DNA circularization depends on both the axial (bending) stiffness and torsional (rotational) stiffness of the molecule. For a DNA molecule to successfully circularize it must be long enough to easily bend into the full circle and must have the correct number of bases so the ends are in the correct rotation to allow bonding to occur. The optimum length for circularization of DNA is around 400 base pairs (136 nm), with an integral number of turns of the DNA helix, i.e., multiples of 10.4 base pairs. Having a non-integral number of turns presents a significant energy barrier for circularization; for example, a 10.4 x 30 = 312 base pair molecule will circularize hundreds of times faster than a 10.4 x 30.5 ≈ 317 base pair molecule.
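The arithmetic behind that comparison can be checked directly: what matters is how far a given length is from an integral number of helical turns, since that leftover fraction of a turn is the rotational misalignment the two ends must twist through before they can be joined. A minimal sketch, assuming the 10.4 bp per turn figure used in the text:

BP_PER_TURN = 10.4  # helical repeat assumed in the text

def end_misalignment_deg(length_bp: int, bp_per_turn: float = BP_PER_TURN) -> float:
    """Angular offset of the two ends from perfect rotational register."""
    turns = length_bp / bp_per_turn
    fractional_turn = turns - round(turns)   # leftover fraction of a turn
    return abs(fractional_turn) * 360.0

for n in (312, 317):   # the two lengths compared in the text
    print(f"{n} bp: ends {end_misalignment_deg(n):.0f} degrees out of register")
# 312 bp = 10.4 x 30 exactly, so 0 degrees; 317 bp is roughly half a turn
# off (~173 degrees), which is the torsional energy barrier described above.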
Longer stretches of DNA are entropically elastic under tension. When DNA is in solution, it undergoes continuous structural variations due to the energy available in the thermal bath of the solvent. This is due to the thermal vibration of the molecule combined with continual collisions with water molecules. For entropic reasons, more compact relaxed states are thermally accessible than stretched out states, and so DNA molecules are almost universally found in tangled, relaxed layouts. For this reason, one molecule of DNA will stretch under a force, straightening it out. Using optical tweezers, the entropic stretching behavior of DNA has been studied and analyzed from a polymer physics perspective, and it has been found that DNA behaves largely like the Kratky - Porod worm - like chain model under physiologically accessible energy scales.
Under sufficient tension and positive torque, DNA is thought to undergo a phase transition with the bases splaying outwards and the phosphates moving to the middle. This proposed structure for overstretched DNA has been called P - form DNA, in honor of Linus Pauling who originally presented it as a possible structure of DNA.
Evidence from mechanical stretching of DNA in the absence of imposed torque points to a transition or transitions leading to further structures which are generally referred to as S - form DNA. These structures have not yet been definitively characterised due to the difficulty of carrying out atomic - resolution imaging in solution while under applied force, although many computer simulation studies have been made.
Proposed S - DNA structures include those which preserve base - pair stacking and hydrogen bonding (GC - rich), while releasing extension by tilting, as well as structures in which partial melting of the base - stack takes place, while base - base association is nonetheless overall preserved (AT - rich).
Periodic fracture of the base - pair stack with a break occurring once per three bp (therefore one out of every three bp - bp steps) has been proposed as a regular structure which preserves planarity of the base - stacking and releases the appropriate amount of extension, with the term "Σ - DNA '' introduced as a mnemonic, with the three right - facing points of the Sigma character serving as a reminder of the three grouped base pairs. The Σ form has been shown to have a sequence preference for GNC motifs which are believed under the GNC_hypothesis to be of evolutionary importance.
The B form of the DNA helix twists 360 ° per 10.4 - 10.5 bp in the absence of torsional strain. But many molecular biological processes can induce torsional strain. A DNA segment with excess or insufficient helical twisting is referred to, respectively, as positively or negatively supercoiled. DNA in vivo is typically negatively supercoiled, which facilitates the unwinding (melting) of the double - helix required for RNA transcription.
Within the cell most DNA is topologically restricted. DNA is typically found in closed loops (such as plasmids in prokaryotes) which are topologically closed, or as very long molecules whose diffusion coefficients produce effectively topologically closed domains. Linear sections of DNA are also commonly bound to proteins or physical structures (such as membranes) to form closed topological loops.
Francis Crick was one of the first to propose the importance of linking numbers when considering DNA supercoils. In a paper published in 1976, Crick outlined the problem as follows:
In considering supercoils formed by closed double - stranded molecules of DNA certain mathematical concepts, such as the linking number and the twist, are needed. The meaning of these for a closed ribbon is explained and also that of the writhing number of a closed curve. Some simple examples are given, some of which may be relevant to the structure of chromatin.
Analysis of DNA topology uses three values: the linking number (L), the twist (T), and the writhe (W), which for a closed topological domain are related by L = T + W.
Any change of T in a closed topological domain must be balanced by a change in W, and vice versa. This results in higher order structure of DNA. A circular DNA molecule with a writhe of 0 will be circular. If the twist of this molecule is subsequently increased or decreased by supercoiling then the writhe will be appropriately altered, making the molecule undergo plectonemic or toroidal superhelical coiling.
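A worked example of that bookkeeping, using the relation L = T + W: the plasmid size, helical repeat, and degree of unwinding below are illustrative round numbers, not values taken from the article:

BP_PER_TURN = 10.5  # helical repeat of relaxed B-DNA

def writhe(linking_number: float, length_bp: int) -> float:
    """Writhe forced on a closed circle whose twist stays fully relaxed.
    From L = T + W with T fixed at the relaxed value: W = L - T."""
    relaxed_twist = length_bp / BP_PER_TURN
    return linking_number - relaxed_twist

length = 5250                         # hypothetical plasmid length (bp)
lk_relaxed = length / BP_PER_TURN     # relaxed linking number: 500 turns
lk = lk_relaxed - 25                  # e.g. an enzyme removes 25 links
print(f"W = {writhe(lk, length):+.0f}")                 # -25 supercoils
print(f"sigma = {(lk - lk_relaxed) / lk_relaxed:+.3f}") # density -0.050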
When the ends of a piece of double stranded helical DNA are joined so that it forms a circle the strands are topologically knotted. This means the single strands can not be separated by any process that does not involve breaking a strand (such as heating). The task of un-knotting topologically linked strands of DNA falls to enzymes termed topoisomerases. These enzymes are dedicated to un-knotting circular DNA by cleaving one or both strands so that another double or single stranded segment can pass through. This un-knotting is required for the replication of circular DNA and various types of recombination in linear DNA which have similar topological constraints.
For many years, the origin of residual supercoiling in eukaryotic genomes remained unclear. This topological puzzle was referred to by some as the "linking number paradox ''. However, when experimentally determined structures of the nucleosome displayed an over-twisted left - handed wrap of DNA around the histone octamer, this paradox was considered to be solved by the scientific community.
|
is it legal to dropout of school at 14 | School - leaving age - wikipedia
The school leaving age is the minimum age a person is legally allowed to cease attendance at an institute of compulsory secondary education. Most countries have their school leaving age set the same as their minimum full - time employment age, thus allowing smooth transition from education into employment, whilst a few have it set just below the age at which a person is allowed to be employed.
In contrast, there are numerous countries that have several years between their school leaving age and their legal minimum employment age, thus in some cases preventing any such transition for several years. Countries which have their minimum employment age set below the school leaving age (mostly developing countries) risk giving children the opportunity to leave their education early to earn money for their families.
Some countries have different leaving or employment ages, but in certain countries like China and Japan, the average age at which people graduate is 15, depending upon part - time or full - time learning or employment. The table below states the school leaving ages in countries across the world and their respective minimum employment age, showing a comparison of how many countries have synchronised these ages. All information is taken from the Right to Education Project 's table unless otherwise indicated.
For further information, see http://www.moe.gov.lk/sinhala/images/publications/Education_First_SL/Education_First_SL.pdf
Also, all children between those ages, even if they are refugees or newly arrived, have to attend school. Not attending school without a proper reason (for example, sickness or a doctor 's visit) is illegal and is seen as wagging (truancy), which is treated as fraud and is punishable by law. Until the age of 12 children can not be punished by law, but from the age of 12 onward a child can be held responsible for wagging. Punishments can take the form of a fine, temporary jail time, or a community service job done under supervision. Fines can be up to € 3,900.00. These punishments can be given to the student and / or his or her legal parent or guardian.
The minimum ages from 2009 will be the following: Northern Territory - 15; ACT - 15; South Australia - 17; Queensland - 17. Students must remain in school until they turn 16 years of age or complete Year 10, whichever comes first. From there they must be "learning or earning '', which means they must be employed at least 25 hours a week, be in full - time education, or be in a combination of both part - time employment and part - time education which adds up to at least 25 hours a week, until they turn 17 or complete Year 12 or equivalent, whichever comes first. Victoria - 17; Western Australia - 15; NSW - 17 (students who choose not to complete their HSC need to be working at least 25 hours per week or studying at TAFE until they turn 17); Tasmania - 17.
|
peter frampton baby i love your way original | Baby, I Love Your Way - wikipedia
"Baby, I Love Your Way '' is a song written and performed by English singer Peter Frampton. It was released in September 1975 and was first featured on Frampton 's 1975 album, Frampton. The song segues from the previous track "Nassau ''.
A live version of the song was later released on his 1976 multi-platinum album Frampton Comes Alive!, where it gained popularity as a hit song, peaking at number 12 on the US Billboard Hot 100 chart. It also reached number three in Canada.
In 2017, Frampton discussed this song while talking to Washington D.C. lawmakers about inequitable revenue payments from streaming music services like iTunes and Spotify. "For 55 million streams of Baby I Love Your Way, I got $1,700, '' said Frampton. "Their jaws dropped and they asked me to repeat that for them. ''
7 '' single -- United States (1975)
7 '' single -- United States (1976)
In 1987 the American dance - pop band Will to Power recorded "Baby, I Love Your Way / Freebird Medley (Free Baby) ''. The song combines elements of two previously recorded rock songs: "Baby, I Love Your Way '' and American Southern rock band Lynyrd Skynyrd 's song "Free Bird '', which hit # 19 on the Hot 100 chart in 1975. Will to Power 's medley of these two songs had more of a synthesized dance beat (as opposed to the rock ballad - like nature of the two original songs). It spent one week at # 1 on the Hot 100 chart dated December 3, 1988. It also peaked at # 2 on the Billboard adult contemporary chart. Additionally, in the "Freebird '' section, the line "and the bird you can not change '' in the original version was changed to "and this bird will never change ''.
In March and April 2009, VH1 ran a countdown of the 100 Greatest One Hit Wonders of the 80s. Will to Power 's "Baby, I Love Your Way / Freebird Medley '' placed at # 97 on the countdown, despite the group having another Top 10 hit in 1991 with a cover version of the 1975 10cc hit "I 'm Not in Love. ''
The song was recorded by the American reggae / pop band Big Mountain in 1994, reaching number 6 on the Billboard Hot 100 chart and number 2 on the UK Singles Chart (being kept off the top spot by "Love is All Around '' by Wet Wet Wet). Their version achieved major worldwide success, reaching the top ten in many countries across Europe. Their version is featured on the soundtrack for the 1994 movie Reality Bites. In the film itself, Michael Grates (played by Ben Stiller) explains that he used to listen to the Frampton song a lot. This version was also played during two fight sequences in the 2017 film Jumanji: Welcome to the Jungle.
Billboard magazine reviewed the song favorably, calling it an "earthy rendition '' which is "right in the pocket of current trends. ''
CD single -- Europe (1994)
The video was directed by Matti Leshem and premiered in April 1994.
|
carol burnett rock hudson i do i do | I Do! I Do! (Musical) - wikipedia
I Do! I Do! is a musical with a book and lyrics by Tom Jones and music by Harvey Schmidt which is based on the Jan de Hartog play The Fourposter. The two - character story spans fifty years, from 1895 to 1945, as it focuses on the ups and downs experienced by Agnes and Michael throughout their marriage. The set consists solely of their bedroom, dominated by the large fourposter bed in the center of the room.
For producer David Merrick, who initially presented the play on Broadway, I Do! I Do! was an ideal investment in that it had neither expensive sets and costumes nor a large cast. After four previews, the Broadway production, directed and choreographed by Gower Champion, opened on December 5, 1966, at the 46th Street Theatre, and closed on June 15, 1968, after 560 performances. Mary Martin and Robert Preston made up the original cast. Carol Lawrence and Gordon MacRae played matinees starting in October 1967 and then replaced Martin and Preston in December 1967.
Martin and Preston starred in a national tour, originally scheduled to play 27 cities for one year, starting in March 1968 in Rochester, New York. However, in February 1969 Martin became ill and the remainder of the tour was cancelled. Carol Burnett and Rock Hudson also starred in a national tour, appearing during Burnett 's hiatus from her television show in 1973 and again in 1974 at The Muny, St. Louis, Missouri and in Dallas.
A film adaptation, written by Champion and starring Julie Andrews and Dick Van Dyke, was announced by United Artists in 1969 but, following the commercial failure of several movie musicals, the project was abandoned in the spring of 1970. A television version with Lee Remick and Hal Linden was broadcast in 1982.
A 1996 Off - Broadway revival at the Lamb 's Theatre was directed by Will Mackenzie and starred Karen Ziemba and David Garrison. It ran for 52 performances.
The show is frequently presented by regional theatres across the United States, because of the minimal cost of mounting it. A production at the Chanhassen Dinner Theatre in Chanhassen, Minnesota ran for more than 20 years with leads David Anders and Susan Goeppinger, who eventually married during their run. This set the American record for a play running with the original cast.
Act One
A bedroom, complete with four - poster bed, chaise longue and easy chair. There are two dressing tables downstage on either side. Michael and Agnes sit at the tables, getting dressed for their wedding. They finish their makeup and don their wedding apparel. They move through the ceremony, complete with Agnes throwing the flowers and the two going out into the audience to shake hands and welcome guests. Finally, Michael carries Agnes back across the threshold and they fall into the bed ("Prologue '').
Agnes ' feet hurt. Michael removes her shoe and kisses her foot. Agnes protests: she is a little drunk, a little weepy and very nervous. Michael professes his belief that they were married in a former life, and his sweetness makes her cry. We get a glimpse of the Michael that we will soon get to know when he ruins her happy moment by pointing out to her that she should go ahead and cry, as her youth is over. They nervously and elaborately prepare for the wedding night. They climb clumsily into bed and pull the covers back to find, to their horror, a pillow embroidered with the words, "God Is Love. '' Michael awkwardly turns out the light. They say goodnight to each other, and Agnes admits that she 's never seen a naked man ("Goodnight ''). There is an uncomfortable silence. Finally, they kiss and embrace passionately.
A spotlight comes up on Michael, sitting on the lip of the stage. He stretches and smiles and tells the audience a surprising secret: contrary to conventional wisdom, he actually loves his wife ("I Love My Wife ''). He wakes her and they dance together. He falls asleep, and she puts the "God Is Love '' pillow under his head, tucks him in and kisses him. The music becomes soft and tender as Agnes straightens the room. She folds his clothes and puts them away, gets the robe that is hanging at her dressing area and slips into a new outfit. We see that she is very, very pregnant. She ruminates on impending motherhood ("Something Has Happened '').
In the following blackout, we hear an old - fashioned, hand - rung bell as Michael calls for Agnes. He is in bed with a washcloth on his head as she enters -- still hugely pregnant -- pushing a bassinet. Michael is having sympathetic labor pains and is very needy and upset. He already feels displaced by the baby to come. Sitting on his lap, Agnes goes into labor. As Michael goes for the doctor, they promise to each other that they will never let anything happen to their relationship.
Lights come up on Michael pacing, worrying and praying that his wife and baby survive ("The Waiting Room ''). All is well; Michael has a son. He tosses cigars into the audience. Agnes enters, pulling a clothesline strung with diapers and baby clothes. Michael now realizes that he has a family for whom he needs to provide ("Love Is n't Everything ''). Agnes then has a girl. Now Michael knows that he really needs to make money. Despite the stress, love is n't everything... but makes it all worth it. The tension between Agnes and Michael starts to become palpable. Michael has become very self - involved and self - important about his work and success as a novelist. He treats her as a lowly domestic as he lectures the audience on writers and writing, themes and works. She interrupts him in the middle of his diatribe and calls his work dull. He corrects her grammar, criticizes her cooking and habitual lateness, insisting that she accompany him to literary parties at which she feels uncomfortable. She counters that she also has a list of irritating habits ("Nobody 's Perfect '').
They return from the party and argue bitterly. He admits to having an ongoing affair with a younger woman. He blames Agnes for driving him away. He also points out that everyone knows that men get better with age and women get worse ("A Well Known Fact ''). Agnes exits in disgust, and Michael finishes the song, making a fast, showy exit, a matinee idol in all his glory. In response to Michael having criticized her shopping habits, Agnes starts parading the extravagant items on her dressing table. She fantasizes about what her life would be like if she were a saucy, single divorcee, partying the night away ("Flaming Agnes '').
Michael reappears to finish their discussion. She tells him to get out; he refuses, since, he claims, it is his house and his mortgage. She resolves to leave, taking the checkbook with her. He begins throwing her things into a suitcase: her alarm clock, her nightgown, her cold cream and the "God Is Love '' pillow. Their eyes are now wide open about each other... and it is n't pleasant ("The Honeymoon Is Over '').
She stalks out, with her ermine thrown over her nightgown, and the Flaming Agnes hat set determinedly on her head. He waits for a moment, certain of her return. When she does n't come back, he rushes after her. We hear a struggle and he reappears, dragging her into the room. They fight and he throws her on the bed. His anger dissipates. Looking at her pleadingly, he tells her of his loneliness and regret. Her eyes fill with tears and she acknowledges that no one is perfect ("Finale -- Act I ''). They lie together and embrace.
Act Two
Agnes and Michael are in bed, celebrating New Year 's Eve. The "God Is Love '' pillow is gone, as is the gaudy chandelier. Time has passed; their children are teenagers now, celebrating at New Year 's Eve parties of their own. Agnes and Michael are getting older ("Where Are the Snows? ''). Michael is angry that their son has n't returned and goes downstairs to wait for him. He storms back into the room, having found bourbon in his son 's room. They argue about parenting, and Michael takes a swig from the bottle, only to discover that their son has filled the bottle with the cod liver oil that his mother thought she was administering for three years. We hear that, offstage, Michael has confronted his son at the door with the razor strap, only to discover that his boy is a man, dressed in his father 's tuxedo.
Michael and Agnes reflect on the dreams and regrets of their early married years. Agnes asks Michael if he is disappointed. He is not ("My Cup Runneth Over ''). They fantasize about their children growing up and moving out. They make plans for their middle age and retirement: he 'll finally finish his Collected Tolstoy; she 'll cruise to Tahiti and learn to do the hootchi - koo; he 'll play the saxophone, she the violin ("When the Kids Get Married '').
Later, Michael is dressing with little success for his daughter 's wedding. He is not pleased with his little girl 's choice of husband ("The Father of the Bride ''). Agnes enters, crying. The stained glass window appears again, and Michael and Agnes watch the ceremony. They wave to the departing couple and go home to face an empty nest.
Agnes faces her transition to middle age. She does n't know what to be now that her children no longer need her as much ("What Is a Woman? ''). Michael enters the scene with two packages, but Agnes announces that she 's going away; she does n't love Michael anymore. She feels that he neither understands nor appreciates her, and reveals her infatuation with a young poet. Michael confesses his love and concern for his wife. He shows her that he loves her, and she breaks down, laughing and crying at once. He gives her a charm bracelet with a charm for each of them, one for each of their children and room for lots of grandchildren. She feels much better. They dance together and multi-colored ribbons cascade from above ("Someone Needs Me '').
The music changes, becomes softer and more carousel - like as, together, they pick up the boxes and papers. They begin to pack up the house ("Roll up the Ribbons ''). The music continues as they go to their dressing tables and apply old - age makeup, wigs and whiten their hair. It 's eight a.m., and the much older Michael and Agnes are moving to a smaller apartment. They are gathering the last bits and pieces to take along with the movers. He pulls out a steamer trunk and finds the "God Is Love '' pillow. Agnes wants to leave the pillow for the newlyweds who have bought the house, but Michael refuses. He was mortified to find it on their wedding night and wo n't have another young groom traumatized. Agnes sends Michael to look for a bottle of champagne and sneaks the pillow back under the covers. Michael returns with the champagne, but they determine that they wo n't drink it, since it 's too early. They look at each other across the bed and remember what a good life they 've had ("This House -- Finale '').
They take a last look around and leave the room together. Michael comes back in for the champagne and finds the "God Is Love '' pillow under the covers. He puts it on Agnes ' side of the bed and the champagne on his side before exiting. The curtain falls on the home.
The original cast album was released by RCA Victor. Ed Ames had a major hit with his recording of the song "My Cup Runneth Over. ''
In his review for The New York Times, Walter Kerr wrote that the stars were "great. '' Martin has "several funny little vocal tricks... and always... with that mellow sound that comes from her throat like red wine at room temperature. '' Preston is "at his untouchable best when the show asks him to be pompous, and blissfully obtuse. '' The work of the director is noted: "Then, courtesy of Gower Champion, there are all those engaging things the two do together... one of them is literally the soft - shoe to end all soft - shoes, because it is done with no shoes at all. '' Kerr further wrote that the material was "on the whole barely passable, a sort of carefully condensed time capsule of all the cliches that have ever been spawned by people married and / or single... the lyrics are for the most part remarkably plain - spoken. '' In reviewing the new cast of Carol Lawrence and Gordon MacRae, Clive Barnes wrote in The New York Times that they "exerted a certain charm. '' The musical is "very slight indeed... Carol Lawrence was more than (Martin 's) equal. She did the younger scenes with less cuteness and her acting had a depth unusual in a musical. Mr. MacRae... has a better if more conventional voice. ''
|
who sings in ghost town with kanye west | Ghost Town (Kanye West song) - wikipedia
"Ghost Town '' is a song by American rapper and producer Kanye West from his eighth studio album, Ye (2018). It features vocals from PartyNextDoor, Kid Cudi, and 070 Shake.
In an interview, featured artist 070 Shake revealed that the song was finished on the very same day that the album was released.
The track includes a sample of "Take Me for a Little While '' by Dave Edmunds within its leading bass, drum and keys combination. While singing the chorus of "Ghost Town '', Cudi performs the sampled vocal line himself.
In the week of Ye 's release, the song entered the US Billboard Hot 100 at number 16. "Ghost Town '' also debuted on the UK Singles Chart at number 17 following the release of the album.
Credits adapted from Tidal.
|
who did the buffalo bills play in the super bowl | History of the Buffalo Bills - wikipedia
This article details the history of the Buffalo Bills. The team began play in 1960 as a charter member of the American Football League (AFL) and won two consecutive AFL titles in 1964 and 1965. The club joined the National Football League (NFL) as part of the 1970 AFL - NFL Merger. The Bills have the distinction of being the only team to advance to four consecutive Super Bowls, but also the dubious distinction of losing all four of them.
The Bills were not the first professional football team to play in Buffalo, nor was it the first NFL team in the region. Professional football had been played in Buffalo and in upstate New York since the beginning of the 20th century. In 1915, Barney Lepper 's "Buffalo All - Stars '' were founded; the team would later be replaced by the Niagaras in 1918, then the Prospects in 1919. The Prospects were the basis of what would become the "Buffalo All - Americans, '' who joined what would become the NFL in 1920. After changing their name to the Bisons in 1924 (and, for one season, the Rangers in 1926), the team suspended operations in 1927, then came back in 1929 and re-folded at the end of that season.
After Buffalo hosted two NFL games in 1938 (a practice that would become a semi-regular occurrence in the city until the current team 's arrival), the third American Football League installed the Buffalo Indians in the city; the Indians played two years before the league suspended and ultimately folded due to World War II. After the war, when the All - America Football Conference formed, Buffalo was again selected for a team; originally known as the Buffalo Bisons, the same name as a baseball team and (at the time) a hockey team in the area, the team sought a new identity and named itself the "Buffalo Bills '' in 1947. When the AAFC merged with the NFL in 1950, the AAFC Bills were merged into the Cleveland Browns. Though there was no connection between the AAFC team and the current team, the Bills name proved popular enough that it was used as the namesake for the future American Football League team that would form in 1959.
The forerunners to the Canadian Football League would also play at least one game in Buffalo in 1951.
When Lamar Hunt announced formation of the American Football League in the summer of 1959, Buffalo was one of the target cities Hunt sought. His first choice of owner, however, turned him down; Pat McGroder (then a liquor store owner and sports liaison with the city of Buffalo) was still hopeful that the threat of the AFL would prompt the NFL to come back to Buffalo to try and stop the AFL from gaining a foothold there (as the NFL would do with teams in Minnesota, Dallas, St. Louis and later Atlanta). McGroder 's hopes never came to fruition, and in 1961, he took a position in the new Bills organization.
Harry Wismer, who was to own the Titans of New York franchise, reached out to insurance salesman and automobile heir Ralph C. Wilson, Jr. to see if he was interested in joining the upstart league. (Both Wismer and Wilson were minority owners of NFL franchises at the time: Wilson part - owned the Detroit Lions, while Wismer was a small partner in the Washington Redskins but had little power due to majority owner George Preston Marshall 's near - iron fist over the team and the league.) Wilson agreed to field a team in the new league, with the words "Count me in. I 'll take a franchise anywhere you suggest. '' Hunt gave him the choice of six cities: Miami, Buffalo, Cincinnati, St. Louis, Atlanta, or Louisville, Kentucky. Wilson 's first choice was Miami, but city officials there were wary of an upstart league after the failure of the Miami Seahawks over a decade prior and rejected the idea. (Once the AFL established itself, the city reversed its stance and allowed the Miami Dolphins to reside in the city.) During World War II, Wilson had served aboard a minesweeper, the YMS - 29, in the Mediterranean, as executive officer to the ship 's captain, Buffalo native George E. Schaaf. Wilson remembered the Buffalo team in the old NFL and that his old ship 's captain was from Buffalo, so he reached out to Schaaf, by then a general construction contractor who still resided there. Schaaf assured Wilson that pro-football interest was significant in Buffalo and assembled a coalition of key Buffalo figures who were able to interest Wilson in bringing the AFL franchise to Buffalo. Their efforts to lobby Wilson to come to Buffalo were successful, and Wilson sent Hunt a telegram with the now - famous words, "Count me in with Buffalo. ''
The Buffalo Bills were a charter member of the American Football League (AFL) in 1960. After a public contest, the team adopted the same name as the AAFC Buffalo Bills, the former All - America Football Conference team in Buffalo. The AAFC Bills franchise was named after the Buffalo Bills, a popular barbershop quartet, whose name was a play on the name of the famed Wild West showman Buffalo Bill Cody. The franchises are not officially related to each other, other than in name.
After an inaugural season that saw the Bills finish 5 -- 8 -- 1 (third in the four - team AFL East Division), the Bills gained four of the first five picks in the 1961 AFL draft, including the top slot, which they used to draft offensive tackle Ken Rice. They also drafted guard Billy Shaw in the same draft. Success did not come overnight. On August 8, 1961, the Bills became the first (and only) American Football League team to play a Canadian Football League team, the nearby Hamilton Tiger - Cats. Because of that game, they also hold the dubious distinction of being the only current NFL team to have ever lost to a CFL team, as the Tiger - Cats won, 38 -- 21. Hamilton was one of the best teams in the CFL (they would go on to win the Big Four title but lose in the 49th Grey Cup that year), and Buffalo, at the time, was the worst team in the AFL.
In the 1962 offseason, Buffalo began to get good players for the first time in franchise history. Jack Kemp was acquired off waivers from the San Diego Chargers after the Chargers thought Kemp, who had led the Chargers to back - to - back AFL title games, had a bum hand. The Bills also drafted Syracuse running back phenomenon Ernie Davis and had a serious chance of getting him to play for Buffalo after the Redskins, a team Davis refused to play for, drafted him; however, Davis instead opted to play for the NFL after the Redskins traded him to Cleveland, and he died of leukemia before playing a single down of professional football. Instead, the Bills then acquired one of the CFL 's top running backs, Cookie Gilchrist.
Offensive lineman Bob Kalsu quit the team after his 1968 rookie season to serve in the Vietnam War. He never returned; Kalsu was killed in action in 1970 and is often cited by Bills fans as the first professional football player to die in action in war during his playing career. This is not true, as Young Bussey and Jack Lummus were still of playing age when they left the NFL to serve in World War II and were killed in action a few years later. Kalsu would be the only NFL player to lose his life in Vietnam.
The 1968 season was a tumultuous one. With starter Jack Kemp injured, Buffalo resorted to converting wide receiver Ed Rutkowski to quarterback in a rotation with Rutkowski, Kay Stephenson and Dan Darragh. The result was disastrous, and the Bills once again dropped to last in the league, resulting in the Bills earning the first overall draft pick in what was now the combined AFL - NFL draft. The Bills selected O.J. Simpson with their pick.
Simpson would become the face of the franchise through the 1970s. The NFL - AFL merger placed Buffalo in the AFC East division with the Patriots, Dolphins, Jets, and Colts. Their first season in the NFL saw the team win only three games, lose ten, and tie one. In 1971, not only did the Bills finish in sole possession of the NFL 's worst overall record at 1 -- 13, but they also scored the fewest points (184) in the league that year while allowing the most (394); no NFL team has since done all three of those things in the same season in a non-strike year. They thus obtained the # 1 draft pick for 1972, which was Notre Dame DE Walt Patulski. Despite good on - field performances, he struggled with injuries before being traded to the St. Louis Cardinals in 1976. Lou Saban, who had coached the Bills ' AFL championship teams, was re-hired in 1972, and the team finished 4 -- 9 -- 1.
Meanwhile, War Memorial Stadium was in severe need of replacement, being in poor condition, located in a worsening neighborhood, and too small to meet the NFL 's post-1969 requirement that all stadiums seat at least 50,000. Construction began on a new stadium in the suburbs after Ralph Wilson threatened to move the team to another city; at one point after the 1970 season Wilson was "prepared to move the team '' to Husky Stadium in Seattle and was also fielding offers from Tampa and Memphis. Western New York leaders acquiesced to Wilson 's demands and built a new open - air facility that featured a capacity of over 80,000 (at Wilson 's request) and, unlike other stadiums, was built into the ground. Rich Stadium (now New Era Field) opened in 1973 and continues to house the Bills to this day.
1973 was a season of change: Joe Ferguson became their new quarterback, they moved into a new stadium, Simpson recorded a 2,000 - yard season and was voted NFL MVP, and the team had its first winning record since 1966 with eight wins. The "Electric Company '' of Simpson, Jim Braxton, Paul Seymour, and Joe DeLamielleure, as recounted in the locally recorded hit "Turn on the Juice '', led a dramatic turnaround on the field. The "Electric Company '' was the offensive line (OG Reggie McKenzie, OT Dave Foley, C Mike Montler, OG Joe DeLamielleure and OT Donnie Green) which provided the electricity for the "Juice ''. O.J. became the only player to rush for 2,000 yards prior to the introduction of a 16 - game season. The team made the NFL playoffs at 9 -- 5 for the first time in history in 1974, but in their divisional playoff, they lost to the eventual Super Bowl champions, the Pittsburgh Steelers.
After an 8 -- 6 season in 1975, the Bills had internal troubles in 1976; Ferguson was injured and Gary Marangi proved ineffective as his replacement. The team dropped to the bottom of the AFC East at 2 -- 12, where they stayed for the rest of the 1970s. On a high note, the 1976 Thanksgiving Day game saw Simpson set the league record for rushing yards in a game, despite a 27 -- 14 loss to the Detroit Lions. After the 1977 season, Simpson was traded to the San Francisco 49ers.
The 1980 season marked the Bills ' return to contention. They beat the archrival Miami Dolphins for the first time in 11 years in their season opener, en route to an 11 - 5 record and their first AFC East title. However, they lost to the San Diego Chargers 20 - 14 in the divisional playoffs. In 1981, the Bills made the playoffs as a wild - card team with a 10 - 6 record. They defeated the New York Jets 31 - 27 in the wild card round, but lost in the divisional round to the eventual AFC champion Cincinnati Bengals, 28 - 21. The following year -- the strike - shortened season of 1982 -- the Bills slipped to a 4 -- 5 final record and missed the playoffs.
In the famous 1983 draft, the Bills selected quarterback Jim Kelly as the replacement for an aging Joe Ferguson, but Kelly decided to play in the upstart United States Football League instead. Chuck Knox left his coaching position to take a job with the Seattle Seahawks, and running back Joe Cribbs also defected to the USFL, a loss incoming head coach Kay Stephenson unsuccessfully attempted to stop in court. In 1984 and 1985, the Bills went 2 -- 14. By this point, attendance at Rich Stadium had fallen to under 30,000 fans per game for most of the 1985 season, leaving the team 's long - term future in doubt.
Among the names that Buffalo picked up after the USFL 's demise were general manager Bill Polian, head coach Marv Levy (both from the Chicago Blitz), special teams coach Bruce DeHaven, starting quarterback Jim Kelly (of the Houston Gamblers), center Kent Hull (of the New Jersey Generals), and linebacker Ray Bentley (of the Oakland Invaders), all of whom joined the Bills for the 1986 season. Midway through the 1986 season, the Bills fired coach Hank Bullough and replaced him with Levy, who in addition to the Blitz had also previously coached the Kansas City Chiefs and Montreal Alouettes. Levy and Polian put together a receiving game featuring Andre Reed, a defense led by first - overall draft pick Bruce Smith, and a top - flight offensive line, led by Hull along with Jim Ritcher, Will Wolford and Howard "House '' Ballard.
After the strike year of 1987, the Bills went 12 -- 4 in 1988, the rookie season of running back Thurman Thomas, and finished atop the AFC East for the first of four consecutive seasons. After a 17 - 10 victory over the Houston Oilers in the divisional playoff, they lost the AFC Championship 21 - 10 to the Cincinnati Bengals.
1989 was a relative disappointment, with a 9 -- 7 record and a divisional playoff loss to the Cleveland Browns. The Bills had a chance to win the game as time was running out, but Ronnie Harmon dropped a Kelly pass in the corner of the end zone. During this season, the Bills were called the "Bickering Bills '' by the fans and media due to significant infighting among the players and coaches throughout the season.
In 1990, the Bills switched to a no - huddle, hurry - up offense (frequently with Kelly in the shotgun formation, the "K - gun '', named for tight end Keith McKeller), which made the Bills ' offense one of the best in the league; their 428 points scored (26.75 per game) led the league. The team finished 13 -- 3 and, behind their no - huddle attack, beat the Miami Dolphins 44 - 34 and blew out the Los Angeles Raiders 51 -- 3 in the playoffs on their way to Super Bowl XXV. The Bills were favorites to beat the New York Giants (whom they had beaten on the road during the regular season), but the defensive plan laid out by Giants coach Bill Parcells and defensive coordinator Bill Belichick kept Buffalo in check (and without the ball) for much of the game. The game featured many lead changes, and with the score 20 -- 19 in favor of New York with eight seconds left, Bills kicker Scott Norwood attempted a 47 - yard field goal. His kick sailed wide right, less than a yard outside the goalpost upright.
The Bills won their fourth consecutive AFC East title in 1991, finishing 13 -- 3 again, with Thurman Thomas winning the NFL MVP and Offensive Player of the Year awards. In the playoffs, they routed the Kansas City Chiefs 37 - 14 in the divisional round and beat the Denver Broncos in a defensive struggle, 10 - 7, in the AFC Championship. The Bills looked to avenge their heartbreaking Super Bowl loss of a year earlier when they faced the Washington Redskins in Super Bowl XXVI, but it was not to be. The Redskins opened up a 17 -- 0 halftime lead and never looked back, handing the Bills a 37 -- 24 loss. Early in that game, Thurman Thomas lost his helmet and had to sit out the first two plays, making the Bills the butt of jokes nationwide.
The Bills lost the 1992 AFC East title to the Miami Dolphins, and Jim Kelly was injured in the final game of the regular season. Backup quarterback Frank Reich started their wild card playoff game against the Houston Oilers, and the Bills fell behind 35 -- 3 early in the third quarter. But they rallied behind Reich, taking the lead late in the fourth quarter and winning 41 - 38 in overtime. The 32 - point deficit remains, to this day, the largest overcome to win a game in NFL history. Buffalo then defeated the Pittsburgh Steelers 24 - 3 in the divisional playoff and upset the archrival Dolphins 29 - 10 in the AFC Championship to advance to their third straight Super Bowl. Super Bowl XXVII, played against the Dallas Cowboys, turned out to be a mismatch. Buffalo committed a Super Bowl - record 9 turnovers en route to a 52 -- 17 loss, becoming the first team in NFL history to lose three consecutive Super Bowls. One of the few bright spots for the Bills was Don Beebe 's rundown and strip of Leon Lett after Lett had returned a fumble inside the Bills ' 5 - yard line and was on his way to scoring. Lett started celebrating too early and held the ball out long enough for Beebe, who had made up a considerable distance, to knock it out of his hand. The play resulted in a touchback rather than a touchdown, stopping Dallas from breaking the record for most points scored by a team in a Super Bowl (55), set three years earlier and still held by the San Francisco 49ers.
The Bills won the AFC East championship in 1993 with a 12 -- 4 record, again won playoff games against the Los Angeles Raiders and Kansas City Chiefs, and set up a rematch with the Cowboys in Super Bowl XXVIII on January 30, 1994. The Bills became the only team ever to play in four straight Super Bowls and, in this game, the first team to face the same opponent in two straight Super Bowls. They looked ready to finally win one when they led at halftime, but a Thurman Thomas fumble returned for a touchdown by James Washington tied the game; Super Bowl MVP Emmitt Smith took over the rest of the game for the Cowboys, and the Bills were stunned again, 30 -- 13.
The four consecutive failures to win the title game, despite a 14 -- 2 regular - season record against the NFC, inspired many jokes. Steve Tasker recalled that when he made motivational speeches to groups of children, "invariably, some little guy raises his hand. He goes, ' Do you know what Bills stands for? ' and I 've heard it a hundred times. I go, ' No, what? ' He goes ' Boy, I Love Losing Super Bowls '. '' A player denounced the team 's poor reputation: "They still consider us losers. That is the most unfair statement that I 've ever seen or heard or read in my life ''. Andrea Kremer recalled, however, that "I do n't think there 's any doubt that America, that the national fan base, turned their back on the Bills. They 're just tired of it ''. The Bills would not get a chance to make it five straight in 1994. The team stumbled down the stretch and finished 7 -- 9, fourth in the division and out of the playoffs. During this period Tasker established himself year in and year out as the league 's top special teams performer.
In 1995, Buffalo signed free agent linebacker Bryce Paup to anchor the defense. The expansion Carolina Panthers ended up selecting several key Bills contributors (backup quarterback Frank Reich, wide receiver Don Beebe and tight end Pete Metzelaars) in the expansion draft, where they formed the core of that team 's inaugural roster.
The Bills again made the playoffs with a 10 -- 6 record and defeated Miami in the wild card round. They would not get back to the Super Bowl, however: the Pittsburgh Steelers, who went on to reach it, beat Buffalo 40 -- 21 in the divisional playoffs.
In 1996, the Bills saw their commanding lead in the AFC East race disappear to a surging New England Patriots team; the Bills won against the Patriots in September, then in late October the Patriots won after three touchdowns were scored in the final 85 seconds. The Bills still made the playoffs as the wild card home team, where they became the first victim of the Cinderella Jacksonville Jaguars, to date the only visiting team ever to win a playoff game at Rich Stadium. Jim Kelly retired after the season when Bills management told him they were moving in a new direction and wanted him to help develop a younger quarterback to take over, signaling an end to the most successful era in Bills history. Thurman Thomas gave way to new running back Antowain Smith. Kelly 's loss was felt in 1997, when his replacement Todd Collins faltered and the Bills stumbled to 6 -- 10. Coach Marv Levy retired after the season.
Under new coach Wade Phillips, the Bills acquired two quarterbacks for the 1998 season: Jaguars backup Rob Johnson, for whom Buffalo traded a high first - round pick, and former Heisman Trophy winner and Canadian Football League star Doug Flutie, signed almost as an afterthought. Although many Bills fans wanted Flutie to get the starting job after he outplayed Johnson in camp and in the preseason, Phillips named Johnson the starter. The Bills stumbled to an 0 - 3 start, but after Johnson suffered a rib injury against the Indianapolis Colts, Flutie came in and led the Bills to a playoff spot and a 10 -- 6 record. They faltered in their first playoff game against the Miami Dolphins.
Flutie 's popularity continued into the 1999 season, with the Bills finishing 11 -- 5, two games behind the Indianapolis Colts in the AFC East standings. Wade Phillips gave Rob Johnson the start in the first - round playoff game against the Tennessee Titans, even though Flutie had won 10 games and gotten the Bills into the playoffs. The Bills scored a field goal with 16 seconds left to take a 16 -- 15 lead, but the Titans won the game on a controversial play that came to be known as the "Music City Miracle '': on the ensuing kickoff, Frank Wycheck lateraled the ball to Kevin Dyson, who then scored the winning touchdown. Although Wycheck 's pass was close to an illegal forward lateral, replays were ruled inconclusive and the call on the field of a touchdown was upheld. The Titans went on to advance to the Super Bowl. As of 2017, this was the Bills ' last playoff appearance.
The final ties to the Bills ' Super Bowl years were severed in 2000, when Thurman Thomas, Andre Reed and Bruce Smith were all cut; Antowain Smith, Eric Moulds, and Marcellus Wiley, respectively, had long since eclipsed them on the depth chart. After an 8 -- 8 season, with the team still caught up in the Johnson - versus - Flutie controversy, general manager John Butler departed for the San Diego Chargers -- and took Flutie and Wiley with him, among many other Bills contributors. Flutie left the Bills with a .677 winning percentage in 31 starts. Antowain Smith also left as a free agent for the New England Patriots, where he was the starting running back on their first two Super Bowl championship teams. Both Flutie and Smith were dominant in their final game as Bills, a 42 -- 23 victory over the Seattle Seahawks. Smith was quickly replaced by rookie Travis Henry.
In 2001, following the departure of John Butler, team owner Ralph Wilson announced his retirement as president of the organization and handed the reins of his franchise to Tom Donahoe, a former executive with the Pittsburgh Steelers. The move turned out to be disastrous. Donahoe, just a year after the team had released three eventual Hall of Famers in a salary cap move, proceeded to gut the franchise of most of its remaining recognizable talent, much of which joined Butler in San Diego that year, replaced it with young, inexperienced, lower - end players, and installed Rob Johnson as the starting quarterback. The team went from playoff contender to a 31 -- 49 record during Donahoe 's five - year tenure, and the Bills have not made the playoffs since Donahoe 's arrival, even after his departure.
Titans defensive coordinator Gregg Williams took over as head coach for the 2001 season, which proved to be the worst in recent memory for the Bills. Rob Johnson went down in mid-season with an injury and Alex Van Pelt took over. Buffalo finished 3 -- 13. The Bills even lost a much - hyped mid-season match up with "Bills West '' (the Flutie - led Chargers). After the season, they traded for quarterback Drew Bledsoe, deemed expendable by the Patriots after Tom Brady led them to a Super Bowl victory.
Bledsoe revived the Bills in 2002, leading them to an 8 -- 8 record and setting 10 team passing records in the process. However, in a tough division in which every other team finished 9 -- 7, they were still in last place. Another Patriots castoff, safety Lawyer Milloy, who joined the Bills days before the 2003 season began, gave the team an immediate boost on defense. After the Bills beat the eventual champion Patriots 31 -- 0 in the first game and crushed the Jaguars in their second, play - by - play announcer Van Miller announced he would retire at the end of the season, expecting the team to have a shot at the title. However, the Bills stumbled through the rest of the season, finishing 6 -- 10; their season ended as the mirror image of its beginning, with a 31 -- 0 trouncing by New England. In one game, however, the Bills ' fans gained a small measure of satisfaction when the defense sacked Rob Johnson multiple times in his relief effort for the Washington Redskins.
Gregg Williams was fired as head coach after the 2003 season and replaced with Mike Mularkey. The Bills also drafted another quarterback, J.P. Losman, to be used if Bledsoe continued to struggle in 2004. Losman, however, broke his leg in the preseason and missed most of the regular season, seeing very limited action.
Bledsoe did continue to struggle in 2004: the Bills started the season 0 -- 4, with the run - first offense averaging only 13 points per game, and each loss was close. The team finally turned things around with a home victory against the also - winless Miami Dolphins. This, along with the emergence of Willis McGahee (a first - round pick and a gamble by the Bills due to the knee injury McGahee suffered in his last college game) taking over the starting running back role from the injured Travis Henry, and the emergence of Lee Evans as a second deep threat, sparked the Bills to go 9 -- 2 over their next eleven games. The string of victories put the Bills in the hunt for the final AFC wild card playoff spot. Though they lost to the Pittsburgh Steelers in the final game of the season, costing them a playoff berth and devastating the fans, the late - season surge gave the team a positive direction heading into 2005.
After the season, wanting to go in a younger direction and unhappy with Drew Bledsoe 's overall performance, the Bills decided to hand the starting quarterback reins to J.P. Losman. This angered Bledsoe, who demanded his release, which the Bills granted. Bledsoe then signed with the Dallas Cowboys, reuniting him with his former New England Patriots coach Bill Parcells.
Losman 's development did not proceed as quickly as the Bills had hoped it would. He began the 2005 season 1 -- 3 as a starter, prompting Kelly Holcomb to replace him. Losman would not see action again until Holcomb was injured in Week 10 against the Kansas City Chiefs. He led the Bills to a win in that game, but would again be replaced by Holcomb after losing the next several games. Perhaps the low point of Losman 's season was a 24 -- 23 loss to the Miami Dolphins, a game in which Buffalo led 21 -- 0 and 23 -- 3, but gave up 21 unanswered points in the 4th quarter. Buffalo 's 2005 campaign resulted in a 5 -- 11 record and the firing of General Manager Tom Donahoe in January 2006. Marv Levy was named as his replacement, with hopes that he would improve a franchise that failed to make the playoffs during Donahoe 's tenure. That same month, Mike Mularkey resigned as head coach, citing family reasons along with disagreement over the direction of the organization. Dick Jauron was hired as his replacement.
The 2006 and 2007 seasons both brought 7 -- 9 records under Jauron 's coaching, having been eliminated from playoff contention in December in both years. 2006 saw the additions of Donte Whitner, Ko Simpson, Ashton Youboty, Anthony Hargrove and Kyle Williams to the defensive corps while 2007 brought in Trent Edwards to quarterback the offense, rookie first - round draft pick Marshawn Lynch, second - round pick Paul Posluszny, offensive linemen Derrick Dockery and Langston Walker, and backup running back Fred Jackson. J.P. Losman played all 16 games in 2006 but was benched in early 2007 in favor of Edwards.
At the end of the 2007 season, Levy retired once again, citing the fact that he had reached the end of his two - year contract. Meanwhile, offensive coordinator Steve Fairchild, a frequent fan target for the Bills ' offensive woes, was hired as head coach of Colorado State University 's football program. Offensive line coach Jim McNally retired shortly after the end of the season. All of those positions were filled from within, with Turk Schonert promoted to offensive coordinator.
One of the most notable moves in the league occurred during the 2008 offseason, when league officials approved an October 2007 proposal by Bills owner Ralph Wilson to sell Canadian media mogul Edward S. "Ted '' Rogers, Jr. the rights to host an annual regular season game and a biennial preseason game at Toronto 's Rogers Centre over the next five years, in exchange for a sum of C $78,000,000. The games began during the 2008 season. Notable additions to the roster for 2008 included linebacker Kawika Mitchell, acquired as a free agent from the defending Super Bowl champion New York Giants, and defensive tackle Marcus Stroud, in addition to draft picks cornerback Leodis McKelvin and wide receiver James Hardy. The Bills started extremely well that season, going 5 -- 1 before their bye week, with Trent Edwards showing promise of finally being a capable quarterback. However, Edwards suffered a concussion from a huge hit in a game against the Arizona Cardinals, and the team went 2 -- 8 the rest of the way for another 7 -- 9 record, resulting in what was at the time the longest active streak of missed playoffs.
On March 7, 2009, the Bills made a major splash in free agency by signing veteran wide receiver Terrell Owens, who had recently been released by the Dallas Cowboys and was known as much for his elaborate touchdown celebrations as for his on - field play, to a one - year deal. Former starting quarterback J.P. Losman, by this point relegated to third string behind Trent Edwards and Gibran Hamdan, was allowed to become a free agent. In the first round of the 2009 NFL Draft, the Bills selected defensive end / linebacker Aaron Maybin from Penn State with the 11th overall pick and center Eric Wood of Louisville with the 28th overall pick. Buffalo also selected free safety Jairus Byrd of Oregon, guard Andy Levitre of Oregon State, tight end Shawn Nelson of Southern Mississippi, and cornerbacks Cary Harris of USC and Ellis Lankster of West Virginia.

Owens disappointed for most of the season, finishing with modest numbers (829 yards and five touchdowns), and the offensive line suffered from severe turnover; the team stumbled to a 3 -- 6 start, after which the Bills fired head coach Dick Jauron midseason. The season opener against New England was a loss, although Buffalo 's morale was raised by the fact that it came by a single point. Other notable games included a 16 -- 13 overtime victory over the Jets in Week 6 and the Week 10 game against Tennessee, in which Titans owner Bud Adams made an obscene gesture at Bills fans and was fined $250,000. The Week 13 game against the Jets was an international series match held across the border in Toronto. In Week 15, the Bills hosted New England but, despite optimistic predictions, fell 17 -- 10, marking the fifth season in a row in which they lost both matches against the Patriots; the loss eliminated Buffalo from playoff contention and marked their tenth consecutive season without a playoff appearance. In the season finale, they routed the 14 -- 1 Indianapolis Colts 30 -- 7 to end the year at 6 -- 10, though Peyton Manning was benched early, as the game was meaningless for the playoff - bound Colts.

Quarterback Trent Edwards battled injury throughout the season, splitting games with backup Ryan Fitzpatrick, formerly of the Cincinnati Bengals. The Bills were dealt another blow when star running back Marshawn Lynch was given a three - game suspension by Commissioner Goodell for pleading guilty to misdemeanor weapons charges. Backup running back Fred Jackson did quite well in Lynch 's absence and, though his role diminished upon Lynch 's return, still finished with a 1,000 - yard rushing season. Free safety Jairus Byrd showed extreme promise, leading the NFL with 9 interceptions and earning a selection to the 2009 Pro Bowl.
Buddy Nix, a former assistant general manager of the San Diego Chargers, was named general manager in the final week of the 2009 season. One of his first personnel moves was to cut ties with Owens (ironically, a man he had recruited during his time in college football).
On January 20, 2010, the team named Chan Gailey as head coach. Gailey had previously been the offensive coordinator of the Kansas City Chiefs and head coach of Georgia Tech and the Dallas Cowboys, whom he led to the postseason in both 1998 and 1999, going 8 -- 0 in the division in 1998.
With the expiration of Terrell Owens ' contract in March 2010, the Bills chose not to re-sign him.
The Bills opened the 2010 season with a home loss to Miami. After an 0 -- 4 start, the Bills released Trent Edwards and named Ryan Fitzpatrick starting quarterback. Despite some close games, they fell to 0 -- 8 before beating Detroit at home in Week 10. Then came a 49 -- 31 win in Cincinnati and an overtime loss to Pittsburgh. The team finished 2010 with a 4 -- 12 record.
The Bills fired Tom Modrak, one of the last connections to the Donahoe era, shortly after the 2011 NFL Draft. As a result of the Bills ' poor play in 2010, the team earned the third overall selection in said draft, using it to select defensive tackle Marcell Dareus in an effort to improve the team 's long - struggling run defense.
Buffalo had an excellent start to 2011, routing Kansas City 41 -- 7. The following week, they hosted Oakland and erased a 21 - 3 deficit, winning 38 -- 35. In Week 3, the Bills hosted the Patriots, erased a 21 - 0 Patriots lead, and led 31 - 24 in the fourth quarter; a late Tom Brady touchdown tied the game, but the Bills drove into range for a last - second field goal. The 34 - 31 win ended a 15 - game franchise losing streak to the Patriots spanning eight years. Despite starting the 2011 season 5 -- 2 and leading the AFC East for several weeks, a wave of injuries to several key starters led to a seven - game losing streak, pushing the team out of playoff contention for the twelfth straight year. The losing streak was finally broken with a defeat of the Tim Tebow - led Denver Broncos on Christmas Eve, in a game that had unusually poor attendance.
Following another disappointing season in 2012, in which the Bills went 6 - 10, the team relieved Chan Gailey and his entire coaching staff of their duties.
On January 1, 2013, it was announced that Ralph Wilson had "passed the torch '' to Russ Brandon, giving Brandon complete control of football operations. Brandon then served as CEO and president of the team.
Early on the morning of January 6, 2013, Adam Schefter reported that the Buffalo Bills had hired Doug Marrone as their new head coach.
In the 2013 NFL Draft the Bills traded back from the 8th pick to the 16th and selected quarterback E.J. Manuel out of Florida State. Olympic sprinter Marquise Goodwin and linebacker Kiko Alonso were among the other notable players chosen. After the draft, Nix announced his resignation, and Doug Whaley moved into the general manager position. A knee injury to Manuel almost forced the team to start undrafted rookie Jeff Tuel as their opening day quarterback; Manuel recovered in time to start Week 1, only to injure his other knee a few weeks later, which resulted in the signing of Thad Lewis (who had himself started a game as an undrafted rookie the previous year with the Cleveland Browns). The Bills again finished 6 - 10 and missed the playoffs for the 14th consecutive season.
Owner Ralph Wilson died March 25, 2014, at the age of 95. Wilson 's assets, including the team, were placed into a trust governed by four members: Wilson 's widow, Mary Wilson; his niece, Mary Owen; Jeff Littman, the Bills ' chief financial officer; and Eugene Driker, an attorney. The trust sold the team to Buffalo Sabres owner Terrence Pegula and his wife Kim, reportedly for $1.4 billion in cash, which the Wilson trust intended to use as an endowment for charitable causes in Western New York and Wilson 's hometown of Detroit. Pegula outbid two other parties, a Toronto - based consortium led by Jon Bon Jovi and a stalking horse bid from Donald Trump (a failure some have cited as a factor in Trump 's decision to run for President the next year), to secure the team. The deal closed October 10, 2014.
The Bills finished the 2014 season with a 9 -- 7 record, which broke a league - leading streak of nine consecutive losing seasons. However, they were eliminated from playoff contention after a loss to the Oakland Raiders in the second to last week of the season, which extended their league - leading playoff drought to fifteen seasons. The starting quarterback for most of the 2014 season was Kyle Orton, a last - minute signing who was named starter a month into the regular season. Orton announced his retirement the Monday following the conclusion of the season.
The 2015 season was the first full season for the Bills under the Pegula family 's ownership. On December 31, 2014, Doug Marrone opted out of his contract with the Bills after his request for a contract extension was denied by Pegula. On January 11, 2015, it was reported that Rex Ryan, who had recently been fired as head coach of the New York Jets, would become the next head coach; he was officially named to the position the next day. On January 13, 2015, it was announced that defensive coordinator Jim Schwartz would not be returning for the 2015 season. The team dramatically overhauled its offense in the offseason, bringing in a number of new starters: quarterback Tyrod Taylor, running back LeSean McCoy, fullback Jerome Felton, wide receiver Percy Harvin and tight end Charles Clay.
The Bills set a franchise record for the 2015 season with more than 60,000 season tickets sold, a surge credited largely to the hiring of Rex Ryan. The Bills opened the season with a 24 - 17 win over the Indianapolis Colts, but lost to traditional nemesis New England despite a late - game comeback attempt. Through the first quarter of the season the Bills led the NFL in penalties; after being flagged 17 times in Week 4 against the New York Giants, they were penalized only seven times in their 14 - 13 Week 5 victory over the Tennessee Titans. In the end, the Bills finished a middling 8 - 8, missing the playoffs for the 16th consecutive season, the longest active streak in major North American professional sports after the Toronto Blue Jays ended their 22 - year drought in 2015.
In 2016, Kathryn Smith became the first woman to be a full - time coach in the NFL when she was hired by the Bills as a special teams quality control coach. The start of the 2016 season was marred by long - term injuries to both of the team 's top draft picks, first - rounder Shaq Lawson and second - rounder Reggie Ragland, who missed his entire rookie season. After winning four straight games in Weeks 3 - 6, the Bills won only three more games to finish 7 - 9. On December 27, 2016, Rex Ryan was fired after compiling a 15 - 16 record over two seasons; his brother Rob was dismissed as well. This made the Bills the third NFL team to fire a coach in - season that year, after the Los Angeles Rams (Jeff Fisher) and the Jacksonville Jaguars (Gus Bradley). Anthony Lynn was promoted to interim head coach.
On January 11, 2017, Sean McDermott was hired as the head coach of the Buffalo Bills. McDermott had spent the previous six seasons as the defensive coordinator of the Carolina Panthers.
As of 2016, the Buffalo Bills were the only team in the National Football League yet to make the playoffs in the 21st century. Heading into that season, the playoff drought stood at 16 seasons, the longest active drought in the league (the Bills ' streak of nine consecutive losing seasons, also a high among active streaks, was broken in 2014). In addition, the Bills have suffered attendance problems at Ralph Wilson Stadium, particularly in the later portions of the season, when Buffalo 's weather declines and the team usually falls out of playoff contention. Of the fourteen home games at Ralph Wilson Stadium in 2010 and 2011, six were blacked out due to the failure to sell out, and a seventh very narrowly avoided the same fate. Christmas Eve games in 2006 and 2011 were particularly poorly attended, with only 45,000 in attendance at the 2011 contest, an estimated half of whom were fans of the opposing team. The attendance problems have largely been mitigated since the arrival of Ryan and the Pegulas.
On December 21, 2012, team CEO Russ Brandon, New York Governor Andrew Cuomo, and Erie County Executive Mark Poloncarz announced a new 10 - year lease for Ralph Wilson Stadium. Its terms include $130 million in renovations and a $400 million buyout to move the team out of Buffalo (in addition to the NFL 's relocation fee), and for the first time the team will pay for part of the renovations. The deal also calls for a committee to explore building a new stadium in the Buffalo vicinity, a proposal the Pegulas have put on hold.
when did the first earth day take place | Earth Day - wikipedia
Earth Day is an annual event celebrated on April 22. Worldwide, various events are held to demonstrate support for environmental protection. First celebrated in 1970, Earth Day events in more than 193 countries are now coordinated globally by the Earth Day Network.
On Earth Day 2016, the landmark Paris Agreement was signed by the United States, China, and some 120 other countries. This signing satisfied a key requirement for the entry into force of the historic draft climate protection treaty adopted by consensus of the 195 nations present at the 2015 United Nations Climate Change Conference in Paris.
In 1969 at a UNESCO Conference in San Francisco, peace activist John McConnell proposed a day to honor the Earth and the concept of peace, to first be celebrated on March 21, 1970, the first day of spring in the northern hemisphere. This day of nature 's equipoise was later sanctioned in a proclamation written by McConnell and signed by Secretary General U Thant at the United Nations. A month later a separate Earth Day was founded by United States Senator Gaylord Nelson as an environmental teach - in first held on April 22, 1970. Nelson was later awarded the Presidential Medal of Freedom award in recognition of his work. While this April 22 Earth Day was focused on the United States, an organization launched by Denis Hayes, who was the original national coordinator in 1970, took it international in 1990 and organized events in 141 nations.
Numerous communities celebrate Earth Week, an entire week of activities focused on the environmental issues that the world faces. In 2017, the March for Science occurred on Earth Day (April 22, 2017) and was followed by the People 's Climate Mobilization (April 29, 2017).
On January 28, 1969, a well drilled by Union Oil Platform A off the coast of Santa Barbara, California, blew out. More than three million gallons of oil spewed, killing over 10,000 seabirds, dolphins, seals, and sea lions. In reaction to this disaster, activists mobilized to create environmental regulation, environmental education, and Earth Day. Among the proponents of Earth Day were the people on the front lines of fighting the disaster: Selma Rubin, Marc McGinnes, and Bud Bottoms, founder of Get Oil Out. According to Kate Wheeling, Denis Hayes told her that Senator Gaylord Nelson of Wisconsin saw the 800 - square - mile oil slick in the Santa Barbara Channel from an airplane, which gave him the impetus to organize Earth Day.
On the first anniversary of the oil blowout, January 28, 1970, Environmental Rights Day was celebrated and the Declaration of Environmental Rights was read. The Declaration had been written by Rod Nash during a boat trip across the Santa Barbara Channel while carrying a copy of Thomas Jefferson 's Declaration of Independence. The organizers of Environmental Rights Day, led by Marc McGinnes, had been working closely for several months with Congressman Pete McCloskey (R - CA) on the creation of the National Environmental Policy Act, the first of many new environmental protection laws sparked by the national outcry over the blowout and oil spill, and on the Declaration of Environmental Rights. Both McCloskey (Earth Day co-chair with Senator Gaylord Nelson) and Earth Day organizer Denis Hayes, along with Senator Alan Cranston, Paul Ehrlich, David Brower and other prominent leaders, endorsed the Declaration and spoke about it at the Environmental Rights Day conference. According to Francis Sarguis, "the conference was sort of like the baptism for the movement. '' Nash, Garrett Hardin, McGinnes and others went on to develop the first undergraduate Environmental Studies program of its kind at the University of California, Santa Barbara.
The first Earth Day celebrations took place in two thousand colleges and universities, roughly ten thousand primary and secondary schools, and hundreds of communities across the United States. More importantly, it "brought 20 million Americans out into the spring sunshine for peaceful demonstrations in favor of environmental reform. '' It now is observed in 192 countries, and coordinated by the nonprofit Earth Day Network, chaired by the first Earth Day 1970 organizer Denis Hayes, according to whom Earth Day is now "the largest secular holiday in the world, celebrated by more than a billion people every year. '' Walt Kelly created an anti-pollution poster featuring his comic strip character Pogo with the quotation "We have met the enemy and he is us '' to promote the 1970 Earth Day. Environmental groups have sought to make Earth Day into a day of action to change human behavior and provoke policy changes.
In the winter of 1969 -- 1970, a group of students met at Columbia University to hear Denis Hayes talk about his plans for Earth Day. Among the group were Fred Kent, Pete Grannis, and Kristin and William Hubbard. This group agreed to head up the New York City activities within the national movement. Fred Kent took the lead in renting an office and recruiting volunteers. "The big break came when Mayor Lindsay agreed to shut down Fifth Avenue for the event. A giant cheer went up in the office on that day, '' according to Kristin Hubbard (now Kristin Alexandre). ' From that time on we used Mayor Lindsay 's offices and even his staff. I was Speaker Coordinator but had tremendous help from Lindsay staffer Judith Crichton. ''
In addition to shutting down Fifth Avenue, Mayor John Lindsay made Central Park available for Earth Day. In Union Square, the New York Times estimated crowds of up to 20,000 people at any given time and, perhaps, as many as over 100,000 over the course of the day. Since Manhattan was also home to NBC, CBS, ABC, The New York Times, Time, and Newsweek, it provided the best possible anchor for national coverage from their reporters throughout the country.
U.S. Senator Edmund Muskie was the keynote speaker on Earth Day in Fairmount Park in Philadelphia. Other notable attendees included consumer protection activist and presidential candidate Ralph Nader; Landscape Architect Ian McHarg; Nobel prize - winning Harvard Biochemist, George Wald; U.S. Senate Minority Leader, Hugh Scott; and poet, Allen Ginsberg.
Mobilizing 200 million people in 141 countries and lifting the status of environmental issues onto the world stage, Earth Day activities in 1990 gave a huge boost to recycling efforts worldwide and helped pave the way for the 1992 United Nations Earth Summit in Rio de Janeiro. Unlike the first Earth Day in 1970, this 20th Anniversary was waged with stronger marketing tools, greater access to television and radio, and multimillion - dollar budgets.
Two separate groups formed to sponsor Earth Day events in 1990: the Earth Day 20 Foundation, assembled by Edward Furia (project director of Earth Week in 1970), and Earth Day 1990, assembled by Denis Hayes (national coordinator for Earth Day 1970). Senator Gaylord Nelson, the original founder of Earth Day, was honorary chairman for both groups. The two did not combine forces, owing to disagreements about leadership of a combined organization and to incompatible structures and strategies. Among the disagreements, key Earth Day 20 Foundation organizers were critical of Earth Day 1990 for including on their board Hewlett - Packard, a company that at the time was the second - biggest emitter of chlorofluorocarbons in Silicon Valley and refused to switch to alternative solvents. In terms of marketing, Earth Day 20 had a grassroots approach to organizing and relied largely on locally based groups like the National Toxics Campaign, a Boston - based coalition of 1,000 local groups concerned with industrial pollution. Earth Day 1990 employed strategies including focus group testing, direct mail fund raising, and email marketing.
The Earth Day 20 Foundation highlighted its April 22 activities in George, Washington, near the Columbia River with a live satellite phone call with members of the historic Earth Day 20 International Peace Climb who called from their base camp on Mount Everest to pledge their support for world peace and attention to environmental issues. The Earth Day 20 International Peace Climb was led by Jim Whittaker, the first American to summit Mt. Everest (many years earlier), and marked the first time in history that mountaineers from the United States, Soviet Union, and China had roped together to climb a mountain, let alone Mt. Everest. The group also collected more than two tons of trash (transported down the mountain by support groups along the way) that was left behind on Mount Everest from previous climbing expeditions. The master of ceremonies for the Columbia Gorge event was the TV star, John Ratzenberger, from "Cheers '', and the headlining musician was the "Father of Rock and Roll, '' Chuck Berry.
Warner Bros. Records released an Earth Day - themed single in 1990 entitled "Tomorrow 's World '', written by Kix Brooks (who would later become one - half of Brooks & Dunn) and Pam Tillis. The song featured vocals from Lynn Anderson, Butch Baker, Shane Barmby, Billy Hill, Suzy Bogguss, Kix Brooks, T. Graham Brown, The Burch Sisters, Holly Dunn, Foster & Lloyd, Vince Gill, William Lee Golden, Highway 101, Shelby Lynne, Johnny Rodriguez, Dan Seals, Les Taylor, Pam Tillis, Mac Wiseman, and Kevin Welch. It charted at number 74 on the Hot Country Songs chart dated May 5, 1990.
Earth Day 2000 combined the ambitious spirit of the first Earth Day with the international grassroots activism of Earth Day 1990. This was the first year that Earth Day used the Internet as its principal organizing tool, and it proved invaluable nationally and internationally. Kelly Evans, a professional political organizer, served as executive director of the 2000 campaign. The event ultimately enlisted more than 5,000 environmental groups outside the United States, reaching hundreds of millions of people in a record 183 countries. Leonardo DiCaprio was the official host for the event, and about 400,000 participants stood in the cold rain during the course of the day.
To turn Earth Day into a sustainable annual event rather than one that occurred every 10 years, Nelson and Bruce Anderson, New Hampshire 's lead organizers in 1990, formed Earth Day USA. Building on the momentum created by thousands of community organizers around the world, Earth Day USA coordinated the next five Earth Day celebrations through 1995, including the launch of EarthDay.org. Following the 25th Anniversary in 1995, the coordination baton was handed to Earth Day Network.
As the millennium approached, Hayes agreed to spearhead another campaign, this time focusing on global warming and pushing for clean energy. Events varied: a talking drum chain traveled from village to village in Gabon, for example, while hundreds of thousands of people gathered on the National Mall in Washington, D.C.
Earth Day 2007 was one of the largest Earth Days to date, with many people participating in the activities in thousands of places including Kiev, Ukraine; Caracas, Venezuela; Tuvalu; Manila, Philippines; Togo; Madrid, Spain; London; and New York.
For Earth Day 2017, the Earth Day Network created four toolkits to aid organizations wanting to hold teach - ins celebrating the theme "Environmental and Climate Literacy. ''
2017 also saw the Earth Day Network co-organize the March for Science rally and teach - in at the National Mall in Washington D.C.
According to Nelson, the moniker "Earth Day '' was "an obvious and logical name '' suggested by a lot of other people in the fall of 1969, including, he writes, both "a friend of mine who had been in the field of public relations '' and "a New York advertising executive, '' Julian Koenig. Koenig, who had been on Nelson 's organizing committee in 1969, has said that the idea came to him by the coincidence of his birthday with the day selected, April 22; "Earth Day '' rhyming with "birthday, '' the connection seemed natural. Other names circulated during preparations -- Nelson himself continued to call it the National Environment Teach - In, but national coordinator Denis Hayes used the term Earth Day in his communications and press coverage of the event was "practically unanimous '' in its use of "Earth Day, '' so the name stuck. The introduction of the name "Earth Day '' was also claimed by John McConnell (see "Equinox Earth Day, '' below).
The first Canadian Earth Day was held on Thursday, September 11, 1980, and was organized by Paul D. Tinari, then a graduate student in Engineering Physics / Solar Engineering at Queen 's University. Flora MacDonald, then MP for Kingston and the Islands and former Canadian Secretary of State for External Affairs, officially opened Earth Day Week on September 6, 1980 with a ceremonial tree planting and encouraged MPs and MPPs across the country to declare a cross-Canada annual Earth Day. The principal activities taking place on the first Earth Day included educational lectures given by experts in various environmental fields, garbage and litter pick - up by students along city roads and highways as well as tree plantings to replace the trees killed by Dutch Elm Disease.
The equinoctial Earth Day is celebrated on the March equinox (around March 20) to mark the precise moment of astronomical spring in the Northern Hemisphere, and of astronomical autumn in the Southern Hemisphere. An equinox in astronomy is that point in time (not a whole day) when the Sun is directly above the Earth 's equator, occurring around March 20 and September 23 each year. In most cultures, the equinoxes and solstices are considered to start or separate the seasons.
John McConnell first introduced the idea of a global holiday called "Earth Day '' at the 1969 UNESCO Conference on the Environment. The first Earth Day proclamation was issued by San Francisco Mayor Joseph Alioto on March 21, 1970. Celebrations were held in various cities, such as San Francisco and in Davis, California with a multi-day street party. UN Secretary - General U Thant supported McConnell 's global initiative to celebrate this annual event; and on February 26, 1971, he signed a proclamation to that effect, saying:
May there be only peaceful and cheerful Earth Days to come for our beautiful Spaceship Earth as it continues to spin and circle in frigid space with its warm and fragile cargo of animate life.
United Nations secretary - general Kurt Waldheim observed Earth Day with similar ceremonies on the March equinox in 1972, and the United Nations Earth Day ceremony has continued each year since on the day of the March equinox (the United Nations also works with organizers of the April 22 global event). Margaret Mead added her support for the equinox Earth Day, and in 1978 declared:
"Earth Day is the first holy day which transcends all national borders, yet preserves all geographical integrities, spans mountains and oceans and time belts, and yet brings people all over the world into one resonating accord, is devoted to the preservation of the harmony in nature and yet draws upon the triumphs of technology, the measurement of time, and instantaneous communication through space. Earth Day draws on astronomical phenomena in a new way -- which is also the most ancient way -- by using the vernal Equinox, the time when the Sun crosses the equator making the length of night and day equal in all parts of the Earth. To this point in the annual calendar, EARTH DAY attaches no local or divisive set of symbols, no statement of the truth or superiority of one way of life over another. But the selection of the March Equinox makes planetary observance of a shared event possible, and a flag which shows the Earth, as seen from space, appropriate. ''
At the moment of the equinox, it is traditional to observe Earth Day by ringing the Japanese Peace Bell, which was donated by Japan to the United Nations. Over the years, celebrations have occurred in various places worldwide at the same time as the UN celebration. On March 20, 2008, in addition to the ceremony at the United Nations, ceremonies were held in New Zealand, and bells were sounded in California, Vienna, Paris, Lithuania, Tokyo, and many other locations. The equinox Earth Day at the UN is organized by the Earth Society Foundation.
Earth Day bell - ringing ceremonies are held in many towns around the world, with peace bells rung in Vienna, Berlin, and elsewhere. A memorable event took place at the UN in Geneva, where a Minute for Peace was observed with the ringing of the Japanese Shinagawa Peace Bell, organized with the help of the Geneva Friendship Association and the Global Youth Foundation, in mourning for the Fukushima Daiichi Nuclear Power Plant catastrophe ten days earlier.
Besides the spring equinox for the Northern Hemisphere, the observance of the spring equinox for the Southern Hemisphere in September is of equal importance. The International Day of Peace is celebrated on September 21, and can thus be considered to accord with the original intentions of John McConnell, U Thant and others.
In 1968, Morton Hilbert and the U.S. Public Health Service organized the Human Ecology Symposium, an environmental conference for students to hear from scientists about the effects of environmental degradation on human health. This was the beginning of Earth Day. For the next two years, Hilbert and students worked to plan the first Earth Day. In April 1970 -- along with a federal proclamation from U.S. Sen. Gaylord Nelson -- the first Earth Day was held.
Project Survival, an early environmentalism - awareness education event, was held at Northwestern University on January 23, 1970. This was the first of several events held at university campuses across the United States in the lead - up to the first Earth Day. Also, Ralph Nader began talking about the importance of ecology in 1970.
The 1960s had been a very dynamic period for ecology in the US. Pre-1960 grassroots activism against DDT in Nassau County, New York, and widespread opposition to open - air nuclear weapons tests with their global nuclear fallout, had inspired Rachel Carson to write her influential bestseller, Silent Spring (1962).
Nelson chose the date in order to maximize participation on college campuses for what he conceived as an "environmental teach - in ''. He determined the week of April 19 -- 25 was the best bet as it did not fall during exams or spring breaks. Moreover, it did not conflict with religious holidays such as Easter or Passover, and was late enough in spring to have decent weather. More students were likely to be in class, and there would be less competition with other mid-week events -- so he chose Wednesday, April 22. The day also fell after the anniversary of the birth of noted conservationist John Muir. The National Park Service, John Muir National Historic Site, has a celebration every year on or around Earth Day (April 21, 22 or 23), called Birthday - Earth Day, in recognition of Earth Day and John Muir 's contribution to the collective consciousness of environmentalism and conservation.
Unbeknownst to Nelson, April 22, 1970, was coincidentally the 100th anniversary of the birth of Vladimir Lenin, when translated to the Gregorian calendar (which the Soviets adopted in 1918). Time reported that some suspected the date was not a coincidence, but a clue that the event was "a Communist trick '', and quoted a member of the Daughters of the American Revolution as saying, "subversive elements plan to make American children live in an environment that is good for them. '' J. Edgar Hoover, director of the U.S. Federal Bureau of Investigation, may have found the Lenin connection intriguing; it was alleged the FBI conducted surveillance at the 1970 demonstrations. The idea that the date was chosen to celebrate Lenin 's centenary still persists in some quarters, an idea borne out by the similarity with the subbotnik instituted by Lenin in 1920 as days on which people would have to do community service, which typically consisted in removing rubbish from public property and collecting recyclable material. Subbotniks were also imposed on other countries within the compass of Soviet power, including Eastern Europe, and at the height of its power the Soviet Union established a nationwide subbotnik to be celebrated on Lenin 's birthday, April 22, which had been proclaimed a national holiday celebrating communism by Nikita Khrushchev in 1955.
Many songs are performed on Earth Day; they generally fall into two categories: popular songs by contemporary artists that are not specific to Earth Day and are under copyright, and new lyrics adapted to children 's songs.
where did they film the new it movie | It (2017 film) - Wikipedia
It (also known as It: Chapter One) is a 2017 American supernatural horror film directed by Andy Muschietti, based on the 1986 novel of the same name by Stephen King. The screenplay is by Chase Palmer, Cary Fukunaga and Gary Dauberman. The film tells the story of seven children in Derry, Maine, who are terrorized by the eponymous being, only to face their own personal demons in the process. The novel was previously adapted into a 1990 miniseries.
The film stars Jaeden Lieberher and Bill Skarsgård as Bill Denbrough and Pennywise the Dancing Clown, respectively, with Jeremy Ray Taylor, Sophia Lillis, Finn Wolfhard, Wyatt Oleff, Chosen Jacobs, Jack Dylan Grazer, Nicholas Hamilton, and Jackson Robert Scott in supporting roles. Principal photography began in the Riverdale neighborhood of Toronto on June 27, 2016, and ended on September 21, 2016. Other Ontario locations included Port Hope and Oshawa.
It premiered in Los Angeles on September 5, 2017, and was theatrically released in the United States on September 8, 2017. Upon release, the film set numerous box office records and grossed $700 million worldwide. Unadjusted for inflation, it is the highest - grossing horror film and the third highest - grossing R - rated film of all - time (after Deadpool and The Matrix Reloaded). It received positive reviews, with critics praising the performances, direction, cinematography and musical score, with many calling it one of the best Stephen King adaptations.
A sequel, It: Chapter Two, is scheduled to be released on September 6, 2019.
In October 1988, stuttering teenager Bill Denbrough gives his seven - year - old brother, Georgie, a paper sailboat. Georgie sails the boat along the rainy streets of small - town Derry, and is disappointed when it falls down a storm drain. As he attempts to retrieve it, Georgie sees a clown in the sewer, who introduces himself as "Pennywise the Dancing Clown ''. The clown entices Georgie to come closer, then severs his arm and drags him into the sewer.
The following summer, Bill and his friends (loudmouth Richie Tozier, hypochondriac Eddie Kaspbrak, and timid Stan Uris) run afoul of bully Henry Bowers and his gang. Bill, still haunted by Georgie 's disappearance and the resulting neglect from his grief - stricken parents, discovers that his brother 's body may have washed up in a marshy wasteland called the Barrens. He recruits his friends to check it out, believing his brother may still be alive.
"New kid '' Ben Hanscom learns that the town has been plagued by unexplained tragedies and child disappearances for centuries. He is targeted by Bowers ' gang for being fat, after which he flees into the Barrens and meets Bill 's group. They find the sneaker of a missing girl, while a member of the pursuing Bowers Gang, Patrick Hockstetter, is killed by Pennywise while searching the sewers for Ben.
Beverly Marsh, a girl ostracized over rumors of promiscuity, also joins the group; both Bill and Ben develop feelings for her. Later, the group befriends homeschool student Mike Hanlon after defending him from Bowers. All the while each member of the group has encountered terrifying phenomena in various forms; these include a menacing clown, a headless boy, a fountain of blood, a diseased and rotting man, a creepy painting come to life, Mike 's parents burning alive, and a phantom Georgie.
Now calling themselves "The Losers Club '', they realize they are all being terrorized by the same entity. They determine that Pennywise (or "It '') assumes the appearance of what they fear, awakens every 27 years to feed on the children of Derry before returning to hibernation, and moves about by using sewer lines -- which all lead to a well currently under the creepy, abandoned house at 29 Neibolt Street. After an attack by Pennywise, the group ventures to the house to confront him, only to be separated and terrorized. Eddie breaks his arm, while Pennywise gloats to Bill about Georgie. As they regroup, Beverly impales Pennywise through the head, forcing the clown to retreat. However, after the encounter the group begins to splinter, with only Bill and Beverly resolute in fighting It.
Weeks later, after Beverly confronts and incapacitates her sexually abusive father, she is abducted by Pennywise. The Losers Club reassembles and travels back to the Neibolt house to rescue her. Henry Bowers, who has killed his father after being compelled into madness by It, attacks the group. Mike fights back and pushes Bowers down the well to his apparent death. The Losers descend into the sewers and find It 's underground lair, which contains a mountain of decayed circus props and children 's belongings, around which the bodies of missing children float in mid-air. Beverly, now catatonic after being exposed to It 's true form, is restored to consciousness as Ben kisses her. Bill encounters Georgie, but recognizes that he 's Pennywise in disguise. Pennywise attacks the group and takes Bill hostage, offering to spare the others if they let It keep Bill. The Losers reject this and reaffirm their friendship, overcoming their various fears. After a brutal battle they defeat Pennywise and he retreats, with Bill declaring that It will starve during its hibernation. Their victory is bittersweet, as Bill finally accepts his brother 's death and is comforted by his friends.
As summer ends, Beverly informs the group of a vision she had while catatonic, where she saw them fighting the creature as adults. The Losers create a blood oath by cutting each other 's hands and forming a circle, swearing to return to Derry in adulthood if It returns and destroy the creature once and for all. Stanley, Eddie, Richie, Mike, and Ben make their goodbyes as the group part ways. Beverly tells Bill she is leaving the next day to live with her aunt in Portland. As she leaves, Bill runs up to her and they kiss.
Additionally, Owen Teague is introduced as Patrick Hockstetter, a psychopathic bully of the Bowers Gang who meets his end early at the hands of Pennywise; Logan Thompson appears as Victor "Vic" Criss, a bully and friend of Henry Bowers who is reluctant to engage in the gang's most sadistic acts; and Jake Sim appears as Reginald "Belch" Huggins, another bully and friend of Henry Bowers, known for his ability to belch on command. It's other forms include Javier Botet as The Leper, a diseased and rotting man who encounters Eddie Kaspbrak at the house on 29 Neibolt Street, and Tatum Lee as Judith, a disturbing woman from an abstract painting who haunts Stan.
Stephen Bogaert appears as Alvin Marsh, the abusive father of Beverly Marsh; Molly Atkinson appears as Sonia Kaspbrak, the overprotective mother of Eddie Kaspbrak; Geoffrey Pounsett appears as Zack Denbrough, the father of Bill and George Denbrough; Pip Dwyer appears as Sharon Denbrough, their mother; Stuart Hughes appears as Oscar "Butch" Bowers, a police officer and the abusive father of Henry Bowers; Steven Williams appears as Leroy Hanlon, the stern grandfather of Mike Hanlon, who runs a nearby abattoir; Ari Cohen appears as Rabbi Uris, the father of Stanley Uris; and Joe Bostick and Megan Charpentier appear as Mr. Keene, the Derry pharmacist, and his daughter Gretta, who targets Beverly for ridicule.
The project first entered development in 2009. The proposed film adaptation went through three phases of planning: initially a single film with screenwriter David Kajganich; then the dual film project, first with Cary Fukunaga attached as director and co-writer, then with Andy Muschietti.
On March 12, 2009, Variety reported that Warner Bros. would bring Stephen King's novel to the big screen, with David Kajganich adapting the novel and Dan Lin, Roy Lee and Doug Davison producing the piece. When Kajganich learnt of Warner Bros.' plans to adapt King's novel, he went after the job. Knowing that Warner Bros. was committed to adapting It as a single feature film, Kajganich began to reread the novel in an attempt to find a structure that could accommodate such a large number of characters in two different time periods within around 120 pages, which was one of Warner Bros.' stipulations. Kajganich had worked with Lin, Lee, and Davison on The Invasion (2007), and he knew they would champion good storytelling and allow him the time to work out a solid first draft of the screenplay. Kajganich spoke of the remake being set in the "mid-1980s and in the present... mirroring the twenty-odd-year gap King uses in the book... and with a great deal of care and attention paid to the backstories of all the characters".
Kajganich also mentioned that Warner Bros. wished for the adaptation to be rated R, saying, "... we can really honor the book and engage with the traumas (both the paranormal ones and those they deal with at home and school) that these characters endure ''. He said that his dream choice for Pennywise would be Buster Keaton if he were still alive, and that the Pennywise that he scripted is "less self - conscious of his own irony and surreality ''. As of June 29, 2010, Kajganich was re-writing his screenplay.
On June 7, 2012, The Hollywood Reporter revealed that Cary Fukunaga was boarding the project as director and would co-write the script with Chase Palmer. The producers were now Roy Lee of Vertigo Entertainment, Dan Lin of Lin Pictures, and Seth Grahame-Smith and David Katzenberg of KatzSmith Productions. On May 21, 2014, Warner Bros. was announced to have moved the film to its New Line Cinema division, with oversight duties conducted by New Line's Walter Hamada and Dave Neustadter, along with Vice President of Production at Warner Bros., Niija Kuykendall. On December 5, 2014, in an interview with Vulture, Dan Lin announced that the first film would be a coming-of-age story about the children tormented by It, and the second would skip ahead in time as those same characters band together to continue the fight as adults. Lin also stated that Fukunaga was then only committed to directing the first film, though he was closing a deal to co-write the second. Lin concluded by mentioning King, remarking, "The most important thing is that (King) gave us his blessing. We didn't want to make this unless he felt it was the right way to go, and when we sent him the script, the response that Cary got back was, 'Go with God, please! This is the version the studio should make.' So that was really gratifying." Lin confirmed that Fukunaga was set to begin principal photography in the summer of 2016.
On February 3, 2015, Fukunaga was interviewed by Slate and spoke about It, mentioning that he had someone in mind for the role of Pennywise. On March 3, 2015, Fukunaga noted his goal to find the "perfect guy to play Pennywise". Fukunaga also revealed that he, Kajganich and Palmer had changed the names and dates in the script, adding, "... the spirit is similar to what he'd like to see in cinemas". On May 4, 2015, it was officially announced that Will Poulter had been cast to play Pennywise, after Fukunaga was "blown away" by his audition. Ty Simpkins was then considered to play one of The Losers' Club members.
On May 25, 2015, it was reported that Fukunaga had dropped out as the director of It. According to TheWrap, Fukunaga clashed with the studio and did not want to compromise his artistic vision in the wake of budget cuts by New Line, which greenlit the first film at $30 million. However, Fukunaga maintained that was not the case, stating he had bigger disagreements with New Line over the direction of the story: "I was trying to make an unconventional horror film. It didn't fit into the algorithm of what they knew they could spend and make money back on based on not offending their standard genre audience." He mentioned that the budget was fine, as was his desire to make Pennywise more than just the clown. Fukunaga concluded by stating, "We invested years and so much anecdotal storytelling in it. Chase and I both put our childhood in that story. So our biggest fear was they were going to take our script and bastardize it... So I'm actually thankful that they are going to rewrite the script. I wouldn't want them to steal our childhood memories and use that... I was honoring King's spirit of it, but I needed to update it. King saw an earlier draft and liked it." On Fukunaga's departure, King wrote, "The remake of IT may be dead -- or undead -- but we'll always have Tim Curry. He's still floating down in the sewers of Derry."
On July 16, 2015, it was announced that Andy Muschietti was in negotiations to direct It, with New Line beginning a search for a new writer to tailor a script to Muschietti 's vision. The announcement also confirmed the possible participation of Muschietti 's sister, Barbara Muschietti, as a producer, and Richard Brener joining Hamada, Neustadter and Kuykendall to oversee the project. On April 22, 2016, it was indicated that Will Poulter, who had been cast to portray Pennywise in Fukunaga 's version, had dropped out of the film due to a scheduling conflict and that executives were meeting with actors to portray the antagonist. Also that day, New Line Cinema set the film for a release of September 8, 2017.
On October 30, 2015, Muschietti was interviewed by Variety, wherein he spoke about his vision of It while mentioning that Poulter was still in the mix for the role of Pennywise: "(Poulter) would be a great option. For me he is at the top of my list..." He confirmed that the following summer was when they would start shooting. It was decided to shoot It during the summer months to give the filmmakers time to work with the children who have the main roles in the first part of the film. Muschietti went on to say that "King described 50s' terror iconography", adding that he felt there was a whole world now to "rediscover, to update". He said there would not be any mummies or werewolves and that the "terrors are going to be a lot more surprising". On February 19, 2016, at the D.I.C.E. Summit 2016, producer Roy Lee confirmed that Fukunaga and Chase Palmer's original script had been rewritten, remarking, "It will hopefully be shooting later this year. We just got the California tax credit... (Dauberman) wrote the most recent draft working with (Muschietti), so it's being envisioned as two movies."
On May 5, 2016, in an interview with Collider.com, David Kajganich expressed uncertainty as to whether drafts of his original screenplay would be used by Dauberman and Muschietti, with the writer stating, "We know there 's a new director, I do n't know myself whether he 's going back to any of the previous drafts or writing from scratch. I may not know until the film comes out. I do n't know how it works! If you find out let me know. ''
On June 2, 2016, Jaeden Lieberher was confirmed as portraying lead protagonist Bill Denbrough, while The Hollywood Reporter reported that Bill Skarsgård was in final negotiations to star as Pennywise, with a cast also including Finn Wolfhard, Jack Dylan Grazer, Wyatt Oleff, Chosen Jacobs and Jeremy Ray Taylor. Also that day, there was a call for 100 background performers, with the background actor call going from 3 p.m. to 7 p.m.; by 4 p.m., more than 300 people had gone through. The casting call also asked for a marching band and period cars between 1970 and 1989. On June 9, 2016, The Hollywood Reporter reported that Owen Teague was set to portray Patrick Hockstetter. On June 21, 2016, it was officially announced that Nicholas Hamilton had been cast to play Henry Bowers, and Bloody Disgusting reported that Javier Botet had been added to the cast shortly before filming commenced. On June 22, 2016, Deadline.com reported that Muschietti had chosen Sophia Lillis to portray Beverly Marsh, and on June 24, 2016, Moviepilot wrote that Stephen Bogaert had been added to the cast shortly before filming commenced, portraying Al Marsh, the abusive father of Beverly Marsh.
On July 22, 2016, Barbara Muschietti was interviewed by Northumberland News ' Karen Longwell, wherein she spoke about the filming locations on It, while mentioning the beauty of Port Hope being one of the reasons as to why it was chosen. Muschietti added, "We were looking for an idyllic town, one that would be a strong contrast to the story. Port Hope is the kind of place we all wish we had grown up in: long summers riding bicycles, walks by the lake, a lovely main street, charming homes with green lawns, warm people. '' Muschietti also mentioned that 360 extras from the area, from adults to small children, had been involved.
On August 11, 2016, at The CW TCA presentation for the series Frequency, producer Dan Lin spoke of the piece's comparison to Netflix's Stranger Things, describing It as an "homage to 1980s movies", while remarking: "I think a great analogy is actually Stranger Things, and we're seeing it on Netflix right now. It's very much an homage to '80s movies, whether it's classic Stephen King or even Spielberg. Think about Stand by Me (1986) as far as the bonding amongst the kids. But there is a really scary element in Pennywise." Lin continued, speaking of how well the young cast had bonded in the first weeks of shooting: "We clearly had a great dynamic amongst the kids. Really great chemistry is always a challenging thing with a movie like It because you're casting kids who don't have a ton of experience, but it ended up being really natural. Each kid, like a The Goonies (1985) or Stand by Me (1986), has a very specific personality and they're forming the loser's club obviously... We've spent a few months getting the kids to bond and now they're going to fight this evil, scary clown."
On February 9, 2017, at the press day for The Lego Batman Movie (2017), Lin confirmed that It would be rated R by the MPAA, and stated to Collider.com 's Steve Weintraub, "If you 're going to make a "Rated - R movie '', you have to fully embrace what it is, and you have to embrace the source material. It is a scary clown that 's trying to kill kids... They do have a scary clown that 's taken over the town of Derry, so it 's going to be rated R. '' On March 11, 2017, Muschietti, at the SXSW festival, spoke of an element of the pre-production phase in his attempt to keep Skarsgård separated from the film 's child actors, wherein the actor was not introduced to the young cast until Pennywise 's first encounter with the children: "It was something that we agreed on, and that 's how it happened... The day that he showed up on the stage, they f * cking freaked out. Bill is like, seven - foot high, and I ca n't describe how scary he looks in person. He 's a wiry man, crouching, making sounds, snotting, drooling, speaking in Swedish sometimes. Terrifying. '' Muschietti stated that the story had been moved forward, with the scenes with the young Losers Club shifting from the 1950s to the 1980s, while also describing the plot as "getting much wider '', with new material not in the novel or the 1990 miniseries. However, Muschietti said he hoped the material would still strike the same emotional resonance that the book did for him when he first read it: "It 's all about trying to hit the core and the heart. ''
On July 12, 2017, Muschietti, in an interview with French magazine Mad Movies, spoke of the R rating allowing him to explore adult themes, which was championed by the people at New Line Cinema. He also stated that, "... if you aimed for a PG-13 movie, you had nothing at the end. So we were very lucky that the producers didn't try to stop us. In fact it's more our own moral compass that sometimes showed us that some things lead us in places where we didn't want to go." In the same interview, producer Barbara Muschietti added that there was only one scene deemed too horrific to feature in the new adaptation, stating, "... you won't find the scene where a kid has his back broken and is thrown in the toilets. We thought that the visual translation of that scene had something that was really too much." She concluded by emphasizing that nothing was removed from the original vision, nor was the violence of any event watered down.
On July 19, 2017, in an interview with Variety's Brent Lang, director Muschietti commented on the monstrous forms that It takes, noting that they are very different from the incarnations in King's story: "The story is the same, but there are changes in the things the kids are scared of. In the book they're children in the '50s, so the incarnations of the monsters are mainly from movies, so it's Wolf Man, the Mummy, Frankenstein, (and) Dracula. I had a different approach. I wanted to bring out deeper fears, based not only on movie monsters but on childhood traumas." On the key to a successful horror film, Muschietti remarked: "Stay true to what scares you. If you don't respect that, you can't scare anyone." Muschietti explained how Skarsgård caught his attention for the role of Pennywise, while pointing out that he did not want the young cast to spend too much time with the actor when not shooting, encouraging them to "maintain distance". Muschietti detailed: "We wanted to carry the impact of the encounters to when the cameras were rolling. The first scene where Bill interacted with the children, it was fun to see how the plan worked. The kids were really, really creeped out by Bill. He's pretty intimidating because he's six-four and has all this makeup."
Production designer Mara LePere-Schloop went to Bangor, Maine, to scope out locations, including the Thomas Hill Standpipe, the land running alongside the Kenduskeag Stream that in It is called The Barrens, and the Waterworks on the Penobscot River. LePere-Schloop said that they were hoping to shoot some scenes in the city, and possibly take some aerial shots. On May 31, 2016, Third Act Productions was confirmed to have applied to film interior and exterior scenes for It in the municipality of Port Hope, with filming slated for various locations around the municipality from July 11, 2016 to July 18, 2016. Principal photography began in Toronto, with an original shooting schedule from June 27 to September 6, 2016.
By July 8, 2016, Port Hope had undergone changes to transform it into Derry: Port Hope Municipal Hall became the Derry Public Library, the Port Hope Tourism Centre became the City of Derry office, and numerous shops had their frontage changed. A statue of Paul Bunyan was erected in Memorial Park, US flags hung in place of Canadian flags downtown, and the Port Hope Capitol Theatre appeared to be showing Batman and Lethal Weapon 2, confirming the film's 1989 setting.
On July 11, 2016, preliminary shooting took place at Port Hope Town Hall, the Memorial Park cenotaph, Queen Street between Walton and Robertson streets, and the Capitol Theatre. On July 12, 2016, filming occurred at the intersection of Mill and Walton streets, the Walton Street bridge, and in front of and behind 16-22 Walton Street and Port Hope Town Hall. Other shooting locations included Queen Street between Walton and Robertson streets, and Memorial Park, on July 13. It was also reported, on July 14, that filming had been set up in the alley between Gould's Shoes and Avanti Hair Design, and on John and Hayward streets. Filming moved to Cavan Street, between Highland Drive and Ravine Drive, and Victoria Street South, between Trafalgar Street and Sullivan Street, on July 15. Filming in Port Hope ended on July 18, at Watson's Guardian Drugs.
Oshawa was chosen by the producers as the next filming location, and on July 20, 2016, filming notices were sent out to homes in the area of Eulalie Avenue and James Street, near downtown Oshawa, advising residents that filming would take place in the area from August 5 to August 8, 2016. On July 29, 2016, it was announced that the crew had constructed the set, in the form of a dilapidated old house, on the formerly vacant lot at the dead end of James Street. The structure was a facade built around scaffolding that would be used for exterior shots; it was composed of pre-fabricated modules that were trucked in and put into place by IATSE carpenters.
On July 18, 2016, production crews arrived in Riverdale, Toronto, where filming at 450 Pape Ave, home to a circa-1902 heritage-designated building called Cranfield House, ran until August 19, 2016. It was reported on September 4 that filming had wrapped in Oshawa, which included the haunted house location, as well as on Court and Fisher streets. Principal photography was confirmed to have ended in Toronto on September 21, 2016, with an altered shooting schedule running from June 27 to September 21, 2016, and post-production initially beginning on September 14, 2016.
On August 16, 2016, in an interview with Entertainment Weekly, costume designer Janie Bryant spoke of crafting Pennywise 's form - fitting suit and the inspirations it drew from -- involving a number of eras -- among them Medieval, Renaissance, Elizabethan, and Victorian. Bryant explained that the costume incorporates all these otherworldly past lives, highlighting the point that Pennywise is a clown from a different time. In designing Pennywise 's costume, Bryant included a Fortuny pleating, which gives the costume an almost crepe - like effect, to which Bryant remarked, "It 's a different technique than what the Elizabethans would do. It 's more organic, it 's more sheer. It has a whimsical, floppy quality to it. It 's not a direct translation of a ruff or a whisk, which were two of the collars popular during the Elizabethan period. ''
Bryant played with multiple eras as a way of reflecting Pennywise's immortality, and added a "doll-like quality to the costume". She further stated, "The pants being short, the high waistline of the jacket, and the fit of the costume is a very important element. It gives the character a child-like quality." Bryant spoke of the two puffs off the shoulders, sleeves, and again on the bloomers, with her desire to create an "organic, gourd or pumpkin kind of effect", which includes the peplum at the waist and the flared, skirt-like fabric blossoming from below his doublet. She explained, "It helps exaggerate certain parts of the body. The costume is very nipped in the waist and with the peplum and bloomers it has an expansive silhouette." The main color of the costume is a dusky gray, but with a few splashes of color. She concluded the interview by stating, "The pompoms are orange, and then with the trim around the cuffs and the ankles, it's basically a ball fringe that's a combination of orange, red, and cinnamon. It's almost like Pennywise fades into his environment. But there are accents to pull out the definition of the gray silk."
Judith, the woman in the portrait whose form It assumes to terrify Stan, did not appear in the novel. Muschietti based this sequence on the paintings of Amedeo Modigliani, one of which hung in his childhood home, and which he found frightening, interpreting Modigliani 's stylisation as monstrosity. The eponymous creature in Muschietti 's previous film, Mama, was also based on Modigliani 's work.
Nicholas Brooks was the overall visual effects supervisor, and visual effects company Rodeo FX worked on most of the visual effects on It. Amalgamated Dynamics worked on the special makeup effects.
The film has been described as a loss-of-innocence story, with themes of fear, mortality and survival. Muschietti remarked on the film's elements of coming of age and issues of mortality, adding that such themes are prevalent in King's book, though in reality they occur in a more progressive way: "There's a passage (in It) that reads, 'Being a kid is learning how to live and being an adult is learning how to die.' There's a bit of a metaphor of that and it just happens in a very brutal way, of course."
He also mentioned the characterization of Pennywise 's survivalist attitude, and a passage in the novel which inspired Muschietti was when Bill wonders if Pennywise is eating children simply because that is what people are told monsters do, "It 's a tiny bit of information, but that sticks with you so much. Maybe it is real as long as children believe in it. And in a way, Pennywise 's character is motivated by survival. In order to be alive in the imagination of children, he has to keep killing. '' While Muschietti acknowledged It being a horror film, he felt that it is not simply that, "It 's a story of love and friendship and a lot of other beautiful emotions. ''
The graphic sexual content that was in the novel and Fukunaga 's original script was not included in the film.
The film's score was composed by British composer Benjamin Wallfisch. A soundtrack album was released in August 2017.
All tracks written by Benjamin Wallfisch.
It was released in North America on September 8, 2017. In Europe, the film was released in Belgium on September 6, 2017, Denmark and the Netherlands on September 7, 2017, and Norway and Finland on September 8, 2017. On March 7, 2017, the alternate title of the film was announced by Stephen King as Part 1 -- The Losers ' Club. In addition to the conventional 2D format, It was also released across 615 IMAX screens globally, including 389 domestically.
The film was released on digital download on December 19, 2017, and was released on 4K, Blu - ray and DVD on January 9, 2018.
On January 31, 2016, Muschietti, on his Instagram, posted a sketch that was thought to be the precursor to Pennywise 's final look, to celebrate pre-production getting underway. Beginning from July 11, 2016, Muschietti posted a variety of missing person posters of children within the Derry area, including Betty Ripsom, Richie Tozier, Paul Greenberg, Jonathan Chan, and Tania McGowan.
The first official image for It debuted on July 13, 2016, introducing the first look at Skarsgård 's Pennywise The Dancing Clown, as well as an interview with Skarsgård, conducted by Anthony Breznican. Thomas Freeman of Maxim wrote "... Skarsgard in full, terrifying costume,... he 's clearly got what it takes to fill King 's most macabre, nightmare - inducing creation. '' Chris Eggertsen of HitFix responded positively, stating the image to be "... an appropriately macabre look that does n't deviate too radically from the aesthetic of Curry 's Pennywise... dare I say, a more creepily seductive look to Skarsgard 's version that was absent from Curry 's interpretation. ''
Between July 30 and August 22, 2016, Muschietti released three storyboard images, the first featuring Bill Denbrough making a paper boat for his younger brother George. The second storyboard features Bill leading his bike, nicknamed Silver, across a lawn with the included phrase: "He thrusts his fists against the posts but still insists he sees the ghosts". The third and final storyboard features Bill asleep next to a sketch of Beverly Marsh.
On August 16, 2016, Entertainment Weekly released the full costume image of Skarsgård's Pennywise to promote It, including an interview with costume designer Janie Bryant. JoBlo.com's Damion Damaske was fond of the new design, though he understood why others were dismissive of it. Damaske also stated, "One of the chief complaints is that it looked too automatically scary, and that one of the reasons Pennywise chooses his guise is to trick and lure children." Dave Trumbore of Collider.com noted that "This one's going to divide some folks. It's nowhere near as baggy or colorful as the one Tim Curry... donned..., but the new version certainly seems to have a lot more thought and intent behind its creation." Jonathan Barkan of Bloody Disgusting called the image one "(drawing) attention and curiosity". Barkan then stated, "I don't know if it's morbid curiosity or hopeful wishes but the overall response to his face and makeup seemed to be quite positive!"
On March 9, 2017, Neha Aziz of SXSW announced that Muschietti would appear at a screening event titled Face Your Fears to share footage from It, while discussing his inspirations and influences. On March 11, 2017, New Line Cinema showcased its promotion of It by releasing a teaser trailer and a scene at the South by Southwest festival. Trace Thurman of Bloody Disgusting heralded the trailer: "It was maybe 90 seconds of footage, but it was a damn impressive 90 seconds of footage... As far as teasers go, it's one of the best that I've ever seen." Dread Central's Jonathan Barkan praised the scene, stating, "The kids are clearly very adept at working off one another. There was a chemistry between the four that was wonderful to see and it's obvious that Muschietti worked very hard to ensure they were believable." Eric Vespe of Ain't It Cool News remarked that "... this one scene shows us the key traits of the bulk of the members of the Losers Club within one sequence. I loved it for that reason."
On March 28, 2017, New Line released a 139 - second teaser trailer to promote It, following a 19 - second trailer and the official teaser poster the prior day, and for exhibitors at CinemaCon. Tom Philip of GQ heralded the trailer and its tonality by stating: "Dark corners everywhere and a pervading sense of absolute doom, even in the scenes where the creature is n't looming. That projector scene! Christ! '' Michael Gold of The New York Times praised the trailer, and stated: "There 's always tension in the sustained string chords of the soundtrack, and it imbues everything with suspense and darkness. '' Wired 's Brian Raftery spoke most highly of the trailer, to which he stated, "The teaser 's scariest moment features no gore or gotcha - ness; instead, it involves a misfiring slide - projector and a barely discernible clown - grin. Nothing in the It trailer feels like a cheap thrill, which is all the more thrilling. '' IndieWire 's William Earl reacted positively to the "top - notch '' production design of Derry, Maine within the trailer. The trailer was viewed 197 million times in the first 24 hours after it was released, setting a new record as the trailer with the most views in one day. In addition to dethroning The Fate of the Furious (2017), the trailer numbers surpassed previous records held by Star Wars: The Force Awakens (2015), Fifty Shades Darker (2017), and Beauty and the Beast (2017).
On May 7, 2017, a second teaser trailer, this one lasting 137 seconds, was shown at the MTV Movie & TV Awards in Los Angeles, California, with the new preview showcasing a snippet of the film in which the "Losers' Club" search for Pennywise's many victims. Daniel Kreps of Rolling Stone felt the snippet "was initially... similar to Stand by Me (1986), with the Losers' Club playfully bantering about "gray water"... A series of scary images soon follow before the trailer ends on Pennywise doing unimaginable balloon tricks to lure a victim." Matt Goldberg of Collider.com praised the trailer, stating: "This new trailer really plays up the kids' role and their fears. It's a smart move, because if a sequel does come along, it's going to be looking at the kids as adults, so that aspect will be lost." Digital Spy's Jack Tomlin said it was clear that director Muschietti's film would carry on the "creepy as hell" vibe of the first trailer. On July 13, 2017, Entertainment Weekly released a collection of new images and concept art, such as Pennywise's lair, to promote It, including commentary from director Andy Muschietti. On July 19, 2017, New Line Cinema showcased its promotion of It by releasing three reels of footage at San Diego Comic-Con, before an advanced screening of Annabelle: Creation (2017).
It has grossed $327.5 million in the United States and Canada, and $372.8 million in other territories, for a worldwide total of $700.3 million, against a production budget of $35 million.
In North America, initial opening weekend projections had the film grossing $50 -- 60 million. By the week of its release, estimates were raised to $60 -- 70 million, with a chance to go higher if word of mouth was strong. It opened in 4,103 theaters, setting the record for most venues for an R - rated film (beating Logan 's 4,071 from the past March). A few days before its release, the film became Fandango 's top horror pre-seller of all - time, eclipsing Paranormal Activity 3 (2011), as well as setting the record as the site 's top pre-seller among September releases, beating Sully (2016). The film made $13.5 million from Thursday night previews, setting the record for highest amount by both an R - rated (besting Deadpool 's $12.6 million) and a horror film. Due to the high Thursday gross, Deadline.com noted some industry trackers upped weekend projections to $90 million. It went on to have an opening day of $50.2 million (including previews), increasing weekend projections to over $100 million. The film 's Friday gross not only set a record for biggest single - day amount by an R - rated film (beating Deadpool 's $47.3 million) but nearly eclipsed Paranormal Activity 3 's entire weekend gross of $52.6 million, which was the highest opening weekend gross for a horror film.
It went on to open to $123.1 million, setting the records for largest opening weekend for both a September release and a horror film, and posting the second-biggest debut for an R-rated film behind Deadpool ($132.4 million). Variety and Deadline both noted that the film's opening weekend could have been even greater had Hurricane Irma not shut down nearly 50% of Florida's theaters, a state that typically accounts for 5% of the country's box office grosses. During its first full week, the film made $8.8 million on Monday, $11.4 million on Tuesday, $7.9 million on Wednesday and $7.2 million on Thursday, each setting September records for their respective days. In its second weekend the film grossed $60.1 million (a 51% drop, better than average for horror films), making more in its second weekend than previous opening record holder Paranormal Activity 3 made in its first, and again topped the box office. It also pushed the domestic total to $218.7 million, overtaking Crocodile Dundee ($174.8 million in 1986) as the highest-grossing September release.
In its third week the film was dethroned by newcomer Kingsman: The Golden Circle, finishing second at the box office with $29.7 million. In its fourth week, initial projections had the film retaking the top spot with $17.3 million, ahead of Kingsman ($17 million). However, the following day actual results had the film finishing second, with $16.90 million to Kingsman's $16.93 million, while beating out newcomer American Made ($16.8 million). The film continued to hold well in the following weeks, making $10 million and $6.1 million in its fifth and sixth weeks, finishing a respective third and fourth at the box office.
On review aggregator website Rotten Tomatoes the film has an approval rating of 85 % based on 288 reviews, with an average rating of 7.2 / 10. The site 's critical consensus reads, "Well - acted and fiendishly frightening with an emotionally affecting story at its core, It amplifies the horror in Stephen King 's classic story without losing touch with its heart. '' Metacritic, another review aggregator, assigned the film a weighted average score of 70 out of 100, based on 48 critics, indicating "generally favorable reviews ''. Audiences polled by CinemaScore gave the film an average grade of "B + '' on an A+ to F scale.
Richard Roeper of the Chicago Sun - Times gave the film 4 out of 4 stars, saying: "What will REALLY put a chill down your spine and raise the hairs on the back of your neck are the moments when an adolescent character is isolated from friends, all alone in the cellar or the bathroom or the alley or a dark office, and something they 've long feared springs to ' life ' in a certain fashion, confirming their worst sense of dread and doom. '' Andrew Barker of Variety praised the visuals and cast, while acknowledging the familiarities, calling the film "a collection of alternately terrifying, hallucinatory, and ludicrous nightmare imagery... a series of well - crafted yet decreasingly effective suspense setpieces; and a series of well - acted coming - of - age sequences that do n't quite fully mature. '' Mark Kermode of The Guardian gave the film 4 out of 5 stars, writing that the film "is an energetic romp with crowd - pleasing appeal that is n't afraid to bare its gory teeth ''. Christopher Orr of The Atlantic gave the film a mixed review, calling it "a solid but relatively conventional horror movie '' and writing that it "privileges CGI scares over dread and nuance ''.
Some critics were disappointed with the film 's implementation of jump scares. Michael Phillips of the Chicago Tribune noted the film 's "diminishing returns of one jump scare after another '', writing that "nearly every scene begins and ends the same way, with a slow build... leading up to a KAAA - WHUMMMMMM!!!! sound effect ''. Eric Kohn of IndieWire praised the film 's visuals but wrote that it "simplifies its appeal with jump scares '', and Chris Nashawaty of Entertainment Weekly lauded the child actors but wrote that "the more we see of Pennywise, the less scary he becomes ''.
On February 16, 2016, producer Roy Lee, in an interview with Collider.com, mentioned a second film, remarking: "(Dauberman) wrote the most recent draft working with (Muschietti), so it's being envisioned as two movies." On July 19, 2017, Muschietti revealed that the plan was to get production on the sequel underway the following spring, adding, "We'll probably have a script for the second part in January (2018). Ideally, we would start prep in March. Part one is only about the kids. Part two is about these characters 30 years later as adults, with flashbacks to 1989 when they were kids." On July 21, 2017, Muschietti spoke of looking forward to a dialogue in the second film that does not exist within the first, stating, "... it seems like we're going to do it. It's the second half, it's not a sequel. It's the second half and it's very connected to the first one." Muschietti confirmed that two scenes cut from the film will hopefully be included in the second, one of which is the fire at the Black Spot from the book.
On September 25, 2017, New Line Cinema announced that the sequel would be released on September 6, 2019, with Gary Dauberman writing the script. Later in December 2017, Agent Cody Banks writer Jeffrey Jurgensen was also listed as a screenwriter.
|
which of the following computer databases would you search if you had images of a fired bullet | Forensic firearm examination - wikipedia
Forensic firearm examination is the forensic process of examining the characteristics of firearms as well as any cartridges or bullets left behind at a crime scene. Specialists in this field are tasked with linking bullets and cartridges to weapons and weapons to individuals. Obliterated serial numbers can be raised and recorded in an attempt to find the registered owner of the weapon. Examiners can also look for fingerprints on the weapon and cartridges, and any viable prints can be processed through fingerprint databases for a potential match.
By examining unique striations, or markings, left behind on the bullet as it passes through the barrel and on the cartridge as it is struck by the firing pin, individual spent rounds can be linked back to a specific weapon. Known exemplars taken from a seized weapon can be directly compared to samples recovered from the scene using a comparison microscope. Striation images can also be uploaded to national databases, where these markings can be compared to other images in an attempt to link one weapon to multiple crime scenes. Like other forensic specialists, firearm examiners are subject to being called to testify in court as expert witnesses.
The ability to compare ammunition is a direct result of the invention of rifling around the turn of the 16th century. Forcing the bullet to spin as it travels down the barrel greatly increases its accuracy. At the same time, the rifling leaves marks on the bullet that are indicative of that particular barrel. Prior to the mass production of firearms, each barrel and bullet mold was handmade by gunsmiths, making each one unique. The first successful documented case of forensic firearm examination occurred in 1835, when a member of the Bow Street Runners in London matched a bullet recovered from a murder victim to a specific mold in a suspect's home, confirming that the suspect had made the bullet. Further evidence that the bullet maker was the perpetrator was found in his home, and he was convicted. As manufacturing and automation replaced hand tools, this kind of comparison became impossible because molds were standardized within a given company. However, experts in the field postulated that there were microscopic differences on each barrel left during the manufacturing process. These differences were a result of wear on the machines, and since each new weapon caused a tiny amount of wear, each barrel would be slightly different from every other barrel produced by that company. Each bullet fired from a specific barrel would also be imprinted with the same marks, allowing investigators to identify the weapon that fired a specific bullet.
One of the first uses of this knowledge came in 1915, to exonerate Charles Stielow of the murder of his neighbors. Stielow was sentenced to death and appealed to Charles S. Whitman, the Governor of New York, who was not convinced by the evidence used to convict him. Whitman halted the execution until an inquiry could be conducted, and further examination showed that Stielow's firearm could not have fired the bullets recovered from the victims. The invention of the comparison microscope by Calvin Goddard and Phillip O. Gravelle in 1925 modernized the forensic examination of firearms. Simultaneous comparison of two different objects allowed examiners to closely examine striations for matches and therefore make a more definitive statement as to whether or not they matched.
One of the first true tests of this new technology came in the aftermath of the Saint Valentine's Day Massacre in 1929. During the Prohibition Era, competing gang members were fighting over bootlegging operations within the city of Chicago. Members of the Chicago Outfit and the Egan's Rats, led by Al Capone, attempted to remove all competition from Chicago by eliminating the North Side Gang leader Bugs Moran. The massacre missed Moran, who was not present, but killed seven members of the North Side Gang. The murderers attempted to cover up their crime by posing as police officers, even dressing in police uniforms. Witnesses saw two "officers" leaving the scene, which implicated the Chicago police department as the perpetrators of the massacre. High levels of police corruption during that time period made it seem likely that the police department had committed the killings. The investigation stalled until December 1929, when Fred Burke, a member of the Egan's Rats, shot and killed a police officer in St. Joseph, Michigan. Officers searching for Burke were led to a home in nearby Stevensville. Burke was not there, but inside the officers found an arsenal of weapons, including two Thompson submachine guns. The Chicago police department was contacted and the weapons were brought back to Chicago for testing. Goddard was asked to compare the weapons to collected evidence found at the massacre using his new "ballistic-forensics" technique. After test firing the guns, Goddard proved that the weapons were those used to kill the members of the North Side Gang, absolving the Chicago police department of all involvement. The successful use of Goddard's technique solidified his place as the father of forensic firearm examination.
Any firearm collected during the course of an investigation could yield viable evidence if examined. For forensic firearm examination, specific evidence that can be recovered includes the weapon's serial number and any fingerprints left on its surface.
Fingerprint recovery from the surface of a firearm is done with cyanoacrylate (more commonly known as superglue) fuming. The firearm is placed in a special fume hood designed to evenly distribute fumes instead of removing them. Liquid superglue is placed in a container and heated until it is in a gaseous state. The circulating fumes adhere to the oils left behind by the fingerprint, turning the print white. The resulting white print can be enhanced with fingerprint powder to increase its contrast against the weapon's finish. While using the fuming technique on recovered guns is commonplace, recovering fingerprints from the surfaces of a firearm is challenging due to the textured grip and the general condition of recovered weapons. If fingerprints are recovered, they can be processed through fingerprint databases such as the Integrated Automated Fingerprint Identification System (IAFIS). Various parts of the recovered weapon can also be tested for touch DNA left by whoever handled it. However, the low levels of DNA that can be recovered present numerous issues, such as contamination and analysis anomalies like allele drop-out and drop-in.
Serial numbers became commonplace after the United States passed the Gun Control Act of 1968. This law mandated that all guns manufactured or imported into the country have a serial number. Prior to 1968, many firearms either did not have a serial number or the serial numbers were not unique and were reused by a manufacturer on multiple firearms. If a recovered weapon has had the serial numbers altered or destroyed, examiners can attempt to recover the original numbers. The two main methods for the restoration of serial numbers are magnetic particle inspection and chemical restoration. It is recommended that magnetic particle inspection be performed first due to the nondestructive nature of the method. If magnetic particle inspection fails, chemical restoration is the next step in the forensic analysis.
If the serial number is successfully restored it can be used to help investigators track the weapon 's history, as well as potentially determine who owns the weapon. Firearm databases such as the National Crime Information Center of the United States and INTERPOL 's Firearm Reference Table can be used by investigators to track weapons that have been lost, stolen, or used previously in other crimes.
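Conceptually, these database checks are keyed lookups on the restored serial number. What follows is a minimal sketch of the idea in Python using an in-memory table; the record layout, field names, and entries are illustrative assumptions, and the real NCIC and INTERPOL systems are restricted services rather than public APIs.

# Hypothetical in-memory stand-in for a firearms trace database.
# Real systems (NCIC, INTERPOL's Firearm Reference Table) are restricted
# services; this only illustrates the lookup concept.
RECORDS = {
    "AB12345": {"status": "reported stolen", "date": "2015-03-02", "agency": "Example PD"},
    "XY98765": {"status": "used in a prior crime", "date": "2016-11-18", "agency": "Example SO"},
}

def trace_serial(serial):
    """Return a human-readable trace result for a restored serial number."""
    hit = RECORDS.get(serial)
    if hit is None:
        return serial + ": no record on file"
    return "{}: {} ({}, {})".format(serial, hit["status"], hit["date"], hit["agency"])

print(trace_serial("AB12345"))  # hit: reported stolen
print(trace_serial("ZZ00000"))  # miss: no record on file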
Originally developed as a method to detect flaws or irregularities in magnetic materials, magnetic particle inspection can be used on firearms to visualize the serial number underneath the obliterated area. When performing this technique, examiners place the weapon in a magnetic field. Irregularities in the metal, in this case the serial number, cause the field to deform. When a solution of ferrous particles is applied to the weapon's magnetized surface, the particles are attracted to the area where the magnetic field has deformed and build up there. If fluorescent particles are added to the ferrous solution, ultraviolet light can be used to make any recovered serial number easier to visualize.
Chemical restoration is a type of chemical milling. Typically, chemical milling is used to slowly remove material to create a desired shape. In serial number restoration, small amounts of metal are removed until the serial number is brought back to the surface. This can be performed due to the depth that serial numbers are engraved into the weapon. However, chemical restoration is limited by that depth and is only successful when the obliteration of the serial number is superficial. Examiners performing a restoration first sand the area where the serial number used to be. This removes any debris from the area left when the serial number was obliterated. The examiner then chooses a chemical, usually an acid, that will be used to slowly bring the number back to the surface. The type of chemical that is used depends on the material the weapon is made of. These acids can range from Fry 's Reagent for a magnetic metal, which is a mixture of hydrochloric acid, cupric chloride, and distilled water, to an acidic ferric chloride solution for a non-magnetic, non-aluminum material.
Spent cartridges found at a scene can be examined for physical evidence such as fingerprints, or compared to known samples in order to match them to a weapon. The examination of the cartridge relies on the unique tool marks left by various parts of the weapon, including the firing pin and, in semi- and fully automatic firearms, the ejector. These markings can be compared and matched to known exemplars fired from the same weapon using the same parts. The examination of the marks left on the cartridge is done using a comparison microscope: examiners view the questioned cartridge and the known exemplar simultaneously, looking for similar microscopic marks left during the firing process.
Cartridges are also routinely examined for fingerprints, as the act of loading the ammunition into the magazine or chamber leaves recoverable impressions. These fingerprints can survive the firing process and, while it is a rare occurrence, fingerprints have been obtained from cartridges recovered from the scene. Cartridges are subjected to cyanoacrylate fuming and examined for any usable prints. Usable prints are photographed and can be uploaded to fingerprint databases such as IAFIS for comparison with known exemplars. Cartridges can also be swabbed for trace DNA left by the individual who loaded the magazine. The extremely low levels of recoverable DNA present the same issues as swabbing a firearm for DNA.
Advancements in microscopic stamping have led to a push for the inclusion of firing pin microstamping. The microstamp is etched onto the firing pin and is transferred to the cartridge during the firing process. Each firing pin would have a unique serial number allowing investigators to trace casings found at a crime scene to a known firearm. The practice is not in use as of 2017, although California has enacted legislation that requires microstamping on all newly sold firearms. The law, and microstamping in general, has received significant opposition from gun manufacturers due to increased costs associated with introducing the microstamps into the manufacturing lines.
Preliminary examination of a recovered bullet's general characteristics can exclude a large number of weapons: any weapon incapable of firing that type of bullet is immediately ruled out. The make and model of the weapon can also be inferred from the combination of different class characteristics that are common to specific manufacturers. The three main class characteristics of all bullets are the lands and grooves, the caliber of the bullet, and the rifling twist; all three can be tied directly to the type of barrel that was used to fire the bullet. The lands and grooves of the barrel are the bumps and valleys created when the rifling is cut. The caliber is the diameter of the barrel. The twist is the direction of the striations left by the barrel's rifling, clockwise (right-handed) or counterclockwise (left-handed). Most barrels will have a right-handed twist, with the exception of weapons created by the Colt's Manufacturing Company, which uses left-handed twists. Weapon barrels that match the class characteristics of recovered bullets can be examined further for individual characteristics to determine if the bullet came from that particular weapon.
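This exclusion step amounts to filtering candidate weapons on three fields. Below is a minimal sketch in Python; the record layout and the example reference entries are illustrative assumptions rather than a real reference collection, though the left-handed twist for the Colt-style entry follows the convention described above.

from dataclasses import dataclass

@dataclass(frozen=True)
class ClassCharacteristics:
    lands_and_grooves: int  # number of land/groove pairs cut by the rifling
    caliber: str            # barrel diameter, e.g. ".45 ACP"
    twist: str              # rifling direction: "right" or "left"

def exclude_by_class(recovered, candidates):
    """Keep only weapons whose class characteristics all match the recovered
    bullet; any mismatch excludes a weapon outright."""
    return [name for name, c in candidates.items() if c == recovered]

# A left-twist bullet immediately excludes right-twist barrels.
bullet = ClassCharacteristics(6, ".45 ACP", "left")
reference = {
    "Candidate A": ClassCharacteristics(6, ".45 ACP", "left"),
    "Candidate B": ClassCharacteristics(8, ".45 ACP", "right"),
}
print(exclude_by_class(bullet, reference))  # ['Candidate A']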
In order to compare individual striations, examiners must obtain a known sample using the seized weapon. For slower-traveling bullets, such as those fired from pistols or revolvers, known bullet exemplars are created by firing the weapon into a water tank. The water slows the bullet before it can reach the tank walls, allowing the spent bullet to be recovered intact. For faster-traveling bullets, such as those fired from high-powered rifles and military-style weapons, water tanks cannot be used, as the tank will not provide enough stopping power for the projectiles. To examine these weapons, investigators must fire them at a target on a controlled range with enough backing to stop the bullet, and collect the spent round after it has been fired.
Once a known exemplar is produced, the evidence sample can be compared to it by examining both at the same time with a comparison microscope. Striations that line up are examined more closely, with the examiner looking for multiple consecutive matching striations. There is no set number of consecutive matches that equates to a match declaration, and examiners are trained to use the phrase "sufficient agreement" when testifying. The degree to which an examiner can make that determination is based on their training and expertise. All findings by examiners are subject to questioning by both sides, prosecution and defense, during testimony in court.
Bullets and casings found at a scene require a known example to compare to in order to match them to a weapon. Without a weapon, the striation pattern can be uploaded to a database such as the National Integrated Ballistic Identification Network (NIBIN) maintained by the ATF or the United Kingdom 's National Ballistics Intelligence Service (NABIS). Information uploaded to these databases can be used to track gun crimes and to link crimes together. Maintainers of these databases recommend that every recovered firearm be test fired and the resulting known exemplar be uploaded into the database.
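Automated systems built around such databases rank candidate matches by correlating digitized striation profiles before a human examiner reviews the top hits. The sketch below illustrates that general idea with normalized cross-correlation on synthetic 1-D surface traces; the scoring method and data are generic illustrations, not the actual algorithms used by NIBIN or NABIS.

import numpy as np

def peak_ncc(profile_a, profile_b):
    """Peak normalized cross-correlation between two 1-D striation depth
    traces; values near 1.0 suggest the same barrel, values near 0.0 do not."""
    a = (profile_a - profile_a.mean()) / (profile_a.std() * len(profile_a))
    b = (profile_b - profile_b.mean()) / profile_b.std()
    # Slide one trace across the other so a lateral offset between the two
    # scans cannot hide a genuine match.
    return float(np.max(np.correlate(a, b, mode="full")))

rng = np.random.default_rng(1)
known = rng.normal(size=400)                                  # exemplar trace from the seized weapon
questioned = np.roll(known, 25) + 0.2 * rng.normal(size=400)  # same barrel: shifted, noisy copy
unrelated = rng.normal(size=400)                              # trace from a different barrel

print(peak_ncc(known, questioned))  # high score (close to 1.0)
print(peak_ncc(known, unrelated))   # low score (close to 0.0)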
Firearm examiners have attempted to determine the shooter's position from the location of spent cartridge casings. Ejection pattern studies were originally part of incident reconstruction, and methods for determining shooter location continue to be explained in major crime scene examination books. However, the validity of ejection pattern analysis has been brought into question by multiple studies that examine the reproducibility and final determination of shooter position by qualified examiners. Studies have shown that over 25% of spent casings land somewhere other than to the right and rear of the shooter, the most commonly accepted location for where spent cartridge casings should fall; the large percentage of casings that end up somewhere else raises concerns about the validity of the technique. Investigators should present a location derived from an ejection pattern study only as a tentative estimate when using the information in a courtroom setting.
Prior to September 2005, comparative bullet-lead analysis was performed on bullets found at a scene that were too damaged for striation comparison. The technique attempted to determine the unique elemental breakdown of the bullet and compare it to seized bullets in a suspect's possession. Review of the method found that the elemental breakdown of bullets from entirely separate sources could be similar enough to potentially allow two unrelated bullets to be correlated to each other, and that the differences are not great enough to definitively match a bullet from a crime scene to one taken from a suspect's possession. An additional report in 2004 from the National Academy of Sciences (NAS) found that the testimony given regarding comparative bullet-lead analysis was overstated and potentially "misleading under the federal rules of evidence". In 2005, the Federal Bureau of Investigation indicated that it would no longer perform this type of analysis.
Further criticism came from the 2009 NAS report on the state of various forensic fields in the United States. The report's section on firearm examination focused on the lack of defined criteria for declaring a "match" between known and unknown striations. The NAS stated that "sufficient studies have not been done to understand the reliability and repeatability of the methods." Without defined procedures specifying what is and is not considered "sufficient agreement", the report states, forensic firearm examination contains fundamental problems that the forensic community must address through repeatable scientific studies and standard operating procedures adopted by all firearm examiners. A 2016 report by the United States President's Council of Advisors on Science and Technology confirmed the NAS's conclusions, identifying only one appropriately designed study of the false-positive rate and of reliability among firearm examiners.
by the end of the seventeenth century who was most successful at using diplomacy to secure land | Dutch East India Company - wikipedia
The Dutch East India Company (Dutch: Vereenigde Oostindische Compagnie; VOC) was an early modern megacorporation, founded by a government-directed amalgamation of several rival Dutch trading companies (the so-called voorcompagnieën or pre-companies) in the early 17th century. It was originally established, on 20 March 1602, as a chartered company to trade with India and Indianized Southeast Asian countries when the Dutch government granted it a 21-year monopoly on the Dutch spice trade. The VOC was an early multinational/transnational corporation in its modern sense. The Company has often been labelled a trading company (a company of merchants) or sometimes a shipping company; however, the VOC was in fact a proto-conglomerate, diversifying into multiple commercial and industrial activities such as international trade (especially intra-Asian trade), shipbuilding, and the production and trade of East Indian spices, Formosan sugarcane, and South African wine. The Company was also a transcontinental employer and an early pioneer of outward foreign direct investment at the dawn of modern capitalism. The Company's investment projects helped raise the commercial and industrial potential of many underdeveloped or undeveloped regions of the world in the early modern period. In the early 1600s, by widely issuing bonds and shares of stock to the general public, the VOC became the world's first formally listed public company; in other words, it was the first corporation ever actually listed on an official stock exchange. The VOC was influential in the rise of corporate-led globalization in the early modern period.
With its pioneering institutional innovations and powerful roles in global business history, the Company is often considered the forerunner of modern corporations; in many respects, modern-day corporations are all 'direct descendants' of the VOC model. It was the VOC's 17th-century institutional innovations and business practices that laid the foundations for the rise of giant global corporations in subsequent centuries, allowing them to become a highly significant and formidable socio-politico-economic force and the dominant factor in almost all economic systems today, whether for better or worse. The VOC also served as the direct model for the organizational reconstruction of the English/British East India Company (EIC) in 1657. Over the nearly 200 years of its existence (1602 -- 1800), the Company effectively transformed itself from a corporate entity into a state, or an empire, in its own right. One of the most influential and most extensively researched business enterprises in history, the VOC's world has been the subject of a vast amount of literature that includes both fiction and nonfiction works.
Dubbed the 'VOC Republic' or 'VOC Empire' by some, the Company was historically an exemplary (transcontinental) company-state rather than a pure for-profit corporation. Originally a government-backed military-commercial enterprise, the VOC was the wartime brainchild of leading Dutch republican statesman Johan van Oldenbarnevelt and the States-General. From its inception in 1602, the Company was not only a commercial enterprise but also effectively an instrument of war in the young Dutch Republic's revolutionary global war against the powerful Spanish Empire and Iberian Union (1579 -- 1648). In 1619, the Company forcibly established a central position in the Indonesian city of Jayakarta, changing the name to Batavia (modern-day Jakarta). Over the next two centuries the Company acquired additional ports as trading bases and safeguarded its interests by taking over surrounding territory. To guarantee its supply it established positions in many countries and became an early pioneer of outward foreign direct investment. In its foreign colonies the VOC possessed quasi-governmental powers, including the ability to wage war, imprison and execute convicts, negotiate treaties, strike its own coins, and establish colonies. With the increasing importance of its foreign posts, the Company is often considered the world's first true transnational corporation. Along with the Dutch West India Company (WIC/GWIC), the VOC became seen as the international arm of the Dutch Republic and the symbolic power of the Dutch Empire. To further its trade routes, the VOC funded exploratory voyages such as those led by Willem Janszoon (Duyfken), Henry Hudson (Halve Maen), and Abel Tasman, who revealed largely unknown landmasses to the western world. In the Golden Age of Netherlandish cartography (c. 1570s -- 1670s), VOC navigators and cartographers helped shape the geographical knowledge of the modern world as we know it today.
Socio - economic changes in Europe, the shift in power balance, and less successful financial management resulted in a slow decline of the VOC between 1720 and 1799. After the financially disastrous Fourth Anglo - Dutch War (1780 -- 1784), the company was first nationalised in 1796, and finally dissolved in 1799. All assets were taken over by the government with VOC territories becoming Dutch government colonies.
In spite of the VOC 's historic roles and contributions, the Company has long been heavily criticized for its monopoly policy, exploitation, colonialism, uses of violence, and slavery.
In Dutch, the name of the company is Vereenigde Oostindische Compagnie or Verenigde Oost-Indische Compagnie, abbreviated to VOC. The company's monogram logo was possibly the first globally recognized corporate logo. It consisted of a large capital 'V' with an O on the left and a C on the right leg, and it appeared on various corporate items, such as cannons and coins. The first letter of the hometown of the chamber conducting the operation was placed on top (see figure for an example of the Amsterdam Chamber logo). The monogram's versatility, flexibility, clarity, simplicity, symmetry, timelessness, and symbolism are considered notable characteristics of the VOC's professionally designed logo, which ensured its success at a time when the concept of corporate identity was virtually unknown. An Australian vintner has used the VOC logo since the late 20th century, having re-registered the company's name for the purpose. The flag of the company was red, white, and blue (see Dutch flag), with the company logo embroidered on it.
Around the world, and especially in English-speaking countries, the VOC is widely known as the "Dutch East India Company". The name 'Dutch East India Company' is used to distinguish it from the (British) East India Company (EIC) and other East India companies (such as the Danish East India Company, French East India Company, Portuguese East India Company, and the Swedish East India Company). Alternative names that have been used for the company include the 'Dutch East Indies Company', 'United East India Company', 'United East Indian Company', 'United East Indies Company', 'Jan Company', and 'Jan Compagnie'.
Before the Dutch Revolt, Antwerp had played an important role as a distribution centre in northern Europe. After 1591, however, the Portuguese used an international syndicate of the German Fuggers and Welsers, and Spanish and Italian firms, that used Hamburg as the northern staple port to distribute their goods, thereby cutting Dutch merchants out of the trade. At the same time, the Portuguese trade system was unable to increase supply to satisfy growing demand, in particular the demand for pepper. Demand for spices was relatively inelastic, and therefore each lag in the supply of pepper caused a sharp rise in pepper prices.
In 1580 the Portuguese crown was united in a personal union with the Spanish crown, with which the Dutch Republic was at war. The Portuguese Empire therefore became an appropriate target for Dutch military incursions. These factors motivated Dutch merchants to enter the intercontinental spice trade themselves. Further, a number of Dutchmen like Jan Huyghen van Linschoten and Cornelis de Houtman obtained first hand knowledge of the "secret '' Portuguese trade routes and practices, thereby providing opportunity.
The stage was thus set for the four-ship exploratory expedition of Cornelis de Houtman in 1595 to Banten, the main pepper port of West Java, where the Dutch clashed with both the Portuguese and indigenous Indonesians. Houtman's expedition then sailed east along the north coast of Java, losing twelve crew to a Javanese attack at Sidayu and killing a local ruler in Madura. Half the crew were lost before the expedition made it back to the Netherlands the following year, but with enough spices to make a considerable profit.
In 1598, an increasing number of fleets were sent out by competing merchant groups from around the Netherlands. Some fleets were lost, but most were successful, with some voyages producing high profits. In March 1599, a fleet of eight ships under Jacob van Neck became the first Dutch fleet to reach the 'Spice Islands' of Maluku, the source of cloves and nutmeg, cutting out the Javanese middlemen. The ships returned to Europe in 1599 and 1600, and the expedition made a 400 percent profit.
In 1600, the Dutch joined forces with the Muslim Hituese on Ambon Island in an anti-Portuguese alliance, in return for which the Dutch were given the sole right to purchase spices from Hitu. Dutch control of Ambon was achieved when the Portuguese surrendered their fort in Ambon to the Dutch - Hituese alliance. In 1613, the Dutch expelled the Portuguese from their Solor fort, but a subsequent Portuguese attack led to a second change of hands; following this second reoccupation, the Dutch once again captured Solor, in 1636.
East of Solor, on the island of Timor, Dutch advances were halted by an autonomous and powerful group of Portuguese Eurasians called the Topasses. They remained in control of the Sandalwood trade and their resistance lasted throughout the 17th and 18th centuries, causing Portuguese Timor to remain under the Portuguese sphere of control.
At the time, it was customary for a company to be set up only for the duration of a single voyage and to be liquidated upon the return of the fleet. Investment in these expeditions was a very high - risk venture, not only because of the usual dangers of piracy, disease and shipwreck, but also because the interplay of inelastic demand and relatively elastic supply of spices could make prices tumble at just the wrong moment, thereby ruining prospects of profitability. To manage such risk the forming of a cartel to control supply would seem logical. The English had been the first to adopt this approach, by bundling their resources into a monopoly enterprise, the English East India Company in 1600, thereby threatening their Dutch competitors with ruin.
In 1602, the Dutch government followed suit, sponsoring the creation of a single "United East Indies Company '' that was also granted monopoly over the Asian trade. For a time in the seventeenth century, they were able to monopolize the trade in nutmeg, mace, and cloves and to sell these spices in Europe and India at fourteen to seventeen times the price they paid in Indonesia; while Dutch profits soared, the local economy of the Spice Islands was destroyed. With a capital of 6,440,200 guilders, the charter of the new company empowered it to build forts, maintain armies, and conclude treaties with Asian rulers. It provided for a venture that would continue for 21 years, with a financial accounting only at the end of each decade.
In February 1603, the Company seized the Santa Catarina, a 1500 - ton Portuguese merchant carrack, off the coast of Singapore. She was such a rich prize that her sale proceeds increased the capital of the VOC by more than 50 %.
Also in 1603 the first permanent Dutch trading post in Indonesia was established in Banten, West Java, and in 1611 another was established at Jayakarta (later "Batavia '' and then "Jakarta ''). In 1610, the VOC established the post of Governor General to more firmly control their affairs in Asia. To advise and control the risk of despotic Governors General, a Council of the Indies (Raad van Indië) was created. The Governor General effectively became the main administrator of the VOC 's activities in Asia, although the Heeren XVII, a body of 17 shareholders representing different chambers, continued to officially have overall control.
VOC headquarters were located in Ambon during the tenures of the first three Governors General (1610 -- 1619), but it was not a satisfactory location. Although it was at the centre of the spice production areas, it was far from the Asian trade routes and other VOC areas of activity ranging from Africa to India to Japan. A location in the west of the archipelago was thus sought. The Straits of Malacca were strategic but had become dangerous following the Portuguese conquest, and the first permanent VOC settlement in Banten was controlled by a powerful local ruler and subject to stiff competition from Chinese and English traders.
In 1604, a second English East India Company voyage commanded by Sir Henry Middleton reached the islands of Ternate, Tidore, Ambon and Banda. In Banda, they encountered severe VOC hostility, sparking Anglo - Dutch competition for access to spices. From 1611 to 1617, the English established trading posts at Sukadana (southwest Kalimantan), Makassar, Jayakarta and Jepara in Java, and Aceh, Pariaman and Jambi in Sumatra, which threatened Dutch ambitions for a monopoly on East Indies trade.
Diplomatic agreements in Europe in 1620 ushered in a period of co-operation between the Dutch and the English over the spice trade. This ended with a notorious but disputed incident known as the ' Amboyna massacre ', where ten Englishmen were arrested, tried and beheaded for conspiracy against the Dutch government. Although this caused outrage in Europe and a diplomatic crisis, the English quietly withdrew from most of their Indonesian activities (except trading in Banten) and focused on other Asian interests.
In 1619, Jan Pieterszoon Coen was appointed Governor - General of the VOC. He saw the possibility of the VOC becoming an Asian power, both political and economic. On 30 May 1619, Coen, backed by a force of nineteen ships, stormed Jayakarta, driving out the Banten forces; and from the ashes established Batavia as the VOC headquarters. In the 1620s almost the entire native population of the Banda Islands was driven away, starved to death, or killed in an attempt to replace them with Dutch plantations. These plantations were used to grow cloves and nutmeg for export. Coen hoped to settle large numbers of Dutch colonists in the East Indies, but implementation of this policy never materialised, mainly because very few Dutch were willing to emigrate to Asia.
Another of Coen 's ventures was more successful. A major problem in the European trade with Asia at the time was that the Europeans could offer few goods that Asian consumers wanted, except silver and gold. European traders therefore had to pay for spices with the precious metals, which were in short supply in Europe, except for Spain and Portugal. The Dutch and English had to obtain it by creating a trade surplus with other European countries. Coen discovered the obvious solution for the problem: to start an intra-Asiatic trade system, whose profits could be used to finance the spice trade with Europe. In the long run this obviated the need for exports of precious metals from Europe, though at first it required the formation of a large trading - capital fund in the Indies. The VOC reinvested a large share of its profits to this end in the period up to 1630.
The VOC traded throughout Asia. Ships coming into Batavia from the Netherlands carried supplies for VOC settlements in Asia. Silver and copper from Japan were used to trade with India and China for silk, cotton, porcelain, and textiles. These products were either traded within Asia for the coveted spices or brought back to Europe. The VOC was also instrumental in introducing European ideas and technology to Asia. The Company supported Christian missionaries and traded modern technology with China and Japan. A more peaceful VOC trade post on Dejima, an artificial island off the coast of Nagasaki, was for more than two hundred years the only place where Europeans were permitted to trade with Japan. When the VOC tried to use military force to make Ming dynasty China open up to Dutch trade, the Chinese defeated the Dutch in a war over the Penghu islands from 1623 to 1624, forcing the VOC to abandon Penghu for Taiwan. The Chinese defeated the VOC again at the Battle of Liaoluo Bay in 1633.
The Vietnamese Nguyen Lords defeated the VOC in a 1643 battle during the Trịnh -- Nguyễn War, blowing up a Dutch ship. The Cambodians defeated the VOC in the Cambodian -- Dutch War from 1643 to 1644 on the Mekong River.
In 1640, the VOC obtained the port of Galle, Ceylon, from the Portuguese and broke the latter 's monopoly of the cinnamon trade. In 1658, Gerard Pietersz. Hulft laid siege to Colombo, which was captured with the help of King Rajasinghe II of Kandy. By 1659, the Portuguese had been expelled from the coastal regions, which were then occupied by the VOC, securing for it the monopoly over cinnamon. To prevent the Portuguese or the English from ever recapturing Sri Lanka, the VOC went on to conquer the entire Malabar Coast from the Portuguese, almost entirely driving them from the west coast of India. When news of a peace agreement between Portugal and the Netherlands reached Asia in 1663, Goa was the only remaining Portuguese city on the west coast.
In 1652, Jan van Riebeeck established an outpost at the Cape of Good Hope (the southwestern tip of Africa, now Cape Town, South Africa) to re-supply VOC ships on their journey to East Asia. This post later became a full - fledged colony, the Cape Colony, when more Dutch and other Europeans started to settle there.
Through the seventeenth century VOC trading posts were also established in Persia, Bengal, Malacca, Siam, Formosa (now Taiwan), as well as the Malabar and Coromandel coasts in India. Direct access to mainland China came in 1729 when a factory was established in Canton. In 1662, however, Koxinga expelled the Dutch from Taiwan (see History of Taiwan).
In 1663, the VOC signed the "Painan Treaty '' with several local lords in the Painan area that were revolting against the Aceh Sultanate. The treaty allowed the VOC to build a trading post in the area and eventually to monopolise the trade there, especially the gold trade.
By 1669, the VOC was the richest private company the world had ever seen, with over 150 merchant ships, 40 warships, 50,000 employees, a private army of 10,000 soldiers, and a dividend payment of 40 % on the original investment.
Many VOC employees intermixed with indigenous peoples and expanded the Indo (Eurasian) population in pre-colonial history.
(Gallery captions: a sword of the East India Company, featuring the V.O.C. monogram on the guard, on display at the Musée de l'Armée in Paris; a Kraak porcelain dish in a Malacca museum emblazoned with the V.O.C. monogram; the trade area of the VOC around 1700 CE; VOC ships in Chittagong or Arakan; the city hall of Batavia in 1682 CE; natives of Arakan selling slaves to the Dutch East India Company, c. 1663 CE.)
Around 1670, two events caused the growth of VOC trade to stall. In the first place, the highly profitable trade with Japan started to decline. The loss of the outpost on Formosa to Koxinga in the 1662 Siege of Fort Zeelandia and related internal turmoil in China (where the Ming dynasty was being replaced by the Qing dynasty) brought an end to the silk trade after 1666. Though the VOC substituted Bengali silk for Chinese silk, other forces affected the supply of Japanese silver and gold. The shogunate enacted a number of measures to limit the export of these precious metals, in the process limiting VOC opportunities for trade and severely worsening the terms of trade. Therefore, Japan ceased to function as the lynchpin of the intra-Asiatic trade of the VOC by 1685.
Even more importantly, the Third Anglo - Dutch War temporarily interrupted VOC trade with Europe. This caused a spike in the price of pepper, which enticed the English East India Company (EIC) to enter this market aggressively in the years after 1672. Previously, one of the tenets of the VOC pricing policy was to slightly over-supply the pepper market, so as to depress prices below the level where interlopers were encouraged to enter the market (instead of striving for short - term profit maximisation). The wisdom of such a policy was illustrated when a fierce price war with the EIC ensued, as that company flooded the market with new supplies from India. In this struggle for market share, the VOC (which had much larger financial resources) could wait out the EIC. Indeed, by 1683, the latter came close to bankruptcy; its share price plummeted from 600 to 250; and its president Josiah Child was temporarily forced from office.
However, the writing was on the wall. Other companies, like the French East India Company and the Danish East India Company, also started to make inroads on the Dutch system. The VOC therefore closed the heretofore flourishing open pepper emporium of Bantam by a treaty of 1684 with the Sultan. Also, on the Coromandel Coast, it moved its chief stronghold from Pulicat to Negapatnam, so as to secure a monopoly on the pepper trade to the detriment of the French and the Danes. However, the importance of these traditional commodities in the Asian-European trade was diminishing rapidly at the time. The military outlays that the VOC needed to make to enhance its monopoly were not justified by the increased profits of this declining trade.
Nevertheless, this lesson was slow to sink in and at first the VOC made the strategic decision to improve its military position on the Malabar Coast (hoping thereby to curtail English influence in the area, and end the drain on its resources from the cost of the Malabar garrisons) by using force to compel the Zamorin of Calicut to submit to Dutch domination. In 1710, the Zamorin was made to sign a treaty with the VOC undertaking to trade exclusively with the VOC and expel other European traders. For a brief time, this appeared to improve the Company 's prospects. However, in 1715, with EIC encouragement, the Zamorin renounced the treaty. Though a Dutch army managed to suppress this insurrection temporarily, the Zamorin continued to trade with the English and the French, which led to an appreciable upsurge in English and French traffic. The VOC decided in 1721 that it was no longer worth the trouble to try to dominate the Malabar pepper and spice trade. A strategic decision was taken to scale down the Dutch military presence and in effect yield the area to EIC influence.
In the 1741 Battle of Colachel, warriors of Travancore under Raja Marthanda Varma defeated the Dutch, and the Dutch commander, Captain Eustachius De Lannoy, was captured. Marthanda Varma agreed to spare the Dutch captain's life on condition that he join his army and train his soldiers on modern lines. This defeat in the Travancore-Dutch War is considered the earliest example of an organised Asian power overcoming European military technology and tactics, and it signalled the decline of Dutch power in India.
The attempt to continue as before as a low volume - high profit business enterprise with its core business in the spice trade had therefore failed. The Company had however already (reluctantly) followed the example of its European competitors in diversifying into other Asian commodities, like tea, coffee, cotton, textiles, and sugar. These commodities provided a lower profit margin and therefore required a larger sales volume to generate the same amount of revenue. This structural change in the commodity composition of the VOC 's trade started in the early 1680s, after the temporary collapse of the EIC around 1683 offered an excellent opportunity to enter these markets. The actual cause for the change lies, however, in two structural features of this new era.
In the first place, there was a revolutionary change in the tastes affecting European demand for Asian textiles, coffee and tea, around the turn of the 18th century. Secondly, a new era of an abundant supply of capital at low interest rates suddenly opened around this time. The second factor enabled the Company easily to finance its expansion in the new areas of commerce. Between the 1680s and 1720s, the VOC was therefore able to equip and man an appreciable expansion of its fleet, and acquire a large amount of precious metals to finance the purchase of large amounts of Asian commodities, for shipment to Europe. The overall effect was approximately to double the size of the company.
The tonnage of the returning ships rose by 125 percent in this period. However, the Company's revenues from the sale of goods landed in Europe rose by only 78 percent. This reflects the basic change in the VOC's circumstances: it now operated in new markets for goods with an elastic demand, in which it had to compete on an equal footing with other suppliers. This made for low profit margins. Unfortunately, the business information systems of the time made this difficult for the managers of the company to discern, which may partly explain the mistakes that, in hindsight, they made. This lack of information might have been counteracted (as in earlier times in the VOC's history) by the business acumen of the directors, but by this time these were almost exclusively recruited from the political regent class, which had long since lost its close relationship with merchant circles.
Low profit margins in themselves do not explain the deterioration of revenues. To a large extent the costs of the operation of the VOC had a "fixed" character (military establishments, maintenance of the fleet, and such). Profit levels might therefore have been maintained if the increase in the scale of trading operations that in fact took place had resulted in economies of scale. However, though larger ships transported the growing volume of goods, labour productivity did not rise sufficiently to realise these economies. In general, the Company's overhead rose in step with the growth in trade volume, and declining gross margins translated directly into a decline in the profitability of the invested capital. The era of expansion was one of "profitless growth".
Specifically: "(T)he long-term average annual profit in the VOC's 1630 -- 70 'Golden Age' was 2.1 million guilders, of which just under half was distributed as dividends and the remainder reinvested. The long-term average annual profit in the 'Expansion Age' (1680 -- 1730) was 2.0 million guilders, of which three-quarters was distributed as dividend and one-quarter reinvested. In the earlier period, profits averaged 18 percent of total revenues; in the latter period, 10 percent. The annual return of invested capital in the earlier period stood at approximately 6 percent; in the latter period, 3.4 percent."
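A back-of-the-envelope check makes the "profitless growth" point concrete. Assuming the quoted percentages apply directly to the long-term annual averages (an assumption for illustration, not a claim from the source), the implied revenue and invested capital in each era can be derived as follows:

# Back-of-the-envelope check of the figures quoted above. Assumes the
# stated margins and returns apply directly to the annual averages.
golden = {"profit": 2.1e6, "margin": 0.18, "return": 0.06}      # 1630-70
expansion = {"profit": 2.0e6, "margin": 0.10, "return": 0.034}  # 1680-1730

for era, d in [("Golden Age", golden), ("Expansion Age", expansion)]:
    revenue = d["profit"] / d["margin"]   # profit = margin * revenue
    capital = d["profit"] / d["return"]   # profit = return * capital
    print(f"{era}: implied revenue ~ {revenue/1e6:.1f}M guilders, "
          f"implied capital ~ {capital/1e6:.1f}M guilders")

# Golden Age: revenue ~ 11.7M, capital ~ 35.0M guilders
# Expansion Age: revenue ~ 20.0M, capital ~ 58.8M guilders
# Turnover and capital roughly doubled while absolute profit stood
# still -- exactly the "profitless growth" described above.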
Nevertheless, in the eyes of investors the VOC did not do too badly. The share price hovered consistently around the 400 mark from the mid-1680s (excepting a hiccup around the Glorious Revolution in 1688), and they reached an all - time high of around 642 in the 1720s. VOC shares then yielded a return of 3.5 percent, only slightly less than the yield on Dutch government bonds.
After 1730, the fortunes of the VOC started to decline. Five major problems, not all of equal weight, explain its decline over the next fifty years to 1780.
Despite these problems, the VOC in 1780 remained an enormous operation. Its capital in the Republic, consisting of ships and goods in inventory, totalled 28 million guilders; its capital in Asia, consisting of the liquid trading fund and goods en route to Europe, totalled 46 million guilders. Total capital, net of outstanding debt, stood at 62 million guilders. The prospects of the company at this time therefore were not hopeless, had one of the plans for reform been undertaken successfully. However, the Fourth Anglo - Dutch War intervened. British attacks in Europe and Asia reduced the VOC fleet by half; removed valuable cargo from its control; and devastated its remaining power in Asia. The direct losses of the VOC can be calculated at 43 million guilders. Loans to keep the company operating reduced its net assets to zero.
From 1720 on, the market for sugar from Indonesia declined as competition from cheap sugar from Brazil increased, and European markets became saturated. Dozens of Chinese sugar traders went bankrupt, which led to massive unemployment, which in turn led to gangs of unemployed coolies. The Dutch government in Batavia did not respond adequately to these problems. In 1740, rumours of deportation of the gangs from the Batavia area led to widespread rioting. The Dutch military searched the houses of Chinese residents in Batavia for weapons. When a house accidentally burnt down, soldiers and impoverished citizens started slaughtering and pillaging the Chinese community. This massacre of the Chinese was deemed sufficiently serious for the board of the VOC to start an official investigation into the Government of the Dutch East Indies for the first time in its history.
After the Fourth Anglo - Dutch War, the VOC was a financial wreck. After vain attempts at reorganisation by the provincial States of Holland and Zeeland, it was nationalised by the new Batavian Republic on 1 March 1796. The VOC charter was renewed several times, but was allowed to expire on 31 December 1799. Most of the possessions of the former VOC were subsequently occupied by Great Britain during the Napoleonic wars, but after the new United Kingdom of the Netherlands was created by the Congress of Vienna, some of these were restored to this successor state of the Dutch Republic by the Anglo - Dutch Treaty of 1814.
The VOC is generally considered to be the world's first truly transnational corporation, and it was also the first multinational enterprise to issue shares of stock to the public. Some historians, such as Timothy Brook and Russell Shorto, consider the VOC the pioneering corporation of the first wave of the corporate globalization era. The VOC was the first multinational corporation to operate officially across different continents, namely Europe, Asia, and Africa. While the VOC mainly operated in what later became the Dutch East Indies (modern Indonesia), the company also had important operations elsewhere. It employed people from different continents and origins in the same functions and working environments. Although it was a Dutch company, its employees included not only people from the Netherlands but also many from Germany and other countries. Besides the diverse north-west European workforce recruited by the VOC in the Dutch Republic, the VOC made extensive use of local Asian labour markets. As a result, the personnel of the various VOC offices in Asia consisted of European and Asian employees. Asian or Eurasian workers might be employed as sailors, soldiers, writers, carpenters, smiths, or as simple unskilled workers. At the height of its existence the VOC had 25,000 employees who worked in Asia and 11,000 who were en route. Also, while most of its shareholders were Dutch, about a quarter of the initial shareholders were Zuid-Nederlanders (people from an area that includes modern Belgium and Luxembourg) and there were also a few dozen Germans.
The VOC had two types of shareholders: the participanten, who could be seen as non-managing members, and the 76 bewindhebbers (later reduced to 60) who acted as managing directors. This was the usual set - up for Dutch joint - stock companies at the time. The innovation in the case of the VOC was that the liability of not just the participanten but also of the bewindhebbers was limited to the paid - in capital (usually, bewindhebbers had unlimited liability). The VOC therefore was a limited liability company. Also, the capital would be permanent during the lifetime of the company. As a consequence, investors that wished to liquidate their interest in the interim could only do this by selling their share to others on the Amsterdam Stock Exchange. Confusion of confusions, a 1688 dialogue by the Sephardi Jew Joseph de la Vega analysed the workings of this one - stock exchange.
The VOC consisted of six Chambers (Kamers) in port cities: Amsterdam, Delft, Rotterdam, Enkhuizen, Middelburg and Hoorn. Delegates of these chambers convened as the Heeren XVII (the Lords Seventeen). They were selected from the bewindhebber - class of shareholders.
Of the Heeren XVII, eight delegates were from the Chamber of Amsterdam (one short of a majority on its own), four from the Chamber of Zeeland, and one from each of the smaller Chambers, while the seventeenth seat was alternately held by the Chamber of Middelburg-Zeeland or rotated among the five small Chambers. Amsterdam thereby had the decisive voice. The Zeelanders in particular had misgivings about this arrangement at the beginning. The fear was not unfounded, because in practice it meant Amsterdam stipulated what happened.
The six chambers raised the start-up capital of the Dutch East India Company.
The raising of capital in Rotterdam did not go so smoothly; a considerable part originated from inhabitants of Dordrecht. Although it did not raise as much capital as Amsterdam or Middelburg-Zeeland, Enkhuizen had the largest input in the share capital among the smaller chambers. Among the first 358 shareholders were many small entrepreneurs who dared to take the risk. The minimum investment in the VOC was 3,000 guilders, which priced the Company's stock within the means of many merchants.
Among the early shareholders of the VOC, immigrants played an important role. Among the 1,143 subscribers were 39 Germans and no fewer than 301 from the Southern Netherlands (roughly present-day Belgium and Luxembourg, then under Habsburg rule), of whom Isaac le Maire was the largest subscriber with ƒ85,000. The VOC's total capitalisation was ten times that of its British rival.
The Heeren XVII (Lords Seventeen) met alternately 6 years in Amsterdam and 2 years in Middelburg - Zeeland. They defined the VOC 's general policy and divided the tasks among the Chambers. The Chambers carried out all the necessary work, built their own ships and warehouses and traded the merchandise. The Heeren XVII sent the ships ' masters off with extensive instructions on the route to be navigated, prevailing winds, currents, shoals and landmarks. The VOC also produced its own charts.
In the context of the Dutch - Portuguese War the company established its headquarters in Batavia, Java (now Jakarta, Indonesia). Other colonial outposts were also established in the East Indies, such as on the Maluku Islands, which include the Banda Islands, where the VOC forcibly maintained a monopoly over nutmeg and mace. Methods used to maintain the monopoly involved extortion and the violent suppression of the native population, including mass murder. In addition, VOC representatives sometimes used the tactic of burning spice trees to force indigenous populations to grow other crops, thus artificially cutting the supply of spices like nutmeg and cloves.
Organization and leadership structures were varied as necessary in the various VOC outposts:
Opperhoofd is a Dutch word (pl. Opperhoofden), which literally means ' supreme chief '. In this VOC context, the word is a gubernatorial title, comparable to the English Chief factor, for the chief executive officer of a Dutch factory in the sense of trading post, as led by a factor, i.e. agent.
The Council of Justice in Batavia was the appellate court for all the other VOC Company posts in the VOC empire.
Seventeenth-century Dutch businessmen, especially the VOC investors, were possibly history's first recorded investors to seriously consider the problems of corporate governance. Isaac Le Maire, who is known as history's first recorded short seller, was also a sizeable shareholder of the VOC. In 1609, he complained of the VOC's shoddy corporate governance. On 24 January 1609, Le Maire filed a petition against the VOC, marking the first recorded expression of shareholder activism. In what is the first recorded corporate governance dispute, Le Maire formally charged that the VOC's board of directors (the Heeren XVII) sought to "retain another's money for longer or use it in ways other than the latter wishes" and petitioned for the liquidation of the VOC in accordance with standard business practice. Initially the largest single shareholder in the VOC and a bewindhebber sitting on the board of governors, Le Maire apparently attempted to divert the firm's profits to himself by undertaking 14 expeditions under his own accounts instead of those of the company. Since his large shareholdings were not accompanied by greater voting power, Le Maire was ousted by the other governors in 1605 on charges of embezzlement and was forced to sign an agreement not to compete with the VOC. Having retained stock in the company following this incident, in 1609 Le Maire became the author of what is celebrated as the "first recorded expression of shareholder advocacy at a publicly traded company".
In 1622, history's first recorded shareholder revolt also occurred among the VOC investors, who complained that the company account books had been "smeared with bacon" so that they might be "eaten by dogs". The investors demanded a "reeckeninge", a proper financial audit. The 1622 campaign by the shareholders of the VOC is a testimony to the genesis of corporate social responsibility (CSR), with shareholders staging protests by distributing pamphlets and complaining about management self-enrichment and secrecy.
The history of VOC commercial conflict, for example with the British East India Company (EIC), was at times closely connected to Dutch military conflicts. The commercial interests of the VOC (and, more generally, the Netherlands) were reflected in military objectives and the settlements agreed by treaty. In the Treaty of Breda (1667), ending the Second Anglo-Dutch War, the Dutch were finally able to secure a VOC monopoly on the nutmeg trade, ceding the island of Manhattan to the British while gaining the last non-VOC-controlled source of nutmeg, the island of Rhun in the Banda Islands. The Dutch later re-captured Manhattan, but returned it along with the colony of New Netherland in the Treaty of Westminster (1674), ending the Third Anglo-Dutch War. The British also gave up claims on Suriname as part of the Treaty of Westminster. There was also an effort in the mid-17th century to compensate the war-related losses of the Dutch West India Company out of the profits of the VOC, though this was ultimately blocked.
Military conflicts involving the VOC (not necessarily comprehensive)
In terms of global business history, the lessons from the VOC's successes or failures are critically important. In his book Amsterdam: A History of the World's Most Liberal City (2013), American author and historian Russell Shorto summarizes the VOC's importance in world history: "Like the oceans it mastered, the VOC had a scope that is hard to fathom. One could craft a defensible argument that no company in history has had such an impact on the world. Its surviving archives -- in Cape Town, Colombo, Chennai, Jakarta, and The Hague -- have been measured (by a consortium applying for a UNESCO grant to preserve them) in kilometers. In innumerable ways the VOC both expanded the world and brought its far-flung regions together. It introduced Europe to Asia and Africa, and vice versa (while its sister multinational, the West India Company, set New York City in motion and colonized Brazil and the Caribbean Islands). It pioneered globalization and invented what might be the first modern bureaucracy. It advanced cartography and shipbuilding. It fostered disease, slavery, and exploitation on a scale never before imagined."
A pioneering early model of the multinational corporation in its modern sense, the Company is also considered to be the world 's first true transnational corporation. In the early 1600s, the VOC became the world 's first formally listed public company because it was the first corporation to be ever actually listed on a formal stock exchange. The VOC had a massive influence on the evolution of the modern corporation by creating an institutional prototype for subsequent large - scale business enterprises (in particular large corporations like multinational / transnational / global corporations) and their rise to become a highly significant socio - politico - economic force of the modern world as we know it today. In many respects, modern - day publicly - listed global companies (including Forbes Global 2000 companies) are all ' descendants ' of a business model pioneered by the VOC in the 17th century. Like modern - day major corporations, in many ways, the post-1657 English / British East India Company 's operational structure was a derivative of the earlier VOC model.
During its golden age, the Company played crucial roles in business, financial, socio - politico - economic, military - political, diplomatic, ethnic, and exploratory maritime history of the world. In the early modern period, the VOC was also the driving force behind the rise of corporate - led globalization, corporate power, corporate identity, corporate culture, corporate social responsibility, corporate ethics, corporate governance, corporate finance, corporate capitalism, and finance capitalism. With its pioneering institutional innovations and powerful roles in world history, the Company is considered by many to be the first major, first modern, first global, most valuable, and most influential corporation ever seen. The VOC was also arguably the first historical model of the megacorporation.
The VOC played a crucial role in the rise of corporate - led globalization, corporate governance, corporate identity, corporate social responsibility, corporate finance, modern entrepreneurship, and financial capitalism. During its golden age, the Company made some fundamental institutional innovations in economic and financial history. These financially revolutionary innovations allowed a single company (like the VOC) to mobilize financial resources from a large number of investors and create ventures at a scale that had previously only been possible for monarchs. In the words of Canadian historian and sinologist Timothy Brook, "the Dutch East India Company -- the VOC, as it is known -- is to corporate capitalism what Benjamin Franklin 's kite is to electronics: the beginning of something momentous that could not have been predicted at the time. '' The birth and growth of the VOC (especially in the 17th century) is considered by many to be the official beginning of the corporate globalization era with the rise of large - scale business enterprises (multinational / transnational corporations in particular) as a highly formidable socio - politico - economic force that significantly affects people 's lives in every corner of the world today, whether for better or worse. As the world 's first publicly traded company and first listed company (the first company to be ever listed on an official stock exchange), the VOC was the first company to issue stock and bonds to the general public. Considered by many experts to be the world 's first truly (modern) multinational corporation, the VOC was also the first permanently organized limited - liability joint - stock company, with a permanent capital base. The VOC shareholders were the pioneers in laying the basis for modern corporate governance and corporate finance. The VOC is often considered as the precursor of modern corporations, if not the first truly modern corporation. It was the VOC that invented the idea of investing in the company rather than in a specific venture governed by the company. With its pioneering features such as corporate identity (first globally recognized corporate logo), entrepreneurial spirit, legal personhood, transnational (multinational) operational structure, high and stable profitability, permanent capital (fixed capital stock), freely transferable shares and tradable securities, separation of ownership and management, and limited liability for both shareholders and managers, the VOC is generally considered a major institutional breakthrough and the model for large corporations that now dominate the global economy.
The VOC was a driving force behind the rise of Amsterdam as the first modern model of the international financial centres that now dominate the global financial system. With its maritime and financial power, Republican-period Amsterdam -- unlike its Southern Netherlandish cousins and predecessors such as Burgundian-period Bruges and Habsburg-period Antwerp -- could control crucial resources and markets directly, sending its fleets to almost all quarters of the globe. During the 17th century and most of the 18th century, Amsterdam was the most influential financial centre of the world. The VOC also played a major role in the creation of the world's first fully functioning financial market, with the birth of a fully fledged capital market. The Dutch were also the first to make effective use of a fully fledged capital market (including the bond market and the stock market) to finance companies (such as the VOC and the WIC). It was in the 17th-century Dutch Republic that the global securities market began to take on its modern form, and it was in Amsterdam that important institutional innovations such as publicly traded companies, transnational corporations, capital markets (including bond markets and stock markets), the central banking system, the investment banking system, and investment funds (mutual funds) were systematically operated for the first time in history. In 1602 the VOC established an exchange in Amsterdam where VOC stock and bonds could be traded in a secondary market, and it undertook the world's first recorded IPO in the same year. The Amsterdam Stock Exchange (Amsterdamsche Beurs or Beurs van Hendrick de Keyser in Dutch) was also the world's first fully fledged stock exchange. While the Italian city-states produced the first formal bond markets, they did not develop the other ingredient necessary to produce a fully fledged capital market: the formal stock market. The Dutch East India Company (VOC) became the first company to offer shares of stock. The dividend averaged around 18 percent of capital over the course of the Company's 200-year existence. The launch of the Amsterdam Stock Exchange by the VOC in the early 1600s has long been recognized as the origin of 'modern' stock exchanges that specialize in creating and sustaining secondary markets in the securities (such as bonds and shares of stock) issued by corporations. Dutch investors were the first to trade their shares at a regular stock exchange, and the process of buying and selling these shares of stock in the VOC became the basis of the first official (formal) stock market in history. It was in the Dutch Republic that the early techniques of stock-market manipulation were developed: the Dutch pioneered stock futures, stock options, short selling, bear raids, debt-equity swaps, and other speculative instruments. Amsterdam businessman Joseph de la Vega's Confusion of Confusions (1688) was the earliest book about stock trading.
The idea of a highly competitive, organized, Dutch government-backed, privately financed military-commercial enterprise (active mainly in Greater India but headquartered in the United Provinces of the Netherlands) was the wartime brainchild of the leading republican statesman Johan van Oldenbarnevelt and the States-General in the late 1590s. In 1602, the "United" East India Company (VOC) was formed by a government-directed consolidation or amalgamation of several rival Dutch trading companies, the so-called voorcompagnieën. It was a time when the newly formed Dutch Republic was in the midst of its eighty-year-long revolutionary global war against the mighty Spanish Empire and Iberian Union (1579 -- 1648). Therefore, from the beginning, the VOC was not only a business enterprise but also an instrument of war. In other words, the VOC was a fully functioning military-political-commercial complex in its own right rather than a pure trading company or shipping company.
In the early modern period, the VOC was the largest private employer in the Low Countries. The Company was a major force behind the financial revolution and the economic miracle of the young Dutch Republic in the 17th century. During their Golden Age, the Dutch Republic (or the Northern Netherlands), as the resource-poor and obscure cousin of the more urbanized Southern Netherlands, rose to become the world's leading economic and financial superpower. Despite its lack of natural resources (except for water and wind power) and its comparatively modest size and population, the Dutch Republic dominated the global market in many advanced industries such as shipbuilding, shipping, water engineering, printing and publishing, map making, pulp and paper, lens-making, sugarcane refining, overseas investment, financial services, and international trade. The Dutch Republic was an early industrialized nation-state in its Golden Age. The 17th-century Dutch mechanical innovations and inventions, such as wind-powered sawmills and Hollander beaters, helped revolutionize the shipbuilding and paper (including pulp) industries in the early modern period. The VOC's shipyards also contributed greatly to the Dutch domination of the global shipbuilding and shipping industries during the 1600s. "By seventeenth century standards," as Richard Unger affirms, Dutch shipbuilding "was a massive industry and larger than any shipbuilding industry which had preceded it." By the 1670s the size of the Dutch merchant fleet probably exceeded the combined fleets of England, France, Spain, Portugal, and Germany. Until the mid-1700s, the economic system of the Dutch Republic (including its financial system) was the most advanced and sophisticated ever seen in history. From about 1600 to 1720, the Dutch had the highest per capita income in the world, at least double that of neighbouring countries at the time.
However, in the multicultural society of the Netherlands (home to one million citizens with roots in the former colonies of Indonesia, Suriname and the Antilles), the VOC's history, and especially its dark side, has always been a potential source of controversy. In 2006, when the Dutch Prime Minister Jan Pieter Balkenende referred to the pioneering entrepreneurial spirit and work ethic of the Dutch people and the Dutch Republic in their Golden Age, he coined the term "VOC mentality" (VOC-mentaliteit in Dutch). For Balkenende, the VOC represented Dutch business acumen, entrepreneurship, adventurous spirit, and decisiveness. However, the remark unleashed a wave of criticism, since such romantic views about the Dutch Golden Age ignore its inherent historical associations with colonialism, exploitation, and violence. Balkenende later stressed that "it had not been his intention to refer to that at all". In spite of the criticism, the "VOC mentality", as a characteristic of a selective historical perspective on the Dutch Golden Age, has been considered a key feature of Dutch cultural policy for many years.
The VOC was a transcontinental employer and an early pioneer of outward foreign direct investment at the dawn of modern capitalism. In his book The Ecology of Money: Debt, Growth, and Sustainability (2013), Adrian Kuzminski notes, "The Dutch, it seems, more than anyone in the West since the palmy days of ancient Rome, had more money than they knew what to do with. They discovered, unlike the Romans, that the best use of money was to make more money. They invested it, mostly in overseas ventures, utilizing the innovation of the joint-stock company in which private investors could purchase shares, the most famous being the Dutch East India Company." The VOC's intercontinental activities contributed greatly to the Dutch Republic's prosperity and could also awaken socio-economic dynamism elsewhere. Wherever Dutch capital went, urban features were developed, economic activities expanded, new industries established, new jobs created, trading companies operated, swamps drained, mines opened, forests exploited, canals constructed, mills turned, and ships built. In the early modern period, the Dutch were pioneering capitalists who raised the commercial and industrial potential of underdeveloped or undeveloped lands whose resources they exploited, whether for better or worse. For example, the native economies of pre-VOC-era Taiwan and South Africa were virtually undeveloped, in almost primitive states. In many ways, the recorded economic history of Taiwan and South Africa began with the golden age of the VOC in the 17th century. It was VOC people who established and developed the first urban areas in the history of Taiwan (Tainan) and South Africa (including Cape Town, Stellenbosch, and Swellendam).
The VOC existed for almost 200 years, from its founding in 1602, when the States-General of the Netherlands granted it a 21-year monopoly over Dutch operations in Asia, until its demise in 1796. During those two centuries (between 1602 and 1796), the VOC sent almost a million Europeans to work in the Asia trade on 4,785 ships, and netted for their efforts more than 2.5 million tons of Asian trade goods. By contrast, the rest of Europe combined sent only 882,412 people from 1500 to 1795, and the fleet of the English (later British) East India Company, the VOC's nearest competitor, was a distant second to its total traffic, with 2,690 ships and a mere one-fifth the tonnage of goods carried by the VOC. The VOC enjoyed huge profits from its spice monopoly through most of the 17th century.
In terms of military-political history, the VOC, along with the Dutch West India Company (WIC/GWIC), was seen as the international arm of the Dutch Republic and the symbolic power of the Dutch Empire. The VOC was historically a military-political-economic complex rather than a pure trading company (or shipping company). The government-backed but privately financed company was effectively a state in its own right, or a state within a state. For almost 200 years of its existence, the VOC was a key non-state geopolitical player in Eurasia. The Company was very much an unofficial representative of the States-General of the United Provinces in the foreign relations of the Dutch Republic with many states, especially Dutch-Asian relations. The Company's territories were even larger than those of some countries.
The VOC had seminal influences on the modern history of many countries and territories around the world such as New Netherland (New York), Indonesia, Malaysia, India, Sri Lanka, Australia, New Zealand, South Africa, Mauritius, Taiwan, and Japan.
During the Dutch Golden Age, the Dutch -- using their expertise in business, cartography, shipbuilding, seafaring, and navigation -- traveled to the far corners of the world, leaving their language embedded in the names of many places. Dutch exploratory voyages revealed largely unknown landmasses to the civilized world and put their names on the world map. During the Golden Age of Dutch exploration (c. 1590s -- 1720s) and the Golden Age of Netherlandish cartography (c. 1570s -- 1670s), Dutch-speaking navigators, explorers, and cartographers were the undisputed first to chart and map many hitherto largely unknown regions of the earth and the sky. The Dutch came to dominate the map-making and map-printing industry by virtue of their own travels, trade ventures, and widespread commercial networks. As Dutch ships reached into the unknown corners of the globe, Dutch cartographers incorporated new geographical discoveries into their work. Instead of keeping the information secret for their own use, they published it, so the maps multiplied freely. For almost 200 years, the Dutch dominated world trade. Dutch ships carried goods, but they also opened up opportunities for the exchange of knowledge. The commercial networks of the Dutch transnational companies, i.e. the VOC and the West India Company (WIC/GWIC), provided an infrastructure which was accessible to people with a scholarly interest in the exotic world. The VOC's bookkeeper Hendrick Hamel was the first known European/Westerner to experience first-hand and write about Joseon-era Korea. In his report, published in the Dutch Republic in 1666, Hamel described his adventures on the Korean Peninsula and gave the first accurate description of the daily life of Koreans to the western world. The VOC trade post on Dejima, an artificial island off the coast of Nagasaki, was for more than two hundred years the only place where Europeans were permitted to trade with Japan. Rangaku (literally 'Dutch Learning', and by extension 'Western Learning') is a body of knowledge developed by Japan through its contacts with the Dutch enclave of Dejima, which allowed Japan to keep abreast of Western technology and medicine in the period when the country was closed to foreigners, 1641 -- 1853, because of the Tokugawa shogunate's policy of national isolation (sakoku).
From 1609 the VOC had a trading post in Japan (Hirado, Nagasaki), which used local paper for its own administration. However, the paper was also traded to the VOC 's other trading posts and even to the Dutch Republic. Many impressions of the Dutch Golden Age artist Rembrandt 's prints were made on Japanese paper. From about 1647 Rembrandt sought increasingly to introduce variation into his prints by using different sorts of paper, and printed most of his plates regularly on Japanese paper. He also used the paper for his drawings. The Japanese papers -- imported from Japan by the VOC -- attracted Rembrandt with their warm, yellowish colour; they are often smooth and shiny, whereas Western paper has a rougher, more matt surface. Moreover, the Chinese export porcelain and Japanese export porcelain wares imported by the VOC are often depicted in Dutch Golden Age genre paintings, especially those of Jan Vermeer.
The VOC was also a major force behind the Golden Age of Dutch exploration and discovery (c. 1590s -- 1720s). VOC - funded exploratory voyages such as those led by Willem Janszoon (Duyfken), Henry Hudson (Halve Maen) and Abel Tasman revealed largely unknown landmasses to the civilized world, and during the Golden Age of Dutch / Netherlandish cartography (c. 1570s -- 1670s) VOC navigators, explorers, and cartographers helped shape the cartographic and geographic knowledge of the modern - day world.
In 1609, English sea captain and explorer Henry Hudson was hired by the VOC 's Amsterdam chamber to find a north - east passage to Asia, sailing around Scandinavia and Russia. Turned back by Arctic ice on his second attempt, he sailed west to seek a north - west passage rather than return home, and ended up exploring the waters off the east coast of North America aboard the vlieboot Halve Maen. His first landfall was at Newfoundland and the second at Cape Cod.
Hudson believed that the passage to the Pacific Ocean was between the St. Lawrence River and Chesapeake Bay, so he sailed south to the Bay then turned northward, traveling close along the shore. He first discovered Delaware Bay and began to sail upriver looking for the passage. This effort was foiled by sandy shoals, and the Halve Maen continued north. After passing Sandy Hook, Hudson and his crew entered the narrows into the Upper New York Bay. (Unbeknownst to Hudson, the narrows had already been discovered in 1524 by explorer Giovanni da Verrazzano; today, the bridge spanning them is named after him.) Hudson believed that he had found the continental water route, so he sailed up the major river which later bore his name: the Hudson. He found the water too shallow to proceed several days later, at the site of present - day Troy, New York.
Upon returning to the Netherlands, Hudson reported that he had found a fertile land and an amicable people willing to engage his crew in small - scale bartering of furs, trinkets, clothes, and small manufactured goods. His report was first published in 1611 by Emanuel Van Meteren, an Antwerp émigré and the Dutch Consul at London. This stimulated interest in exploiting this new trade resource, and it was the catalyst for Dutch merchant - traders to fund more expeditions.
In 1611 -- 12, the Admiralty of Amsterdam sent two covert expeditions to find a passage to China with the yachts Craen and Vos, captained by Jan Cornelisz Mey and Symon Willemsz Cat, respectively. In four voyages made between 1611 and 1614, the area between present - day Maryland and Massachusetts was explored, surveyed, and charted by Adriaen Block, Hendrick Christiaensen, and Cornelius Jacobsen Mey. The results of these explorations, surveys, and charts made from 1609 through 1614 were consolidated in Block 's map, which used the name New Netherland for the first time.
In terms of the world history of geography and exploration, the VOC can be credited with putting most of Australia 's coast (then Hollandia Nova and other names) on the world map between 1606 and 1756. While Australia (originally known as New Holland) never became an actual Dutch settlement or colony, Dutch navigators were the first to undisputedly explore and map the Australian coastline. In the 17th century, the VOC 's navigators and explorers charted almost three - quarters of Australia 's coastline, except its east coast. The Dutch ship Duyfken, led by Willem Janszoon, made the first documented European landing in Australia in 1606. Although a theory of Portuguese discovery in the 1520s exists, it lacks definitive evidence. Precedence of discovery has also been claimed for China, France, Spain, India, and even Phoenicia.
Hendrik Brouwer 's discovery of the Brouwer Route -- sailing east from the Cape of Good Hope until land was sighted, then north along the west coast of Australia -- provided a much quicker route than following the coast of the Indian Ocean, and made Dutch landfalls on the west coast inevitable. The first such landfall was in 1616, when Dirk Hartog landed at Cape Inscription on what is now known as Dirk Hartog Island, off the coast of Western Australia, and left behind an inscription on a pewter plate. In 1697 the Dutch captain Willem de Vlamingh landed on the island and discovered Hartog 's plate. He replaced it with one of his own, which included a copy of Hartog 's inscription, and took the original plate home to Amsterdam, where it is still kept in the Rijksmuseum Amsterdam.
In 1627, the VOC 's explorers François Thijssen and Pieter Nuyts discovered the south coast of Australia and charted about 1,800 kilometres (1,100 mi) of it between Cape Leeuwin and the Nuyts Archipelago. François Thijssen, captain of the ship ' t Gulden Zeepaert (The Golden Seahorse), sailed east as far as Ceduna in South Australia. The first known ship to have visited the area is the Leeuwin ("Lioness ''), a Dutch vessel that charted some of the nearby coastline in 1622. The log of the Leeuwin has been lost, so very little is known of the voyage, but the land it discovered was recorded on a 1627 map by Hessel Gerritsz: Caert van 't Landt van d'Eendracht ("Chart of the Land of Eendracht ''), which appears to show the coast between present - day Hamelin Bay and Point D'Entrecasteaux. Part of Thijssen 's map shows the islands St Francis and St Peter, now known collectively with their respective groups as the Nuyts Archipelago. Thijssen 's observations were incorporated as early as 1628 by the VOC cartographer Hessel Gerritsz in a chart of the Indies and New Holland. This voyage defined most of the southern coast of Australia and discouraged the notion that "New Holland '', as it was then known, was linked to Antarctica.
In 1642, Abel Tasman sailed from Mauritius and on 24 November, sighted Tasmania. He named Tasmania Anthoonij van Diemenslandt (Anglicised as Van Diemen 's Land), after Anthony van Diemen, the VOC 's Governor General, who had commissioned his voyage. It was officially renamed Tasmania in honour of its first European discoverer on 1 January 1856.
In 1642, during the same expedition, Tasman 's crew discovered and charted New Zealand 's coastline. They were the first Europeans known to reach New Zealand. Tasman anchored at the northern end of the South Island in Golden Bay (he named it Murderers ' Bay) in December 1642 and sailed northward to Tonga following a clash with local Māori. Tasman sketched sections of the two main islands ' west coasts. Tasman called them Staten Landt, after the States General of the Netherlands, and that name appeared on his first maps of the country. In 1645 Dutch cartographers changed the name to Nova Zeelandia in Latin, from Nieuw Zeeland, after the Dutch province of Zeeland. It was subsequently Anglicised as New Zealand by James Cook. Various claims have been made that New Zealand was reached by other non-Polynesian voyagers before Tasman, but these are not widely accepted.
In spite of the VOC 's historic successes and contributions, the Company has long been criticized for its quasi-absolute commercial monopoly, colonialism, exploitation (including the use of slave labour), slave trade, use of violence, environmental destruction (including deforestation), and overly bureaucratic organizational structure.
By the time the settlement was established at the Cape in 1652, the VOC already had long experience of practising slavery in the East Indies. Jan van Riebeeck concluded within two months of the establishment of the Cape settlement that slave labor would be needed for the hardest and dirtiest work. Initially, the VOC considered enslaving men from the indigenous Khoikhoi population, but the idea was rejected on the grounds that such a policy would be both costly and dangerous. In the beginning the settlers traded with the Khoikhoi, but most Khoikhoi chose not to labor for the Dutch, and the harsh working conditions and low wages the Dutch imposed led to a series of wars. The European population remained under 200 during the settlement 's first five years, and war against neighbors numbering more than 20,000 would have been foolhardy. Moreover, the Dutch feared that Khoikhoi people, if enslaved, could always escape into the local community, whereas foreigners would find it much more difficult to elude their "masters. ''
Between 1652 and 1657, a number of unsuccessful attempts were made to obtain men from the Dutch East Indies and from Mauritius. In 1658, however, the VOC landed two shiploads of slaves at the Cape, one containing more than 200 people brought from Dahomey (later Benin), the second with almost 200 people, most of them children, captured from a Portuguese slaver off the coast of Angola. Except for a few individuals, these were to be the only slaves ever brought to the Cape from West Africa. From 1658 to the end of the Company 's rule, many more slaves were brought regularly to the Cape in various ways, chiefly by Company - sponsored slaving voyages and slaves brought to the Cape by its return fleets. From these sources and by natural growth, the slave population increased from zero in 1652 to about 1,000 by 1700. During the 18th century, the slave population increased dramatically to 16,839 by 1795. After the slave trade was initiated, all of the slaves imported into the Cape until the British stopped the trade in 1807 were from East Africa, Mozambique, Madagascar, and South and Southeast Asia. Large numbers were brought from India, Ceylon, and the Indonesian archipelago. Prisoners from other countries in the VOC 's empire were also enslaved. The slave population, which exceeded that of the European settlers until the first quarter of the nineteenth century, was overwhelmingly male and was thus dependent on constant imports of new slaves to maintain and to augment its size.
By the 1660s the Cape settlement was importing slaves from India, Malaya (Malaysia), and Madagascar to work on the farms. Conflict between Dutch farmers and Khoikhoi broke out once it became clear to the latter that the Dutch were there to stay and that they intended to encroach on the lands of the pastoralists. In 1659 Doman, a Khoikhoi who had worked as a translator for the Dutch and had even traveled to Java, led an armed attempt to expel the Dutch from the Cape peninsula. The attempt was a failure, although warfare dragged on until an inconclusive peace was established a year later. During the following decade, pressure on the Khoikhoi grew as more of the Dutch became free burghers, expanded their landholdings, and sought pastureland for their growing herds. War broke out again in 1673 and continued until 1677, when Khoikhoi resistance was destroyed by a combination of superior European weapons and Dutch manipulation of divisions among the local people. Thereafter, Khoikhoi society in the western Cape disintegrated. Some people found jobs as shepherds on European farms; others rejected foreign rule and moved away from the Cape. The final blow for most came in 1713 when a Dutch ship brought smallpox to the Cape. Hitherto unknown locally, the disease ravaged the remaining Khoikhoi, killing 90 percent of the population. Throughout the eighteenth century, the settlement continued to expand through internal growth of the European population and the continued importation of slaves. The approximately 3,000 Europeans and slaves at the Cape in 1700 had increased by the end of the century to nearly 20,000 Europeans, and approximately 25,000 slaves.
For the full list of places explored, mapped, and named by people of the VOC, see List of place names of Dutch origin.
The VOC 's operations (trading posts and colonies) produced not only warehouses packed with spices, coffee, tea, textiles, porcelain and silk, but also shiploads of documents. Data on political, economic, cultural, religious, and social conditions spread over an enormous area circulated between the VOC establishments, the administrative centre of the trade in Batavia (modern - day Jakarta), and the board of directors (the Heeren XVII / Gentlemen Seventeen) in the Dutch Republic. The VOC records are included in UNESCO 's Memory of the World Register.
The Dutch East India Company (VOC), as a historical transcontinental company - state, is one of the most expertly researched business enterprises in history. For almost 200 years of the Company 's existence (1602 -- 1800), the VOC effectively transformed itself from a corporate entity into a state, an empire, or even a world in its own right. The VOC World (i.e. the networks of people, places, things, activities, and events associated with the Dutch East India Company) has been the subject of a vast amount of literature, including works of fiction and non-fiction. VOC World studies (often included within the broader field of early - modern Dutch global world studies) is an international multidisciplinary field focused on the social, cultural, religious, scientific, technological, economic, financial, business, maritime, military, political, legal, and diplomatic activities, organization and administration of the VOC and its colourful world. As North & Kaufmann (2014) note, "the Dutch East India Company (VOC) has long attracted the attention of scholarship. Its lengthy history, widespread enterprises, and the survival of massive amounts of documentation -- literally 1,200 meters of files pertaining to the VOC may be found in the National Archives in The Hague, and many more documents are scattered in archives throughout Asia and in South Africa -- have stimulated many works on economic and social history. Important publications have also appeared on the trade, shipping, institutional organization, and administration of the VOC. Much has also been learned about the VOC and Dutch colonial societies. Moreover, the TANAP (Towards a New Age of Partnership, 2000 -- 2007) project has created momentum for research on the relationship between the VOC and indigenous societies. In contrast, the role of the VOC in cultural history and especially in the history of visual and material culture has not yet attracted comparable interest. To be sure, journals and other travel accounts (some even with illustrations) by soldiers, shippers, and VOC officials among others have been utilized as sources. '' VOC scholarship is in general highly specialized, extending, for example, to archaeological studies of the VOC World. Notable VOC historians and scholars include Sinnappah Arasaratnam, Leonard Blussé, Peter Borschberg, Charles Ralph Boxer, Jaap R. Bruijn, Femme Gaastra, Om Prakash, and Nigel Worden.
For scholarly works (books and articles) about the VOC world, see article section: List of works about the Dutch East India Company # Non-fiction.
The publication of the Theatrum Orbis Terrarum by Abraham Ortelius in 1570 marked the official beginning of the Golden Age of Netherlandish cartography (c. 1570s -- 1670s). In the Golden Age of Dutch exploration and discovery (c. 1590s -- 1720s), the Dutch Republic 's seafarers and explorers (including the VOC 's navigators) became the first non-natives to undisputedly discover, explore and map coastlines of the Australian continent (including Mainland Australia, Tasmania, and their surrounding islands), New Zealand, Tonga, and Fiji.
Het Oost - Indisch Huis (Reinier Vinkeles, 1768)
The restored conference room of the Heeren XVII (the VOC 's board of directors) in the East Indies House / Oost - Indisch Huis, Amsterdam
Courtyard of the Amsterdam Stock Exchange (Beurs van Hendrick de Keyser in Dutch). In 1611, the world 's first formal stock exchange was launched by the VOC.
Duyfken replica under sail
A replica of the VOC vessel Batavia (1620 -- 29)
19th - century illustration Halve Maen (Half Moon) in the Hudson River in 1609
Anonymous painting with Table Mountain in the background, 1762
Dutch church at Batavia, Dutch East Indies, 1682
Factory in Hugli - Chuchura, Dutch Bengal, Dutch India, by Hendrik van Schuylenburgh (1665)
Ground - plan of the Dutch trade - post on the island Dejima at Nagasaki. An imagined bird 's - eye view of Dejima 's layout and structures (copied from a woodblock print by Toshimaya Bunjiemon of 1780).
A naval cannon (Dejima, Nagasaki, Japan). The letters "VOC '' are the monogram of the "Vereenigde Oost - Indische Compagnie '' and the letter "A '' represents the "Amsterdam '' Chamber of the company.
The Seri Rambai at Fort Cornwallis, George Town, Penang, Malaysia
Kraak porcelain in a museum in Malacca
Portrait of Abel Tasman, his wife and daughter. Attributed to Jacob Gerritsz Cuyp, 1637.
Portrait of Jan Pieterszoon Coen
Portrait of Simon van der Stel, a founding father of the South African wine industry
The statue of Jan van Riebeeck (the founder of Cape Town) in Heerengracht Street, Cape Town, South Africa
Swedish naturalist Carl Peter Thunberg was a VOC physician and an apostle of Carl Linnaeus.
Wall of Fort Zeelandia / Fort Anping, Tainan (Taiwan)
The Castle of Good Hope (Kasteel de Goede Hoop in Dutch), Cape Town, South Africa
Galle Fort (Galle) -- a UNESCO World Heritage Site in Sri Lanka
Malacca City (Malacca) -- a UNESCO World Heritage Site in Malaysia
A drawing of the 1740 Batavia massacre
|
what did king john do to kenilworth castle | Kenilworth castle - wikipedia
Kenilworth Castle is located in the town of the same name in Warwickshire, England. Constructed from Norman through to Tudor times, the castle has been described by architectural historian Anthony Emery as "the finest surviving example of a semi-royal palace of the later middle ages, significant for its scale, form and quality of workmanship ''. Kenilworth has also played an important historical role. The castle was the subject of the six - month - long Siege of Kenilworth in 1266, believed to be the longest siege in English history, and formed a base for Lancastrian operations in the Wars of the Roses. Kenilworth was also the scene of the removal of Edward II from the English throne, the French insult to Henry V in 1414 (said by John Strecche to have encouraged the Agincourt campaign), and the Earl of Leicester 's lavish reception of Elizabeth I in 1575.
The castle was built over several centuries. Founded in the 1120s around a powerful Norman great tower, the castle was significantly enlarged by King John at the beginning of the 13th century. Huge water defences were created by damming the local streams, and the resulting fortifications proved able to withstand assaults by land and water in 1266. John of Gaunt spent lavishly in the late 14th century, turning the medieval castle into a palace fortress designed in the latest perpendicular style. The Earl of Leicester then expanded the castle once again, constructing new Tudor buildings and exploiting the medieval heritage of Kenilworth to produce a fashionable Renaissance palace.
Kenilworth was partly destroyed by Parliamentary forces in 1649 to prevent it being used as a military stronghold. Ruined, only two of its buildings remain habitable today. The castle became a tourist destination from the 18th century onwards, becoming famous in the Victorian period following the publishing of Sir Walter Scott 's novel Kenilworth in 1821. English Heritage has managed the castle since 1984. The castle is classed as a Grade I listed building and as a Scheduled Monument, and is open to the public.
Although now ruined as a result of the slighting, or deliberate partial destruction, of the castle after the English Civil War, Kenilworth illustrates five centuries of English military and civil architecture. The castle is built almost entirely from local new red sandstone.
To the south - east of the main castle lie the Brays, a corruption of the French word braie, meaning an external fortification with palisades. Only earthworks and fragments of masonry remain of what was an extensive 13th - century barbican structure including a stone wall and an external gatehouse guarding the main approach to the castle. The area now forms part of the car park for the castle. Beyond the Brays are the ruins of the Gallery Tower, a second gatehouse remodelled in the 15th century. The Gallery Tower originally guarded the 152 - metre (499 - foot) long, narrow walled - causeway that still runs from the Brays to the main castle. This causeway was called the Tiltyard, as it was used for tilting, or jousting, in medieval times. The Tiltyard causeway acted both as a dam and as part of the barbican defences. To the east of the Tiltyard is a lower area of marshy ground, originally flooded and called the Lower Pool, and to the west an area once called the Great Mere. The Great Mere is now drained and forms a meadow, but would originally have been a large lake covering around 100 acres (40 ha), dammed by the Tiltyard causeway.
The outer bailey of Kenilworth Castle is usually entered through Mortimer 's Tower, today a modest ruin but originally a Norman stone gatehouse, extended in the late 13th and 16th centuries. The outer bailey wall, long and relatively low, was mainly built by King John; it has numerous buttresses but only a few towers, being designed to be primarily defended by the water system of the Great Mere and Lower Pool. The north side of the outer bailey wall was almost entirely destroyed during the slighting. Moving clockwise around the outer bailey from Mortimer 's Tower, the defences include a west - facing watergate, which would originally have led onto the Great Mere; the King 's gate, a late 17th - century agricultural addition; the Swan Tower, a late 13th - century tower with 16th century additions named after the swans that lived on the Great Mere; the early 13th - century Lunn 's Tower; and the 14th - century Water Tower, so named because it overlooked the Lower Pool.
Kenilworth 's inner court consists of a number of buildings set against a bailey wall, originally of Norman origin, exploiting the defensive value of a natural knoll that rises up steeply from the surrounding area. The 12th - century great tower occupies the knoll itself and forms the north - east corner of the bailey. Ruined during the slighting, the great tower is notable for its huge corner turrets, essentially hugely exaggerated Norman pilaster buttresses. Its walls are 5 metres (16 feet) thick, and the towers 30 metres (98 feet) high. Although Kenilworth 's great tower is larger, it is similar to that of Brandon Castle near Coventry; both were built by the local Clinton family in the 1120s. The tower can be termed a hall keep, as it is longer than it is wide. The lowest floor is filled with earth, possibly taken from the earlier motte that may have been present on the site, and is further protected by a sloping stone plinth around the base. The tall Tudor windows at the top of the tower date from the 1570s.
Much of the northern part of the inner bailey was built by the 14th - century noble John of Gaunt between 1372 and 1380. This part of the castle is considered by historian Anthony Emery to be "the finest surviving example of a semi-royal palace of the later middle ages, significant for its scale, form and quality of workmanship ''. Gaunt 's architectural style emphasised rectangular design, the separation of ground floor service areas from the upper stories and a contrast of austere exteriors with lavish interiors, especially on the 1st floor of the inner bailey buildings. The result is considered "an early example of the perpendicular style ''.
The most significant of Gaunt 's buildings is his great hall. The great hall replaced an earlier sequence of great halls on the same site, and was heavily influenced by Edward III 's design at Windsor Castle. The hall consists of a "ceremonial sequence of rooms '', approached by a particularly grand staircase, now lost. From the great hall, visitors could look out to admire the Great Mere or the inner court through huge windows. The undercroft to the hall, used by the service staff, was lit with slits, similar to the design at the contemporary Wingfield Manor. The roof was built in 1376 by William Wintringham, producing the widest hall unsupported by pillars in England at the time. There is some debate amongst historians as to whether this roof was a hammerbeam design, a collar and truss - brace design, or a combination of the two.
There was an early attempt at symmetry in the external appearance of the great hall -- the Strong and Saintlowe Towers architecturally act as near symmetrical "wings '' to the hall itself, while the plinth of the hall is designed to mirror that of the great tower opposite it. An unusual multi-sided tower, the Oriel, provides a counterpoint to the main doorway of the hall and was intended for private entertainment by Gaunt away from the main festivities on major occasions. The Oriel tower is based on Edward III 's "La Rose '' Tower at Windsor, which had a similar function. Gaunt 's Strong Tower is so named for being entirely vaulted in stone across all its floors, an unusual and robust design. The great hall influenced the design of Bolton and Raby castles, while the hall 's roof design became famous and was copied at Arundel Castle and Westminster Hall.
Other parts of the castle built by Gaunt include the southern range of state apartments, Gaunt 's Tower and the main kitchen. Although now extensively damaged, these share the same style as the great hall; this would have unified the appearance of Gaunt 's palace in a distinct break from the more eclectic medieval tradition of design. Gaunt 's kitchen replaced the original 12th - century kitchens, built alongside the great tower in a similar fashion to the arrangement at Conisbrough. Gaunt 's new kitchen was twice the size of that in equivalent castles, measuring nineteen by eight metres (62 by 26 feet).
The remainder of the inner court was built by Robert Dudley, the Earl of Leicester, in the 1570s. He built a tower now known as Leicester 's building on the south edge of the court as a guest wing, extending out beyond the inner bailey wall for extra space. Leicester 's building was four floors high and built in a fashionable contemporary Tudor style with "brittle, thin walls and grids of windows ''. The building was intended to appear well - proportioned alongside the ancient great tower, one of the reasons for its considerable height. Leicester 's building set the style for later Elizabethan country house design, especially in the Midlands, with Hardwick Hall being a classic example. Modern viewing platforms, installed in 2014, provide views from Elizabeth I 's former bedroom.
Leicester also built a loggia, or open gallery, beside the great keep to lead to the new formal gardens. The loggia was designed to elegantly frame the view as the observer slowly admired the gardens, and was a new design in the 16th century, only recently imported from Italy.
The rest of Kenilworth Castle 's interior is divided into three areas: the base court, stretching between Mortimer 's Tower and Leicester 's gatehouse; the left - hand court, stretching south - west around the outside of the inner court; and the right - hand court, to the north - west of the inner court. The line of trees that cuts across the base court today is a relatively modern mid-19th century addition, and originally this court would have been more open, save for the collegiate chapel that once stood in front of the stables. Destroyed in 1524, only the chapel 's foundations remain. Each of the courts was designed to be used for different purposes: the base court was considered a relatively public area, with the left and right courts used for more private occasions.
Leicester 's gatehouse was built on the north side of the base court, replacing an older gatehouse to provide a fashionable entrance from the direction of Coventry. The external design, with its symbolic towers and, originally, battlements, echoes a style popular a century or more before, closely resembling Kirby Muxloe and the Beauchamp gatehouse at Warwick Castle. By contrast the interior, with its contemporary wood panelling, is in the same, highly contemporary Elizabethan fashion as Leicester 's building in the inner court. Leicester 's gatehouse is one of the few parts of the castle to remain intact. The stables built by John Dudley in the 1550s also survive and lie along the east side of the base court. The stable block is a large building built mostly in stone, but with a timber - framed, decoratively panelled first storey designed in an anachronistic, vernacular style. Both buildings could easily have been seen from Leicester 's building and were therefore on permanent display to visitors. Leicester 's intent may have been to create a deliberately anachronistic view across the base court, echoing the older ideals of chivalry and romance alongside the more modern aspects of the redesign of the castle.
Much of the right - hand court of Kenilworth Castle is occupied by the castle garden. For most of Kenilworth 's history the role of the castle garden, used for entertainment, would have been very distinct from that of the surrounding chase, used primarily for hunting. From the 16th century onwards there were elaborate knot gardens in the base court. The gardens today are designed to reproduce as closely as possible their original appearance in 1575, based primarily on the historical record, with a steep terrace along the south side of the gardens and steps leading down to eight square knot gardens. In Elizabethan gardens "the plants were almost incidental '', and instead the design focus was on sculptures, including four wooden obelisks painted to resemble porphyry and a marble fountain with a statue of two Greek mythological figures. A timber aviary contains a range of birds. The original garden was heavily influenced by the Italian Renaissance garden at Villa d'Este.
To the north - west of the castle are earthworks marking the spot of the "Pleasance '', created in 1414 by Henry V. The Pleasance was a banqueting house built in the style of a miniature castle. Surrounded by two diamond - shaped moats with its own dock, the Pleasance was positioned on the far side of the Great Mere and had to be reached by boat. It resembled Richard II 's retreat at Sheen from the 1380s, and was later copied by his younger brother, Duke Humphrey of Gloucester, at Greenwich in the 1430s, as well as by his son, John of Lancaster, at Fulbrook. The Pleasance was eventually dismantled by Henry VIII and partially moved into the left - hand court inside the castle itself, possibly to add to the anachronistic appearance. These elements were finally destroyed in the 1650s.
Kenilworth Castle was founded in the early 1120s by Geoffrey de Clinton, Lord Chamberlain and treasurer to Henry I. The castle 's original form is uncertain. It has been suggested that it consisted of a motte, an earthen mound surmounted by wooden buildings; however, the stone great tower may have been part of the original design. Clinton was a local rival to Roger de Beaumont, the Earl of Warwick and owner of the neighbouring Warwick Castle, and the king made Clinton the sheriff in Warwickshire to act as a counterbalance to Beaumont 's power. Clinton had begun to lose the king 's favour after 1130, and when he died in 1133 his son, also called Geoffrey, was only a minor. Geoffrey and his uncle William de Clinton were forced to come to terms with Beaumont; this set - back, and the difficult years of the Anarchy (1135 -- 54), delayed any further development of the castle.
Henry II succeeded to the throne at the end of the Anarchy but during the revolt of 1173 -- 74 he faced a significant uprising led by his son, Henry, backed by the French crown. The conflict spread across England and Kenilworth was garrisoned by Henry II 's forces; Geoffrey II de Clinton died in this period and the castle was taken fully into royal possession, a sign of its military importance. The Clintons themselves moved on to Buckinghamshire. By this point Kenilworth Castle consisted of the great keep, the inner bailey wall, a basic causeway across the smaller lake that preceded the creation of the Great Mere, and the local chase for hunting.
Henry 's successor, Richard I, paid relatively little attention to Kenilworth, but under King John significant building resumed at the castle. When John was excommunicated in 1208, he embarked on a programme of rebuilding and enhancing several major royal castles, including Corfe, Odiham, Dover and Scarborough as well as Kenilworth. John spent £ 1,115 on Kenilworth Castle between 1210 and 1216, building the outer bailey wall in stone and improving the other defences, including creating Mortimer 's and Lunn 's Towers. He also significantly improved the castle 's water defences by damming the Finham and Inchford Brooks, creating the Great Mere. The result was to turn Kenilworth into one of the largest English castles of the time, with one of the largest artificial lake defences in England. John was forced to cede the castle to the baronial opposition as part of the guarantee of the Magna Carta, before it reverted to royal control early in the reign of his son, Henry III.
Henry III granted Kenilworth in 1244 to Simon de Montfort, Earl of Leicester, who later became a leader in the Second Barons ' War (1263 -- 67) against the king, using Kenilworth as the centre of his operations. Initially the conflict went badly for King Henry, and after the Battle of Lewes in 1264 he was forced to sign the Mise of Lewes, under which his son, Prince Edward, was given over to the rebels as a hostage. Edward was taken back to Kenilworth, where chroniclers considered he was held in unduly harsh conditions. Released in early 1265, Edward then defeated Montfort at the Battle of Evesham; the surviving rebels under the leadership of Henry de Hastings, Montfort 's constable at Kenilworth, regrouped at the castle the following spring. Edward 's forces proceeded to lay siege to the rebels.
The Siege of Kenilworth Castle in 1266 was "probably the longest in English history '' according to historian Norman Pounds, and at the time was also the largest siege to have occurred in England in terms of the number of soldiers involved. Simon de Montfort 's son, Simon VI de Montfort, promised in January 1266 to hand over the castle to the king. Five months later this had not happened, and Henry III laid siege to Kenilworth Castle on 21 June. Protected by the extensive water defences, the castle withstood the attack, despite Edward targeting the weaker north wall, employing huge siege towers and even attempting a night attack using barges brought from Chester. The distance between the royal trebuchets and the walls severely reduced their effectiveness, and heavier trebuchets had to be sent for from London. Papal intervention through the legate Ottobuono finally resulted in the compromise of the Dictum of Kenilworth, under which the rebels were allowed to re-purchase their confiscated lands provided they surrendered the castle; the siege ended on 14 December 1266. The water defences at Kenilworth influenced the construction of later castles in Wales, most notably Caerphilly.
Henry granted Kenilworth to his son, Edmund Crouchback, in 1267. Edmund held many tournaments at Kenilworth in the late 13th century, including a huge event in 1279, presided over by the royal favourite Roger de Mortimer, in which a hundred knights competed for three days in the tiltyard in an event called "the Round Table '', in imitation of the popular Arthurian legends.
Edmund Crouchback passed on the castle to his eldest son, Thomas, Earl of Lancaster, in 1298. Lancaster married Alice de Lacy, which made him the richest nobleman in England. Kenilworth became the primary castle of the Lancaster estates, replacing Bolingbroke, and acted as both a social and a financial centre for Thomas. Thomas built the first great hall at the castle from 1314 to 1317 and constructed the Water Tower along the outer bailey, as well as increasing the size of the chase. Lancaster, with support from many of the other English barons, found himself in increasing opposition to Edward II. War broke out in 1322, and Lancaster was captured at the Battle of Boroughbridge and executed. His estates, including Kenilworth, were confiscated by the crown. Edward and his wife, Isabella of France, spent Christmas 1323 at Kenilworth, amidst major celebrations.
In 1326, however, Edward was deposed by an alliance of Isabella and her lover, Roger Mortimer. Edward was eventually captured by Isabella 's forces and the custody of the king was assigned to Henry, Earl of Lancaster, who had backed Isabella 's invasion. Henry, reoccupying most of the Lancaster lands, was made constable of Kenilworth and Edward was transported there in late 1326; Henry 's legal title to the castle was finally confirmed the following year. Kenilworth was chosen for this purpose by Isabella probably both because it was a major fortification, and also because of the symbolism of its former owners ' links to popular ideals of freedom and good government. Royal writs were issued in Edward 's name by Isabella from Kenilworth until the next year. A deputation of leading barons led by Bishop Orleton was then sent to Kenilworth to first persuade Edward to resign and, when that failed, to inform him that he had been deposed as king. Edward formally resigned as king in the great hall of the castle on 21 January 1327. As the months went by, however, it became clear that Kenilworth was proving a less than ideal location to imprison Edward. The castle was in a prominent part of the Midlands, in an area that held several nobles who still supported Edward and were believed to be trying to rescue him. Henry 's loyalty was also coming under question. In due course, Isabella and Mortimer had Edward moved by night to Berkeley Castle, where he died shortly afterwards. Isabella continued to use Kenilworth as a royal castle until her fall from power in 1330.
Henry of Grosmont, the Duke of Lancaster, inherited the castle from his father in 1345 and remodelled the great hall with a grander interior and roof. On his death Blanche of Lancaster inherited the castle. Blanche married John of Gaunt, the third son of Edward III; their union, and combined resources, made John the second richest man in England next to the king himself. After Blanche 's death, John married Constance, who had a claim to the kingdom of Castile, and John styled himself the king of Castile and León. Kenilworth was one of the most important of his thirty or more castles in England. John began building at Kenilworth between 1373 and 1380 in a style designed to reinforce his royal claims in Iberia. John constructed a grander great hall, the Strong Tower, Saintlowe Tower, the state apartments and the new kitchen complex. When not campaigning abroad, John spent much of his time at Kenilworth and Leicester, and used Kenilworth even more after 1395 when his health began to decline. In his final years, John made extensive repairs to the whole of the castle complex.
Many castles, especially royal castles, were left to decay in the 15th century; Kenilworth, however, continued to be used as a centre of choice, forming a late medieval "palace fortress ''. Henry IV, John of Gaunt 's son, returned Kenilworth to royal ownership when he took the throne in 1399 and made extensive use of the castle. Henry V also used Kenilworth extensively, but preferred to stay in the Pleasance, the mock castle he had built on the other side of the Great Mere. According to the contemporary chronicler John Strecche, who lived at the neighbouring Kenilworth Priory, the French openly mocked Henry in 1414 by sending him a gift of tennis balls at Kenilworth. The French aim was to imply a lack of martial prowess; according to Strecche, the gift spurred Henry 's decision to fight the Agincourt campaign. The account was used by Shakespeare as the basis for a scene in his play Henry V.
English castles, including Kenilworth, did not play a decisive role during the Wars of the Roses (1455 -- 85), which were fought primarily in the form of pitched battles between the rival factions of the Lancastrians and the Yorkists. With the mental collapse of King Henry VI, Queen Margaret used the Duchy of Lancaster lands in the Midlands, including Kenilworth, as one of her key bases of military support. Margaret removed Henry from London in 1456 for his own safety and until 1461, Henry 's court divided almost all its time among Kenilworth, Leicester and Tutbury Castle for the purposes of protection. Kenilworth remained an important Lancastrian stronghold for the rest of the war, often acting as a military balance to the nearby castle of Warwick. With the victory of Henry VII at Bosworth, Kenilworth again received royal attention; Henry visited frequently and had a tennis court constructed at the castle for his use. His son, Henry VIII, decided that Kenilworth should be maintained as a royal castle. He abandoned the Pleasance and had part of the timber construction moved into the base court of the castle.
The castle remained in royal hands until it was given to John Dudley in 1553. Dudley came to prominence under Henry VIII and became the leading political figure under Edward VI. Dudley was a patron of John Shute, an early exponent of classical architecture in England, and began the process of modernising Kenilworth. Before his execution in 1553 by Queen Mary for attempting to place Lady Jane Grey on the throne, Dudley had built the new stable block and widened the tiltyard to its current form.
Kenilworth was restored to Dudley 's son Robert, Earl of Leicester, in 1563, four years after the accession of Elizabeth I to the throne. Leicester 's lands in Warwickshire were worth between £ 500 and £ 700, but Leicester 's power and wealth, including monopolies and grants of new lands, depended ultimately on his remaining a favourite of the queen.
Leicester continued his father 's modernisation of Kenilworth, attempting to ensure that Kenilworth would attract the interest of Elizabeth during her regular tours around the country. Elizabeth visited in 1566 and 1568, by which time Leicester had commissioned the royal architect Henry Hawthorne to produce plans for a dramatic, classical extension of the south side of the inner court. In the event this proved unachievable and instead Leicester employed William Spicer to rebuild and extend the castle so as to provide modern accommodation for the royal court and symbolically boost his own claims to noble heritage. After negotiation with his tenants, Leicester also increased the size of the chase once again. The result has been termed an English "Renaissance palace ''.
Elizabeth viewed the partially finished results at Kenilworth in 1572, but the complete effect of Leicester 's work was only apparent during the queen 's last visit in 1575. Leicester was keen to impress Elizabeth in a final attempt to convince her to marry him, and no expense was spared. Elizabeth brought an entourage of thirty - one barons and four hundred staff for the royal visit that lasted an exceptional nineteen days; twenty horsemen a day arrived at the castle to communicate royal messages. Leicester entertained the Queen and much of the neighbouring region with pageants, fireworks, bear baiting, mystery plays, hunting and lavish banquets. The cost was reputed to have amounted to many thousand pounds, almost bankrupting Leicester, though it probably did not exceed £ 1,700 in reality. The event was considered a huge success and formed the longest stay at such a property during any of Elizabeth 's tours, yet the queen did not decide to marry Leicester.
Kenilworth Castle was valued at £ 10,401 in 1588, when Leicester died without legitimate issue and heavily in debt. In accordance with his will, the castle passed first to his brother Ambrose, Earl of Warwick, and after the latter 's death in 1590, to his illegitimate son, Sir Robert Dudley.
Sir Robert Dudley, having tried and failed to establish his legitimacy in front of the Court of the Star Chamber, went to Italy in 1605. In the same year Sir Thomas Chaloner, governor (and from 1610 chamberlain) to James I 's eldest son Prince Henry, was commissioned to oversee repairs to the castle and its grounds, including the planting of gardens, the restoration of fish - ponds and improvement to the game park. During 1611 -- 12 Dudley arranged to sell Kenilworth Castle to Henry, by then Prince of Wales. Henry died before completing the full purchase, which was finalised by his brother, Charles, who bought out the interest of Dudley 's abandoned wife, Alice Dudley. When Charles became king, he gave the castle to his wife, Henrietta Maria; he bestowed the stewardship on Robert Carey, Earl of Monmouth, and after his death gave it to Carey 's sons Henry and Thomas. Kenilworth remained a popular location for both King James I and his son Charles, and accordingly was well maintained. The most famous royal visit occurred in 1624, when Ben Jonson 's The Masque of Owls at Kenilworth was performed for Charles.
The First English Civil War broke out in 1642. During its early campaigns, Kenilworth formed a useful counterbalance to the Parliamentary stronghold of Warwick. Kenilworth was used by Charles on his advance to Edgehill in October 1642 as a base for raids on Parliamentary strongholds in the Midlands. After the battle, however, the royalist garrison was withdrawn on the approach of Lord Brooke, and the castle was then garrisoned by parliamentary forces. In April 1643 the new governor of the castle, Hastings Ingram, was arrested as a suspected Royalist double agent. By January 1645 the Parliamentary forces in Coventry had strengthened their hold on the castle, and attempts by Royalist forces to dislodge them from Warwickshire failed. Security concerns continued after the end of the First Civil War in 1646, and in 1649 Parliament ordered the slighting of Kenilworth. One wall of the great tower, various parts of the outer bailey and the battlements were destroyed, but not before the building was surveyed by the antiquarian William Dugdale, who published his results in 1656.
Colonel Joseph Hawkesworth, responsible for the implementation of the slighting, acquired the estate for himself and converted Leicester 's gatehouse into a house; part of the base court was turned into a farm, and many of the remaining buildings were stripped for their materials. In 1660 Charles II was restored to the throne, and Hawkesworth was promptly evicted from Kenilworth. The Queen Mother, Henrietta Maria, briefly regained the castle, with the Earls of Monmouth acting as stewards once again, but after her death King Charles II granted the castle to Sir Edward Hyde, whom he later created Baron Hyde of Hindon and Earl of Clarendon. The ruined castle continued to be used as a farm, with the gatehouse as the principal dwelling; the King 's Gate was added to the outer bailey wall during this period for the use of farm workers.
Kenilworth remained a ruin during the 18th and 19th centuries, still used as a farm but increasingly also popular as a tourist attraction. The first guidebook to the castle, A Concise history and description of Kenilworth Castle, was printed in 1777 with many later editions following in the coming decades. The castle 's cultural prominence increased after Sir Walter Scott wrote Kenilworth in 1821 describing the royal visit of Queen Elizabeth. Very loosely based on the events of 1575, Scott 's story reinvented aspects of the castle and its history to tell the story of "the pathetic, beautiful, undisciplined heroine Amy Robsart and the steely Elizabeth I ''. Although considered today as a less successful literary novel than some of his other historical works, it popularised Kenilworth Castle in the Victorian imagination as a romantic Elizabethan location. Kenilworth spawned "numerous stage adaptations and burlesques, at least eleven operas, popular redactions, and even a scene in a set of dioramas for home display '', including Sir Arthur Sullivan 's 1865 cantata The Masque at Kenilworth. J.M.W. Turner painted several watercolours of the castle.
The number of visitors increased, including Queen Victoria and Charles Dickens. Work was undertaken during the 19th century to protect the stonework from further decline, with particular efforts to remove ivy from the castle in the 1860s.
The castle remained the property of the Clarendons until 1937, when Lord Clarendon found the maintenance of the castle too expensive and sold Kenilworth to the industrialist Sir John Siddeley. Siddeley, whose tax accounting in the 1930s had been at least questionable, was keen to improve his public image and gave over the running of the castle, complete with a charitable donation, to the Commissioner of Works. In 1958 his son gave the castle itself to the town of Kenilworth and English Heritage has managed the property since 1984. The castle is classed as a Grade I listed building and as a Scheduled Monument, and is open to the public.
Between 2005 and 2009 English Heritage attempted to restore Kenilworth 's garden more closely to its Elizabethan form, using as a basis the description in the Langham letter and details from recent archaeological investigations. The reconstruction cost more than £ 2 million and was criticised by some archaeologists as being a "matter of simulation as much as reconstruction '', due to the limited amount of factual information on the nature of the original gardens. In 2008 plans were put forward to re-create and flood the original Great Mere around the castle. As well as re-creating the look of the castle, it was hoped that a new mere would form part of the ongoing flood alleviation plan for the area and that the lake could be used for boating and other waterside recreations.
|
what is the height of the tallest waterslide at wild rides water park | World Waterpark - Wikipedia
Coordinates: 53 ° 31 ′ 19 '' N 113 ° 37 ′ 33 '' W / 53.52194 ° N 113.62583 ° W / 53.52194; - 113.62583
World Waterpark is a water park located at West Edmonton Mall in Edmonton, Alberta, Canada, the world 's largest shopping and entertainment complex as well as the world 's largest tourist attraction. Opened to the public in 1986, it is the world 's largest indoor water park. It has a maximum capacity of 40,000 guests, an average air temperature of 31 ° C (88 ° F), and also contains the world 's largest indoor wave pool, with a capacity of 50.3 million litres.
The highest slides in the park are Twister and Cyclone, which are both 83 feet (25 m) high.
World Waterpark is the last waterslide park in Alberta, after the closures of Wild Waters Waterslides in 2012, an unnamed waterpark near Chestermere Lake (closure date unknown), one in Calgary, and the Wild Rapids Waterslide Park in 2016.
This wave pool has four active wave bays, each with two panels operated by a 1,500 horsepower (1,100 kW) hydraulic system, for eight active panels in total. For many years, the four panels in the two outer wave bays have been disabled, apparently because the waves were far too intense when all 12 panels were operating, as they were in the 1980s, and guests were being thrown into each other, resulting in injuries.
Waves are generated (in 10 minutes on, 5 minutes off sessions) of approximately 5 to 6 feet, utilizing only the 8 active wave panels. It is arguably the most popular attraction in the park, as many swimmers (most with yellow inner tubes) can be found bobbing in the water. The start of every session is marked with a loud air horn blast, warning swimmers to be ready for a wave to flip them over. Every now and then, the large crowd of people in the pool will jokingly scream after hearing the air horn, a common behavior among frequent users of the wavepool.
Most evenings, after regular park business hours, the Blue Thunder wave pool is used by clubs for surfing, kayaking, and stand - up paddle boarding. For these activities, the waves are often programmed for increased intensity and continuous operation.
Height / weight requirements -- minimum: 122 cm (48 '') tall; maximum: 136 kg (300 lbs). It replaced the left chute of the Howler.
(former name: White Lightning)
World Waterpark also has two hot tubs: one double and one single.
Tubes and PFDs (lifejackets) can be rented at Sharky 's Supply Shack.
Sky Screamer and Nessie 's Revenge - side view
Blue Bullet prior to renovations
L-R Thunderbolt, Nessie 's Revenge, Sky Screamer
Blue Thunder Wave Pool - 6FT Waves
Sky Screamer
|
who played howards mother on big bang theory | Carol Ann Susi - wikipedia
Carol Ann Susi (February 2, 1952 -- November 11, 2014) was an American actress. She was known for providing the voice of recurring unseen character Mrs. Wolowitz, mother of Howard Wolowitz, on the television series The Big Bang Theory.
Susi made her first screen appearance in Kolchak: The Night Stalker. Other television and film credits included: McMillan & Wife, Coyote Ugly, Just Go with It, The Big Bang Theory, Becker, Grey 's Anatomy, That ' 70s Show, Out of Practice, Cats & Dogs, Just Shoot Me, Married... with Children, Night Court, The King of Queens, Death Becomes Her, Seinfeld, The Secret of My Success, My Blue Heaven, and Sabrina, the Teenage Witch. She also had extensive experience in live theatre and voiced a character on the video game installment of CSI: NY.
Carol Ann Susi was born in Brooklyn, of Italian descent. She studied acting at HB Studio in New York City before moving to Los Angeles in the 1970s.
Susi died of cancer on November 11, 2014, in Los Angeles, California, at age 62.
|
homologous features can be associated with divergent evolution | Convergent evolution - wikipedia
Convergent evolution is the independent evolution of similar features in species of different lineages. Convergent evolution creates analogous structures that have similar form or function but were not present in the last common ancestor of those groups. The cladistic term for the same phenomenon is homoplasy. The recurrent evolution of flight is a classic example, as flying insects, birds, pterosaurs, and bats have independently evolved the useful capacity of flight. Functionally similar features that have arisen through convergent evolution are analogous, whereas homologous structures or traits have a common origin but can have dissimilar functions. Bird, bat, and pterosaur wings are analogous structures, but their forelimbs are homologous, sharing an ancestral state despite serving different functions.
The opposite of convergence is divergent evolution, where related species evolve different traits. Convergent evolution is similar to but different from parallel evolution. Parallel evolution occurs when two independent but similar species evolve in the same direction and thus independently acquire similar characteristics; for instance, gliding frogs have evolved in parallel from multiple types of tree frog.
Many instances of convergent evolution are known in plants, including the repeated development of C4 photosynthesis, seed dispersal by fleshy fruits adapted to be eaten by animals, and carnivory.
In morphology, analogous traits arise when different species live in similar ways and / or a similar environment, and so face the same environmental factors. When occupying similar ecological niches (that is, a distinctive way of life) similar problems can lead to similar solutions. The British anatomist Richard Owen was the first to identify the fundamental difference between analogies and homologies.
In biochemistry, physical and chemical constraints on mechanisms have caused some active site arrangements such as the catalytic triad to evolve independently in separate enzyme superfamilies.
In his 1989 book Wonderful Life, Stephen Jay Gould argued that if one could "rewind the tape of life (and) the same conditions were encountered again, evolution could take a very different course ''. Simon Conway Morris disputes this conclusion, arguing that convergence is a dominant force in evolution, and given that the same environmental and physical constraints are at work, life will inevitably evolve toward an "optimum '' body plan, and at some point, evolution is bound to stumble upon intelligence, a trait presently identified with at least primates, corvids, and cetaceans.
In cladistics, a homoplasy is a trait shared by two or more taxa for any reason other than that they share a common ancestry. Taxa which do share ancestry are part of the same clade; cladistics seeks to arrange them according to their degree of relatedness to describe their phylogeny. Homoplastic traits caused by convergence are therefore, from the point of view of cladistics, confounding factors which could lead to an incorrect analysis.
In some cases, it is difficult to tell whether a trait has been lost and then re-evolved convergently, or whether a gene has simply been switched off and then re-enabled later. Such a re-emerged trait is called an atavism. From a mathematical standpoint, an unused gene (selectively neutral) has a steadily decreasing probability of retaining potential functionality over time. The time scale of this process varies greatly in different phylogenies; in mammals and birds, there is a reasonable probability of remaining in the genome in a potentially functional state for around 6 million years.
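One illustrative way to make that decreasing probability concrete (a modeling assumption for exposition, not a claim made by the studies cited above) is to treat disabling mutations as hitting a selectively neutral gene at a roughly constant rate λ per unit time, so that the probability the gene remains potentially functional after time t is P (t) = exp(− λt). Under such a model, retention falls off exponentially, and the roughly 6 - million - year window quoted for mammals and birds would correspond to a hazard rate λ on the order of one disabling mutation per few million years.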
When two species are similar in a particular character, evolution is defined as parallel if the ancestors were also similar, and convergent if they were not. Some scientists have argued that there is a continuum between parallel and convergent evolution, while others maintain that despite some overlap, there are still important distinctions between the two.
When the ancestral forms are unspecified or unknown, or the range of traits considered is not clearly specified, the distinction between parallel and convergent evolution becomes more subjective. For instance, the striking example of similar placental and marsupial forms is described by Richard Dawkins in The Blind Watchmaker as a case of convergent evolution, because mammals on each continent had a long evolutionary history prior to the extinction of the dinosaurs under which to accumulate relevant differences.
The enzymology of proteases provides some of the clearest examples of convergent evolution. These examples reflect the intrinsic chemical constraints on enzymes, leading evolution to converge on equivalent solutions independently and repeatedly.
Serine and cysteine proteases use different amino acid functional groups (alcohol or thiol) as a nucleophile. In order to activate that nucleophile, they orient an acidic and a basic residue in a catalytic triad. The chemical and physical constraints on enzyme catalysis have caused identical triad arrangements to evolve independently more than 20 times in different enzyme superfamilies.
Threonine proteases use the amino acid threonine as their catalytic nucleophile. Unlike cysteine and serine, threonine is a secondary alcohol (i.e. has a methyl group). The methyl group of threonine greatly restricts the possible orientations of triad and substrate, as the methyl clashes with either the enzyme backbone or the histidine base. Consequently, most threonine proteases use an N - terminal threonine in order to avoid such steric clashes. Several evolutionarily independent enzyme superfamilies with different protein folds use the N - terminal residue as a nucleophile. This commonality of active site but difference of protein fold indicates that the active site evolved convergently in those families.
Convergence occurs at the level of DNA and the amino acid sequences produced by translating structural genes into proteins. Studies have found convergence in amino acid sequences in echolocating bats and the dolphin; among marine mammals; between giant and red pandas; and between the thylacine and canids. Convergence has also been detected in a type of non-coding DNA, cis - regulatory elements, such as in their rates of evolution; this could indicate either positive selection or relaxed purifying selection.
Swimming animals including fish such as herrings, marine mammals such as dolphins, and ichthyosaurs (of the Mesozoic) all converged on the same streamlined shape. The fusiform body shape (a tube tapered at both ends) adopted by many aquatic animals is an adaptation to enable them to travel at high speed in a high drag environment. Similar body shapes are found in the earless seals and the eared seals: they still have four legs, but these are strongly modified for swimming.
The marsupial fauna of Australia and the placental mammals of the Old World have several strikingly similar forms, developed in two clades, isolated from each other. The body and especially the skull shape of the thylacine (Tasmanian wolf) converged with those of Canidae such as the red fox, Vulpes vulpes.
As a sensory adaptation, echolocation has evolved separately in cetaceans (dolphins and whales) and bats, but from the same genetic mutations.
One of the best - known examples of convergent evolution is the camera eye of cephalopods (such as squid and octopus), vertebrates (including mammals) and cnidaria (such as jellyfish). Their last common ancestor had at most a simple photoreceptive spot, but a range of processes led to the progressive refinement of camera eyes -- with one sharp difference: the cephalopod eye is "wired '' in the opposite direction, with blood and nerve vessels entering from the back of the retina, rather than the front as in vertebrates. This means that cephalopods do not have a blind spot.
Birds and bats have homologous limbs because they are both ultimately derived from terrestrial tetrapods, but their flight mechanisms are only analogous, so their wings are examples of functional convergence. The two groups have powered flight, evolved independently. Their wings differ substantially in construction. The bat wing is a membrane stretched across four extremely elongated fingers and the legs. The airfoil of the bird wing is made of feathers, strongly attached to the forearm (the ulna) and the highly fused bones of the wrist and hand (the carpometacarpus), with only tiny remnants of two fingers remaining, each anchoring a single feather. So, while the wings of bats and birds are functionally convergent, they are not anatomically convergent. Birds and bats also share a high concentration of cerebrosides in the skin of their wings. This improves skin flexibility, a trait useful for flying animals; other mammals have a far lower concentration. The extinct pterosaurs independently evolved wings from their fore - and hindlimbs, while insects have wings that evolved separately from different organs.
Flying squirrels and sugar gliders are much alike in their body plans, with gliding wings stretched between their limbs, but flying squirrels are placental mammals while sugar gliders are marsupials, widely separated within the mammal lineage.
Insect mouthparts show many examples of convergent evolution. The mouthparts of different insect groups consist of a set of homologous organs, specialised for the dietary intake of that insect group. Convergent evolution of many groups of insects led from original biting - chewing mouthparts to different, more specialised, derived function types. These include, for example, the proboscis of flower - visiting insects such as bees and flower beetles, or the biting - sucking mouthparts of blood - sucking insects such as fleas and mosquitos.
Opposable thumbs allowing the grasping of objects are most often associated with primates, like humans, monkeys, apes, and lemurs. Opposable thumbs also evolved in pandas, but these are completely different in structure, having six fingers including the thumb, which develops from a wrist bone entirely separately from other fingers.
Convergent evolution in humans includes blue eye colour and light skin colour. When humans migrated out of Africa, they moved to more northern latitudes with less intense sunlight. It was beneficial to them to reduce their skin pigmentation. It appears certain that there was some lightening of skin colour before European and East Asian lineages diverged, as there are some skin - lightening genetic differences that are common to both groups. However, after the lineages diverged and became genetically isolated, the skin of both groups lightened more, and that additional lightening was due to different genetic changes.
Lemurs and humans are both primates. Ancestral primates had brown eyes, as most primates do today. The genetic basis of blue eyes in humans has been studied in detail and much is known about it. It is not the case that one gene locus is responsible, say with brown dominant to blue eye colour. However, a single locus is responsible for about 80 % of the variation. In lemurs, the differences between blue and brown eyes are not completely known, but the same gene locus is not involved.
While convergent evolution is often illustrated with animal examples, it has often occurred in plant evolution. For instance, C4 photosynthesis, one of the three major carbon - fixing biochemical processes, has arisen independently up to 40 times. About 7,600 plant species of angiosperms use C4 carbon fixation, with many monocots including 46 % of grasses such as maize and sugar cane, and dicots including several species in the Chenopodiaceae and the Amaranthaceae.
A good example of convergence in plants is the evolution of edible fruits such as apples. These pomes incorporate (five) carpels and their accessory tissues forming the apple 's core, surrounded by structures from outside the botanical fruit, the receptacle or hypanthium. Other edible fruits include other plant tissues; for example, the fleshy part of a tomato is the walls of the pericarp. This implies convergent evolution under selective pressure, in this case the competition for seed dispersal by animals through consumption of fleshy fruits.
Seed dispersal by ants (myrmecochory) has evolved independently more than 100 times, and is present in more than 11,000 plant species. It is one of the most dramatic examples of convergent evolution in biology.
Carnivory has evolved multiple times independently in plants in widely separated groups. In three species studied, Cephalotus follicularis, Nepenthes alata and Sarracenia purpurea, there has been convergence at the molecular level. Carnivorous plants secrete enzymes into the digestive fluid they produce. By studying phosphatase, glycoside hydrolase, glucanase, RNAse and chitinase enzymes as well as a pathogenesis - related protein and a thaumatin - related protein, the authors found many convergent amino acid substitutions. These changes were not at the enzymes ' catalytic sites, but rather on the exposed surfaces of the proteins, where they might interact with other components of the cell or the digestive fluid. The authors also found that homologous genes in the non-carnivorous plant Arabidopsis thaliana tend to have their expression increased when the plant is stressed, leading the authors to suggest that stress - responsive proteins have often been co-opted in the repeated evolution of carnivory.
Phylogenetic reconstruction and ancestral state reconstruction proceed by assuming that evolution has occurred without convergence. Convergent patterns may, however, appear at higher levels in a phylogenetic reconstruction, and are sometimes explicitly sought by investigators. The methods applied to infer convergent evolution depend on whether pattern - based or process - based convergence is expected. Pattern - based convergence is the broader term, for when two or more lineages independently evolve patterns of similar traits. Process - based convergence is when the convergence is due to similar forces of natural selection.
Earlier methods for measuring convergence incorporate ratios of phenotypic and phylogenetic distance by simulating evolution with a Brownian motion model of trait evolution along a phylogeny. More recent methods also quantify the strength of convergence. One drawback to keep in mind is that these methods can confuse long - term stasis with convergence due to phenotypic similarities. Stasis occurs when there is little evolutionary change among taxa.
Distance - based measures assess the degree of similarity between lineages over time. Frequency - based measures assess the number of lineages that have evolved in a particular trait space.
Methods to infer process - based convergence fit models of selection to a phylogeny and continuous trait data to determine whether the same selective forces have acted upon lineages. This uses the Ornstein - Uhlenbeck (OU) process to test different scenarios of selection. Other methods rely on an a priori specification of where shifts in selection have occurred.
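As a rough illustration of the process - based approach, the sketch below simulates a single continuous trait under the Ornstein - Uhlenbeck model and, by setting the pull parameter to zero, under plain Brownian motion. It is a minimal sketch in Python; all names and parameter values are illustrative assumptions, and real comparative methods fit these models along a phylogeny rather than along a single lineage.

import random

def simulate_ou(x0, theta, mu, sigma, dt, steps):
    # Euler-Maruyama discretisation of dX = theta * (mu - X) * dt + sigma * dW.
    x = x0
    path = [x]
    for _ in range(steps):
        dw = random.gauss(0.0, dt ** 0.5)
        x += theta * (mu - x) * dt + sigma * dw
        path.append(x)
    return path

# theta = 0 recovers Brownian motion (no selective pull), while theta > 0
# draws the trait toward the optimum mu, which is how OU models represent
# a shared regime of natural selection.
brownian = simulate_ou(x0=0.0, theta=0.0, mu=0.0, sigma=1.0, dt=0.01, steps=2000)
selected = simulate_ou(x0=0.0, theta=2.0, mu=5.0, sigma=1.0, dt=0.01, steps=2000)
print(round(brownian[-1], 2), round(selected[-1], 2))

Two lineages simulated with the same optimum mu will tend to end up near similar trait values even from different starting points, which is the kind of signal OU - based tests for process - based convergence look for.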
|
who was the boy in brandy i wanna be down video | I Wanna Be Down - wikipedia
"I Wanna Be Down '' is a song by American recording artist Brandy Norwood. It served as Norwood 's debut single, the first from her self - titled debut album, released in 1994. Written by musicians Keith Crouch and Kipper Jones, with production helmed by the former, it was released on September 6, 1994 by the Atlantic Recording Corporation. The song is a mid-tempo track that features a thunderous beat and light synth riffs. Lyrically, "I Wanna Be Down '' describes a flirt with a boy, who Norwood tries to convince of her charms.
The song 's music video was filmed by Keith Ward and released in October 1994. It features Norwood in her tomboy image, dancing in front of a jeep near a forest, flanked by several dancers. "I Wanna Be Down '' was performed on several television shows and at award ceremonies, such as The Tonight Show with Jay Leno, the 1996 Soul Train Music Awards, and the 2014 BET Hip Hop Awards. It has been performed at almost every one of Norwood 's concerts and tours, and is featured on the compilation album The Best of Brandy (2005).
Revered as one of 90s R&B 's most astounding moments, "I Wanna Be Down '' was released to positive reaction from contemporary music critics. Its impact on the charts was comparatively large for a debut single: while it spent four weeks on top of the US Billboard Hot R&B Singles chart, it also reached number six on the Billboard Hot 100, and the top 20 in Australia and New Zealand. In 1995, a hip hop remix with new vocals from American rappers MC Lyte, Queen Latifah, and Yo - Yo was released.
"I Wanna Be Down '' was written by Keith Crouch and Kipper Jones, while production and arrangement was also handled by the former for Human Rhythm Productions. Darryl Simmons served as executive producer, while mastering was overseen by Brian Gardner. Picked by Norwood 's record company, Atlantic Records, as the leading single from her debut album, Norwood initially disliked the idea of releasing it as her first offering. "' I Wanna Be Down ' was interesting '', she said in a retrospective interview with Complex magazine in 2012. "I did n't really get it at first, but I was young and I did n't really know what worked at radio or what it was. I liked the song, but I just did n't get it being the first thing that people heard from me. '' Upon its chart success she changed her mind on their decision however. "(...) Once it was released and I saw why everyone responded to the title phrase, I understood why! ''
The original clip for "I Wanna Be Down '' was directed by Keith Ward and premiered prior to the single 's official release in September 1994. The video portrays Brandy in her tomboy image, dancing in front of a jeep near a forest, flanked by several dancers. It was her first video shoot, and Norwood called the filming a great experience. "I was so excited about the video '', she said. "I got a chance to work with some great people like Frank Gatson. All my friends were in the video. My brother was in the video (...) He was there and we had this little dance, and that became really popular. That was a fun time. I was so excited because my dream was coming through right before my eyes... at the age of 15 ''.
Upon its release, Atlantic Records head Sylvia Rhone came up with the idea of re-recording the track with a group of rappers. "I Wanna Be Down '' was eventually remixed with new vocals from American rappers MC Lyte, Queen Latifah, and Yo - Yo. "The hip - hop remix meant the world to me '', Norwood stated in 2012. "I 'm fresh out of the box and these superstars are a part of my first single! They are my mentors and I looked up to them. I was a huge Queen Latifah fan. I 'm thinking, ' Oh my God... I ca n't believe this is happening to me. ' I got the chance to vibe with all three of them. They embraced me as a little sister. I was one of the first R&B artists to welcome hip - hop onto an R&B beat. It had never been done before quite like that (...) I knew it was a special record. ''
A music video for the Human Rhythm Hip Hop Remix premiered in February 1995. It was filmed by director Hype Williams whose remix video for Craig Mack 's 1994 song "Flava in Ya Ear '' served as inspiration for the clip. A simple performance video, it features appearances by Lyte, Latifah, and Yo - Yo and was photographed "in glamorous black and white and vivacious color, complete with flashbulbs popping to the beat. '' Norwood 's younger brother Ray J made a cameo appearance in the video. This version eventually earned Norwood her first nomination for a MTV Video Music Award for Best Rap Video at the 1995 ceremony.
These are the formats and track listings of major single - releases of "I Wanna Be Down. ''
|
why did western countries want to establish spheres of influence in china | Sphere of influence - wikipedia
In the field of international relations, a sphere of influence (SOI) is a spatial region or concept division over which a state or organization has a level of cultural, economic, military, or political exclusivity, accommodating to the interests of powers outside the borders of the state that controls it.
While there may be a formal alliance or other treaty obligations between the influenced and influencer, such formal arrangements are not necessary and the influence can often be more of an example of soft power. Similarly, a formal alliance does not necessarily mean that one country lies within another 's sphere of influence. High levels of exclusivity have historically been associated with higher levels of conflict.
In more extreme cases, a country within the "sphere of influence '' of another may become a subsidiary of that state and serve in effect as a satellite state or de facto colony. The system of spheres of influence by which powerful nations intervene in the affairs of others continues to the present. It is often analyzed in terms of superpowers, great powers, and / or middle powers.
For example, during the height of its existence in World War II, the Japanese Empire had quite a large sphere of influence. The Japanese government directly governed events in Korea, Vietnam, Taiwan, and parts of China. The "Greater East Asia Co-Prosperity Sphere '' could thus be quite easily drawn on a map of the Pacific Ocean as a large "bubble '' surrounding the islands of Japan and the Asian and Pacific nations it controlled.
Sometimes portions of a single country can fall into two distinct spheres of influence. In the colonial era the buffer states of Iran and Thailand, lying between the empires of Britain / Russia and Britain / France respectively, were divided between the spheres of influence of the imperial powers. Likewise, after World War II, Germany was divided into four occupation zones, which later consolidated into West Germany and East Germany, the former a member of NATO and the latter a member of the Warsaw Pact.
The term is also used to describe non-political situations, e.g., a shopping mall is said to have a sphere of influence which designates the geographical area where it dominates the retail trade.
Many areas of the world are considered to have inherited culture from a previous sphere of influence, that while perhaps today halted, continues to share the same culture. Examples include the Anglosphere, Arab World, Eurosphere, Francophonie, Françafrique, Germanosphere, Indosphere, Latin Europe / Latin America, Lusophonie, Turkosphere, Chinese cultural sphere, Slavisphere, Hispanophone, Malay World, as well as many others.
According to a secret protocol attached to the Molotov - Ribbentrop pact of 1939 (revealed only after Germany 's defeat in 1945), Northern and Eastern Europe were divided into Nazi and Soviet spheres of influence. In the North, Finland, Estonia, and Latvia were assigned to the Soviet sphere. Poland was to be partitioned in the event of its "political rearrangement '' -- the areas east of the Narev, Vistula, and San Rivers going to the Soviet Union while Germany would occupy the west. Lithuania, adjacent to East Prussia, would be in the German sphere of influence, although a second secret protocol agreed in September 1939 assigned Lithuania to the USSR. Another clause of the treaty stipulated that Bessarabia, then part of Romania, would join the Moldovan ASSR and become the Moldovan SSR under the control of Moscow. The Soviet invasion of Bukovina on 28 June 1940 violated the Molotov - Ribbentrop Pact, as it went beyond the Soviet sphere of influence as agreed with the Axis. The USSR continued to deny the existence of the Pact 's protocols until after the dissolution of the USSR when the Russian government fully acknowledged the existence and authenticity of the secret protocols.
From 1941 and the German attack on the Soviet Union, the Allied Coalition operated on the unwritten assumption that the Western Powers and the Soviet Union had each its own sphere of influence. The presumption of the US - British and Soviet unrestricted rights in their respective spheres started causing difficulties as the Nazi - controlled territory shrank and the allied powers successively liberated other states. The wartime spheres lacked a practical definition and it had never been determined if a dominant allied power was entitled to unilateral decisions only in the area of military activity, or could also force its will regarding political, social and economic future of other states. This overly informal system backfired during the late stages of the war and afterwards, when it turned out that the Soviets and the Western Allies had very different ideas concerning the administration and future development of the liberated regions and of Germany itself.
During the Cold War the Baltic states, Central Europe, some countries in Eastern Europe, Cuba, Laos, Vietnam, North Korea, and, until the Sino - Soviet split, the People 's Republic of China, among other countries at various times, were said to lie under the Soviet sphere of influence. Western Europe, Oceania, Japan, and South Korea, among other places, were often said to lie under the sphere of influence of the United States. However, the level of control exerted in these spheres varied and was not absolute. For instance, France and Great Britain were able to act independently to invade (with Israel) the Suez Canal (they were later forced to withdraw by joint U.S. and Soviet pressure). Later, France was also able to withdraw from the military arm of the North Atlantic Treaty Organisation (NATO). Cuba often took positions that put it at odds with its Soviet ally, including momentary alliances with the People 's Republic of China, economic reorganizations, and providing support for insurgencies in Africa and the Americas without prior approval from the Soviet Union.
With the end of the Cold War, the Eastern Bloc fell apart, effectively ending the Soviet sphere of influence. Then in 1991, the Soviet Union collapsed, replaced by the Russian Federation and several ex-Soviet Republics became independent states.
After the fall of the Soviet Union, the countries of Eastern Europe, the Caucasus, and Central Asia that became independent were often portrayed as part of the Russian Federation 's "sphere of influence ''. According to Ulrich Speck, writing for Carnegie Europe, "After the breakup of the Soviet Union, the West 's focus was on Russia. Western nations implicitly treated the post-Soviet countries (besides the Baltic states) as Russia 's sphere of influence. ''
In 1997, NATO and Russia signed the Founding Act on Mutual Relations, Cooperation and Security, stating the "aim of creating in Europe a common space of security and stability, without dividing lines or spheres of influence limiting the sovereignty of any state. ''
In 2009, Russia asserted that the European Union desires a sphere of influence and that the Eastern Partnership is "an attempt to extend '' it. In March 2009, Sweden 's foreign minister Carl Bildt stated that "The Eastern Partnership is not about spheres of influence. The difference is that these countries themselves opted to join ''.
Following the 2008 Russo - Georgian War, Václav Havel and other former central and eastern European leaders signed an open letter stating that Russia had "violated the core principles of the Helsinki Final Act, the Charter of Paris... - all in the name of defending a sphere of influence on its borders. '' In April 2014, NATO stated that "Contrary to (the Founding Act), Russia now appears to be attempting to recreate a sphere of influence by seizing a part of Ukraine, maintaining large numbers of forces on its borders, and demanding, as Russian Foreign Minister Sergei Lavrov recently stated, that "Ukraine can not be part of any bloc. '' '' Criticising Russia in November 2014, German Chancellor Angela Merkel said that "old thinking about spheres of influence, which runs roughshod over international law '' put the "entire European peace order into question ''. In January 2017, British Prime Minister Theresa May said, "We should not jeopardise the freedoms that President Reagan and Mrs Thatcher brought to Eastern Europe by accepting President Putin 's claim that it is now in his sphere of influence. ''
When talking in corporate terms, the sphere of influence of a business, organization or group can show its power and influence in the decisions of other business / organization / groups. It can be found using many factors, such as the size, the frequency of visits, etc. In most cases, a company described as "bigger '' has a larger sphere of influence.
For example, software company Microsoft has a large sphere of influence in the market for operating systems; any entity wishing to sell a software product must ensure that it is compatible with Microsoft 's products in order to be successful.
As another example, companies wishing to maximize profits must ensure they open their stores in the right locations. This is also true for shopping centers, which, to reap the most profits, must be able to attract customers to their vicinity.
There is no defined scale for measuring a sphere of influence. However, the spheres of influence of two shopping centers can be compared by seeing how far people are prepared to travel to each shopping center, how much time they spend in its vicinity, how often they visit, the order of goods available, etc.
For historical and current examples of significant battles over spheres of influence see:
|
who was the first republican governor of texas since reconstruction and when was he or she elected | List of governors of Texas - wikipedia
The Governor of Texas is the chief executive of the U.S. State of Texas, the presiding officer over the executive branch of the Government of Texas, and the commander - in - chief of the Texas National Guard, the State 's militia. The governor has the power to consider bills passed by the Texas Legislature, signing them into law or vetoing them, and, in bills relating to appropriations, the power of a line - item veto. He may convene the legislature, and grant pardons and reprieves, except in cases of impeachment, and upon the permission of the legislature, in cases of treason. The State provides an official residence, the Governor 's Mansion in Austin. The incumbent, Greg Abbott, is the forty - eighth governor to serve in the office since Texas ' statehood in 1845; two of those governors have been women.
When compared to those of other states, the Governorship of Texas has been described as one of relative weakness. In some respects, it is the Lieutenant Governor of Texas, who presides over the Texas Senate, who possesses greater influence to exercise their prerogatives.
The governor is inaugurated on the third Tuesday of January every four years along with the Lieutenant Governor, and serves a term of four years. Prior to the present laws, in 1845, the state 's first constitution established the office of governor, serving a term of two years, but no more than four years of every six. The 1861 constitution, following secession from the Union, established the first Monday of November following election as the term 's start. Following the end of the American Civil War, the 1866 constitution increased term length to four years, limiting overall service to no more than eight years of every twelve, moving the term 's start to the first Thursday following organization of the legislature, or "as soon thereafter as practicable. '' The constitution of 1869, enacted during Reconstruction, removed term limitations, to this day making Texas one of fourteen states with no limit on gubernatorial terms. The present constitution of 1876 returned terms to two years, but a 1972 amendment again returned them to four.
Since its establishment, only one man has served in excess of eight years as governor: Rick Perry. Perry, the longest - serving governor in state history, assumed the governorship in 2000 upon the exit of George W. Bush, who resigned to take office as the 43rd President of the United States. Perry was re-elected in 2002, 2006, and 2010 serving for 14 years before choosing to retire in 2014.
Allan Shivers assumed the governorship upon the death of Beauford Jester in July 1949 and was re-elected in 1950, 1952 and 1954, serving for 7 1 / 2 years, making him the second longest serving Texas governor. Price Daniel was elected to the governorship in 1956 and re-elected in 1958 and 1960 before losing his re-election for an unprecedented fourth term in the 1962 Democratic primary, missing the runoff. John Connally was elected in 1962 and re-elected in 1964 and 1966 before choosing to retire in 1968.
In the case of a vacancy in the office, the lieutenant governor becomes governor. Prior to a 1999 amendment, the lieutenant governor only acted as governor until the expiration of the term to which he succeeded.
See: List of Texas Governors and Presidents
See: President of the Republic of Texas # List of presidents and vice presidents
Currently, there are two living former governors of Texas. The most recent death of a former governor was that of Mark White (1983 -- 1987), who died on August 5, 2017. The most recently serving governor of Texas who has died is Ann Richards (1991 -- 1995, born 1933), who died on September 13, 2006. Pictured in order of service:
Texas has had two female governors: Miriam A. "Ma '' Ferguson and Ann Richards. Ferguson was one of the first two women elected governor of a U.S. state (on November 4, 1924), along with Nellie Tayloe Ross of Wyoming. Ross was inaugurated on January 5, 1925, while Ferguson was inaugurated on January 20, so Ross is considered the first female state governor. Ferguson was the wife of former governor Jim "Pa '' Ferguson, while Richards was elected "in her own right, '' being neither the spouse nor widow of a governor.
Texas governors have been born in fourteen states: Alabama, Connecticut, Florida, Georgia, Iowa, Kentucky, Louisiana, Mississippi, North Carolina, Ohio, South Carolina, Tennessee, Texas, and Virginia.
Baylor University is the most common alma mater of Texas governors, with five of them - Lawrence Sullivan Ross, Pat Morris Neff, Price Daniel, Mark White, and Ann Richards - considered alumni (though Ross attended but never completed a degree). To date, Coke Stevenson is the most recent governor who never attended college, and Bill Clements is the most recent who attended college but did not graduate.
Three governors have served non-consecutive terms: Elisha M. Pease, Miriam A. Ferguson, and Bill Clements. As was the case in most Southern states, Texas did not elect any Republican governor from the end of Reconstruction until the late twentieth century. Bill Clements was the state 's first Republican governor since Edmund J. Davis left office in 1874, 105 years earlier. Dolph Briscoe was the last governor to be elected to a two - year term, in 1972; he was also the first to be elected to a four - year term, in 1974, since the post-Reconstruction period when two - year terms had first been established. Rick Perry, who ascended to the governorship on December 21, 2000 upon the resignation of then - Governor George W. Bush, won full four - year terms in 2002, 2006 and 2010.
W. Lee "Pappy '' O'Daniel served as the inspiration for the fictional, but similarly named, Mississippi Governor Menelaus "Pappy '' O'Daniel, in the film O Brother, Where Art Thou?
Ann Richards had a cameo appearance on an episode of the animated comedy series King of the Hill, in which she has a brief romance with Bill Dauterive after he takes the fall for mooning her in the elevator of an Austin hotel (Hank actually mooned her because he thought his friends were going to be mooning the people in the elevator but they set him up).
In the Texas Senate, there are no majority or minority leaders.
In the Texas House, there are no majority or minority leaders.
|
who won the first t20 world cup name | ICC World Twenty20 - wikipedia
The ICC World Twenty20 (also referred to as the World T20, and colloquially as the T20 World Cup) is the international championship of Twenty20 cricket. Organised by cricket 's governing body, the International Cricket Council (ICC), the tournament currently consists of 16 teams, comprising all ten ICC full members and six other associate or affiliate members chosen through the World Twenty20 Qualifier. All matches played are accorded Twenty20 International status.
The event has generally been held every two years. However, the next edition of the tournament is scheduled to take place in 2020 in Australia, four years after the conclusion of the 2016 edition. In May 2016, the ICC put forward the idea of having a tournament in 2018, with South Africa being the possible host. But at the conclusion of the 2017 ICC Champions Trophy, the ICC announced that the next edition of the World T20 would take place in 2020 in Australia, as originally scheduled.
Six tournaments have so far been played, and only the West Indies, who currently hold the title, have won the tournament on multiple occasions. The inaugural event, the 2007 World Twenty20, was staged in South Africa, and won by India, who defeated Pakistan in the final at the Wanderers Stadium in Johannesburg. The 2009 tournament took place in England, and was won by the previous runner - up, Pakistan, who defeated Sri Lanka in the final at Lord 's. The third tournament was held in 2010, hosted by the countries making up the West Indies cricket team. England defeated Australia in the final in Barbados, which was played at Kensington Oval. The fourth tournament, the 2012 World Twenty20, was held in Asia for the first time, with all matches played in Sri Lanka. The West Indies won the tournament by defeating Sri Lanka in the final, winning its first international tournament since the 2004 Champions Trophy. The fifth tournament, the 2014 ICC World Twenty20, was hosted by Bangladesh, and was won by Sri Lanka, who became the first team to play in three finals. The West Indies are the current World T20I holders, beating England in the 2016 final to win their second title.
When the Benson & Hedges Cup ended in 2002, the ECB needed another one day competition to fill its place. Cricketing authorities were looking to boost the game 's popularity with the younger generation in response to dwindling crowds and reduced sponsorship. It was intended to deliver fast - paced, exciting cricket accessible to thousands of fans who were put off by the longer versions of the game. Stuart Robertson, the marketing manager of the ECB, proposed a 20 over per innings game to county chairmen in 2001 and they voted 11 -- 7 in favour of adopting the new format.
The first official Twenty20 matches were played on 13 June 2003 between the English counties in the Twenty20 Cup. The first season of Twenty20 in England was a relative success, with the Surrey Lions defeating the Warwickshire Bears by 9 wickets in the final to claim the title. The first Twenty20 match held at Lord 's, on 15 July 2004 between Middlesex and Surrey, attracted a crowd of 27,509, the largest attendance for any county cricket game at the ground other than a one - day final since 1953.
Soon after with the adoption of Twenty20 matches by other cricket boards, the popularity of the format grew with unexpected crowd attendance, new regional tournaments such as Pakistan 's Faysal Bank T20 Cup and Stanford 20 / 20 tournament and the financial incentive in the format.
The West Indies regional teams competed in what was named the Stanford 20 / 20 tournament. The event was financially backed by convicted fraudster Allen Stanford, who gave at least US $28,000,000 in funding, the fruit of his massive Ponzi scheme. It was intended that the tournament would be an annual event. Guyana won the inaugural event, defeating Trinidad and Tobago by 5 wickets, securing US $1,000,000 in prize money. A spin - off tournament, the Stanford Super Series, was held in October 2008 between Middlesex and Trinidad and Tobago, the respective winners of the English and Caribbean Twenty20 competitions, and a Stanford Superstars team formed from West Indies domestic players; Trinidad and Tobago won the competition, securing US $280,000 prize money. On 1 November, the Stanford Superstars played England in what was expected to be the first of five fixtures in as many years, with the winner claiming US $20,000,000 in each match.
On 17 February 2005 Australia defeated New Zealand in the first men 's full international Twenty20 match, played at Eden Park in Auckland. The game was played in a light - hearted manner -- both sides turned out in kit similar to that worn in the 1980s, the New Zealand team 's a direct copy of that worn by the Beige Brigade. Some of the players also sported moustaches / beards and hair styles popular in the 1980s taking part in a competition amongst themselves for best retro look, at the request of the Beige Brigade. Australia won the game comprehensively, and as the result became obvious towards the end of the NZ innings, the players and umpires took things less seriously -- Glenn McGrath jokingly replayed the Trevor Chappell underarm incident from a 1981 ODI between the two sides, and Billy Bowden showed him a mock red card (red cards are not normally used in cricket) in response.
It was first decided that an ICC World Twenty20 tournament would take place every two years, except when a Cricket World Cup was scheduled in the same year, in which case it would be held the year before. The first tournament was in 2007 in South Africa, where India defeated Pakistan in the final. Two Associate teams had played in the first tournament, selected through the 2007 ICC World Cricket League Division One, a 50 - over competition. In December 2007 it was decided to hold a qualifying tournament with a 20 - over format to better prepare the teams. With six participants, two would qualify for the 2009 World Twenty20 and would each receive $250,000 in prize money. The second tournament was won by Pakistan, who beat Sri Lanka by 8 wickets in England on 21 June 2009. The 2010 ICC World Twenty20 tournament was held in the West Indies in May 2010, where England defeated Australia by 7 wickets. The 2012 ICC World Twenty20 was won by the West Indies, who defeated Sri Lanka in the final. For the first time, a host nation competed in the final of the ICC World Twenty20. There were 12 participants for the title, including Ireland and Afghanistan, who came through the 2012 ICC World Twenty20 Qualifier. It was the first time the T20 World Cup tournament took place in an Asian country.
The 2012 edition was to be expanded into a 16 - team format; however, this was reverted to 12. The 2014 tournament, held in Bangladesh, was the first to feature 16 teams, including all ten full members and six associate members who qualified through the 2013 ICC World Twenty20 Qualifier. However, the top eight full member teams in the ICC T20I Championship rankings on 8 October 2012 were given a place in the Super 10 stage. The remaining eight teams competed in the group stage, from which two teams advanced to the Super 10 stage. Three new teams (Nepal, Hong Kong and UAE) made their debut in this tournament.
All ICC full members qualify automatically for the tournament, with the remaining places filled by other ICC members through a qualification tournament, known as the World Twenty20 Qualifier. Qualification for the inaugural 2007 World Twenty20 came from the results of the first cycle of the World Cricket League, a 50 - over league for ICC associate and affiliate members. The two finalists of the 2007 WCL Division One tournament, Kenya and Scotland, qualified for the World Twenty20 later in the year. A separate qualification tournament was implemented for the 2009 World Twenty20, and has been retained since then. The number of teams qualifying through the World Twenty20 Qualifier has varied, however, ranging from two (in 2010 and 2012) to six (in 2014 and 2016).
In each group stage (both the preliminary round and the Super 10 round), teams are ranked against each other based on the following criteria:
In case of a tie (that is, both teams scoring the same number of runs at the end of their respective innings), a Super Over would decide the winner. In the case of a tie occurring again in the Super Over, the match is won by the team that has scored the most sixes in their innings. This is applicable in all stages of the tournament, having been implemented during the 2009 tournament. During the 2007 tournament, a bowl - out was used to decide the loser of tied matches.
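A minimal sketch of that tie - breaking logic in Python (the function and parameter names are invented for illustration; the 2007 bowl - out is omitted, and since the passage does not say what happens if the sixes are also level, the sketch assumes they differ):

def decide_winner(runs_a, runs_b, super_over_a, super_over_b, sixes_a, sixes_b):
    # A higher total in the regular innings wins the match outright.
    if runs_a != runs_b:
        return "A" if runs_a > runs_b else "B"
    # Tied match: the Super Over decides the winner.
    if super_over_a != super_over_b:
        return "A" if super_over_a > super_over_b else "B"
    # Super Over also tied: the team that hit more sixes in its innings wins.
    return "A" if sixes_a > sixes_b else "B"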
The International Cricket Council 's executive committee votes for the hosts of the tournament after examining bids from the nations which have expressed an interest in holding the event. After South Africa in 2007, England, West Indies and Sri Lanka hosted the tournament in 2009, 2010 and 2012 respectively. Bangladesh hosted the tournament in 2014. India hosted the last edition of the tournament in 2016.
In December 2015, Tim Anderson, the ICC 's head of global development, suggested that a future tournament be hosted by the United States. He believed that hosting the event could help spur growth of the game in the country, where it is relatively obscure and faces competition by other sports such as baseball.
Note:
The ICC does not adjudicate rankings, but only the round a team achieves (e.g. semi-finals, round one, etc.). The table below provides an overview of the performances of teams in the ICC World Twenty20.
The team ranking in each tournament is according to the ICC. For each tournament, the number of teams in the finals tournament is shown in brackets.
Teams appearing for the first time are listed in alphabetical order per year.
|
when was the west african rhino declared extinct | Western black rhinoceros - wikipedia
The western black rhinoceros (Diceros bicornis longipes) or West African black rhinoceros is a subspecies of the black rhinoceros, declared extinct by the IUCN in 2011. The western black rhinoceros was believed to have been genetically different from other rhino subspecies. It was once widespread in the savanna of sub-Saharan Africa, but its numbers declined due to poaching. The western black rhinoceros resided primarily in Cameroon, but surveys since 2006 have failed to locate any individuals.
This subspecies was named Diceros bicornis longipes by Ludwig Zukowsky in 1949. The word "longipes '' is of Latin origin, combining longus ("far, long '') and pēs ("foot ''). This refers to the subspecies ' long distal limb segment, one of its many special characteristics. Other distinct features of the western black rhino included a square - based horn, retention of the first mandibular premolar in adults, a simply formed crochet on the maxillary premolar, and premolars that commonly possessed crista.
The population was first discovered in Southwest Chad, Central African Republic (CAR), North Cameroon, and Northeast Nigeria.
The western black rhinoceros is one of three subspecies of the black rhinoceros to become extinct in historical times, the other two being the southern black rhinoceros and the north - eastern black rhinoceros.
The western black rhinoceros measured 3 -- 3.75 m (9.8 -- 12.3 ft) long, had a height of 1.4 -- 1.8 m (4.6 -- 5.9 ft), and weighed 800 -- 1,400 kg (1,760 -- 3,090 lb). It had two horns, the first measuring 0.5 -- 1.4 m (1.6 -- 4.6 ft) and the second 2 -- 55 cm (0.79 -- 21.65 in). Like all black rhinos, they were browsers, and their common diet included leafy plants and shoots around their habitat. They would browse for food during the morning or evening, and during the hottest parts of the day they slept or wallowed. They inhabited much of sub-Saharan Africa. Many people believed their horns held medicinal value, which led to heavy poaching; however, this belief has no grounding in scientific fact. Like most black rhinos, they are believed to have been nearsighted and would often rely on local birds, such as the red - billed oxpecker, to help them detect incoming threats.
The black rhino, of which the western black rhinoceros is a subspecies, was most commonly located in several countries towards the southeast region of the continent of Africa. The native countries of the black rhino included: Angola, Kenya, Mozambique, Namibia, South Africa, United Republic of Tanzania, Zimbabwe, Ethiopia, Cameroon, Chad, Rwanda, Botswana, Malawi, Swaziland, and Zambia. There were several subspecies found in the western and southern countries of Tanzania through Zambia, Zimbabwe and Mozambique, to the northern and north - western and north - eastern parts of South Africa. The Black Rhino 's most abundant population was found in South Africa and Zimbabwe, with a smaller population found in southern Tanzania. The Western subspecies of the Black Rhino was last recorded in Cameroon but is now considered to be extinct. However, other subspecies were introduced again into Botswana, Malawi, Swaziland and Zambia.
The western black rhinoceros was heavily hunted in the beginning of the 20th century, but the population rose in the 1930s after preservation actions were taken. As protection efforts declined over the years, so did the number of western black rhinos. By 1980 the population was in the hundreds. No animals are known to be held in captivity, however it was believed in 1988 that approximately 20 -- 30 were being kept for breeding purposes. Poaching continued and by 2000 only an estimated 10 survived. In 2001, this number dwindled to only five. While it was believed that around thirty still existed in 2004, this was later found to be based upon falsified data.
The western black rhino emerged about 7 to 8 million years ago as a subspecies of the black rhino. For much of the 1900s, its population was the highest of all the rhino species, at almost 850,000 individuals. There was a 96 % population decline in black rhinos, including the western black rhino, between 1970 and 1992. Widespread poaching was partly responsible for bringing the subspecies close to extinction, along with farmers killing rhinos to defend their crops in areas close to rhino territories, and trophy hunting.
By 1995 the number of western black rhinos had dropped to 2,500 individuals. The sub-species was declared officially extinct in 2011, with its last sighting reported in 2006 in Cameroon 's Northern Province.
In 2006, for six months, the NGO Symbiose and veterinarians Isabelle and Jean - François Lagrot with their local teams examined the common roaming ground of Diceros bicornis longipes in the northern province of Cameroon to assess the status of the last population of the western black rhino subspecies. For this experiment, 2500 km of patrol effort resulted in no sign of rhino presence over the course of six months. The teams had concluded that the rhino was extinct approximately five years before it was officially declared so by the IUCN.
There were many attempts to revive the western black and northern white rhinos. Rhino sperm was conserved in order to artificially fertilize females and produce offspring. Some attempts were successful, but most experiments failed for various reasons, including stress and reduced time in the wild.
In 1999 WWF published a report called "African Rhino: Status Survey and Conservation Action Plan. '' This report recommended that all surviving specimens of the western black rhino be captured and placed in a specific region of modern Cameroon, in order to facilitate monitoring and reduce the rate of attacks by poachers. This plan failed due to corruption; it demanded a large amount of money, and the risk of failure was very high.
Other black rhino subspecies have been conserved in national conservation parks, where surviving animals still live under government protection, with conditions provided for their survival. To monitor and protect white rhinos, WWF focuses on better - integrated intelligence - gathering networks on rhino poaching and trade, more antipoaching patrols and better - equipped conservation law enforcement officers. WWF is setting up an Africa - wide rhino database using rhino horn DNA analysis (RhoDIS), which contributes to forensic investigations at the scene of the crime and provides court evidence to greatly strengthen prosecution cases. WWF supports accredited training in environmental and crime courses, some of which have been adopted by South Africa Wildlife College. Special prosecutors have been appointed in countries like Kenya and South Africa to prosecute rhino crimes, in a bid to deal with the mounting arrests and bring criminals to swift justice with commensurate penalties.
In the 1950s, Mao Zedong effectively encouraged traditional Chinese medicine in an attempt to counter Western influences. While this industry was being modernized, several species were hunted. According to the official data published by the SATCM, 11,146 botanical and 1,581 zoological species, as well as 80 minerals, were used. The western black rhino was also hunted due to the value of its horn, which was believed to have the power to cure specific ailments and to be effective at detecting poisons (due to its high alkaline content). Prices for horns could be high; for example, 1 kg of horn could cost more than 50,000 US dollars, and the extinction of the species only increases the rarity and value of the horn.
Poaching, along with the lack of conservation efforts from the IUCN, contributed to the extinction of the subspecies.
Rhino horns were used in making handles for ceremonial daggers called "Janbiya ''. The hilt type known as saifani uses rhinoceros horn as its material and is a symbol of wealth and status due to its cost.
In modern times, horns of other rhinoceros species are very valuable and can cost up to $100,000 per kg in places of high demand (e.g. Vietnam). An average horn may weigh between 1 and 3 kg. While locally respected doctors in Vietnam vouch for the rhino horns ' cancer - curing properties, there is no scientific evidence for this or any other imputed medical property of the horns.
|
who has sung the mcdonald's jingle i am loving it | I 'm Lovin ' It (song) - wikipedia
"I 'm Lovin ' It '' is a song recorded by American singer - songwriter Justin Timberlake. The song was produced by The Neptunes and is credited as being written by Pharrell Williams, Tom Batoy, Andreas Forberger and Franco Tortora.
The song was written as a jingle for McDonald 's commercials, based on a pre-existing German campaign originally developed as "Ich Liebe Es. '' Timberlake was paid $6 million to sing the jingle. Soon thereafter, the Neptunes produced a song based on the jingle and released it (along with an instrumental version) as part of a three - track EP in November 2003. A digital download EP with the same name was also released through the iTunes Store on December 16, 2003. The extended play included the title - track and a remix for all of the singles from Timberlake 's first solo studio album, Justified. The song was also included on the bonus audio CD of Timberlake 's first live DVD, Live From London.
A music video to promote the single was released in late 2003 and was directed by Paul Hunter.
Rapper Pusha T revealed his involvement with the song in June 2016 and claimed that he had created the "I 'm Lovin ' It '' jingle. However, co-writers Batoy and Tortora, as well as several others involved with the jingle 's creation, have disputed his claim.
|
where's the next world cup gonna be | 2026 FIFA World Cup - Wikipedia
The 2026 FIFA World Cup (Spanish: Copa mundial de la FIFA de 2026; French: Coupe du monde de la FIFA de 2026) will be the 23rd FIFA World Cup, the quadrennial international men 's association football championship contested by the national teams of the member associations of FIFA. The tournament will be jointly hosted by 16 cities in three North American countries; 60 matches, including the quarterfinals, semi-finals, and the final, will be hosted by the United States while neighboring Canada and Mexico will each host 10 matches. The tournament will be the first hosted by three nations.
The United 2026 bid beat a rival bid by Morocco during a final vote at the 68th FIFA Congress in Moscow. It will be the first World Cup since South Korea / Japan in 2002 that will be hosted by more than one nation. With its past hosting of the 1970 and 1986 tournaments, Mexico will also become the first country to host all or part of three men 's World Cups.
The 2026 World Cup will also see the tournament expanded from 32 to 48 teams.
Michel Platini, who was then the UEFA president, had suggested in October 2013 an expansion of the tournament to 40 teams, an idea that FIFA president Gianni Infantino also suggested in March 2016. A desire to increase the number of participants in the tournament from the previous 32 team format was announced on 4 October 2016. Four expansion options were considered:
On 10 January 2017, the FIFA Council voted unanimously to expand to a 48 - team tournament.
The tournament will open with a group stage consisting of 16 groups of three teams, with the top two teams progressing from each group to a knockout tournament starting with a round of 32 teams. The number of games played overall will increase from 64 to 80, while the number of games played by finalists remains at seven, the same as with 32 teams, as one group match is replaced by a knockout match. The tournament will also be completed within 32 days, the same as previous 32 - team tournaments.
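Those totals can be verified with a little arithmetic (assuming a third - place match is retained, as at previous World Cups): each three - team group plays a single round - robin of 3 matches, so the group stage yields 16 × 3 = 48 matches; the knockout rounds add 16 + 8 + 4 + 2 + 1 + 1 = 32 matches (round of 32 through the final, plus the third - place match), giving 48 + 32 = 80 in total. A finalist plays 2 group matches plus 5 knockout matches, i.e. 7 games.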
The European Club Association and its member clubs opposed the proposal for expansion, saying that the number of games was already at an "unacceptable '' level, and they urged the governing body to reconsider its idea of increasing the number of teams that qualify. They contended that it was a decision taken for political reasons, because Infantino would thus satisfy his electorate, rather than for sporting reasons. Liga de Fútbol Profesional president Javier Tebas agreed, affirming the unacceptability of the new format. He told Marca that the football industry is maintained thanks to clubs and leagues, not FIFA, and that Infantino 's move was political: to be elected, he had promised more countries places in the World Cup, and he wanted to keep those electoral promises. German national team coach Joachim Löw warned that expansion, as had occurred for Euro 2016, would dilute the value of the world tournament because players have already reached their physical and mental limits. Another criticism of the new format is that with three - team groups, the risk of collusion between the two teams playing in the last round of the group stage will increase compared with four - team groups (where simultaneous kick - offs have been employed). One suggestion by president Infantino is that group matches that end in draws will be decided by penalty shootouts.
On 30 March 2017, the Bureau of the FIFA Council (composed of the FIFA president and the presidents of each of the six confederations) proposed a slot allocation for the 2026 FIFA World Cup. The recommendation was submitted for ratification by the FIFA Council.
On 9 May 2017, two days before the 67th FIFA Congress, the FIFA Council approved the slot allocation in a meeting in Manama, Bahrain. It includes an intercontinental playoff tournament involving six teams to decide the last two FIFA World Cup berths.
The issue of how to allocate automatic host qualification among multiple host countries has not yet been resolved and will be decided by the FIFA Council. The United bid anticipated all three host countries being awarded automatic places.
A playoff tournament involving six teams will be held to decide the last two FIFA World Cup berths, consisting of one team per confederation (except for UEFA) and one additional team from the confederation of the host country (i.e. CONCACAF).
Two of the teams will be seeded based on the FIFA World Rankings, and the seeded teams will play for a FIFA World Cup berth against the winners of the first two knockout games involving the four unseeded teams.
The tournament is to be played in the host country or countries and to be used as a test event for the FIFA World Cup. The existing playoff window of November 2025 has been suggested as a tentative date for the 2026 edition.
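The bracket described above is small enough to sketch directly. The Python fragment below is illustrative only: the source does not specify which seed faces which winner, and the match-result function here is a placeholder, not a real simulation.

def intercontinental_playoff(seeded, unseeded, play):
    # seeded: the 2 teams ranked highest in the FIFA World Rankings;
    # unseeded: the other 4 teams; play(a, b) returns the match winner.
    assert len(seeded) == 2 and len(unseeded) == 4
    # The four unseeded teams meet in two knockout games...
    winner1 = play(unseeded[0], unseeded[1])
    winner2 = play(unseeded[2], unseeded[3])
    # ...and each winner then plays a seeded team for a World Cup berth.
    return play(seeded[0], winner1), play(seeded[1], winner2)

# Placeholder example: confederation labels only; the first listed team
# always "wins" so the flow of the bracket can be traced.
berths = intercontinental_playoff(
    ["Seed A", "Seed B"],
    ["AFC", "CAF", "OFC", "CONCACAF (extra)"],
    lambda a, b: a)
print(berths)    # ('Seed A', 'Seed B')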
Between 2013 and 2017, the FIFA Council went back and forth on how hosting rotation should be restricted among the continental confederations. Originally, bids were not allowed from countries belonging to confederations that had hosted either of the two preceding tournaments. This was temporarily changed to prohibit only countries belonging to the confederation that hosted the previous World Cup from bidding to host the following tournament, before the rule was changed back to its prior state of two World Cups. However, the FIFA Council made an exception to potentially grant eligibility to member associations of the confederation of the second-to-last host of the FIFA World Cup in the event that none of the received bids fulfilled the strict technical and financial requirements. In March 2017, FIFA president Gianni Infantino confirmed that "Europe (UEFA) and Asia (AFC) are excluded from the bidding following the selection of Russia and Qatar in 2018 and 2022 respectively. '' Therefore, the 2026 World Cup could be hosted by one of the remaining four confederations: CONCACAF (last hosted in 1994), CAF (last hosted in 2010), CONMEBOL (last hosted in 2014), or OFC (never hosted before), or potentially by UEFA in case no bid from those four met the requirements.
Co-hosting the FIFA World Cup -- which had been banned by FIFA after the 2002 World Cup -- was approved for the 2026 World Cup, with the number of co-hosts not limited to a specific figure but instead evaluated on a case-by-case basis. Also by 2026, the FIFA general secretariat, after consultation with the Competitions Committee, will have the power to exclude bidders who do not meet the minimum technical requirements to host the competition.
Canada, Mexico and the United States had all publicly considered bidding for the tournament separately, but the United joint bid was announced on 10 April 2017. The United bid contacted 48 venues in 43 cities (3 venues in 3 cities in Mexico, 8 venues in 6 cities in Canada and 37 venues in 34 cities in the United States). This bid was ultimately cut down to 23 venues in 23 cities (3 in Canada, 3 in Mexico and 17 in the United States; the American venues will be cut down to 10 for a total of 16 venues for the tournament). Morocco announced its bid in August 2017, and had 14 proposed venues in 12 cities.
The voting took place on 13 June 2018, during FIFA 's annual congress in Moscow, and was reopened to all eligible members. The United bid won, receiving 134 valid ballots to the Morocco bid 's 65. Upon the selection, Canada becomes the fifth country to host both the men 's and women 's World Cups -- the latter in 2015; Mexico becomes the first country to host three men 's World Cups -- previously in 1970 and 1986; and the United States becomes the first country to host both the men 's and women 's World Cups twice each, having hosted the 1994 men 's and the 1999 and 2003 women 's World Cups.
The 2026 World Cup 's qualification process has yet to be decided. The FIFA Council is expected to decide which hosts, if any, will receive automatic qualifications to the tournament. The United Bid personnel anticipated that all three host countries would be awarded automatic places.
There are 23 candidate cities, which will be narrowed down to 16 (3 in Canada, 3 in Mexico, and 10 in the United States) in 2020 or 2021.
FIFA president Gianni Infantino criticized the U.S. travel ban on several Muslim - majority nations. Infantino said, "When it comes to FIFA competitions, any team, including the supporters and officials of that team, who qualify for a World Cup need to have access to the country, otherwise there is no World Cup. That is obvious. ''
However, assurances were later given by the government that there would be no such discrimination.
U.S. President Donald Trump threatened the countries that intended to support the Morocco bid to host the 2026 World Cup, tweeting: "The US has put together a STRONG bid w / Canada & Mexico for the 2026 World Cup. It would be a shame if countries that we always support were to lobby against the U.S. bid. Why should we be supporting these countries when they do n't support us (including at the United Nations)? ''
FIFA awarded Fox and Telemundo the U.S. English and Spanish rights, respectively, to the 2026 World Cup on 12 February 2015. FIFA did so without opening it up for bidding with ESPN, Univision, and other interested American broadcasters, in order to placate Fox and Telemundo regarding the move of the 2022 World Cup (which they had the rights to) from summer (in the Northern Hemisphere) to November and December, during the heart of the National Football League regular season.
|
pericardial fluid is found between which 2 layers of the pericardium | Pericardium - wikipedia
The pericardium is a double - walled sac containing the heart and the roots of the great vessels. The pericardial sac has two layers, a serous layer and a fibrous layer. It encloses the pericardial cavity which contains pericardial fluid.
The pericardium fixes the heart to the mediastinum, gives protection against infection, and provides the lubrication for the heart. It receives its name from Ancient Greek peri (περί; "around '') and cardion (κάρδιον; "heart '').
The pericardium is a tough, double-layered fibroserous sac that covers the heart. The space between the two layers of serous pericardium (see below), the pericardial cavity, is filled with serous fluid which protects the heart from external shocks. There are two layers to the pericardial sac: the outermost fibrous pericardium and the inner serous pericardium.
The fibrous pericardium is the most superficial layer of the pericardium. It is made up of dense and loose connective tissue, which acts to protect the heart, anchoring it to the surrounding walls, and preventing it from overfilling with blood. It is continuous with the outer adventitial layer of the neighboring great blood vessels.
The serous pericardium, in turn, is divided into two layers, the parietal pericardium, which is fused to and inseparable from the fibrous pericardium, and the visceral pericardium, which is part of the epicardium. Both of these layers function in lubricating the heart to prevent friction during heart activity.
The visceral layer extends to the beginning of the great vessels (the large blood vessels serving the heart) becoming one with the parietal layer of the serous pericardium. This happens at two areas: where the aorta and pulmonary trunk leave the heart and where the superior vena cava, inferior vena cava and pulmonary veins enter the heart.
In between the parietal and visceral pericardial layers there is a potential space called the pericardial cavity, which contains a supply of lubricating serous fluid known as the pericardial fluid.
When the visceral layer of serous pericardium comes into contact with heart (not the great vessels) it is known as the epicardium. The epicardium is the layer immediately outside of the heart muscle proper (the myocardium). The epicardium is largely made of connective tissue and functions as a protective layer. During ventricular contraction, the wave of depolarization moves from the endocardial to the epicardial surface.
Inflammation of the pericardium is called pericarditis. This condition typically causes chest pain that spreads to the back and is worsened by lying flat. In patients suffering from pericarditis, a pericardial friction rub can often be heard when listening to the heart with a stethoscope. Pericarditis is often caused by a viral infection (glandular fever, cytomegalovirus, or coxsackievirus), or more rarely by a bacterial infection, but may also occur following a myocardial infarction. Pericarditis is usually a short-lived condition that can be successfully treated with painkillers, anti-inflammatories, and colchicine. In some cases, pericarditis can become a long-term condition causing scarring of the pericardium which restricts the heart 's movement, known as constrictive pericarditis. Constrictive pericarditis is sometimes treated by surgically removing the pericardium in a procedure called a pericardiectomy.
Fluid can build up within the pericardial sac, referred to as a pericardial effusion. Pericardial effusions often occur secondary to pericarditis, kidney failure, or tumours, and frequently do not cause any symptoms. However, large effusions, or effusions that accumulate rapidly, can compress the heart in a condition known as cardiac tamponade, causing breathlessness and potentially fatal low blood pressure. Fluid can be removed from the pericardial space for diagnosis or to relieve tamponade using a syringe in a procedure called pericardiocentesis. For cases of recurrent pericardial effusion, an operation to create a hole between the pericardial and pleural spaces can be performed, known as a pericardial fenestration.
|
where is melo on the all time scoring list | List of National Basketball Association career scoring leaders - wikipedia
This article provides two lists:
The following is a list of National Basketball Association players by total career regular season points scored.
This is a progressive list of scoring leaders showing how the record increased through the years.
|
which god is wonder woman supposed to be | Wonder Woman - wikipedia
Wonder Woman is a fictional superhero appearing in American comic books published by DC Comics. The character is a founding member of the Justice League, a goddess, and Ambassador-at-Large of the Amazonian people. The character first appeared in All Star Comics # 8 in October 1941, and her first cover-dated appearance was in Sensation Comics # 1, January 1942. In her homeland, her official title is Princess Diana of Themyscira, Daughter of Hippolyta. When blending into the society of "Man 's World '', she adopts her civilian identity Diana Prince. The character is also referred to by such epithets as the "Amazing Amazon '', the "Spirit of Truth '', "Themyscira 's Champion '', and the "Goddess of Love and War ''.
Wonder Woman was created by the American psychologist and writer William Moulton Marston (pen name: Charles Moulton) and artist Harry G. Peter. Olive Byrne, Marston 's lover, and his wife, Elizabeth, are credited as being his inspiration for the character 's appearance. Marston drew a great deal of inspiration from early feminists, especially from birth control pioneer Margaret Sanger, in particular her piece "Woman and the New Race ''. The Wonder Woman title has been published by DC Comics almost continuously except for a brief hiatus in 1986.
Wonder Woman 's origin story relates that she was sculpted from clay by her mother Queen Hippolyta and given life by Aphrodite, along with superhuman powers as gifts from the Greek gods. In recent years, however, writers and artists have updated her profile: she has been depicted as the daughter of Zeus, jointly raised by her mother Hippolyta and her aunts Antiope and Menalippe; artist George Perez gave her a muscular look and emphasized her Amazonian heritage; artist Jim Lee redesigned Diana 's costume to include pants (although Wonder Woman now wears a skirt, and the New 52 pants design was never used officially); and she inherits Ares 's divine abilities, becoming the personified "God of War ''.
Wonder Woman 's Amazonian training helped to develop a wide range of extraordinary skills in tactics, hunting, and combat. She possesses an arsenal of advanced technology, including the Lasso of Truth, a pair of indestructible bracelets, a tiara which serves as a projectile, and, in older stories, a range of devices based on Amazon technology. Wonder Woman was created during World War II; the character was initially depicted fighting Axis military forces as well as an assortment of colorful supervillains, although over time her stories came to place greater emphasis on characters, deities, and monsters from Greek mythology. Many stories depicted Wonder Woman rescuing herself from bondage, which defeated the "damsels in distress '' trope that was common in comics during the 1940s. In the decades since her debut, Wonder Woman has gained a cast of enemies bent on eliminating the Amazon, including classic villains such as Ares, Cheetah, Doctor Poison, Circe, Doctor Psycho, and Giganta, along with more recent adversaries such as Veronica Cale and the First Born. Wonder Woman has also regularly appeared in comic books featuring the superhero teams Justice Society (from 1941) and Justice League (from 1960).
Notable depictions of the character in other media include Gloria Steinem placing the character on the cover of the second edition of Ms. magazine in 1971; the 1975 -- 1979 Wonder Woman TV series starring Lynda Carter; as well as animated series such as the Super Friends and Justice League. Since Carter 's television series, studios struggled to introduce a new live - action Wonder Woman to audiences, although the character continued to feature in a variety of toys and merchandise, as well as animated adaptations of DC properties, including a direct - to - DVD animated feature starring Keri Russell. Attempts to return Wonder Woman to television have included a television pilot for NBC in 2011, closely followed by another stalled production for The CW. Gal Gadot portrays Wonder Woman in the DC Extended Universe, starting with the 2016 film Batman v Superman: Dawn of Justice, marking the character 's feature film debut after over 70 years of history. Gadot also starred in the character 's first solo live - action film Wonder Woman, which was released on June 2, 2017.
On October 21, 2016, the United Nations sparked controversy by naming Wonder Woman a "UN Honorary Ambassador for the Empowerment of Women and Girls '' in a ceremony attended by Under - Secretary - General for Communications and Public Information Cristina Gallach and by actors Lynda Carter and Gal Gadot. Two months later, she was dropped from her role as a UN Ambassador following a petition.
Marston combined his, Elizabeth 's and Olive 's feminist ideals to create a superhero character that young girls and boys could look up to.
In an October 25, 1940, interview with the Family Circle magazine, William Moulton Marston discussed the unfulfilled potential of the comic book medium. This article caught the attention of comics publisher Max Gaines, who hired Marston as an educational consultant for National Periodicals and All - American Publications, two of the companies that would merge to form DC Comics. At that time, Marston wanted to create his own new superhero; Marston 's wife Elizabeth suggested to him that it should be a female:
William Moulton Marston, a psychologist already famous for inventing the polygraph, struck upon an idea for a new kind of superhero, one who would triumph not with fists or firepower, but with love. "Fine, '' said Elizabeth. "But make her a woman. ''
Marston introduced the idea to Gaines. Given the go - ahead, Marston developed Wonder Woman, whom he believed to be a model of that era 's unconventional, liberated woman. Marston also drew inspiration from the bracelets worn by Olive Byrne, who lived with the couple in a polyamorous relationship. Wonder Woman debuted in All Star Comics # 8 (cover date Dec / Jan 1941 / 1942, released in October 1941), scripted by Marston.
Marston was the creator of a systolic - blood - pressure - measuring apparatus, which was crucial to the development of the polygraph (lie detector). Marston 's experience with polygraphs convinced him that women were more honest than men in certain situations and could work more efficiently.
Marston designed Wonder Woman to be an allegory for the ideal love leader: the kind of woman who (he believed) should run society.
"Wonder Woman is psychological propaganda for the new type of woman who should, I believe, rule the world '', Marston wrote.
In a 1943 issue of The American Scholar, Marston wrote:
Not even girls want to be girls so long as our feminine archetype lacks force, strength, and power. Not wanting to be girls, they do n't want to be tender, submissive, peace - loving as good women are. Women 's strong qualities have become despised because of their weakness. The obvious remedy is to create a feminine character with all the strength of Superman plus all the allure of a good and beautiful woman.
Marston went on record by describing bondage and submission as a "respectable and noble practice ''. Marston wrote in a weakness for Wonder Woman, which was attached to a fictional stipulation that he dubbed "Aphrodite 's Law '', that made the chaining of her "Bracelets of Submission '' together by a man take away her Amazonian super strength. However, not everything about his creation was explicitly explained in any one source, which caused confusion among writers and fans for many years.
Initially, Wonder Woman was an Amazon champion who wins the right to return Steve Trevor -- a United States intelligence officer whose plane had crashed on the Amazons ' isolated island homeland -- to "Man 's World '' and to fight crime and the evil of the Nazis.
During this period, Wonder Woman joined the Justice Society of America as the team 's secretary.
During the Silver Age, under writer Robert Kanigher, Wonder Woman 's origin was revamped, along with other characters '. The new origin story increased the character 's Hellenic and mythological roots: receiving the blessing of each deity in her crib, Diana is destined to become "beautiful as Aphrodite, wise as Athena, as strong as Hercules, and as swift as Hermes. ''
At the end of the 1960s, under the guidance of Mike Sekowsky, Wonder Woman surrendered her powers in order to remain in Man 's World rather than accompany her fellow Amazons to another dimension. Wonder Woman begins using the alias Diana Prince and opens a mod boutique. She acquires a Chinese mentor named I Ching, who teaches Diana martial arts and weapons skills. Using her fighting skill instead of her powers, Diana engaged in adventures that encompassed a variety of genres, from espionage to mythology. This phase of her story was directly influenced by the British spy thriller The Avengers and Diana Rigg 's portrayal of Emma Peel.
In the early 1970s the character returned to her superhero roots in the Justice League of America and to the World War II era in her own title. This, however, was largely due to the popularity of the Wonder Woman TV series, which was then also set in the WWII era; the comic shifted back to a contemporary 1970s setting once the TV show did the same.
With a new decade arriving, DC president Jenette Kahn ordered a revamp in Wonder Woman 's appearance. Artist Milton Glaser, who also designed the "bullet '' logo adopted by DC in 1977, created a stylized "WW '' emblem that evoked and replaced the eagle in her bodice, and debuted in 1982. The emblem in turn was incorporated by studio letterer Todd Klein onto the monthly title 's logo, which lasted for a year and a half before being replaced by a version from Glaser 's studio. With sales of the title continuing to decline in 1985 (despite an unpublished revamp that was solicited), the series was canceled and ended in issue # 329 (February 1986) written by Gerry Conway, depicting Steve Trevor 's marriage to Wonder Woman.
The Crisis on Infinite Earths cross-over of 1986 was designed and written with the purpose of streamlining most of DC 's characters into one more - focused continuity and reinventing them for a new era, thus Wonder Woman and Steve Trevor were declared to come from the Earth - Two dimension, and along with all of their exploits, were erased from history, so that a new Wonder Woman character, story and timeline could take priority.
Following the 1985 Crisis on Infinite Earths series, George Pérez, Len Wein, and Greg Potter rewrote the character 's origin story, depicting Wonder Woman as an emissary and ambassador from Themyscira to Patriarch 's World, charged with the mission of bringing peace to the outside world. Pérez incorporated a variety of deities and concepts from Greek mythology in Wonder Woman 's stories and origin. His rendition of the character acted as the foundation for the modern Wonder Woman stories, as he expanded upon the widely accepted origin of Diana being birthed out of clay. The relaunch was a critical and commercial success.
In August 2010 (issue # 600), J. Michael Straczynski took over the series ' writing duties and introduced Wonder Woman to an alternate timeline created by the Gods in which Paradise Island had been destroyed and the Amazons scattered around the world. He also introduced several "Easter eggs '' within his run. In this timeline, Diana is an orphan raised in New York. The entire world has forgotten Wonder Woman 's existence and the main story of this run was of Diana trying to restore reality even though she does not properly remember it herself. A trio of Death Goddesses called The Morrigan acted as the main enemy of Wonder Woman. In this run, Wonder Woman wore a new costume designed by Jim Lee. Straczynski determined the plot and continued writing duties until Wonder Woman # 605; writer Phil Hester then continued his run, which ultimately concluded in Wonder Woman # 614.
In 2011, DC Comics relaunched its entire line of publications to attract a new generation of readers, and thus released volume 4 of the Wonder Woman comic book title. Brian Azzarello and Cliff Chiang were assigned on writing and art duties respectively and revamped the character 's history considerably. In this new continuity, Wonder Woman wears a costume similar to her original Marston - costume, utilizes a sword and shield, and has a completely new origin. No longer a clay figure brought to life by the magic of the gods, she is, instead, a demi - goddess and the natural - born daughter of Hippolyta and Zeus. Azzarello and Chiang 's revamp of the character was critically acclaimed, but highly divisive among long time fans of the character.
In a side story in "Harley 's Little Black Book '', Wonder Woman meets Harley Quinn in London and briefly teams up with her; it is revealed that Harley has been a huge fan of Wonder Woman for years and has a bit of a crush on her. After the fight with the villain, the two retire to a local bar, where Harley suggests they join an English super team and then steals her magic lasso, though only to wrap it around herself so they and some of the other patrons can play truth or dare. They are last seen with Wonder Woman carrying Harley out of the bar asleep, though an additional piece of art shows Harley tied up and planting kisses on Wonder Woman.
In 2016, DC Comics once again relaunched all of its publications as part of the DC Rebirth continuity reboot, which has a new bi-monthly Wonder Woman series from writer Greg Rucka. The new series does not use a single regular storyline across issues; instead, the story alternates between two separate storylines, The Lies for the odd-numbered issues and Year One for the even-numbered issues. The new storyline as presented in these issues effectively retcons the events from the previous New 52 series. The Lies storyline reveals that a number of events from the previous Wonder Woman series, in which Diana was made the Queen of the Amazons and the God of War, were in fact an illusion created by a mysterious villain, and that she had never been back to Themyscira since she left, nor is she capable of returning there. The Year One story is presented as an all-new origin story for Diana, which reveals how she received her powers from the Olympian Gods and was intended to bring her back to her classical DC roots. Wonder Woman appears in DC Rebirth with a revised look, which includes a red cape and light armor fittings. Along with her lasso and bracelets, she now regularly utilizes her sword and shield. Wonder Woman: Rebirth artist Liam Sharp described the new armor as a utilitarian piece which allows her to move more freely.
During Marston 's run, Diana Prince was the name of an army nurse whom Wonder Woman met. The nurse wanted to meet her fiancé, who was transferred to South America, but was unable to arrange for money to do so. As Wonder Woman needed a secret identity to look after Steve (who was admitted to the same army hospital in which Diana Prince worked), and because both of them looked alike, Wonder Woman gave the nurse money to go to her fiancé in exchange for the nurse 's credentials and took Diana Prince as her alias. She started to work as an army nurse and later as an Air Force secretary.
The identity of Diana Prince was especially prominent in a series published in the early 1970s, in which she fought crime only under the Prince alias and without her mystic powers. To support herself, she ran a mod clothing boutique.
The Diana Prince alias also played an important role after the events of Infinite Crisis. Wonder Woman was broadcast worldwide killing a villain named Maxwell Lord while he was mind-controlling Superman into killing Batman. When Wonder Woman caught Lord in her lasso and demanded to know how to stop Superman, Maxwell revealed that the only way to stop him was to kill him, so as a last resort Diana snapped his neck. To recover from the trauma of killing another person, the Amazon went into a self-imposed exile for one year. On her return to public life, Diana realized that her life as a full-time celebrity superhero and ambassador had kept her removed from humanity. Because of this she assumed the persona of Diana Prince and became an agent at the Department of Metahuman Affairs. During a later battle with the witch Circe, a spell was placed on Diana, leaving her powerless when not in the guise of Wonder Woman.
In the current New 52 universe, Diana does not have a secret identity as stated in an interview by series writer Brian Azzarello. However, when she and Superman began dating, for her civilian identity she uses the Diana Prince alias whenever she is around Clark Kent; such as when she introduced herself to Lois Lane at Lois 's housewarming party under that name.
Princess Diana commands respect both as Wonder Woman and as Diana Prince; her epithetical title -- The Amazon Princess -- illustrates the dichotomy of her character. She is a powerful, strong-willed character who does not back down from a fight or a challenge. Yet she is a diplomat who strongly "favors the pen '', and a lover of peace who would never seek to fight or escalate a conflict. She is simultaneously the most fierce and the most nurturing member of the Justice League, and her political connections as a United Nations Honorary Ambassador and as the ambassador of a warrior nation make her an invaluable addition to the team. With her powerful abilities, centuries of training, and experience at handling threats ranging from petty crime to those of a magical or supernatural nature, Diana is capable of competing with nearly any hero or villain.
Many writers have depicted Diana with differing personalities and tones, ranging between her diametric extremes: that of a worldly warrior, a highly compassionate and calm ambassador, and sometimes a naive and innocent person, depending on the writer. What has remained constant, and is a mainstay of the character, is her nurturing humanity: her overwhelming belief in love, empathy, compassion, and having a strong conscience. This trait was the reason for her induction into the Star Sapphires.
Writer Gail Simone was applauded for her portrayal of Wonder Woman during her run on the series, with comic book reviewer Dan Phillips of IGN noting that "she 's molded Diana into a very relatable and sympathetic character. ''
Actress Gal Gadot described Wonder Woman as "an idealist. Experienced, super-confident. Open and sincere even in the midst of a gruesome, bloody conflict. Having many strengths and powers, but at the end of the day she 's a woman with a lot of emotional intelligence ''.
In the Golden Age, Wonder Woman adhered to an Amazon code of helping anyone in need, even misogynistic people, and of never accepting a reward for saving someone; conversely, the modern version of the character has been shown to use lethal force when left with no other alternative, exemplified in the killing of Maxwell Lord in order to save Superman 's life.
The New 52 version of the character has been portrayed to be a younger, more headstrong, loving, fierce and willful person. Brian Azzarello stated in a video interview with DC Comics that they 're building a very "confident '', "impulsive '' and "good - hearted '' character in her. He referred to her trait of feeling compassion as both her strength and weakness.
A distinctive trait of her characterization is a group of signature mythological exclamations, such as "Great Aphrodite! '' (historically the very first one), "Great Hera! '', "Merciful Minerva! '', and "Suffering Sappho! '', some of which were contributed by Elizabeth Holloway Marston.
Diana, after her death, was granted divinity as the Goddess of Truth by her gods for such faithful devotion. During her brief time as a god of Olympus, Diana was replaced in the role of Wonder Woman by her mother, Queen Hippolyta. Unlike Diana, who received the title of Wonder Woman as an honor, Hippolyta 's role as Wonder Woman was meant to be a punishment for her betrayal in Artemis ' death, as well as for unintentionally causing her own daughter 's death. However, Hippolyta eventually grew to enjoy the freedom and adventure the title came with. Whereas Diana used the Lasso of Truth as her primary weapon, Hippolyta favored a broad sword.
John Byrne, the writer that introduced the concept of Hippolyta as the first Wonder Woman, has explained his intentions in a post in his message board:
I thought George 's one "mistake '' in rebooting Wonder Woman was making her only 25 years old when she left Paradise Island. I preferred the idea of a Diana who was thousands of years old (as, if I recall correctly, she was in the TV series). From that angle, I would have liked to have seen Diana having been Wonder Woman in WW2, and be returning to our world in the reboot.
Not having that option, I took the next best course, and had Hippolyta fill that role.
As Wonder Woman, Queen Hippolyta immediately got involved in a time travel mission back to the 1940s with Jay Garrick. After this mission, she elected to join the Justice Society of America and remained in that era for eight years, where her teammates nicknamed her "Polly ''. During that time she had a relationship with Ted Grant. Hippolyta also made visits into the past to see her godchild Lyta, daughter of Hippolyta 's protege Helena, the Golden Age Fury. These visits happened yearly from young Lyta 's perspective and also accounted for Hippolyta 's participation in the JSA / JLA team ups. When she returned from the past, Hippolyta took Diana 's place in the JLA as well.
Artemis of Bana - Mighdall briefly served as Wonder Woman during Hippolyta 's trials for a new Wonder Woman. Orana, a character similar to Artemis, defeated Diana in a new contest and became Wonder Woman in pre-Crisis on Infinite Earths continuity. Orana was killed during her first mission. Others who have donned the Wonder Woman persona include Nubia, Cassandra Sandsmark, and Donna Troy.
Diana is depicted as a masterful athlete, acrobat, fighter and strategist, trained and experienced in many ancient and modern forms of armed and unarmed combat, including exclusive Amazonian martial arts. In some versions, her mother trained her, as Wonder Girl, for a future career as Wonder Woman. From the beginning, she is portrayed as highly skilled in using her Amazon bracelets to stop bullets and in wielding her golden lasso. Batman once called her the "best melee fighter in the world ''. The modern version of the character is known to use lethal force when she deems it necessary. In the New 52 continuity, her superior combat skills are the result of her Amazon training, as well as receiving further training from Ares, the God of War, himself, since as early as her childhood. The Golden Age Wonder Woman also had knowledge in psychology, as did her Amazon sisters.
The Golden Age Wonder Woman had strength that was comparable to the Golden Age Superman. Wonder Woman was capable of bench pressing 15,000 pounds even before she had received her bracelets, and later hoisted a 50,000 pound boulder above her head to inspire Amazons facing the test. Even when her super strength was temporarily nullified, she still had enough mortal strength of an Amazon to break down a prison door to save Steve Trevor. In one of her earliest appearances, she is shown running easily at 60 mph (97 km / h), and later jumps from a building and lands on the balls of her feet.
She was able to heal faster than a normal human being due to her birthright consumption of water from Paradise Island 's Fountain of Eternal Youth.
Her strength would be removed in accordance with "Aphrodite 's Law '' if she allowed her bracelets to be bound or chained by a male.
She also had an array of mental and psychic abilities, as corresponding to Marston 's interest in parapsychology and metaphysics. Such an array included ESP, astral projection, telepathy (with or without the Mental Radio), mental control over the electricity in her body, the Amazonian ability to turn brain energy into muscle power, etc. Wonder Woman first became immune to electric shocks after having her spirit stripped from her atoms by Dr. Psycho 's Electro Atomizer; it was also discovered that she was unable to send a mental radio message without her body.
Wonder Woman (vol. 1) # 105 revealed that Diana was formed from clay by the Queen of the Amazons, given life and power by four of the Greek and Roman gods (otherwise known as the Olympian deities) as gifts, corresponding to her renowned epithet: "Beautiful as Aphrodite, wise as Athena, swifter than Hermes, and stronger than Hercules '', making her the strongest of the Amazons. Wonder Woman 's Amazon training gave her limited telepathy, profound scientific knowledge, and the ability to speak every language -- even caveman and Martian languages.
Between 1966 and 1967, new powers were added, such as super breath.
In the Silver and Bronze ages of comics, Wonder Woman was able to further increase her strength. In times of great need, removing her bracelets would temporarily augment her power tenfold, but cause her to go berserk in the process.
These powers received changes after the events of Crisis on Infinite Earths.
In the Post-Crisis universe, Wonder Woman receives her super powers as a blessing from Olympian deities just like the Silver Age version before, but with changes to some of her powers:
While not completely invulnerable, she is highly resistant to great amounts of concussive force and extreme temperatures and matches Superman in this regard, although edged weapons or projectiles applied with sufficient force are able to pierce her skin. Due to her divine origins, Diana can resist many forms of magical manipulation.
She is able to astrally project herself into various lands of myth. Her physical body reacts to whatever happens to her on the mythical astral plane, leaving her body cut, bruised, or sometimes strengthened once her mind and body are reunited. She can apparently leave the planet through meditation, and did this once to rescue Artemis while she was in hell.
After the 2011 relaunch, Diana gained new powers. As the biological daughter of Hippolyta and Zeus, she has inherited some of her father 's powers, which are held in check by the wearing of her magic bracelets. She uses these powers in battle against the goddess Artemis and renders her unconscious with a series of carefully positioned counterattacks. While using her godly strength, her outfit and accoutrements lit up and her eyes glowed like her father 's.
After becoming the God of War in the pages of Wonder Woman, Diana inherits Ares 's divine abilities. Diana has not exhibited her full powers as War, but is seen in Superman / Wonder Woman # 5 to slip easily into telepathic rapport with a soldier, explaining "I am War. I know all soldiers, and they know me. ''
During the Rebirth retcon, the "Year One '' storyline explains that while put in a cell after coming to Man 's World, Diana was visited by the Greek gods in animal form, and each gave her powers that would reveal themselves when she needed them to. She first displays strength when she accidentally rips the bars off her cell door when visited by Steve Trevor, Etta Candy, and Barbara Ann Minerva. Later on a trip to the mall, she discovers super speed, great durability, and the power of flight while fighting off a terrorist attack.
Diana has an arsenal of powerful god - forged gear at her disposal, but her signature equipment are her indestructible bracelets and the Lasso of Truth.
Wonder Woman 's outfit has varied over time, although almost all of her outfit incarnations have retained some form of breastplate, tiara, bracelets, and her signature five - pointed star symbols.
Wonder Woman 's outfit design was originally rooted in American symbolism and iconography, which included her signature star symbols, a golden eagle on her chest, crimson red bustier, white belt, and a dark blue star spangled skirt / culotte.
She also had a pair of red glowing magnetic earrings which allowed her to receive messages from Queen Desira of the planet Venus.
At the time of her debut, Wonder Woman sported a red top with a golden eagle emblem, a white belt, blue star-spangled culottes, and red and golden go-go boots. She originally wore a skirt; however, according to Elizabeth Marston, "It was too hard to draw and would have been over her head most of the time. '' This outfit was entirely based on the American flag, because Wonder Woman was purely an American icon, as she debuted during World War II. Later in 1942, Wonder Woman 's outfit received a slight change -- the culottes were converted entirely into skin-tight shorts and she wore sandals. While earlier most of her back was exposed, during the imposition of the Comics Code Authority in the mid-1950s Wonder Woman 's outfit was revised to cover her back substantially, in order to comply with the Authority 's rule of minimum exposure. During Mike Sekowsky 's run in the late 1960s, Diana surrendered her powers and started using her own skill to fight crime. She wore a series of jumpsuits as her attire; the most popular of these was a white one.
After Sekowsky 's run ended in the early 1970s, Diana 's roots were reverted to her old mythological ones and she wore a more modernized version of her original outfit, a predecessor to her "bathing suit '' outfit. Later, in 1976, her glowing white belt was turned into a yellow one. For Series 3, artist Terry Dodson redrew her outfit as a strapless swimsuit.
After Crisis On Infinite Earths, George Pérez rebooted the character in 1987. She wore an outfit similar to her 1970s one, but now with a larger glowing golden belt. This outfit continued until William Messner - Loebs ' run, which had Diana pass on the role of Wonder Woman to Artemis. No longer Wonder Woman, Diana sported a new black biker - girl outfit designed by artist Mike Deodato Jr. After John Byrne took over writing and art duties, he redesigned the Wonder Woman outfit (Diana was reinstated as Wonder Woman at the end of Loebs ' run) and joined the emblem and belt together.
Her outfit did not receive any prominent change until after Infinite Crisis. Similar to her chest - plate, her glowing belt was also shaped into a "W ''. This outfit continued until issue # 600 -- J. Michael Straczynski 's run of Wonder Woman 's altered timeline changed her outfit drastically. Her outfit was redesigned by Jim Lee and included a redesigned emblem, a golden and red top, black pants, and a later discontinued blue - black jacket.
It was later retconned by Gail Simone that Wonder Woman 's outfit design had Amazonian roots. During a flashback in Vol. 3, Hippolyta is shown issuing orders to have a garment created for Diana, taking inspiration from the skies on the night Diana was born; a red hunter 's moon and a field of stars against deep blue, and the eagle breastplate being a symbol of Athena 's avian representations.
Another major outfit change came after DC Comics relaunched its entire line of publications, dubbing the event the New 52. Her original one-piece outfit was restored, although the color combination of red and blue was changed to dark red and blue-black. Her chest-plate, belt and tiara were also changed from gold to a platinum or sterling silver color. Along with her sword, she now also utilizes a shield. She wears many accessories, such as arm and neck jewelry styled with the "WW '' motif. Her outfit is no longer made of fabric, as it now resembles a type of light, flexible body armor. Her boots are now a very dark blue rather than red. The design previously included black trousers, but they were removed and the one-piece look was restored during the time of publication.
After the events of Convergence, Diana gets a new armored suit with the classic armor and tiara returning.
Wonder Woman 's outfit is redesigned to resemble the one worn in Batman v Superman: Dawn of Justice: it is a red bustier with a gold eagle, a blue leather skirt with gold edges and two stars, and knee-high red boots with gold knee guards and accents. Her tiara once again becomes gold with a red star. She occasionally wears a red cape with a gold clasp and edges.
Her tiara 's signature star symbol is now an eight pointed starburst. According to designer Lindy Hemming and director Patty Jenkins, every design decision made for Themyscira came down to the same question: "How would I want to live that 's badass? '' "To me, they should n't be dressed in armor like men. It should be different. It should be authentic and real (...) and appealing to women. '' When asked about the decision to give the Amazons heeled sandals, Jenkins explained that they also have flats for fighting, adding "It 's total wish - fulfillment (...) I, as a woman, want Wonder Woman to be sexy, hot as hell, fight badass, and look great at the same time (...) the same way men want Superman to have ridiculously huge pecs and an impractically big body. That makes them feel like the hero they want to be. And my hero, in my head, has really long legs. '' This corresponds to the original intent by William Moulton Marston, who wanted his character to be alluringly feminine.
The Pre-Crisis version of the invisible plane was a necessity because, before the Crisis on Infinite Earths rewrote Wonder Woman 's history -- along with the histories of many other heroes -- Wonder Woman simply could not fly. She grew increasingly powerful through the Silver Age of comic books and beyond, acquiring the power to ride wind currents, thus allowing her to imitate flight over short distances. This had limitations, however; for example, if there was no wind and the air was completely still, she would be trapped on the ground, or, if dropped from a height, she would fall helplessly out of control to the ground. Though this meant that she relied on the invisible plane less frequently, she always had need of it.
The Invisible Plane was a creation of Diana 's during her younger years on Paradise Island. She created it to be an improvement on her mother 's planes which would be shot down in Man 's World. The result of her innovation was an invisible plane that could fly at terrific speeds silently and not be detected by hostile forces, thus avoiding unpleasant conflict. Initially, it was portrayed as being transparent.
The Invisible Plane appeared in the very first comic stories, including All - Star Comics # 8, where it is shown as being able to fly at over 2,000 mph (3,200 km / h) and to send out rainbow rays that penetrate the mist around Paradise Island, as well as landing stealthily and having a built - in radio. Wonder Woman is seen storing the plane at an abandoned farm near Washington, D.C., in the barn; she goes there as Lt. Prince and changes clothes in some of the earliest tales. Though never explicitly stated, the Plane is presumably stored there when not in use for the rest of the Pre-Crisis era. In a story made shortly after, it flies at 40 miles (64 km) a second.
Shortly thereafter, the telepathic capacities of Wonder Woman 's tiara allow her to summon it, often to hover or swoop by the War Department, and she would exit on a rope ladder. She uses the plane to fly into outer space, and frequently transports Etta Candy and the Holliday Girls, Steve Trevor, or others. During the 1950s, the plane became a jet, and was often shown swooping over Lt. Prince 's office; she stripped out of her uniform at super speed and would bound to the plane. Though the Plane was depicted as semi-transparent for the reader 's convenience, in - story dialogue indicated that it actually was completely invisible, or at least able to become so as the need arose. (DC Comics Presents... # 41)
Wonder Woman continued to use the plane for super-speed, outer space, and multi-dimensional transport up until the un-powered era of Diana Prince. When Wonder Woman resumed super-powered, costumed operations in 1973, she continued to use the jet as before, but did glide on air currents for short distances. At one point, Aphrodite granted the plane the power to fly faster than the speed of light for any interstellar voyages her champion might undertake. Thanks to tinkering by gremlins, the Plane even developed intelligence and the power to talk. The Plane proved a good friend, eager to help his "mistress '' and her loved ones in any way possible. It got along especially well with Steve Trevor.
Diana 's bulletproof bracelets were formed from the remnants of Athena 's legendary shield, the Aegis, to be awarded to her champion. The shield was made from the indestructible hide of the great she - goat, Amalthea, who suckled Zeus as an infant. These forearm guards have thus far proven indestructible and able to absorb the impact of incoming attacks, allowing Wonder Woman to deflect automatic weapon fire and energy blasts. Diana can slam the bracelets together to create a wave of concussive force capable of making strong beings like Superman 's ears bleed. Recently, she gained the ability to channel Zeus 's lightning through her bracelets as well. Zeus explained to her that this power had been contained within the bracelets since their creation, because they were once part of the Aegis, and that he had only recently unlocked it for her use. After the 2011 relaunch of the character, it was revealed that Diana was the daughter of Zeus and Hippolyta and that the bracelets are able to keep the powers she had inherited from Zeus in check. In addition, Hephaestus has modified the bracelets to allow Wonder Woman the sorcerous ability to manifest a sword of grayish metal from each bracelet. Each sword, marked with a red star, takes shape from a flash of lightning, and when Wonder Woman is done with them, the swords disappear, supposedly, back into her bracelets. As such, she has produced other weapons from the bracelets in this way such as a bow that fires explosive arrows, spears and energy bolts among others.
The inspiration to give Diana bracelets came from the pair of bracelets worn by Olive Byrne, creator William Moulton Marston 's assistant and lover.
The Lasso of Truth, or Lasso of Hestia, was forged by Hephaestus from the golden girdle of Gaea. The original form of the Lasso in the Golden Age was called the Magic Lasso Of Aphrodite. It compels all beings who come into contact with it to tell the absolute truth and is virtually indestructible; in Identity Crisis, Green Arrow mistakenly describes it as "the only lie detector designed by Zeus. '' The only times it has been broken were when Wonder Woman herself refused to accept the truth revealed by the lasso, such as when she confronted Rama Khan of Jarhanpur, and by Bizarro in Matt Wagner 's non-canonical Batman / Superman / Wonder Woman: Trinity. During the Golden Age, the original form of the Lasso had the power to force anyone caught to obey any command given them, even overriding the mind control of others; this was effective enough to defeat strong - willed beings like Captain Marvel. Diana wields the Lasso with great precision and accuracy and can use it as a whip or noose.
Diana occasionally uses additional weaponry in formal battle, such as ceremonial golden armour with golden wings, pteruges, chestplate, and golden helmet in the shape of an eagle 's head. She possesses a magical sword forged by Hephaestus that is sharp enough to cut the electrons off an atom.
As early as the 1950s, Wonder Woman 's tiara has also been used as a razor - edged throwing weapon, returning to her like a boomerang. The tiara allows Wonder Woman to be invulnerable from telepathic attacks, as well as allowing her to telepathically contact people such as the Amazons back on Themyscira using the power of the red star ruby in its center.
The Golden, Silver, and Bronze Age portrayals of Wonder Woman showed her using a silent and invisible plane that could be controlled by mental command and fly at speeds up to 3,000 mph (4,800 km / h). Its appearance has varied over time; originally it had a propeller, while later it was drawn as a jet aircraft resembling a stealth aircraft.
During the Golden Age, Wonder Woman possessed a Purple Ray capable of healing even a fatal gunshot wound to the brain. She also possessed a Mental Radio that could let her receive messages from those in need.
As a recent temporary inductee into the Star Sapphires, Wonder Woman gained access to the violet power ring of love. This ring allowed her to alter her costume at will, create solid - light energy constructs, and reveal a person 's true love to them. She was able to combine the energy with her lasso to enhance its ability.
In her debut in All Star Comics # 8, Diana was a member of a tribe of women called the Amazons, native to Paradise Island -- a secluded island set in the middle of a vast ocean. Captain Steve Trevor 's plane crashes on the island and he is found alive but unconscious by Diana and fellow Amazon, and friend, Mala. Diana has him nursed back to health and falls in love with him. A competition is held amongst all the Amazons by Diana 's mother, the Queen of the Amazons Hippolyta, in order to determine who is the most worthy of all the women; Hippolyta charges the winner with the responsibility of delivering Captain Steve Trevor back to Man 's World and to fight for justice. Hippolyta forbids Diana from entering the competition, but she takes part nonetheless, wearing a mask to conceal her identity. She wins the competition and reveals herself, surprising Hippolyta, who ultimately accepts, and must give in to, Diana 's wish to go to Man 's World. She then is awarded a special uniform made by her mother for her new role as Wonder Woman and safely returns Steve Trevor back to his home country.
Wonder Woman was jointly raised by Queen Hippolyta, General Antiope, and Menalippe. Entertainment Weekly writes: "This trio of immortals is responsible for both raising and training Diana -- the only child on this estrogen heavy isle -- but they do n't always agree. Hippolyta, a revolutionary leader, longs to shelter her beloved daughter from the outside world, but Antiope, the Amazon responsible for Diana 's training, wants to prepare her. "She is the only child they raised together... and their love for her manifests in a different way for each of them. ''
Coming to America for the first time, Wonder Woman comes upon a wailing army nurse. Inquiring about her state, she finds that the nurse wanted to leave for South America with her fiancé but was unable to due to a shortage of money. As the two looked identical and Wonder Woman needed a job and a valid identity to look after Steve (who was admitted to the same army hospital), she gives the nurse the money she had earned earlier to help her go to her fiancé, in exchange for her credentials. The nurse reveals her name as Diana Prince; thus, Wonder Woman 's secret identity was created, and she began working as a nurse in the army.
Wonder Woman then took part in a variety of adventures, mostly side by side with Trevor. Her most common foes during this period would be Nazi forces led by a German baroness named Paula von Gunther, occasionally evil deities / demigods such as Mars and the Duke of Deception, and then colorful villains like Hypnota, Doctor Psycho, and the Cheetah.
In the Silver Age, Wonder Woman 's history received several changes. Her earlier origin, which had significant ties to World War II, was changed and her powers were shown to be the product of the gods ' blessings, corresponding to her epithet, "beautiful as Aphrodite, wise as Athena, stronger than Hercules, and swifter than Hermes ''. The concepts of Wonder Girl and Wonder Tot were also introduced during this period.
Wonder Woman (vol. 1) # 179 (Nov. 1968) showed Wonder Woman giving up her powers and returning her costume and title to her mother in order to continue staying in Man 's World. The reason behind this was that all the Amazons were shifting to another dimension, but Diana was unable to accompany them as she needed to stay behind to help Steve, who had been wrongly convicted. Thus, she no longer held the title of Wonder Woman and after meeting and training under a blind martial arts mentor I - Ching, Diana resumed crime fighting as the powerless Diana Prince. She ran a mod - boutique as a business and dressed in a series of jumpsuits while fighting crime. During this period, Samuel R. Delany took over scripting duties with issue # 202. Delany was initially supposed to write a six - issue story arc, which would culminate in a battle over an abortion clinic, but Delany was removed reportedly due to criticism from Gloria Steinem, who, not knowing the content of the issues Delany was writing, was upset that Wonder Woman had lost her powers and was no longer wearing her traditional costume.
In Wonder Woman Vol 1 # 204, Diana 's powers and costume were returned to her and she is once again reinstated as Wonder Woman. I - Ching is killed by a crazy sniper in the same issue. Later, Diana meets her sister Nubia, who is Hippolyta 's daughter fashioned out of dark clay (hence Nubia 's dark complexion). Nubia claimed to be the "Wonder Woman of The Floating Island '', and she challenges Diana to a duel which ends in a draw. Returning to her home, Nubia would have further adventures involving Diana.
The last issue of Volume 1 showed Diana and Steve Trevor announcing their love for each other, followed by their marriage.
The events of Crisis on Infinite Earths greatly changed and altered the history of the DC Universe. Wonder Woman 's history and origin were considerably revamped by the event. Wonder Woman was now an emissary and ambassador from Themyscira (the new name for Paradise Island) to Patriarch 's World, charged with the mission of bringing peace to the outside world. Various deities and concepts from Greek mythology were blended and incorporated into Wonder Woman 's stories and origin. Diana was formed out of clay of the shores of Themyscira by Hippolyta, who wished for a child; the clay figure was then brought to life by the Greek deities. The Gods then blessed and granted her unique powers and abilities -- beauty from Aphrodite, strength from Demeter, wisdom from Athena, speed and flight from Hermes, Eyes of the Hunter and unity with beasts from Artemis and sisterhood with fire and the ability to discern the truth from Hestia. Due to the reboot, Diana 's operating methods were made distinctive from Superman and Batman 's with her willingness to use deadly force when she judges it necessary. In addition, her previous history and her marriage to Steve Trevor were erased. Trevor was introduced as a man much older than Diana who would later on marry Etta Candy.
Starting in Wonder Woman Vol 2 # 51, the Amazons, who had revealed their presence to the world in Wonder Woman Vol 2 # 50, are blamed for a series of murders and for the theft of various artifacts. The Amazons are taken into custody, Queen Hippolyta is nowhere to be found, and Steve Trevor is forced by General Yedziniak to attack Themyscira. These events lead to the "War of the Gods ''. The culprit behind the murders, the thefts and the framing of the Amazons is revealed to be the witch Circe, who "kills '' Diana by reverting her form back into the clay she was born from. Later, Wonder Woman is brought back to life and, together with Donna Troy, battles Circe and ultimately defeats her. Circe would later return by unknown means.
When Hippolyta and the other Amazons were trapped in a demonic dimension, she started receiving visions about the death of Wonder Woman. Fearing her daughter 's death, Hippolyta made a false claim that Diana was not worthy of continuing her role as Wonder Woman, and arranged for a contest to determine who would be the new Wonder Woman, thus protecting Diana from her supposed fate. The participants of the final round were Diana and Artemis, and with the help of some mystic manipulation by Hippolyta, Artemis won the contest. Diana was thus forced to hand over her title and costume to Artemis, who became the new Wonder Woman, while Diana fought crime in an alternate costume. Artemis later died in battle with the White Magician -- thus, Hippolyta 's vision of a dying Wonder Woman did come true, albeit with Artemis rather than Diana as Wonder Woman. Diana once again became Wonder Woman, honoring a request Artemis made in her last moments. Artemis would later return as Requiem. Hippolyta eventually admitted to her daughter her own part in Artemis ' death, which strained their relationship, as Diana was unable to forgive her mother for knowingly sending another Amazon to her death for the sake of saving her own daughter.
The demon Neron engaged Diana in battle and managed to kill her. The Olympian Gods granted Diana divinity and the role of the Goddess of Truth, and she began to reside in Olympus; her mother Hippolyta then assumed the role of Wonder Woman and wore her own incarnation of the costume. In Wonder Woman Vol 2 # 136, Diana was banished from Olympus for interfering in earthly matters (as Diana was unable to simply watch over people 's misery on earth). She immediately returned to her duties as Wonder Woman, but ran into conflicts with her mother over her true place and role, as Hippolyta seemed accustomed to her life in America. Their fight remained unresolved, as Hippolyta died during an intergalactic war. Themyscira was destroyed during the war, but was restored and reformed as a collection of floating islands. Circe later resurrected Hippolyta in Wonder Woman Vol 3 # 8.
One of the events that led to Infinite Crisis was Wonder Woman 's killing of the villain Maxwell Lord in Wonder Woman (vol. 2) # 219. Maxwell Lord was mind - controlling Superman, who as a result came close to killing Batman. When Wonder Woman tried to stop Superman, Lord (who was unable to mind control her) made Superman see her as his enemy Doomsday trying to kill Lois Lane. Superman then attacked Wonder Woman, and a vicious battle ensued. Buying herself time by slicing Superman 's throat with her tiara, Wonder Woman caught Lord in her Lasso of Truth and demanded to know how to stop his control over Superman. As the lasso forced anyone it held to speak only the truth, Lord told her that the only way to stop him was to kill him. Left with no choice, Wonder Woman snapped Lord 's neck and ended his control over Superman. Unknown to her, the entire scene was broadcast live on every channel in the world by Brother Eye. The viewers were not aware of the full situation, and saw only Wonder Woman murdering a Justice League associate. Wonder Woman 's actions put her at odds with Batman and Superman, who saw her as a cold - blooded killer, despite the fact that she had saved their lives.
At the end of Infinite Crisis, Wonder Woman temporarily retires from her costumed identity. Diana, once again using the alias Diana Prince, joins the Department of Metahuman Affairs. Donna Troy becomes the new Wonder Woman and is captured by Diana 's enemies. Diana then goes on a mission to rescue her sister, battling Circe and Hercules. Diana defeats the villains, frees Donna, and takes up the role of Wonder Woman again. Circe places a spell on Diana, which renders her a normal, powerless human being when in the role of Diana Prince; her powers come to her only when she is in the role of Wonder Woman.
The storyline "The Circle '' was focused on the revelation of a failed assassination attempt on Diana when she was a baby, by four rogue Amazons. These Amazons -- Myrto, Charis, Philomela and Alkyone, collectively referred to as The Circle -- were Hippolyta 's personal guards and were extremely loyal and devoted to her. However, when Hippolyta decided to raise a daughter, The Circle was horrified and considered the baby ill - fate, one who would ruin their entire race. Thus, after Diana was sculpted out of clay and brought to life, The Circle decided to assassinate the baby. Their attempt was foiled however, and the four Amazons were imprisoned. After years, the Circle escaped their prisons with the help of Captain Nazi, and decided to accomplish their previously failed mission and kill Diana. Diana defeated Myrto, Charis, Philomela and then approached Alkyone, who runs off and succumbs to her death by falling into the ocean. The other three Amazons return to their prisons.
Issue # 600 introduced Wonder Woman to an alternate timeline created by the Gods in which Themyscira had been destroyed and the Amazons scattered around the world. In this timeline, Diana is an orphan raised in New York who is learning to cope with her powers. The entire world has forgotten Wonder Woman 's existence, and the main story of this run follows Diana as she tries to restore reality even though she does not properly remember it herself. Diana has no memories of her prior adventures as Wonder Woman, recollecting her memories in bits and pieces and receiving different abilities and resources (such as the power of flight and her lasso) as her adventure progresses. A trio of Death Goddesses called The Morrigan act as Wonder Woman 's main enemies. Diana ultimately defeats the evil goddesses and returns everything to normal.
In September 2011, DC Comics relaunched its entire publication line, dubbing the event the New 52. Among the major changes to the character, Wonder Woman now appears wearing a new costume similar to her older one, and has a completely new origin. In this new timeline, Wonder Woman is no longer a clay figure brought to life by the magic of the gods. Rather, she is the demigoddess daughter of Queen Hippolyta and Zeus, King of the Greek Gods. Her original origin is revealed as a cover story to explain Diana 's birth and protect her from Hera 's wrath. Diana has since taken on the role and title of the new "God of War ''.
The Greek messenger god, Hermes, entrusts Wonder Woman with the protection of Zola, a young woman pregnant with Zeus 's child, from Hera, who is seething with jealousy and determined to kill the child. With the appearance of a bizarre, new, chalk - white enemy, the goddess Strife (a reimagined version of Eris, the goddess of discord who had battled Wonder Woman in post-Crisis continuity), Wonder Woman discovers that she herself is the natural - born daughter of Hippolyta and Zeus, who, after a violent clash, became lovers. Hippolyta revealed Diana 's earlier origin story to be a lie, spread amongst the Amazons to protect Diana from the wrath of Hera, who is known for hunting and killing several illegitimate offspring of Zeus.
The first of these half - mortal siblings to reveal himself to Wonder Woman was her older half - brother, Lennox Sandsmark, who could transform himself into living, marble - like stone and, before his death, was revealed to be the father of Wonder Girl (Cassie Sandsmark). His killer, the First Born, the eldest progeny of Zeus, would become Wonder Woman 's first major super-villain of the New 52.
The story then focuses on Wonder Woman 's quest to rescue Zola from Hades, who had abducted her and taken her to Hell at the end of the sixth issue of the series. The male children of the Amazons are introduced and Diana learns about the birth of her "brothers '' -- the Amazons used to infrequently raid ships coming near their island and force themselves on the sailors before killing them. After nine months, the birth of the resulting female children was highly celebrated and the girls were inducted into the ranks of the Amazons, while the male children were rejected. In order to save the male children from being drowned by the Amazons, Hephaestus traded weapons to the Amazons in exchange for them.
After saving Zola from Hades, Wonder Woman tries to protect her further from Apollo, as it is prophesied that one of Zeus ' children will be his downfall, and Apollo believes this to be Zola 's child. Wonder Woman receives the power of flight when one of Hermes ' feathers pierces her thigh, and at the end of the issue Zola 's baby is stolen by Hermes and given to Demeter. The issue 's last page shows a dark and mysterious man rising from the snow, taking a helmet and disappearing. This man is later revealed to be Zeus ' first son, known only as First Born, who seeks to rule over Olympus and the rest of the world, and take Diana as his bride.
A stand - alone # 0 issue was released in September which explored Diana 's childhood and her tutelage under Ares, the God of War, now known most often as simply ' War '. The issue was narrated in the style of a typical Silver Age comic book and saw Diana in her childhood years. The main plot of the issue was Diana training under War, as he thought of her as an extraordinary girl with immense potential. The issue ultimately concluded with Diana learning and experiencing the importance of mercy, which she first learned when War showed it to her during their sparring. This later translated into her refusal to kill the Minotaur -- a task given to her by War; however, this show of mercy makes her a failure in War 's eyes, which was actually his fault since he inadvertently "taught '' her mercy and affection as his protege. Later in the series, Wonder Woman is forced to kill War during a conflict with her evil half - brother, Zeus ' son First Born, and herself becomes the God of War. After the Amazons are restored, she rules over them both as a warrior queen and God of War, as the ongoing conflict with First Born escalates. At the end of Azzarello 's run, as part of a final conflict, Wonder Woman kills First Born, while Zeke is revealed to have been Zeus ' plan for resurrection, with Zola revealed to have been a mortal shell for the goddess Athena, who gave birth to Zeus just as he once did to her. Wonder Woman pleads with Athena not to allow the Zola personality, whom she has grown to love as a friend, to die with Athena 's awakening. Athena leaves the site in animal form, leaving a stunned and confused Zola behind with Wonder Woman.
Wonder Woman appears as one of the lead characters in the Justice League title written by Geoff Johns and drawn by Jim Lee that was launched in 2011 as part of the New 52. In August 2012, she and Superman shared a kiss in Justice League Vol 2 # 12, which has since developed into a romantic relationship. DC launched a Superman / Wonder Woman series that debuted in late 2013, which focuses both on the threats they face together and on their romance as a "Power Couple ''.
After the events of Convergence, Wonder Woman dons a new costume. She also faces Donna Troy, who is now reimagined as a villainous doppelganger created by a vengeful Amazon elder, not only to physically defeat Wonder Woman but also to outmaneuver her in Themyscirian politics.
The New 52 version of Earth 2 was introduced in Earth 2 # 1 (2012). In that issue, the Earth 2 Wonder Woman is introduced via flashback. She, along with Superman and Batman, are depicted dying in battle with forces from Apokolips five years in the past. This Wonder Woman worshiped the deities of Roman mythology as opposed to the Greek; the Roman gods perish as a result of the conflict. An earlier version of the Earth - 2 Wonder Woman, prior to the Apokoliptian invasion, is seen in the comic book Batman / Superman, where she is seen riding a pegasus.
In Earth 2 # 8 (2013), Wonder Woman 's adult daughter, Fury, is introduced. She is loyal to the Apokoliptian Steppenwolf.
In 2016, DC Comics started DC Rebirth, a relaunch of its entire line of comic books.
Following the events of the Darkseid War, Wonder Woman is told by the dying Myrina Black that on the night of Diana 's birth, Hippolyta gave birth to a twin child. This child was revealed to be male, known as Jason, and is said to be incredibly powerful. Wonder Woman makes it her mission to find him. At the same time, she finds that the truth behind her origin and history is now muddled, as she remembers two versions: the pre-Flashpoint one, and the New 52 rendition. She cannot locate Themyscira or her fellow Amazons, and the Lasso of Truth no longer works for her.
The "Year One '' storyline retells Diana 's origin growing up on Themyscira. She lives an idyllic life and harbors interest for the outside world, and the first connection to it comes in the form of Steve Trevor, who crashes on the island and is the sole survivor. A contest is held to determine which Amazon is the best candidate to take Steve home, with Diana volunteering despite knowing the cost to leave the island is to never return. Diana wins the contest and departs with Steve. Once arriving in America, Diana is taken into custody by the government to discern her origins. She meets Etta Candy and Barbara Ann Minerva along the way. While incarcerated Diana is visited by the gods in animal form and bestow upon her powers of strength, speed, agility, durability, and flight. She discovers Ares, the god of war, is working to destroy humanity. Accepting her new role in Man 's World, Diana, with the help of the gods in animal form, subdues Ares with the lasso. Now called Wonder Woman, Diana becomes one of the world 's greatest heroes.
The "Lies '' story arc runs parallel and explores Diana 's search. No longer able to get into Mount Olympus, Diana tracks down Barbara Ann Minerva, the Cheetah, to get help. Cheetah agrees to help in exchange for Diana aiding her in killing the god Urzkartaga and end Minerva 's curse. The pair battle their way through Urzkartaga 's minions, the Bouda, and defeat Andres Cadulo, a worshiper of Urzkartaga that planned to sacrifice Steve Trevor to the plant god. Once reverted to her human form, Minerva agreed to help Wonder Woman find her way back to Paradise Island. During this time Wonder Woman reconnects with Steve. Minerva eventually realizes Paradise Island is an embodiment of emotion instead of a physical place, so Wonder Woman and Steve head out to find the island. They succeed and Wonder Woman is greeted by her mother and sisters, though Steve senses something is wrong. Wonder Woman comes to realize nothing is as she remembers and, upon using the Lasso of Truth, discovers everything she thought she knew was a lie: she never really returned to Themyscira after departing with Steve years earlier. The revelation shatters Diana 's mind and she is left nearly insane. Veronica Cale, a businesswoman who has been desiring to find Themyscira and the leader of Godwatch, sends a military group called Poison after her, but Diana 's state has left her vulnerable and oblivious to the danger she and Steve are in. Steve wards them off long enough for them to be rescued, and reluctantly places Diana in a mental hospital so she can get help. While there she comes to grasp the reality she thought she knew was false, eventually coming out of her stupor and able to rejoin the others in tracking down Veronica Cale, who is trying to find Themyscira.
As a compassionate warrior with god - like strength, Wonder Woman preferred peace and love to war and violence, a contradiction that has long made her both a symbol of female empowerment and a center of controversy. The early Wonder Woman stories featured an abundance of bondage portrayals, which worried critics.
Although created to be a positive role model and a strong female character for girls and boys, Wonder Woman has had to deal with the misogyny that was commonplace in the comic book industry for decades. For example, Wonder Woman was a founding member of the Justice Society of America, a roster that included the original Flash and Green Lantern. Wonder Woman was an experienced leader and easily the most powerful of them all, yet she was rendered a secretary. This was also accompanied by her losing her powers or getting captured on most Justice League adventures. During the ' 50s and ' 60s, comic writers regularly made Wonder Woman lovesick over Steve Trevor, a Major in the United States Army. Stories frequently featured Wonder Woman hoping or imagining what it would be like to marry him.
Wonder Woman was named the 20th greatest comic book character by Empire magazine. She was ranked sixth in Comics Buyer 's Guide 's "100 Sexiest Women in Comics '' list. In May 2011, Wonder Woman placed fifth on IGN 's Top 100 Comic Book Heroes of All Time.
Not all reaction to Wonder Woman has been positive. In the controversial Seduction of the Innocent, psychiatrist Fredric Wertham condemned Wonder Woman, claiming that her strength and independence made her a lesbian.
Feminist icon Gloria Steinem, founder of Ms. magazine, was responsible for the return of Wonder Woman 's original abilities. Offended that the most famous female superhero had been depowered into a boyfriend - obsessed damsel in distress, Steinem placed Wonder Woman (in costume) on the cover of the first issue of Ms. (1972) -- Warner Communications, DC Comics ' owner, was an investor -- which also contained an appreciative essay about the character. Wonder Woman 's powers and traditional costume were restored in issue # 204 (January -- February 1973).
In 1972, just months before the US Supreme Court 's groundbreaking decision in Roe v. Wade, science fiction author Samuel R. Delany had planned a story for Ms. that culminated in a plainclothes Wonder Woman protecting an abortion clinic. However, Steinem disapproved of Wonder Woman being out of costume, and the controversial story line never happened.
Wonder Woman was originally intended to influence women of all ages by demonstrating that physical and mental strength, strong values, and ethical attributes are not acquired by men alone. "Wonder Woman symbolizes many of the values of the women 's culture that feminists are now trying to introduce into the mainstream: strength and self - reliance for women; sisterhood and mutual support among women; peacefulness and esteem for human life; a diminishment both of ' masculine ' aggression and of the belief that violence is the only way of solving conflicts, '' Steinem wrote at the time.
The origin of Wonder Woman and the psychological reasoning behind why William Moulton Marston created her in the way he did illustrated Marston 's educational, ethical, and moral values. "William Marston intended her to be a feminist character, showing young boys the illimitable possibilities of a woman who could be considered just as strong as the famed Superman. '' Gladys L. Knight explains the impact and influence that superheroes have had on society from the 1870s to the present day.
Marc DiPaolo introduces us to Wonder Woman 's creator and history, demonstrating how she is a "WWII veteran, a feminist icon, and a sex symbol '' throughout her "career ''. Wonder Woman has starred in multiple films and is most commonly known for her red, white and blue one - piece and her tall, sexy assertiveness. Less widely known is that she is a significant part of comic and superhero history because of how her character has influenced real people of all ages, sexes, ethnicities, and races. "Marston created the comic book character Wonder Woman to be both strong and sexy, as a means of encouraging woman to emulate her unapologetic assertiveness. ''
Continuing her legacy as an influential feminist icon, in 2015 Wonder Woman became the first superhero to officiate a same - sex wedding in a comic series.
On October 21, 2016, the United Nations controversially named Wonder Woman a UN Honorary Ambassador for the Empowerment of Women and Girls in a ceremony attended by Under - Secretary - General for Communications and Public Information Cristina Gallach and by actors Lynda Carter and Gal Gadot. The character was dropped from the role two months later after a petition against the appointment stated Wonder Woman was "not culturally... sensitive '' and it was "alarming that the United Nations would consider using a character with an overtly sexualized image ''.
Gloria Steinem, editor for Ms. Magazine and a longtime supporter of Wonder Woman, stated "... (Marston) had invented Wonder Woman as a heroine for little girls, and also as a conscious alternative to the violence of comic books for boys. '' Illustrator Jason Badower described a near - international incident in one story (involving an unnamed Russian general rolling dozens of tanks and munitions through a shady mountain pass) as an outstanding example of standing up to bullies. "She ends up deflecting a bullet back and disarming the general, '' he says, adding that "she does n't actually do anything violent in the story. I just think that Wonder Woman is smarter than that. ''
Nick Pumphrey stated that Wonder Woman stands as a non-violent beacon of hope and inspiration for women and men. Grant Morrison stated "I sat down and I thought, ' I do n't want to do this warrior woman thing. ' I can understand why they 're doing it, I get all that, but that 's not what (Wonder Woman creator) William Marston wanted, that 's not what he wanted at all! His original concept for Wonder Woman was an answer to comics that he thought were filled with images of blood - curdling masculinity, and you see the latest shots of Gal Gadot in the costume, and it 's all sword and shield and her snarling at the camera. Marston 's Diana was a doctor, a healer, a scientist. ''
William Marston 's earliest works were notorious for containing subtext with "sapphic undertones ''. Fredric Wertham 's Seduction of the Innocent referred to her as the "lesbian counterpart to Batman '' (whom he also identified as a homosexual). In the decades since, DC Comics attempted to downplay her sexuality, and comic book writers and artists did n't do much more than hint at Wonder Woman 's erotic legacy.
In Grant Morrison 's 2016 comic Wonder Woman: Earth One, which exists parallel to the current DC Rebirth canon, Diana is depicted being kissed on her right cheek by a blonde woman who has put her left arm around Diana.
Wonder Woman feels she need not be "labelled sexually '', that she "loves people for who they are '' and is "just herself ''. Coming from a society populated only by women, what outsiders would call "lesbian '' may simply have been "straight '' for them. "Her culture is completely free from the shackles of heteronormativity in the first place so she would n't even have any ' concept ' of gender roles in sex. '' Wonder Woman has been suggested to be queer or bisexual, as she and another Amazon, Io, had reciprocal feelings for each other.
In 2015, Sensation Comics featured Wonder Woman officiating a same - sex wedding (Issue # 48) drawn by Australian illustrator Jason Badower. "My country is all women. To us, it 's not ' gay ' marriage. It 's just marriage '', she states to Superman. Inspired by the June Supreme Court ruling that established marriage equality in all 50 United States, Badower says DC Comics was "fantastic '' about his idea for the issue. In an interview with The Sydney Morning Herald, he said his editor "was like ' great, I love it! Let 's do it. ' It was almost anticlimactic. '' "Diana 's mother, the queen, at the very least authorized or in some cases officiated these weddings, '' Badower says. "It just seems more like a royal duty Diana would take on, that she would do for people that would appreciate it. ''
Wonder Woman 's advocacy for gay rights was taken a step further in September 2016, when comic book writer Greg Rucka announced that she is canonically bisexual, according to her rebooted Rebirth origin. Rucka stated that in his opinion, she "has to be '' queer and has "obviously '' had same - sex relationships on an island surrounded by beautiful women. This follows the way Wonder Woman was written in the alternate continuity or non-canon Earth One by Grant Morrison, and fellow Wonder Woman writer Gail Simone staunchly supported Rucka 's statement. Surprised at the amount of backlash from the character 's fanbase, Rucka responded to "haters '' that consensual sex with women is just as important to Wonder Woman as the Truth is to Superman.
Wonder Woman actress Gal Gadot reacted positively to Diana 's rebooted orientation, and agreed her sexuality was impacted by growing up in the women - only Themyscira.
Since her debut in All Star Comics # 8 (December 1941), Diana Prince / Wonder Woman has appeared in a number of formats besides comic books. These include animated television shows such as Super Friends, direct - to - DVD animated films, video games, the 1970s live - action television show Wonder Woman, the 2014 CGI theatrical release The Lego Movie (technically her silver screen debut), and the live - action DCEU films Batman v Superman: Dawn of Justice (2016) and Wonder Woman (2017). In November 2017, she will appear in the DCEU release Justice League.
An upcoming biographical drama titled Professor Marston & the Wonder Women, about Elizabeth Holloway Marston, her husband William Moulton Marston, Olive Byrne, and the creation of Wonder Woman, is scheduled for release on October 27, 2017.
|
what does the bible say about washing of feet | Foot washing - wikipedia
Maundy (from the Vulgate of John 13: 34 mandatum meaning "command ''), or the Washing of the Feet, is a religious rite observed by various Christian denominations. The name is taken from the first few Latin words sung at the ceremony of the washing of the feet, "Mandatum novum do vobis ut diligatis invicem sicut dilexi vos '' ("I give you a new commandment, That ye love one another as I have loved you '') (John 13: 34), and from the Latin form of the commandment of Christ that we should imitate His loving humility in the washing of the feet (John 13: 14 -- 17). The term mandatum (maundy), therefore, was applied to the rite of foot - washing on this day of the Christian Holy Week called Maundy Thursday.
John 13: 1 -- 17 recounts Jesus ' performance of this act. In verses 13: 14 -- 17, He instructs His disciples:
If I then, your Lord and Teacher, have washed your feet, you also ought to wash one another 's feet. For I have given you an example, that you should do as I have done to you. Most assuredly, I say to you, a servant is not greater than his master; nor is he who is sent greater than he who sent him. If you know these things, blessed are you if you do them.
Many denominations (including Anglicans, Lutherans, Methodists, Presbyterians, Mennonites, and Catholics) therefore observe the liturgical washing of the feet on Maundy Thursday of Holy Week. Moreover, for some denominations, foot - washing was an example, a pattern. Many groups throughout Church history and many modern denominations have practiced foot washing as a church ordinance including Adventists, Anabaptists, Baptists, and Pentecostals.
The origin of the word Maundy is debated; at least two etymologies have been proposed.
The root of this practice appears to be found in the hospitality customs of ancient civilizations, especially where sandals were the chief footwear. A host would provide water for guests to wash their feet, provide a servant to wash the feet of the guests or even serve the guests by washing their feet. This is mentioned in several places in the Old Testament of the Bible (e.g. Genesis 18: 4; 19: 2; 24: 32; 43: 24; I Samuel 25: 41; et al.), as well as other religious and historical documents. A typical Eastern host might bow, greet, and kiss his guest, then offer water to allow the guest to wash his feet or have servants do it. Though the wearing of sandals might necessitate washing the feet, the water was also offered as a courtesy even when shoes were worn.
I Samuel 25: 41 is the first biblical passage where an honored person offers to wash feet as a sign of humility. In John 12, Mary of Bethany anointed Jesus ' feet, presumably in gratitude for raising her brother Lazarus from the dead and in preparation for his death and burial. The Bible records washing of the saints ' feet being practiced by the primitive church in I Timothy 5: 10, perhaps in reference to piety, submission and / or humility. There are several names for this practice: maundy, foot washing, washing the saints ' feet, pedilavium, and mandatum.
Christian denominations that observe foot washing do so on the basis of the authoritative example and command of Jesus as found in the Gospel of John 13: 1 -- 15.
Jesus demonstrates the custom of the time when, in the Gospel of Luke, chapter 7, verse 44, he comments on the lack of hospitality in one Pharisee 's home, where no water was provided to wash his feet.
The rite of foot washing finds its roots in scripture. Even after the death of the apostles or the end of the Apostolic Age, the practice was continued.
It appears to have been practiced in the early centuries of post-apostolic Christianity, though the evidence is scant. For example, Tertullian (145 -- 220) mentions the practice in his De Corona, but gives no details as to who practiced it or how it was practiced. It was practiced by the Church at Milan (ca. A.D. 380), is mentioned by the Council of Elvira (A.D. 300), and is even referenced by Augustine (ca. A.D. 400).
Observance of foot washing at the time of baptism was maintained in Africa, Gaul, Germany, Milan, northern Italy, and Ireland.
According to the Mennonite Encyclopedia "St. Benedict 's Rule (A.D. 529) for the Benedictine Order prescribed hospitality feetwashing in addition to a communal feetwashing for humility ''; a statement confirmed by the Catholic Encyclopedia. It apparently was established in the Roman church, though not in connection with baptism, by the 8th century.
The Albigenses observed feetwashing in connection with communion, and the Waldenses ' custom was to wash the feet of visiting ministers.
There is some evidence that it was observed by the early Hussites; and the practice was a meaningful part of the 16th century radical reformation. Foot washing was often "rediscovered '' or "restored '' by Protestants in revivals of religion in which the participants tried to recreate the faith and practice of the apostolic era which they had abandoned or lost.
In the Catholic Church, the ritual washing of feet is now associated with the Mass of the Lord 's Supper, which celebrates in a special way the Last Supper of Jesus, before which he washed the feet of his twelve apostles.
Evidence for the practice on this day goes back at least to the latter half of the twelfth century, when "the pope washed the feet of twelve sub-deacons after his Mass and of thirteen poor men after his dinner. '' From 1570 to 1955, the Roman Missal printed, after the text of the Holy Thursday Mass, a rite of washing of feet unconnected with the Mass. For many years Pius IX performed the foot washing in the sala over the portico of Saint Peter 's, Rome.
In 1955 Pope Pius XII revised the ritual and inserted it into the Mass. Since then, the rite is celebrated after the homily that follows the reading of the gospel account of how Jesus washed the feet of his twelve apostles (John 13: 1 -- 15). Some persons who have been selected -- usually twelve, but the Roman Missal does not specify the number -- are led to chairs prepared in a suitable place. The priest goes to each and, with the help of the ministers, pours water over each one 's feet and dries them. Some advocate restricting this ritual to clergy, or at least to men.
In a notable break from the 1955 norms, Pope Francis washed the feet of two women and of Muslims at a juvenile detention center in Rome in 2013. In 2016 it was announced that the Roman Missal had been revised to permit women to have their feet washed on Maundy Thursday; previously it permitted only males. In 2016 Catholic priests around the world washed both women 's and men 's feet on Holy Thursday; "their gesture of humility represented to many the progress of inclusion in the Catholic church. ''
At one time, most of the European monarchs also performed the Washing of Feet in their royal courts on Maundy Thursday, a practice continued by the Austro - Hungarian Emperor and the King of Spain up to the beginning of the 20th century (see Royal Maundy). In 1181 Roger de Moulins, Grand Master of the Knights Hospitaller issued a statute declaring, "In Lent every Saturday, they are accustomed to celebrate maundy for thirteen poor persons, and to wash their feet, and to give to each a shirt and new breeches and new shoes, and to three chaplains, or to three clerics out of the thirteen, three deniers and to each of the others, two deniers ''.
The Eastern Orthodox and Eastern Catholic Churches practice the ritual of the Washing of Feet on Holy and Great Thursday (Maundy Thursday) according to their ancient rites. The service may be performed either by a bishop, washing the feet of twelve priests, or by a hegumen (abbot), washing the feet of twelve members of the brotherhood of his monastery. The ceremony takes place at the end of the Divine Liturgy.
After Holy Communion, and before the dismissal, the brethren all go in procession to the place where the Washing of Feet is to take place (it may be in the center of the nave, in the narthex, or a location outside). After a psalm and some troparia (hymns) an ektenia (litany) is recited, and the bishop or abbot reads a prayer. Then the deacon reads the account in the Gospel of John, while the clergy perform the roles of Christ and his apostles as each action is chanted by the deacon. The deacon stops when the dialogue between Jesus and Peter begins. The senior - ranking clergyman among those whose feet are being washed speaks the words of Peter, and the bishop or abbot speaks the words of Jesus. Then the bishop or abbot himself concludes the reading of the Gospel, after which he says another prayer and sprinkles all of those present with the water that was used for the foot washing. The procession then returns to the church and the final dismissal is given as normal.
Foot washing rites are also observed in the Oriental Orthodox churches on Maundy Thursday.
In the Coptic Orthodox Church the service is performed by the parish priest. He blesses the water for the foot washing with the cross, just as he would for blessing holy water, and he washes the feet of the entire congregation.
In the Syrian Orthodox Church, this service is performed by a bishop or priest. Twelve men, both priests and lay people, are selected, and the bishop or priest washes and kisses the feet of those twelve men. It is not merely a dramatization of the past event; it is also a prayer in which the whole congregation prays to be washed and cleansed of their sins.
Foot washing is observed by numerous Protestant and proto - Protestant groups, including Seventh - day Adventist, Pentecostal, and Pietistic groups, some Anabaptists, and several types of Southern Baptists. Foot washing rites are also practiced by many Anglican, Lutheran and Methodist churches, where foot washing is most often experienced in connection with Maundy Thursday services and, sometimes, at ordination services, where the Bishop may wash the feet of those who are to be ordained. Though history shows that foot washing has at times been practiced in connection with baptism, and at times as a separate occasion, by far its most common practice has been in connection with the Lord 's Supper service. The Moravian Church practiced foot washing until 1818. There has been some revival of the practice as other liturgical churches have rediscovered it.
The observance of washing the saints ' feet is quite varied, but a typical service follows the partaking of unleavened bread and wine. Deacons (in many cases) place pans of water in front of pews that have been arranged for the service. The men and women participate in separate groups, men washing men 's feet and women washing women 's feet. Each member of the congregation takes a turn washing the feet of another member. Each foot is placed one at a time into the basin of water, is washed by cupping the hand and pouring water over the foot, and is dried with a long towel girded around the waist of the member performing the washing. Most of these services appear to be quite moving to the participants.
Among groups that do not observe foot washing as an ordinance or rite, the example of Jesus is usually held to be symbolic and didactic. Among these groups, foot washing is nevertheless sometimes literally practiced. First, some reserve it to be a practice of hospitality or a work of necessity. Secondly, some present it as a dramatic lesson acted out in front of the congregation.
Groups descending from the 1708 Schwarzenau Brethren, such as the Grace Brethren, Church of the Brethren, Brethren Church, Old German Baptist Brethren, and the Dunkard Brethren regularly practice foot washing (generally called "feetwashing '') as one of three ordinances that compose their Lovefeast, the others being the Eucharist and a fellowship meal. Historically related groups such as the Amish and most Mennonites also wash feet, tracing the practice to the 1632 Dordrecht Confession of Faith. For members, this practice promotes humility towards and care for others, resulting in a higher egalitarianism among members.
Many Baptists observe the literal washing of feet as a third ordinance. The communion and foot washing service is practiced regularly by members of the Separate Baptists in Christ, General Association of Baptists, Free Will Baptists, Primitive Baptists, Union Baptists, Old Regular Baptist, Christian Baptist Church of God, and Brethren in Christ. Feet washing is also practiced as a third ordinance by many Southern Baptists, General Baptists, and Independent Baptists.
In the mid-1830s, Joseph Smith introduced the original temple rites of the Latter Day Saint movement in Kirtland, Ohio, which primarily involved foot washing, followed by speaking in tongues and visions. This foot washing took place exclusively among men, and was based upon the Old and New Testament. After Joseph Smith was initiated into the first three degrees of Freemasonry, this was adapted into the whole body "Endowment '' ritual more similar to contemporary Mormon practice, which is nearly identical to Masonic temple rites, and does not specifically involve the feet. In 1843, Smith included a foot washing element in the faith 's second anointing ceremony in which elite married couples are anointed as heavenly monarchs and priests.
The True Jesus Church includes footwashing as a scriptural sacrament based on John 13: 1 -- 11. Like the other two sacraments, namely Baptism and the Lord 's Supper, members of the church believe that footwashing imparts salvific grace to the recipient -- in this case, to have a part with Christ (John 13: 8).
Most Church of God denominations also include footwashing in their Passover ceremony as instructed by Jesus in John 13: 1 -- 11.
Most Seventh - day Adventist congregations schedule an opportunity for foot washing preceding each quarterly (four times a year) Communion service. As with their "open '' Communion, all believers in attendance, not just members or pastors, are invited to share in the washing of feet with another: men with men, women with women, and frequently, spouse with spouse. This service is alternatively called the Ordinance of Foot - Washing or the Ordinance of Humility. Its primary purpose is to renew the cleansing that only comes from Christ, but secondarily to seek and celebrate reconciliation with another member before Communion / the Lord 's Supper.
A number of Jewish rabbis who disagree with the initiation custom of brit milah, or circumcision of a male baby, instead have offered brit shalom, or a multi-part naming ceremony which eschews circumcision. One portion of the ritual, Brit rechitzah, involves the washing of the baby 's feet.
|
who sang white sportcoat and a pink carnation | A White Sport Coat - Wikipedia
"A White Sport Coat (and a Pink Carnation) '' is a 1957 country and western song with words and music both written by Marty Robbins. It was recorded January 25, 1957, and released on the Columbia Records label March 4, 1957. The arranger and recording session conductor was Ray Conniff, an in - house conductor / arranger at Columbia. Robbins had demanded to have Conniff in charge of the song after his earlier hit, "Singing the Blues '', had been quickly eclipsed on the charts by Guy Mitchell'a cover version scored and conducted by Conniff in October, 1956.
Robbins recalled writing the song in about 20 minutes while being driven in a car. He is said to have had the inspiration for the song while driving from a motel to a venue in Ohio where he was due to perform that evening. During the course of the journey, he passed a high school where the students were dressed for their prom.
The song reached number one on the U.S. country chart, becoming Marty Robbins ' third number one; it also reached number two on the Billboard pop chart in the U.S. and number one on the Australian music charts in 1957. A version by Johnny Desmond also received some airplay, peaking at # 62 on the US pop charts.
In the UK the song was a notable hit for the English rock and roll singer Terry Dene, and also for The King Brothers. The Terry Dene version reached # 18 in the UK charts, while The King Brothers ' recording peaked at # 6, both in early summer 1957.
Jimmy Buffett 's 1973 album A White Sport Coat and a Pink Crustacean spoofs the title of the song.
|
who wrote they say it's your birthday | Birthday (Beatles song) - wikipedia
"Birthday '' is a song written by Lennon -- McCartney and performed by the Beatles on their double album The Beatles (often known as "the White Album ''). It is the opening track on the third side of the LP (or the second disc in CD versions of the record). The song is an example of the Beatles ' return to more traditional rock and roll form, although their music had increased in complexity and it had developed more of its own characteristic style by this point. Surviving Beatles McCartney and Ringo Starr performed it for Starr 's 70th birthday at Radio City Music Hall on 7 July 2010.
The song was largely written during a recording session at Abbey Road Studios on 18 September 1968 by John Lennon and Paul McCartney. McCartney recalled: "We thought, ' Why not make something up? ' So we got a riff going and arranged it around this riff. So that is 50 - 50 John and me, made up on the spot and recorded all in the same evening. '' During the session, the Beatles and the recording crew made a short trip around the corner to McCartney 's house to watch the 1956 rock & roll movie The Girl Ca n't Help It, which was being shown for the first time on British television. After the movie they returned to record "Birthday ''.
George Martin was away so his assistant Chris Thomas produced the session. His memory is that the song was mostly Paul 's: "Paul was the first one in, and he was playing the ' Birthday ' riff. Eventually the others arrived, by which time Paul had literally written the song, right there in the studio. '' Everyone in the studio sang in the chorus and it was 5 am by the time the final mono mix was completed.
John Lennon said in his Playboy interview in 1980: "' Birthday ' was written in the studio. Just made up on the spot. I think Paul wanted to write a song like ' Happy Birthday Baby ' (sic), the old fifties hit. But it was sort of made up in the studio. It was a piece of garbage. ''
"Birthday '' begins with an intro drum fill, then moves directly into a blues progression in A (in the form of a guitar riff doubled by the bass) with McCartney singing at the top of his chest voice with Lennon on a lower harmony. After this section, a drum break lasting eight measures brings the song into the middle section, which rests entirely on the dominant. A repeat of the blues progression / guitar riff instrumental section, augmented by piano brings the song into a bridge before returning to a repeat of the first vocal section, this time with the piano accompaniment.
McCartney released a live version on 8 October 1990 in the UK, with a US release albeit only as a cassette on 16 October. The single reached number 29 on the UK singles chart. The B - side was a live version of "Good Day Sunshine ''. McCartney also released a 12 '' single and CD single with those songs and two more tracks, "P.S. Love Me Do '' and "Let ' Em In ''. "P.S. Love Me Do '' is a combination of "P.S. I Love You '' and "Love Me Do ''.
Underground Sunshine recorded the song as a single in 1969. Their version was a minor hit in the US, reaching # 19 on the Cash Box chart and # 26 on the Billboard Hot 100.
Paul Weller covered the song for McCartney 's 70th birthday. This version was available for download on 18 June 2012 for one day only. Even with this limited mode of distribution, the track reached # 64 on the UK charts.
|
who sings she shook me all night long | You Shook Me All Night Long - wikipedia
"You Shook Me All Night Long '' is a song by Australian hard rock band AC / DC, from the album Back in Black. The song also reappeared on their later album Who Made Who. AC / DC 's first single with Brian Johnson as the lead singer, it reached number 35 on the USA 's Hot 100 pop singles chart in 1980. The single was re-released internationally in 1986, following the release of the album Who Made Who. The re-released single in 1986 contains the B - side (s): B1. "She 's Got Balls '' (Live, Bondi Lifesaver ' 77); B2. "You Shook Me All Night Long '' (Live ' 83 -- 12 - inch maxi - single only).
"You Shook Me All Night Long '' placed at number 10 on VH1 's list of "The 100 Greatest Songs of the 80s ''. It was also number 1 on VH1 's "Top Ten AC / DC Songs ''. Guitar World placed "You Shook Me All Night Long '' at number 80 on their "100 Greatest Guitar Solos '' list.
The song has also become a staple of AC / DC concerts, and is rarely excluded from the setlist.
Four live versions of the song were officially released. The first one appeared on the 1986 maxi - single "You Shook Me All Night Long ''; the second one was included on the band 's album Live; the third version is on the soundtrack to the Howard Stern movie Private Parts, and also appears on the AC / DC box set Backtracks; and the fourth one is on the band 's live album, Live at River Plate.
"You Shook Me All Night Long '' was also the second song to be played by AC / DC on Saturday Night Live in 2000, following their performance of "Stiff Upper Lip. '' When AC / DC was inducted to the Rock and Roll Hall of Fame in 2003 by Steven Tyler of Aerosmith, they performed this song with Tyler.
Johnson performed the song with Billy Joel at Madison Square Garden in New York, US in March 2014. The Salon publication stated on the following morning in its introduction to the video footage of the performance: "This will either be your favorite video today, or a total musical nightmare! ''
The song is in the key of G major. The main verse and riff follow a G -- C -- D chord progression. The lyrics borrow heavily from the Willie Dixon tune "You Shook Me ''.
Two versions of the music video exist. The first version, directed by Eric Dionysius and Eric Mistler, is similar to the other Back in Black videos ("Back in Black '', "Hells Bells '', "What Do You Do For Money Honey '', "Rock and Roll Ai n't Noise Pollution '' and "Let Me Put My Love Into You '') and is available on the special Back in Black, The Videos. It is also included on the Backtracks box set.
In the second version, directed by David Mallet and released six years after the song 's original release, Angus and Malcolm Young follow Johnson around the English town of Huddersfield, with Angus Young wearing his signature schoolboy outfit. The video casts the English glamour model Corinne Russell, a former Hill 's Angel and Page 3 girl, along with other leather - clad women wearing suits with zips at the groin, pedaling bicycles like machines in the background.
The VH1 series Pop - Up Video revealed that, during the scene with the mechanical bull, the woman playing Johnson 's lover accidentally jabbed herself with her spur twice. The roadie who came to her aid married her a year later, and Angus Young gave them a mechanical bull as a joke wedding present. In the original 1980 video Phil Rudd played drums, while the 1986 video showed Simon Wright, who had replaced Rudd in 1983. Rudd returned to AC / DC in 1994.
|
who plays gandalf the grey in lord of the rings | Ian McKellen - wikipedia
Sir Ian Murray McKellen, CH, CBE (born 25 May 1939) is an English actor. He is the recipient of six Laurence Olivier Awards, a Tony Award, a Golden Globe Award, a Screen Actors Guild Award, a BIF Award, two Saturn Awards, four Drama Desk Awards, and two Critics ' Choice Awards. He has also received two Oscar nominations, four BAFTA nominations and five Emmy Award nominations.
McKellen 's career spans genres ranging from Shakespearean and modern theatre to popular fantasy and science fiction. The BBC states his "performances have guaranteed him a place in the canon of English stage and film actors ''. A recipient of every major theatrical award in the UK, McKellen is regarded as a British cultural icon. He started his professional career in 1961 at the Belgrade Theatre as a member of their highly regarded repertory company. In 1965 McKellen made his first West End appearance. In 1969 he was invited to join the Prospect Theatre Company to play the lead parts in Shakespeare 's Richard II and Marlowe 's Edward II, and he firmly established himself as one of the country 's foremost classical actors. In the 1970s McKellen became a stalwart of the Royal Shakespeare Company and the National Theatre of Great Britain. He achieved worldwide fame for his notable film roles, which include Magneto in the X-Men films and Gandalf in The Lord of the Rings and The Hobbit trilogies, both of which introduced McKellen to a new generation.
McKellen was appointed Commander of the Order of the British Empire in the 1979 Birthday Honours, was knighted in the 1991 New Year Honours for services to the performing arts, and made a Companion of Honour for services to drama and to equality in the 2008 New Year Honours. He has been openly gay since 1988, and continues to be a champion for LGBT social movements worldwide. He was made a Freeman of the City of London in October 2014.
McKellen was born on 25 May 1939 in Burnley, Lancashire, the son of Margery Lois (née Sutcliffe) and Denis Murray McKellen, a civil engineer. He was their second child, with a sister, Jean, five years his senior. Shortly before the outbreak of the Second World War in September 1939, his family moved to Wigan. They lived there until Ian was twelve years old, before relocating to Bolton in 1951, after his father had been promoted. The experience of living through the war as a young child had a lasting impact on him, and he later said that "only after peace resumed... did I realise that war was n't normal. '' When an interviewer remarked that he seemed quite calm in the aftermath of 11 September attacks, McKellen said: "Well, darling, you forget -- I slept under a steel plate until I was four years old. '' Even though he only lived in Bolton for less than seven years, as opposed to the eleven years in Wigan beforehand, he refers to Bolton as his "Hometown ''.
McKellen 's father was a civil engineer and lay preacher, and was of Protestant Irish and Scottish descent. Both of McKellen 's grandfathers were preachers, and his great - great - grandfather, James McKellen, was a "strict, evangelical Protestant minister '' in Ballymena, County Antrim. His home environment was strongly Christian, but non-orthodox. "My upbringing was of low nonconformist Christians who felt that you led the Christian life in part by behaving in a Christian manner to everybody you met. '' When he was 12, his mother died of breast cancer; his father died when he was 24. After he came out to his stepmother, Gladys McKellen, who was a member of the Religious Society of Friends, he said, "Not only was she not fazed, but as a member of a society which declared its indifference to people 's sexuality years back, I think she was just glad for my sake that I was n't lying anymore. '' His great - great - grandfather Robert J. Lowes was an activist and campaigner in the ultimately successful campaign for a Saturday half - holiday in Manchester, the forerunner to the modern five - day work week, thus making Lowes a "grandfather of the modern weekend ''.
McKellen attended Bolton School (Boys ' Division), of which he is still a supporter, attending regularly to talk to pupils. McKellen 's acting career started at Bolton Little Theatre, of which he is now the patron. An early fascination with the theatre was encouraged by his parents, who took him on a family outing to Peter Pan at the Opera House in Manchester when he was three. When he was nine, his main Christmas present was a wood and Bakelite fold - away Victorian theatre from Pollocks Toy Theatres, with cardboard scenery and wires to push on the cut - outs of Cinderella and of Laurence Olivier 's Hamlet.
His sister took him to his first Shakespeare play, Twelfth Night, by the amateurs of Wigan 's Little Theatre, shortly followed by their Macbeth and Wigan High School for Girls ' production of A Midsummer Night 's Dream, with music by Mendelssohn, with the role of Bottom played by Jean McKellen, who continued to act, direct, and produce amateur theatre until her death.
In 1958, McKellen, at the age of 18, won a scholarship to St Catharine 's College, Cambridge, where he read English literature. He has since been made an Honorary Fellow of the College. While at Cambridge, McKellen was a member of the Marlowe Society, where he appeared in 23 plays over the course of three years. At that young age he was already giving performances that have since become legendary, such as his Justice Shallow in Henry IV alongside Trevor Nunn and Derek Jacobi (March 1959), Cymbeline (as Posthumus, opposite Margaret Drabble as Imogen), and Doctor Faustus. During this period McKellen had already been directed by Peter Hall, John Barton and Dadie Rylands, all of whom would have a huge impact on his future career.
McKellen made his first professional appearance in 1961 at the Belgrade Theatre, as Roper in A Man for All Seasons, although an audio recording of the Marlowe Society 's Cymbeline had gone on commercial sale as part of the Argo Shakespeare series.
After four years in regional repertory theatres he made his first West End appearance, in A Scent of Flowers, regarded as a "notable success ''. In 1965 he was a member of Laurence Olivier 's National Theatre Company at the Old Vic, which led to roles at the Chichester Festival. With the Prospect Theatre Company, McKellen made his breakthrough performances of Richard II (directed by Richard Cottrell) and Marlowe 's Edward II (directed by Toby Robertson) at the Edinburgh festival in 1969, the latter causing a storm of protest over the enactment of the homosexual Edward 's lurid death.
In the 1970s and 1980s McKellen became a well - known figure in British theatre, performing frequently at the Royal Shakespeare Company and the Royal National Theatre, where he played several leading Shakespearean roles, including the title role in Macbeth (which he had first played for Trevor Nunn in a "gripping... out of the ordinary '' production, with Judi Dench, at Stratford in 1976), and Iago in Othello, in award - winning productions directed by Nunn. Both of these productions were adapted into television films, also directed by Nunn.
In 2007 he returned to the Royal Shakespeare Company, in productions of King Lear and The Seagull, both directed by Trevor Nunn. In 2009 he appeared in a very popular revival of Waiting for Godot at London 's Haymarket Theatre, directed by Sean Mathias, and playing opposite Patrick Stewart. He is Patron of English Touring Theatre and also President and Patron of the Little Theatre Guild of Great Britain, an association of amateur theatre organisations throughout the UK. In late August 2012, he took part in the opening ceremony of the London Paralympics, portraying Prospero from The Tempest.
McKellen had taken film roles throughout his career -- beginning in 1969 as George Matthews in A Touch of Love, with his first leading role coming in 1980 as D.H. Lawrence in Priest of Love -- but it was not until the 1990s, after several roles in blockbuster Hollywood films, that he became more widely recognised in this medium. In 1993 he had a supporting role as a South African tycoon in the critically acclaimed Six Degrees of Separation, in which he starred with Stockard Channing, Donald Sutherland, and Will Smith. In the same year, he appeared in minor roles in the television miniseries Tales of the City, based on the novel by his friend Armistead Maupin, and the film Last Action Hero, in which he played Death.
Later in 1993, McKellen appeared in the television film And the Band Played On, about the discovery of the AIDS virus, for which he won a CableACE Award for Supporting Actor in a Movie or Miniseries and was nominated for the Emmy Award for Outstanding Supporting Actor in a Miniseries or a Movie. In 1995, he played the title role in Richard III, which transposed the setting to an alternative 1930s in which England is ruled by fascists. The film was a critical success. McKellen co-produced and co-wrote the film, adapting the play for the screen based on a stage production of Shakespeare 's play directed by Richard Eyre for the Royal National Theatre, in which McKellen had appeared. As executive producer he returned his £ 50,000 fee to complete the filming of the final battle. In his review of the film, Washington Post film critic Hal Hinson called McKellen 's performance a "lethally flamboyant incarnation '' and said his "florid mastery... dominates everything ''. His performance in the title role garnered BAFTA and Golden Globe nominations for Best Actor, and won the European Film Award for Best Actor. His screenplay was nominated for the BAFTA Award for Best Adapted Screenplay.
He appeared in the modestly acclaimed film Apt Pupil, which was directed by Bryan Singer and based on a story by Stephen King. McKellen portrayed a fugitive Nazi officer, living under a false name in the US, who is befriended by a curious teenager (Brad Renfro) who threatens to expose him unless he tells his story in detail. He was subsequently nominated for the Academy Award for Best Actor for his role in the 1998 film Gods and Monsters, wherein he played James Whale, the director of Show Boat (1936) and Frankenstein.
In 1999 McKellen was cast, again under the direction of Bryan Singer, to play the comic book supervillain Magneto in the 2000 film X-Men and its sequels X2: X-Men United (2003) and X-Men: The Last Stand (2006). He later made a short appearance as an older Magneto in 2014 's X-Men: Days of Future Past.
While filming the first X-Men film in 1999, McKellen was cast as the wizard Gandalf in Peter Jackson 's three - film adaptation of The Lord of the Rings (consisting of The Fellowship of the Ring (2001), The Two Towers (2002), and The Return of the King (2003)). He received honors from the Screen Actors Guild for Best Supporting Actor in a Motion Picture for his work in The Fellowship of the Ring, and was nominated for the Academy Award for Best Supporting Actor for the same role. He provided the voice of Gandalf for several video game adaptations of the Lord of the Rings films, then reprised the role on screen in Jackson 's film adaptation of The Hobbit, which was released in three parts from 2012 to 2014.
On 16 March 2002, he hosted Saturday Night Live. In 2003, McKellen made a guest appearance as himself on the American cartoon show The Simpsons, in a special British - themed episode entitled "The Regina Monologues '', along with the then UK Prime Minister Tony Blair and author J.K. Rowling. In April and May 2005, he played the role of Mel Hutchwright in Granada Television 's long running British soap opera, Coronation Street, fulfilling a lifelong ambition. He narrated Richard Bell 's film Eighteen, as a grandfather who leaves his World War II memoirs on audio - cassette for his teenage grandson.
McKellen has appeared in limited release films, such as Emile (which was shot in three weeks following the X2 shoot), Neverwas and Asylum. He appeared as Sir Leigh Teabing in The Da Vinci Code. During a 17 May 2006 interview on The Today Show with the Da Vinci Code cast and director, Matt Lauer posed a question to the group about how they would have felt if the film had borne a prominent disclaimer that it is a work of fiction, as some religious groups wanted. McKellen responded, "I 've often thought the Bible should have a disclaimer in the front saying ' This is fiction. ' I mean, walking on water? It takes... an act of faith. And I have faith in this movie -- not that it 's true, not that it 's factual, but that it 's a jolly good story. '' He continued, "And I think audiences are clever enough and bright enough to separate out fact and fiction, and discuss the thing when they 've seen it ''. McKellen appeared in the 2006 series of Ricky Gervais ' BBC comedy Extras, where he played himself directing Gervais ' character Andy Millman in a play about gay lovers. McKellen received a 2007 Primetime Emmy Award nomination for Outstanding Guest Actor in a Comedy Series for his performance. In 2009 he portrayed Number Two in The Prisoner, a remake of the 1967 cult series of the same name. In 2013, McKellen co-starred in the ITV sitcom Vicious as Freddie Thornhill, alongside Derek Jacobi. The series revolves around an elderly gay couple who have been together for 50 years. On 23 August 2013 the show was renewed for a six - episode second series, which began airing in June 2015.
In November 2013, McKellen appeared in the Doctor Who 50th anniversary comedy homage The Five (ish) Doctors Reboot. He reprised his role as Magneto in X-Men: Days of Future Past, released in May 2014; he shared the role with Michael Fassbender, who played a younger version of the character in 2011 's X-Men: First Class. In October 2015, McKellen appeared as Norman to Anthony Hopkins ' Sir in a BBC Two production of Ronald Harwood 's The Dresser, alongside Edward Fox and Emily Watson. In 2017, McKellen voiced Cogsworth, the Beast 's loyal majordomo, who was turned into a pendulum clock, in a live - action adaptation of Beauty and the Beast.
McKellen and his first partner, Brian Taylor, a history teacher from Bolton, began their relationship in 1964. Their relationship lasted for eight years, ending in 1972. They lived in London, where McKellen continued to pursue his career as an actor. For over a decade, he has lived in a five - storey Victorian conversion in Narrow Street, Limehouse. In 1978 he met his second partner, Sean Mathias, at the Edinburgh Festival. This relationship lasted until 1988 and, according to Mathias, was tempestuous, with conflicts over McKellen 's success in acting versus Mathias 's somewhat less - successful career. Mathias later directed McKellen in Waiting for Godot at the Theatre Royal Haymarket in 2009. The pair entered into a business partnership with Evgeny Lebedev, purchasing the lease on The Grapes public house in Narrow Street.
McKellen is an atheist. In the late 1980s, he lost his appetite for meat except for fish, and has since followed a mainly pescetarian diet. In 2001, he received the Artist Citizen of the World Award (France).
He has a tattoo of the Elvish number nine, written using J.R.R. Tolkien 's artificial script of Tengwar, on his shoulder, in reference to his involvement in The Lord of the Rings and the fact that his character was one of the original nine companions of the Fellowship of the Ring. The other actors of "The Fellowship '' (Elijah Wood, Sean Astin, Orlando Bloom, Billy Boyd, Sean Bean, Dominic Monaghan and Viggo Mortensen) have the same tattoo. John Rhys - Davies, whose character was also one of the original nine companions, arranged for his stunt double to get the tattoo instead.
He was diagnosed with prostate cancer in 2006. In 2012, McKellen stated on his blog that "There is no cause for alarm. I am examined regularly and the cancer is contained. I 've not needed any treatment. ''
He became an ordained minister of the Universal Life Church in early 2013 in order to preside over the marriage of his friend and X-Men co-star Patrick Stewart to his then fiancée Sunny Ozell.
McKellen was awarded an honorary Doctorate of Letters by Cambridge University on 18 June 2014. He was made a Freeman of the City of London on Thursday 30 October 2014; the ceremony took place at Guildhall in London. McKellen was nominated by London 's Lord Mayor Fiona Woolf, who said he was chosen as he was an "exceptional actor '' and "tireless campaigner for equality ''. He is also a Fellow of St Catherine 's College, Oxford.
While McKellen had made his sexual orientation known to fellow actors early on in his stage career, it was not until 1988 that he came out to the general public, in a programme on BBC Radio. The context that prompted McKellen 's decision -- overriding any concerns about a possible negative effect on his career -- was that the controversial Section 28 of the Local Government Bill, known simply as Section 28, was then under consideration in the British Parliament. Section 28 proposed prohibiting local authorities from promoting homosexuality "... as a kind of pretended family relationship ''. McKellen became active in fighting the proposed law, and, during a BBC Radio 3 programme where he debated Section 28 with the conservative journalist Peregrine Worsthorne, declared himself gay. McKellen has stated that he was influenced in his decision by the advice and support of his friends, among them noted gay author Armistead Maupin. In a 1998 interview that discusses the 29th anniversary of the Stonewall riots, McKellen commented,
I have many regrets about not having come out earlier, but one of them might be that I did n't engage myself in the politicking.
He has said of this period:
My own participating in that campaign was a focus for people (to) take comfort that if Ian McKellen was on board for this, perhaps it would be all right for other people to be as well, gay and straight.
Section 28 was, however, enacted and remained on the statute books until 2000 in Scotland and 2003 in England and Wales. Section 28 never applied in Northern Ireland.
In 2003, during an appearance on Have I Got News for You, McKellen claimed that when he visited Michael Howard, then Environment Secretary (responsible for local government), in 1988 to lobby against Section 28, Howard refused to change his position but did ask him to leave an autograph for his children. McKellen agreed, but wrote, "Fuck off, I 'm gay. '' McKellen described the Conservative MPs David Wilshire and Dame Jill Knight, who were the architects of Section 28, as the "ugly sisters '' of a political pantomime.
McKellen has continued to be very active in LGBT rights efforts. In a statement on his website regarding his activism, the actor commented that:
I have been reluctant to lobby on other issues I most care about -- nuclear weapons (against), religion (atheist), capital punishment (anti), AIDS (fund - raiser) -- because I never want to be forever spouting, diluting the impact of addressing my most urgent concern: legal and social equality for gay people worldwide.
McKellen is a co-founder of Stonewall, an LGBT rights lobby group in the United Kingdom, named after the Stonewall riots. McKellen is also patron of LGBT History Month, Pride London, Oxford Pride, GAY - GLOS, The Lesbian & Gay Foundation, and FFLAG where he appears in their video "Parents Talking ''.
In 1994, at the closing ceremony of the Gay Games, he briefly took the stage to address the crowd, saying, "I 'm Sir Ian McKellen, but you can call me Serena ''. The nickname, given to him by Stephen Fry, had been circulating within the gay community since McKellen 's knighthood was conferred. In 2002, he was the Celebrity Grand Marshal of the San Francisco Pride Parade, and he attended the Academy Awards with his then - boyfriend, New Zealander Nick Cuthell. In 2006, McKellen spoke at the pre-launch of the 2007 LGBT History Month in the UK, lending his support to the organisation and its founder, Sue Sanders. In 2007, he became a patron of The Albert Kennedy Trust, an organisation that provides support to young, homeless and troubled LGBT people.
In 2006, he became a patron of Oxford Pride, stating:
I send my love to all members of Oxford Pride, their sponsors and supporters, of which I am proud to be one... Onlookers can be impressed by our confidence and determination to be ourselves and gay people, of whatever age, can be comforted by the occasion to take the first steps towards coming out and leaving the closet forever behind.
McKellen has taken his activism internationally, and caused a major stir in Singapore, where he was invited to do an interview on a morning show and shocked the interviewer by asking whether they could recommend a gay bar; the programme immediately ended. In December 2008, he was named in Out 's annual Out 100 list.
In 2010, McKellen extended his support for Liverpool 's Homotopia festival in which a group of gay and lesbian Merseyside teenagers helped to produce an anti-homophobia campaign pack for schools and youth centres across the city. In May 2011, he called Sergey Sobyanin, Moscow 's mayor, a "coward '' for refusing to allow gay parades in the city.
In 2014, he was named in the top 10 on the World Pride Power list.
In April 2010, along with actors Brian Cox and Eleanor Bron, McKellen appeared in a series of TV advertisements to support Age UK, the charity recently formed from the merger of Age Concern and Help the Aged. All three actors gave their time free of charge.
A cricket fan since childhood, McKellen umpired in March 2011 for a charity cricket match in New Zealand to support earthquake victims of the February 2011 Christchurch earthquake.
McKellen is an honorary board member for the New York and Washington, DC based organization Only Make Believe. Only Make Believe creates and performs interactive plays in children 's hospitals and care facilities. He was honoured by the organisation in 2012 and hosted their annual Make Believe on Broadway Gala in November 2013. He garnered publicity for the organisation by stripping down to his Lord of the Rings underwear on stage.
McKellen also has a history of supporting individual theatres. While in New Zealand filming The Hobbit in 2012, he announced a special New Zealand tour "Shakespeare, Tolkien, and You! '', with proceeds going to help save the Isaac Theatre Royal, which suffered extensive damage during the 2011 Christchurch earthquake. McKellen said he opted to help save the building as it was the last theatre he played in New Zealand (Waiting for Godot in 2010) and the locals ' love for it made it a place worth supporting. In July 2017, he performed a new one - man show for a week at Park Theatre (London), donating the proceeds to the theatre.
A friend of Ian Charleson and an admirer of his work, McKellen contributed an entire chapter to For Ian Charleson: A Tribute. A recording of McKellen 's voice is heard before performances at the Royal Festival Hall, reminding patrons to ensure their mobile phones and watch alarms are switched off and to keep coughing to a minimum.
|
who had a battering ram for a weapon | Battering ram - wikipedia
A battering ram is a siege engine that originated in ancient times and was designed to break open the masonry walls of fortifications or splinter their wooden gates.
In its simplest form, a battering ram is just a large, heavy log carried by several people and propelled with force against an obstacle; the ram would be sufficient to damage the target if the log were massive enough and / or it were moved quickly enough (that is, if it had enough momentum). Later rams encased the log in an arrow - proof, fire - resistant canopy mounted on wheels. Inside the canopy, the log was swung from suspensory chains or ropes.
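To make the momentum point concrete, here is a rough back - of - the - envelope calculation; the 500 kg mass and 3 m / s swing speed are hypothetical figures chosen for arithmetic convenience, not historical measurements:

\[ p = mv = 500\ \mathrm{kg} \times 3\ \mathrm{m/s} = 1500\ \mathrm{kg\,m/s} \]
\[ E_k = \tfrac{1}{2}mv^{2} = \tfrac{1}{2} \times 500\ \mathrm{kg} \times (3\ \mathrm{m/s})^{2} = 2250\ \mathrm{J} \]

Since kinetic energy grows with the square of speed, doubling the swing speed quadruples the energy delivered per blow, which is one reason suspending the log from chains or ropes, allowing a longer and faster swing, proved such an effective refinement.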
Rams proved effective weapons of war because old fashioned wall - building materials such as stone and brick were weak in tension, and therefore prone to cracking when impacted with force. With repeated blows, the cracks would grow steadily until a hole was created. Eventually, a breach would appear in the fabric of the wall -- enabling armed attackers to force their way through the gap and engage the inhabitants of the citadel.
The introduction in the later Middle Ages of siege cannons, which harnessed the explosive power of gunpowder to propel weighty stone or iron balls against fortified obstacles, spelled the end of battering rams and other traditional siege weapons. Smaller, hand - held versions of battering rams are still used today by law enforcement officers and military personnel to bash open locked doors.
During the Iron Age, in the ancient Middle East and Mediterranean, the battering ram 's log was slung from a wheeled frame by ropes or chains so that it could be made more massive and be more easily bashed against its target. Frequently, the ram 's point would be reinforced with a metal head or cap while vulnerable parts of the shaft were bound with strengthening metal bands. Vitruvius details in his text On Architecture that Ceras the Carthaginian was the first to make a ram with a wooden base with wheels and a wooden superstructure. Within this the ram was hung so that it could be used against the wall. This structure moved so slowly, however, that he called it the testudo (the Latin word for "tortoise '').
Another type of ram was one that maintained the normal shape and structure, but the support beams were instead made of saplings that were lashed together. The frame was then covered in hides as normal to defend from fire. The only solid beam present was the actual ram that was hung from the frame. The frame itself was so light that it could be carried on the shoulders of the men transporting the ram, and the same men could beat the ram against the wall when they reached it.
Many battering rams possessed curved or slanted wooden roofs and side - screens covered in protective materials, usually fresh wet hides, presumably skinned from animals eaten by the besiegers. These hide canopies stopped the ram from being set on fire. They also safeguarded the operators of the ram against arrow and spear volleys launched from above.
A well - known image of an Assyrian battering ram depicts how sophisticated attacking and defensive practices had become by the 9th century BC. The defenders of a town wall are trying to set the ram alight with torches and have also put a chain under it. The attackers are trying to pull on the chain to free the ram, while the aforementioned wet hides on the canopy provide protection against the flames.
The first confirmed employment of rams in the Occident occurred from 503 to 502 BC, when Opiter Verginius became consul of the Romans during the fight against the Aurunci people:
Soldiers in the first line followed Opiter Verginius; next to them were battering rams (vinea) which were used for war
The second appearance came in 427 BC, when the Spartans besieged Plataea. The first use of rams within the actual Mediterranean Basin, featuring in this case the simultaneous employment of siege towers to shelter the rammers from attack, occurred on the island of Sicily in 409 BC, at the siege of Selinus.
Defenders manning castles, forts or bastions would sometimes try to foil battering rams by dropping obstacles in front of the ram, such as a large sack of sawdust, just before the ram 's head struck a wall or gate, or by using grappling hooks to immobilize the ram 's log. Alternatively, the ram could be set ablaze, doused in fire - heated sand, pounded by boulders dropped from battlements or invested by a rapid sally of troops.
Some battering rams were not slung from ropes or chains, but were instead supported by rollers. This allowed the ram to achieve a greater speed before striking its target, making it more destructive. Such a ram, as used by Alexander the Great, is described by the writer Vitruvius.
Alternatives to the battering ram included the drill, the sapper 's mouse, the pick, the siege hook, and the hunting ram. These devices were smaller than a ram and could be used in confined spaces.
Battering rams had an important effect on the evolution of defensive walls, which were constructed ever more ingeniously in a bid to nullify the effects of siege engines.
There is a popular myth in Gloucester that the famous children 's rhyme, Humpty Dumpty, is about a battering ram used in the siege of Gloucester in 1643, during the English Civil War. However, the story is almost certainly untrue; during the siege, which lasted only one month, no battering rams were used, although many cannons were. The idea seems to have originated in a spoof history essay by Professor David Daube written for The Oxford Magazine in 1956, which was widely believed despite obvious improbabilities (e.g., planning to cross the River Severn by running the ram down a hill at speed, although the river is about 30 m (100 feet) wide at this point).
A capped ram is a battering ram that has an accessory at the head (usually made of iron or steel and sometimes punningly shaped into the head and horns of an ovine ram) to do more damage to a building. It was much more effective at destroying enemy walls and buildings than an uncapped ram but was heavier to carry.
Pliny the Elder in his Naturalis Historia describes a battering ram used in mining, where hard rock needed to be broken down to release the ore. The pole possessed a metal tip weighing 150 pounds, so the whole device would have weighed at least twice as much in order to preserve its balance. Whether or not it was supported by being suspended with ropes from a frame remains unknown, but it is very likely given its total weight. Such devices were used during coal mining in 19th - century Great Britain before the widespread use of explosives, which were expensive and dangerous to use in practice.
Battering rams still have a use in modern times. SWAT teams and other police forces often employ small, one - man or two - man metal rams for forcing open locked portals or effecting a door breach. Modern battering rams sometimes incorporate a cylinder, along the length of which a piston fires automatically upon striking a hard object, thus enhancing the momentum of the impact significantly.
In The Lord of the Rings, an enchanted battering ram named Grond was used to assault the Great Gate of Minas Tirith. It was 150 feet long and capped with an iron wolf 's head.
|
where was the last scene of minority report filmed | Minority Report (film) - wikipedia
Minority Report is a 2002 American neo-noir science fiction film directed by Steven Spielberg and loosely based on the short story of the same name by Philip K. Dick. It is set primarily in Washington, D.C., and Northern Virginia in the year 2054, where "PreCrime '', a specialized police department, apprehends criminals based on foreknowledge provided by three psychics called "precogs ''. The cast includes Tom Cruise as Chief of PreCrime John Anderton, Colin Farrell as Department of Justice agent Danny Witwer, Samantha Morton as the senior precog Agatha, and Max von Sydow as Anderton 's superior Lamar Burgess. The film combines elements of tech noir, whodunit, thriller and science fiction genres, as well as a traditional chase film, as the main protagonist is accused of a crime he has not committed and becomes a fugitive. Spielberg has characterized the story as "fifty percent character and fifty percent very complicated storytelling with layers and layers of murder mystery and plot ''. The film 's central theme is the question of free will versus determinism. It examines whether free will can exist if the future is set and known in advance. Other themes include the role of preventive government in protecting its citizenry, the role of media in a future state where technological advancements make its presence nearly boundless, the potential legality of an infallible prosecutor, and Spielberg 's repeated theme of broken families.
The film was first optioned in 1992 as a sequel to another Dick adaptation, Total Recall, and started its development in 1997, after a script by Jon Cohen reached Spielberg and Cruise. Production suffered many delays due to Cruise 's Mission: Impossible 2 and Spielberg 's A.I. running over schedule, eventually starting in March 2001. During pre-production, Spielberg consulted numerous scientists in an attempt to present a more plausible future world than that seen in other science fiction films, and some of the technology designs in the film have proven prescient. Minority Report has a unique visual style. It uses high contrast to create dark colors and shadows, much like a film noir picture. The film 's overlit shots feature desaturated colors which were achieved by bleach - bypassing the film 's negative in post-production.
Minority Report was one of the best reviewed films of 2002. It received praise for its writing, visuals and themes, but earned some criticism for its ending, which was considered inconsistent with the tone of the rest of the movie. The film was nominated for and won several awards. It received an Academy Award nomination for Best Sound Editing, and eleven Saturn Award nominations, including Best Actor, Best Supporting Actor, and Best Music, winning Best Science Fiction Film, Best Direction, Best Writing, and Best Supporting Actress. The film was a commercial success, earning over $358 million worldwide against an overall budget of $142 million (including advertising). Over four million DVDs were sold in its first few months of home release.
In April 2054, Washington, DC 's PreCrime police stops murderers before they act, reducing the murder rate to zero. Murders are predicted using three mutated humans, called "Precogs '', who "previsualize '' crimes by receiving visions of the future. Would - be murderers are imprisoned in their own happy virtual reality. The Federal government is on the verge of adopting the controversial program.
Since the disappearance of his son Sean, PreCrime Captain John Anderton has both separated from his wife Lara and become a drug addict. While United States Department of Justice agent Danny Witwer is auditing the program, the Precogs generate a new prediction, saying Anderton will murder a man named Leo Crow in 36 hours. Anderton does not know Crow, but flees the area as Witwer begins a manhunt. Anderton seeks the advice of Dr. Iris Hineman, the creator of PreCrime technology. She reveals that sometimes, one of the Precogs, usually Agatha, has a different vision than the other two, a "minority report '' of a possible alternate future; this has been kept a secret as it would damage the system 's credibility. Anderton resolves to recover the minority report to prove his innocence.
Anderton goes to a black market doctor for a risky eye transplant so as to avoid the citywide optical recognition system. He returns to PreCrime and kidnaps Agatha, shutting down the system, as the Precogs operate as a group mind. Anderton takes Agatha to a hacker to extract the minority report of Leo Crow, but none exists; instead, Agatha shows him an image of the murder of Ann Lively, a woman who was drowned by a hooded figure in 2049.
Anderton and Agatha go to Crow 's hotel room as the 36 - hour deadline nears, finding numerous photos of children, including Sean 's. Crow arrives and Anderton prepares to kill him, accusing him of being a serial child killer. Agatha talks Anderton out of shooting Crow by telling him that he has the ability to choose his future now that he is aware of it. Crow, however, begs to be killed, having been hired to plant the photos and be killed in exchange for his family 's financial well - being. Crow grabs Anderton 's hands and forces him to pull the trigger, killing himself. Anderton and Agatha flee to Lara 's house outside the city for refuge. There they learn Lively was Agatha 's drug - addicted mother, who sold her to PreCrime. Lively had sobered up and attempted to reclaim Agatha, but was murdered. Anderton realizes he is being targeted for knowing about Lively 's existence and her connection to Agatha.
Witwer, studying Crow 's death, suspects Anderton is being framed. He examines the footage of Lively 's murder and finds there were two attempts on her life, the first having been stopped by PreCrime but the second, occurring minutes later, having succeeded. Witwer reports this to the director and founder of PreCrime, Lamar Burgess, but Burgess responds by killing Witwer using Anderton 's gun. With the Precogs still offline, the murder is not detected.
Lara calls Burgess to reveal that Anderton is with her, and Anderton is captured, accused of both murders, and fitted with the brain device that puts him permanently into a dreamlike sleep. As his body is deposited into the prison, the warden tells him that "all your dreams come true ''.
Agatha is reconnected to the PreCrime system. While attempting to comfort Lara, Burgess accidentally reveals himself as Lively 's murderer. Lara frees Anderton from stasis, and Anderton exposes Burgess at a PreCrime celebratory banquet by playing the full video of Agatha 's vision of Burgess killing Lively. A new report is generated at PreCrime: Burgess will kill Anderton. Burgess corners Anderton, and explains that as he could not afford to let Lively take Agatha back without impacting PreCrime, he arranged to kill Lively following an actual attempt on her life, so that the murder would appear as an echo to the technician within PreCrime and be ignored. Anderton points out Burgess 's dilemma: If Burgess kills Anderton, he will be imprisoned for life, but PreCrime will be validated; if he spares Anderton, PreCrime will be discredited and shut down. Anderton reveals the ultimate flaw of the system: once people are aware of their future, they are able to change it. Burgess shoots himself.
After Burgess 's death, the PreCrime system is shut down. All the prisoners are unconditionally pardoned and released, although they are kept under occasional surveillance. Anderton and Lara are soon to have a new child together. The Precogs are sent to an undisclosed location to live their lives in peace.
Dick 's story was first optioned by producer and writer Gary Goldman in 1992. He created the initial script for the film with Ron Shusett and Robert Goethals (uncredited). It was supposed to be a sequel to the 1990 Dick adaptation Total Recall, which starred Arnold Schwarzenegger. Novelist Jon Cohen was hired in 1997 to adapt the story for a potential film version that would have been directed by Dutch filmmaker Jan de Bont. Meanwhile, Cruise and Spielberg, who met and became friends on the set of Cruise 's film Risky Business in 1983, had been looking to collaborate for ten years. Spielberg was set to direct Cruise in Rain Man, but left to make Indiana Jones and the Last Crusade. Cruise read Cohen 's script and passed it on to Spielberg, who felt it needed some work. Spielberg was not directly involved in the writing of the script; however, he was allowed to decide whether the picture 's screenplay was ready to be filmed. When Cohen submitted an acceptable revision, Spielberg called Cruise and said, "Yeah, I 'll do this version of the script. '' In that version, Witwer creates a false disk which shows Anderton killing him. When Anderton sees the clip, his belief in the infallibility of the precogs ' visions convinces him it is true; therefore the precogs have a vision of him killing Witwer. At the end, Anderton shoots Witwer and one of the brother precogs finishes him off, because Witwer had slain his twin. Spielberg was attracted to the story because, as both a mystery and a movie set 50 years in the future, it allowed him to do "a blending of genres '' which intrigued him.
In 1998, the pair joined Minority Report and announced the production as a joint venture of Spielberg 's DreamWorks and Amblin Entertainment, 20th Century Fox, Cruise 's Cruise / Wagner Productions, and De Bont 's production company, Blue Tulip. Spielberg however stated that despite being credited, De Bont never became involved with the film. Cruise and Spielberg, at the latter 's insistence, reportedly agreed to each take 15 % of the gross instead of any money up front to try to keep the film 's budget under $100 million. Spielberg said he had done the same with name actors in the past to great success: "Tom Hanks took no cash for Saving Private Ryan but he made a lot of money on his profit participation. '' He made this agreement a prerequisite:
I have n't worked with many movie stars -- 80 per cent of my films do n't have movie stars -- and I 've told them if they want to work with me I want them to gamble along with me. I have n't taken a salary in 18 years for a movie, so if my film makes no money I get no money. They should be prepared to do the same.
Production was delayed for several years; the original plan was to begin filming after Cruise 's Mission: Impossible 2 was finished. However, that film ran over schedule, which also allowed Spielberg time to bring in screenwriter Scott Frank to rework Cohen 's screenplay. John August did an uncredited draft to polish the script, and Frank Darabont was also invited to rewrite, but was by then busy with The Majestic. The film closely follows Frank 's final script (written May 16, 2001), and contains much of Cohen 's third draft (May 24, 1997). Frank removed the character of Senator Malcolm from Cohen 's screenplay, and inserted Burgess, who became the "bad guy ''. He also rewrote Witwer from a villain to a "good guy '', as he was in the short story. In contrast to Spielberg 's next science fiction picture, War of the Worlds, which he called "100 percent character '' driven, Spielberg said the story for Minority Report became "fifty percent character and fifty percent very complicated storytelling with layers and layers of murder mystery and plot. '' According to film scholar Warren Buckland, "It appears that... Cohen and... Frank did not see '' the "Goldman and Schusett screenplay; instead, they worked on their own adaptation. '' Goldman and Schusett however claimed the pair used a lot of material from their script, so the issue went through the Writers Guild arbitration process. They won a partial victory; they were not given writing credits, but were listed as executive producers. The film was delayed again so Spielberg could finish A.I. after the death of his friend Stanley Kubrick. When Spielberg originally signed on to direct, he planned to have an entirely different supporting cast. He offered the role of Witwer to Matt Damon, Iris Hineman to Meryl Streep, Burgess to Ian McKellen, Agatha to Cate Blanchett, and Lara to Jenna Elfman. However, Streep declined the role, Damon opted out, and the other roles were recast due to the delays. Spielberg also offered the role of Witwer to Javier Bardem, who turned it down.
After E.T., Spielberg started to consult experts, and put more scientific research into his science fiction films. In 1999, he invited fifteen experts convened by Peter Schwartz and Stewart Brand to a hotel in Santa Monica for a three - day "think tank ''. He wanted to consult with the group to create a plausible "future reality '' for the year 2054 as opposed to a more traditional "science fiction '' setting. Dubbed the "think tank summit '', the experts included architect Peter Calthorpe, author Douglas Coupland, urbanist and journalist Joel Garreau, computer scientist Neil Gershenfeld, biomedical researcher Shaun Jones, computer scientist Jaron Lanier, and former Massachusetts Institute of Technology (MIT) architecture dean William J. Mitchell. Production designer Alex McDowell kept what was nicknamed the "2054 bible '', an 80 - page guide created in preproduction which listed all the aspects of the future world: architectural, socio - economic, political, and technological. While the discussions did not change key elements in the film, they were influential in the creation of some of the more utopian aspects, though John Underkoffler, the science and technology advisor for the film, described it as "much grayer and more ambiguous '' than what was envisioned in 1999. Underkoffler, who designed most of Anderton 's interface after Spielberg told him to make it "like conducting an orchestra '', said "it would be hard to identify anything (in the movie) that had no grounding in reality. '' McDowell teamed up with architect Greg Lynn to work on some of the technical aspects of the production design. Lynn praised his work, saying that "(a) lot of those things Alex cooked up for Minority Report, like the 3 - D screens, have become real. ''
Spielberg described his ideas for the film 's technology to Roger Ebert before the movie 's release:
I wanted all the toys to come true someday. I want there to be a transportation system that does n't emit toxins into the atmosphere. And the newspaper that updates itself...
The Internet is watching us now. If they want to. They can see what sites you visit. In the future, television will be watching us, and customizing itself to what it knows about us. The thrilling thing is, that will make us feel we 're part of the medium. The scary thing is, we 'll lose our right to privacy. An ad will appear in the air around us, talking directly to us.
Minority Report was the first film to have an entirely digital production design. The system, termed "previz '' as an abbreviation of previsualization (a term borrowed from the film 's narrative), allowed the filmmakers, according to production designer Alex McDowell, to use Photoshop in place of painters, and to employ 3 - D animation programs (Maya and XSI) to create a simulated set, which could be filled with digital actors and then used to block out shots in advance. The technology also allowed the tie - in video game and special effects companies to cull data from the previs system before the film was finished, which they used to establish parameters for their visuals. Spielberg quickly became a fan; McDowell said "(i) t became pretty clear that (he) would n't read an illustration as a finished piece, but if you did it in Photoshop and created a photorealistic environment he focused differently on it. '' Filming took place from March 22 to July 18, 2001, in Washington, D.C., Virginia, and Los Angeles. Film locations included the Ronald Reagan Building (as PreCrime headquarters) and Georgetown. The skyline of Rosslyn, Virginia is visible when Anderton flies across the Potomac River. During production, Spielberg made regular appearances on a video - only webcam based in the craft services truck, both alone and with Tom Cruise; together they conferenced publicly with Ron Howard and Russell Crowe via a similar webcam on the set of A Beautiful Mind in New York.
The location of the small, uncharted island in the last shot of the film is Butter Island, off North Haven, Maine, in Penobscot Bay.
Although it takes place in an imagined future world of advanced technology, Minority Report attempts to embody a more "realistic '' depiction of the future. Spielberg decided that to be more credible, the setting had to keep both elements of the present and ones which specialists expected would be forthcoming. Thus Washington, D.C. as depicted in the movie keeps well - known buildings such as the Capitol and the Washington Monument, as well as a section of modern buildings on the other side of the Potomac River. Production designer Alex McDowell was hired based on his work in Fight Club and his storyboards for a film version of Fahrenheit 451 which would have starred Mel Gibson. McDowell studied modern architecture, and his sets contain many curves, circular shapes, and reflective materials. Costume designer Deborah L. Scott decided to make the clothes worn by the characters as simple as possible, so as not to make the depiction of the future seem dated.
The stunt crew was the same one used in Cruise 's Mission: Impossible 2, and was responsible for complex action scenes. These included the auto factory chase scene, filmed in a real facility using props such as a welding robot, and the fight between Anderton and the jetpack - clad officers, filmed in an alley set built on the Warner Bros. studio lot. Industrial Light & Magic did most of the special effects, and DreamWorks - owned PDI was responsible for the Spyder robots. The company Pixel Liberation Front did previsualization animatics. The holographic projections and the prison facility were filmed by several roving cameras which surrounded the actors, and the scene where Anderton gets off his car and runs along the Maglev vehicles was filmed on stationary props, which were later replaced by computer - generated vehicles.
The Philip K. Dick story only gives you a springboard that really does n't have a second or third act. Most of the movie is not in the Philip K. Dick story -- to the chagrin of the Philip K. Dick fans, I 'm sure.
As with most film adaptations of Dick 's works, many aspects of his story were changed in their transition to film, such as the addition of Lamar Burgess and the change in setting from New York City to Washington, D.C., Baltimore, and Northern Virginia. The character of John Anderton was changed from a balding and out - of - shape old man to an athletic officer in his 40s to fit his portrayer and the film 's action scenes. The film adds two stories of tragic families: Anderton 's, and that of the three precogs. In the short story, Anderton is married with no children, while in the film, he is the divorced father of a kidnapped son, who is most likely deceased. While the film implies, but never confirms, that Agatha is related to the twin precogs, her family was likewise shattered when Burgess murdered her mother, Anne Lively. The precogs were intellectually disabled and deformed individuals in the story, but in the film, they are the genetically mutated offspring of drug addicts. Anderton 's future murder and the reasons for the conspiracy were changed from a general who wants to discredit PreCrime to regain some military funding, to a man who murdered a precog 's mother to preserve PreCrime. The subsequent murders and plot developed from this change. The film 's ending also differs from the short story 's: in Dick 's story, Anderton prevents the closure of the PreCrime division, whereas in the movie Anderton successfully brings about the end of the organization. Other aspects were updated to include current technology. For instance, in the story Anderton uses a punch card machine to interpret the precogs ' visions; in the movie, he uses a virtual reality interface.
The main theme of Minority Report is the classic philosophical debate of free will versus determinism. Other themes explored by the film include involuntary commitment, the nature of political and legal systems in a high technology - advanced society, the rights of privacy in a media - dominated world, and the nature of self - perception. The film also continues to follow Spielberg 's tradition of depicting broken families, which he has said is motivated by his parents ' divorce when he was a child.
The score was composed and conducted by John Williams and orchestrated by John Neufeld, with vocals by Deborah Dietrich. Williams normally enters Spielberg productions at an early stage, well before the movie starts shooting. For Minority Report however, his entry was delayed due to his work on Star Wars: Episode II -- Attack of the Clones, and he joined the film when it was nearly completed, leaving him scant production time. The soundtrack takes inspiration from Bernard Herrmann 's work. Williams decided not to focus on the science fiction elements, and made a score suitable for film noir. He included traditional noir elements such as a female singer in the Anne Lively scenes, but the "sentimental scenes '', which Williams considered unusual for that genre, led to soothing themes for Anderton 's ex-wife Lara and son Sean. The track "Sean 's Theme '' is described as the only one "instantly recognizable as one of Williams ' '' by music critic Andrew Granade. Spielberg typified it as "a black and white score '' and said, "I think Johnny Williams does a really nice bit of homage to Benny Herrmann. ''
In an interview which appeared in The New York Times, Williams said that the choices for many of the pieces of classical music were made by the studio. He also said that while he did not know why certain pieces were chosen, Franz Schubert 's Symphony No. 8 (commonly known as the Unfinished Symphony), which features prominently in the film, was most likely included because Anderton was a big fan of classical music in the script. Some of the other choices, such as Gideon 's playing of Jesu, Joy of Man 's Desiring by Bach on an organ in the subterranean prison, were also in the screenplay, and he figured that "(t) hey are some writer 's conception of what this character might have listened to. '' Williams did choose the minuet from a Haydn string quartet (Op. 64, No. 1) which plays on the radio in the scene where Dr. Hineman is gardening in her greenhouse. He said he picked the piece because "(i) t seemed to me to be the kind of thing a woman like this would play on the radio. '' The New York Times characterized the score as "evocative '' and said it was "thoroughly modern '' while also being "interlaced with striking snippets of masterworks. '' The piece heard when Anderton is shown entering his apartment and he tells his computer system "I 'm home '' is the 2nd movement of Pyotr Ilyich Tchaikovsky 's Symphony No. 6 in B minor, known as the "Pathetique '' Symphony.
The most commonly criticized element of the film is its ending. The film has a more traditional "happy ending '' which contradicts the tone of the rest of the picture. This has led to speculation that this ending is the product of John 's imagination, caused by hallucinations from his forced coma after he is incarcerated. As one observer mused, "The conclusion of Minority Report strikes me as a joke Spielberg played on his detractors -- an act of perfectly measured deviltry. ''
One critic theorized, "... (r) ather than end this Brazil - ian sci - fi dystopia with the equivalent of that film 's shot of its lobotomized hero, which puts the lie to the immediately previous scene of his imagined liberation, Spielberg tries to pass off exactly the same ending but without the rimshot, just to see if the audience is paying attention. '' Film scholars Nigel Morris and Jason P. Vest point to a line in the film as possible evidence of this. After Anderton is captured, Gideon tells him that, "It 's actually kind of a rush. They say you have visions. That your life flashes before your eyes. That all your dreams come true. '' While Vest considers the blissful dream ending a possibility, he questions why Anderton did not imagine his son as having returned.
Buckland expressed disappointment in the ending, but blamed Frank. He felt that, given the water theme and the closely tied tragic parent - child theme, Anderton should have ended the film by taking Agatha into his care if Spielberg wanted a happy ending, especially since "Anderton kidnaps Agatha from the precog pool just as his son was kidnapped from a swimming pool '' and because Anderton could act as a "substitute parent for Agatha, and Agatha... a substitute child for Anderton. '' This opportunity is missed, however, when the precogs are sent to the remote island and Anderton reunites with his wife; an ending which Buckland finds more "forced '' than the "more authentic '' resolution he believes the film had set up.
Minority Report is a futuristic film which portrays elements of both a dystopian and a utopian future. The movie renders a much more detailed view of its future world than the short story, and contains new technologies not in Dick 's original. From a stylistic standpoint, Minority Report resembles Spielberg 's previous film A.I., but also incorporates elements of film noir. Spielberg said that he "wanted to give the movie a noir feel. So I threw myself a film festival. Asphalt Jungle. Key Largo. The Maltese Falcon. '' The picture was deliberately overlit, and the negative was bleach - bypassed during post-production. The scene in which Anderton is dreaming about his son 's kidnapping at the pool is the only one shot in "normal '' color. Bleach - bypassing gave the film a distinctive look; it desaturated the film 's colors, to the point that it nearly resembles a black - and - white movie, yet the blacks and shadows have a high contrast like a film noir picture. The color was reduced by "about 40 % '' to achieve the "washed - out '' appearance. Elvis Mitchell, formerly of The New York Times, commented that "(t) he picture looks as if it were shot on chrome, caught on the fleeing bumper of a late ' 70s car. ''
Cinematographer Janusz Kamiński shot the movie with high - speed film, which Spielberg preferred to the then - emerging digital video format. The movie 's camera work is very mobile, alternating between handheld and Steadicam shots, which are "exaggerated by the use of wide angle lenses and the occasional low camera angle '' to increase the perception of movement according to film scholar Warren Buckland. Kamiński said that he never used a lens longer than 27mm, and alternated between 17, 21, and 27mm lenses, as Spielberg liked to "keep the actors as close to the camera as possible ''. He also said, "We staged a lot of scenes in wide shots that have a lot of things happening with the frame. '' The duo also used several long takes to focus on the emotions of the actors, rather than employing numerous cuts. Spielberg eschewed the typical "shot reverse shot '' cinematography technique used when filming characters ' interactions in favor of the long takes, which were shot by a mobile, probing camera. McDowell relied on colorless chrome and glass objects of curved and circular shapes in his set designs, which, aided by the "low - key contrastive lighting '', populated the film with shadows, creating a "futuristic film noir atmosphere ''.
Buckland describes the film 's 14 - minute opening sequence as the "most abstract and complex of any Spielberg film. '' The first scene is a distorted precog vision of a murder, presented out of context. The film is sped up, slowed, and even reversed, and the movie "jumps about in time and space '' by intercutting the images in no discernible order. When it ends, it becomes clear that the scene was presented through Agatha 's eyes, and that this is how previsions appear to her. Fellow scholar Nigel Morris called this scene a "trailer '', because it foreshadows the plot and establishes the type of "tone, generic expectations, and enigmas '' that will be used in the film. The visions of the precogs are presented in a fragmented series of clips using a "squishy lens '' device, which distorts the images, blurring their edges and creating ripples across them. They were created by a two - man production team, hired by Spielberg, who chose the "layered, dreamlike imagery '' based on some comments from cognitive psychologists the pair consulted. In the opening 's next scene, Anderton is "scrubbing the images '' by standing like a composer (as Spielberg terms it) and manipulating them, while Jad assists him. Next, the family involved in the murder in Agatha 's vision is shown interacting, which establishes that the opening scene was a prevision. The picture then cuts back to Anderton and the precogs ' images, before alternating between the three. The opening is self - contained, and according to Buckland acts merely as a setup for numerous elements of the story. It lasts 14 minutes, includes 171 shots, and has an average shot length of five seconds, as opposed to the 6.5 - second average for the entire film. The opening 's five - second average is attained despite "very fast cutting '' at the beginning and end, because the middle has longer takes, which reach 20 seconds in some instances. Spielberg also continues his tradition of "heavily diffused backlighting '' for many of the interior shots.
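As a quick check, Buckland 's stated average follows directly from the sequence 's running time and shot count:

\[ \frac{14 \times 60\ \mathrm{s}}{171\ \text{shots}} \approx 4.9\ \text{s per shot} \]

which rounds to the five - second figure he cites, noticeably brisker than the 6.5 - second average across the film as a whole.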
Spielberg typically keeps the plot points of his films closely guarded before their release, and Minority Report was no different. He said he had to remove some scenes, and a few "F - words '' to get the film 's PG - 13 rating. Following the disappointing box office results of Spielberg 's A.I., the marketing campaign for Minority Report downplayed his role in the movie and sold the film as a Cruise action thriller.
Tom Rothman, chairman of the film 's co-financier Fox Filmed Entertainment, described the film 's marketing strategy thus: "How are we marketing it? It 's Cruise and Spielberg. What else do we need to do? '' The strategy made sense; coming into the film, Spielberg had made 20 films which grossed a domestic total of $2.8 billion, while Cruise 's resume featured 23 films and $2 billion in domestic revenues. With their combined 30 % take of the film 's box office though, sources such as BusinessWeek 's Ron Grover predicted the studios would have a hard time making the money needed to break even. Despite the outward optimism, as a more adult - oriented, darker film than typical blockbusters, the studio held different box office expectations for the movie than they would a more family friendly film. Entertainment Weekly projected the film would gross $40 million domestic in its opening weekend, and Variety predicted that the high concept storyline would not appeal to children and would render it a "commercial extra-base hit rather than a home run. ''
Minority Report 's world premiere took place in New York City on June 19, 2002. An online "popcorn cam '' broadcast live from inside the premiere. Cruise attended the London premiere the following week, and mingled with thousands of adoring fans as he walked through the city 's Leicester Square. It debuted at first place in the U.S. box office, collecting $35.677 million in its opening weekend. Forbes considered those numbers below expectations, as they gave the film a small edge over Lilo & Stitch, which debuted in second place ($35.260 million). Lilo & Stitch sold more tickets, but since many of the film 's attendees were children, its average ticket price was much lower. The movie opened at the top of the box office in numerous foreign markets; it made $6.7 million in 780 locations in Germany its opening weekend, and accounted for 35 % of France 's total weekend box office gross when it collected $5 million in 700 theaters. In Great Britain, Minority Report made $36.9 million in its first three days, in Italy, $6.2 million in its first ten days, in Belgium, $815,000 in its 75 - location opening weekend, and in Switzerland, $405,000 in an 80 - theater opening weekend. The BBC felt the film 's UK performance was "buoyed by Cruise 's charm offensive at last week 's London premiere. '' Minority Report made a total of $132 million in the United States and $226.3 million overseas.
DreamWorks spent several million dollars marketing the film 's DVD and VHS releases. The campaign included a tie - in video game released by Activision, which contained a trailer for the movie 's DVD. Minority Report was successful in the home video market, selling at least four million DVDs in its first few months of release. The DVD took two years to produce. For the first time, Spielberg allowed filmmakers to shoot footage on the set of one of his films. Premiere - award - winning DVD producer Laurent Bouzereau, who would become a frequent Spielberg DVD collaborator, shot hundreds of hours of the film 's production in the then - new high - definition video format. It contained over an hour of featurettes which discussed various aspects of film production, included breakdowns of the film 's stunt sequences, and new interviews with Spielberg, Cruise, and other "Academy Award - winning filmmakers ''. The film was released on a two - disc Blu - ray by Paramount Pictures (now the owner of the early DreamWorks library) on May 16, 2010. It included exclusive extras and interactive features, such as a new Spielberg interview, that were not included in the DVD edition. The film was transferred from its "HD master '' which retained the movie 's distinctive grainy appearance.
A video game based on the film, titled Minority Report: Everybody Runs, was developed by Treyarch, published by Activision, and released on November 18, 2002, for Game Boy Advance, Nintendo GameCube, PlayStation 2, and Xbox. It received mixed reviews.
The film received critical acclaim. On the review aggregator Rotten Tomatoes, Minority Report holds 90% positive reviews based on 239 critics, with an average rating of 8.2/10. The site's critical consensus reads: "Thought-provoking and visceral, Steven Spielberg successfully combines high concept ideas and high octane action in this fast and febrile sci-fi thriller." The website listed it among the best-reviewed films of 2002. The film also earned a score of 80 out of a possible 100 on the similar review-aggregation website Metacritic. Most critics gave the film's handling of its central theme (free will versus determinism) positive reviews, and many ranked it as the film's main strength; other reviewers, however, felt that Spielberg did not adequately tackle the issues he raised. The movie has inspired significant discussion and analysis, the scope of which has been compared to the continuing analysis of Blade Runner, and this discussion has advanced past the realm of standard film criticism: the Slovenian philosopher Slavoj Žižek fashioned a criticism of the Cheney Doctrine by comparing its preemptive-strike methodology to that of the film's PreCrime system.
Richard Corliss of Time called it "Spielberg's sharpest, brawniest, most bustling entertainment since Raiders of the Lost Ark". Mike Clark of USA Today felt it succeeded thanks to a "breathless 140-minute pace with a no-flab script packed with all kinds of surprises." Lisa Schwarzbaum of Entertainment Weekly praised the film's visuals, and Todd McCarthy of Variety complimented the cast's performances. Film scholar Warren Buckland recommended the film, but felt that the comedic elements, aside from Stormare's lines, detracted from the plot and undermined the film's credibility.
Several critics used their reviews to discuss Spielberg and analyze what the movie signified in his development as a filmmaker. Andrew O'Hehir of the online magazine Salon expressed excitement over the atypically hard edge of the movie: "Little Steven Spielberg is all grown up now... into of all things a superior film artist... It's too early to know whether Minority Report, on the heels of A.I., marks a brief detour in Spielberg's career or a permanent change of course, but either way it's a dark and dazzling spectacle." J. Hoberman of The Village Voice said it is "the most entertaining, least pretentious genre movie Steven Spielberg has made in the decade since Jurassic Park." Randy Shulman of Metro Weekly said that "the movie is a huge leap forward for the director, who moves once and for all into the world of adult movie making." Roger Ebert called the film a "masterpiece" and said that while most directors of the period were putting "their trust in technology", Spielberg had already mastered it and was emphasizing "story and character", merely using technology as a "workman uses his tools". David Edelstein of Slate echoed the positive sentiments, saying "[i]t has been a long time since a Spielberg film felt so nimble, so unfettered, so free of self-cannibalizing." Jonathan Rosenbaum, then of the Chicago Reader, was less convinced: though he approved of the movie, he derided it in his review as a superficial action film, cautioning audiences to enjoy the movie but not to "be conned into thinking that some sort of serious, thoughtful statement is being delivered along with the roller-coaster ride."
Andrew Sarris of The New York Observer gave the film a negative review in which he described the script as full of plot holes and the car chases as silly, and criticized the mixture of futuristic environments with "defiantly retro costuming". The complexity of the storyline was also a source of criticism for Kenneth Turan of the Los Angeles Times, who considered the plot "too intricate and difficult to follow". Rick Groen of The Globe and Mail criticized Tom Cruise's performance, and though Hoberman liked the movie, he described the film as "miscast, misguided, and often nonsensical". Both Rosenbaum and Hoberman belittled the titular minority report as a "red herring"; more positive reviews have seen it similarly, but referred to it as a "MacGuffin".
The film earned nominations for many awards, including Best Sound Editing at the Academy Awards and Best Visual Effects at the BAFTAs. It was nominated for eleven Saturn Awards, including Best Actor for Cruise, Best Supporting Actor for von Sydow, and Best Music for Williams, and won four: Best Science Fiction Film, Best Direction for Spielberg, Best Writing for Frank and Cohen, and Best Supporting Actress for Morton. It also won the BMI Film Music Award, the Online Film Critics Society Award for Best Supporting Actress, and the Empire Awards for Best Actor (Cruise), Best Director (Spielberg), and Best British Actress (Morton). Ebert listed Minority Report as the best film of 2002, as did online film reviewer James Berardinelli, and the film was also included in top-ten lists by critic Richard Roeper and both reviewers at USA Today.
In 2008, the American Film Institute nominated this film for its Top 10 Science Fiction Films list.
On September 9, 2014, it was announced that a follow-up television series had been given a pilot commitment at Fox. Max Borenstein wrote the script and served as executive producer alongside Spielberg, Justin Falvey, and Darryl Frank. The series was envisioned as being set 10 years after the film, focusing on a male precog who teams up with a female detective to find a purpose for his gift. On February 13, 2015, Daniel London and Li Jun Li joined the cast; on February 24, 2015, Laura Regan was cast as Agatha Lively, replacing Samantha Morton, who had reportedly been offered the chance to reprise the role. In March 2015, Stark Sands and Meagan Good landed the lead roles, with Sands playing Dash, one of the male precogs, and Good playing Lara Vega, a detective haunted by her past who works with Dash to help him find a purpose for his gift. Li Jun Li plays Akeela, a CSI technician; Daniel London reprised his role as Wally the Caretaker from the original film; and Wilmer Valderrama was cast as a police detective. Fox picked the show up to series on May 9, 2015, and it made its broadcast debut on September 21, 2015. However, Fox cancelled the show on May 13, 2016.
|
discuss different things you can do in punta del este uruguay brainly | Uruguay Round - wikipedia
The Uruguay Round was the 8th round of multilateral trade negotiations (MTN) conducted within the framework of the General Agreement on Tariffs and Trade (GATT), spanning from 1986 to 1994 and embracing 123 countries as "contracting parties". The Round led to the creation of the World Trade Organization (WTO), with GATT remaining as an integral part of the WTO agreements. The broad mandate of the Round was to extend GATT trade rules to areas previously exempted as too difficult to liberalize (agriculture, textiles) and to increasingly important new areas previously not included (trade in services, intellectual property, trade-distorting investment policies). The Round came into effect in 1995, with deadlines ending in 2000 (2004 in the case of developing-country contracting parties), under the administrative direction of the newly created WTO.
The Doha Development Round was the next trade round, beginning in 2001 and still unresolved after missing its official deadline of 2005.
The main objectives of the Uruguay Round were:
- to reduce agricultural subsidies;
- to lift restrictions on foreign investment; and
- to begin the process of opening trade in services like banking and insurance.
The negotiators also wanted to draft a code to deal with copyright violation and other forms of intellectual property rights.
The round was launched in Punta del Este, Uruguay, in September 1986, followed by negotiations in Geneva, Brussels, Washington, D.C., and Tokyo, with the 20 agreements finally being signed in Marrakesh (the Marrakesh Agreement) in April 1994.
The 1986 Ministerial Declaration identified problems including structural deficiencies and the spill-over impacts of certain countries' policies on world trade that GATT could not manage. To address these issues, the eighth GATT round (known as the Uruguay Round) was launched in September 1986 in Punta del Este, Uruguay. It was the biggest negotiating mandate on trade ever agreed: the talks were going to extend the trading system into several new areas, notably trade in services and intellectual property, and to reform trade in the sensitive sectors of agriculture and textiles; all the original GATT articles were up for review.
The round was supposed to end in December 1990, but the US and EU disagreed on how to reform agricultural trade and decided to extend the talks. Finally, in November 1992, the US and EU settled most of their differences in a deal known informally as "the Blair House accord", and on April 15, 1994, the deal was signed by ministers from most of the 123 participating governments at a meeting in Marrakesh, Morocco. The agreement established the World Trade Organization, which came into being upon its entry into force on January 1, 1995, replacing the GATT system. It is widely regarded as the most profound institutional reform of the world trading system since the GATT's establishment.
The position of developing countries was detailed in the book Brazil in the Uruguay Round of the GATT: The Evolution of Brazil's Position in the Uruguay Round, with Emphasis on the Issue of Services, which describes the polemics surrounding the issue of services as well as the opposition of developing countries to the so-called "New Issues".
The GATT still exists as the WTO's umbrella treaty for trade in goods, updated as a result of the Uruguay Round negotiations (a distinction is made between GATT 1994, the updated parts of GATT, and GATT 1947, the original agreement which is still the heart of GATT 1994). GATT 1994 is not, however, the only legally binding agreement included in the Final Act; a long list of about 60 agreements, annexes, decisions and understandings was adopted. In fact, the agreements fall into a simple structure with six main parts:
- an umbrella agreement (the Agreement Establishing the WTO);
- agreements on goods;
- agreements on services;
- agreements on intellectual property;
- dispute settlement; and
- reviews of governments' trade policies.
The agreements for the two largest areas under the WTO, goods and services, share a three-part outline:
- broad principles (the GATT for goods and the GATS for services);
- extra agreements and annexes dealing with the special requirements of specific sectors or issues; and
- detailed and lengthy schedules of commitments made by individual countries.
One of the achievements of the Uruguay Round was the Agreement on Agriculture, administered by the WTO, which brings agricultural trade more fully under the GATT. Prior to the Uruguay Round, conditions for agricultural trade were deteriorating with increasing use of subsidies, build-up of stocks, declining world prices, and escalating costs of support. The agreement provides for converting quantitative restrictions to tariffs and for a phased reduction of tariffs. It also imposes rules and disciplines on agricultural export subsidies and domestic subsidies, while sanitary and phytosanitary (SPS) measures are addressed through the Agreement on the Application of Sanitary and Phytosanitary Measures.
Groups such as Oxfam have criticized the Uruguay Round for paying insufficient attention to the special needs of developing countries. One aspect of this criticism is that figures very close to rich-country industries, such as former Cargill executive Dan Amstutz, had a major role in the drafting of Uruguay Round language on agriculture and other matters. As with the WTO in general, non-governmental organizations (NGOs) such as Health Gap and Global Trade Watch also criticize what was negotiated in the Round on intellectual property and industrial tariffs as setting up too many constraints on policy-making and human needs. One analysis attributes this imbalance to the developing countries' lack of experience in WTO negotiations and lack of knowledge of how their economies would be affected by what the industrial countries wanted in the new WTO areas; to the intensified mercantilist attitude of the GATT/WTO's major power, the US; and to the structure of the WTO, which made the GATT tradition of decision by consensus ineffective, so that a country could not preserve the status quo.
|
who's won the most champions league trophies | List of UEFA club competition Winners - wikipedia
The Union of European Football Associations (UEFA) is the governing body for association football in Europe. It organises three club competitions: the UEFA Champions League (formerly the European Cup), the UEFA Europa League (formerly the UEFA Cup) and the UEFA Super Cup. UEFA was also responsible for the Cup Winners' Cup and the Intertoto Cup, until their discontinuation in 1999 and 2008, respectively. Together with the Confederación Sudamericana de Fútbol (CONMEBOL), it also organised the Intercontinental Cup, which was last held in 2004, before its replacement by FIFA's Club World Cup.
Spanish side Real Madrid have won a record total of 22 titles in UEFA competitions, five more than Milan of Italy. The only team to have won every UEFA club competition is Juventus of Italy, who received The UEFA Plaque on 12 July 1988 in recognition of winning the three seasonal confederation trophies: the UEFA Cup in 1977, the Cup Winners' Cup in 1984, and the European Cup in 1985. Juventus also won their first Super Cup in 1984, their first Intercontinental Cup in 1985, and the Intertoto Cup in 1999.
Spanish clubs have won the most titles (59), ahead of clubs from Italy (48) and England (40). Italy is the only country in European football history whose clubs have won the three main competitions in the same season: in 1989–90, Milan retained the European Cup, Sampdoria won the Cup Winners' Cup, and Juventus secured the UEFA Cup.
While the Inter-Cities Fairs Cup is considered to be the predecessor of the UEFA Cup, it is not officially recognised by UEFA, and therefore successes in that competition are not included in this list. Also excluded are the unofficial 1972 European Super Cup and the Club World Cup, a FIFA competition.
Real Madrid hold the record for the most overall titles, with 22, followed by Milan with 17. Spanish teams hold the record for the most wins in each of the three main UEFA club competitions: Real Madrid, with thirteen European Cup / UEFA Champions League titles; Sevilla, with five UEFA Cup / UEFA Europa League titles; and Barcelona, with four Cup Winners' Cup titles. Milan share the most Super Cup wins (five) with Barcelona, and the most Intercontinental Cup wins (three) with Real Madrid. The German clubs Hamburg, Schalke 04 and Stuttgart, and the Spanish club Villarreal, are the record holders for titles won in the UEFA Intertoto Cup (twice each).
Juventus, Ajax, Bayern Munich, Chelsea and Manchester United are the only teams to have won all three of UEFA's main club competitions (the European Cup / UEFA Champions League, the Cup Winners' Cup, and the UEFA Cup / UEFA Europa League). However, Juventus is the only team to have won every UEFA club competition, a list that additionally includes the Super Cup, the Intertoto Cup, and the Intercontinental Cup.
The following table lists all the clubs that have won at least one UEFA club competition, and is updated as of 26 May 2018 (in chronological order).
Spanish clubs are the most successful in UEFA competitions, with a total of 59 titles, and hold a record number of wins in the European Cup / UEFA Champions League (17), the UEFA Super Cup (14), and the UEFA Cup / UEFA Europa League (11). Italian clubs have the most victories in the Intercontinental Cup (7). In third place overall, English clubs have secured 40 titles, including a record eight wins in the Cup Winners' Cup. French clubs, ranked sixth in UEFA competition titles, have won the Intertoto Cup the most times (12). Italian clubs are the only ones in European football history to have won the three main UEFA competitions in the same season (1989–90).
The following table lists all the countries whose clubs have won at least one UEFA competition, and is updated as of 26 May 2018 (in chronological order).
* = Germany's record includes West Germany and East Germany.
|
is judge not lest ye be judged in the bible | Matthew 7: 2 - wikipedia
Matthew 7:2 is the second verse of the seventh chapter of the Gospel of Matthew in the New Testament and is part of the Sermon on the Mount. This verse continues the discussion of judgmentalism.
In the King James Version of the Bible the text reads: "For with what judgment ye judge, ye shall be judged: and with what measure ye mete, it shall be measured to you again."
The World English Bible translates the passage as: "For with whatever judgment you judge, you will be judged; and with whatever measure you measure, it will be measured to you."
For a collection of other versions, see BibRef Matthew 7:2.
This verse simply states that he who judges will himself be judged. If you impose standards upon others, those same standards will be applied to you.
As Schweizer notes, this verse, if read literally, contradicts the previous one: while the first says not to judge, this one establishes rules for judging. Luz advances the explanation that this verse states that if you search to find faults with others, God will then search to find fault with you, and since all humans are infinitely flawed, you would then easily be condemned. Thus even a small amount of judging by a person will bring a great punishment from God, and this verse essentially repeats the first verse's argument against judging. Most scholars simply believe that the condemnation of judging in Matthew 7:1 is far from absolute.
While, as in the previous verse, the wording seems to imply that God is the final judge, Fowler mentions other possibilities. This could be a teaching on healthy interpersonal relations, with the verse arguing that anyone who judges their fellows will themselves be judged by those around them: if you find fault with others, others will find fault with you. It could also refer to the danger of excessive internal criticism and self-consciousness: if you are constantly judging others, you will feel others are doing the same and will force yourself to try to meet their standards, in direct contrast to the condemnation of worry in the previous chapter.
The phrase "measure for measure", which also appears at Mark 4:24 in a different context, may be linked to the Rabbinic belief that God has two measures for the world: mercy and justice. The phrase is most notable for being reused as the title of the Shakespeare play Measure for Measure.
|
who pitched the only perfect game in world series history | Don Larsen 's perfect game - wikipedia
On October 8, 1956, in Game 5 of the 1956 World Series, Don Larsen of the New York Yankees threw a perfect game against the Brooklyn Dodgers. Larsen's perfect game is the only perfect game in the history of the World Series and one of only 23 perfect games in MLB history. It remained the only no-hitter of any type ever pitched in postseason play until Philadelphia Phillies pitcher Roy Halladay threw a no-hitter against the Cincinnati Reds on October 6, 2010, in Game 1 of the National League Division Series, and the only postseason game in which any team faced the minimum 27 batters until Kyle Hendricks and Aroldis Chapman of the Chicago Cubs combined for the feat in the decisive sixth game of the 2016 National League Championship Series.
Don Larsen made his first World Series start for the New York Yankees in the 1955 World Series against the Brooklyn Dodgers; he lost the game.
The Yankees and Dodgers faced each other again in the 1956 World Series. Behind Sal Maglie, the Dodgers defeated the Yankees in Game 1. Casey Stengel, the manager of the Yankees, selected Larsen to start Game 2 against the Dodgers' Don Newcombe. Despite being given a 6–0 lead by the Yankees' batters, Larsen lasted only 1⅔ innings against the Dodgers in a 13–8 loss. He gave up just one hit, a single by Gil Hodges, but walked four batters, which led to four runs; none of them were earned because of an error by first baseman Joe Collins. The Yankees won Games 3 and 4 to tie the series at two games apiece.
With the series tied at two games apiece, Larsen started Game 5 for the Yankees; his opponent was again Maglie. The Yankees scored two runs off Maglie, as Mickey Mantle hit a home run and Hank Bauer had an RBI single. Larsen retired all 27 batters he faced to complete the perfect game.
Larsen needed just 97 pitches to complete the game, and only one Dodger batter (Pee Wee Reese, in the first inning) was able to get a three-ball count. In 1998, Larsen recalled, "I had great control. I never had that kind of control in my life." The Dodgers came closest to a hit in the second inning, when Jackie Robinson hit a line drive off third baseman Andy Carey's glove and the ball caromed to shortstop Gil McDougald, who threw Robinson out by a step, and in the fifth, when Mickey Mantle ran down Gil Hodges' deep fly ball. Brooklyn's Maglie gave up only two runs on five hits and was perfect himself until Mantle's fourth-inning home run broke the scoreless tie. The Yankees added an insurance run in the sixth, as Hank Bauer's single scored Carey, who had opened the inning with a single and been sacrificed to second by Larsen. After Roy Campanella grounded out to Billy Martin for the second out of the ninth inning, Larsen faced pinch hitter Dale Mitchell, a .311 career hitter. Throwing fastballs, Larsen got ahead in the count at 1–2. On his 97th pitch, Larsen struck out Mitchell for the 27th consecutive and final out. Mitchell tried to check his swing on that last pitch, but home plate umpire Babe Pinelli, who would retire at the end of the World Series, called it a strike. Mitchell, who struck out only 119 times in 3,984 career at-bats (or once every 34 at-bats), always maintained that the third strike he took was really a ball.
In one of the most iconic images in sports history, catcher Yogi Berra leaped into Larsen's arms after the final out. With the death of Berra on September 22, 2015, Larsen is the last living player from either team who played in this game.
NBC televised the game, with announcers Mel Allen (for the Yankees) and Vin Scully (for the Dodgers). In 2006, it was announced that a nearly complete kinescope recording of the Game 5 telecast, featuring Larsen's perfect game, had been preserved and discovered by a collector. That kinescope recording aired during MLB Network's first night on the air, January 1, 2009, supplemented with an interview of both Larsen and Yogi Berra by Bob Costas. The first inning of the telecast is still considered lost and was not aired by MLB Network or included in a subsequent DVD release of the game.
The Dodgers won Game 6 of the series, but the Yankees won the decisive Game 7. Larsen's performance earned him the World Series Most Valuable Player Award and the Babe Ruth Award. When the World Series ended, Larsen did a round of endorsements and promotional work around the United States, but he stopped soon after because it was "disrupting his routine".
Larsen's perfect game remained the only no-hitter thrown in the MLB postseason until Roy Halladay of the Philadelphia Phillies threw one on October 6, 2010, against the Cincinnati Reds in the first game of the 2010 National League Division Series. Halladay, who had already thrown a perfect game earlier in the 2010 season, faced 28 batters after giving up a walk (to Jay Bruce) in the fifth inning. Larsen's perfect game remained the only postseason game in which any team faced the minimum 27 batters until Kyle Hendricks and Aroldis Chapman of the Chicago Cubs achieved the feat in Game 6 of the 2016 National League Championship Series. In that game, the Cubs gave up two hits and a walk and committed a fielding error, but managed to put out all four opposing baserunners (three via double plays and one on a pickoff).
|