question | context
---|---|
where does it talk about checks and balances in the constitution | Separation of powers under the United States Constitution - wikipedia
Separation of powers is a political doctrine originating in the writings of Charles de Secondat, Baron de Montesquieu in The Spirit of the Laws, in which he argued for a constitutional government with three separate branches, each of which would have defined abilities to check the powers of the others. This philosophy heavily influenced the writing of the United States Constitution, according to which the Legislative, Executive, and Judicial branches of the United States government are kept distinct in order to prevent abuse of power. This United States form of separation of powers is associated with a system of checks and balances.
During the Age of Enlightenment, philosophers such as Montesquieu advocated the principle in their writings, whereas others, such as Thomas Hobbes, strongly opposed it. Montesquieu was one of the foremost supporters of separating the legislature, the executive, and the judiciary. His writings considerably influenced the opinions of the framers of the United States Constitution.
Strict separation of powers did not operate in the United Kingdom, the political structure of which served in most instances as a model for the government created by the U.S. Constitution.
Some U.S. states did not observe a strict separation of powers in the 18th century. In New Jersey, the Governor also functioned as a member of the state 's highest court and as the presiding officer of one house of the New Jersey Legislature. The President of Delaware was a member of the Court of Appeals; the presiding officers of the two houses of the state legislature also served in the executive department as Vice Presidents. In both Delaware and Pennsylvania, members of the executive council served at the same time as judges. On the other hand, many southern states explicitly required separation of powers. Maryland, Virginia, North Carolina and Georgia all kept the branches of government "separate and distinct. ''
Congress has the sole power to legislate for the United States. Under the nondelegation doctrine, Congress may not delegate its lawmaking responsibilities to any other agency. In this vein, the Supreme Court held in the 1998 case Clinton v. City of New York that Congress could not delegate a "line - item veto '' to the President, as no such power is vested in the President by the Constitution.
Where Congress does not make great and sweeping delegations of its authority, the Supreme Court has been less stringent. One of the earliest cases involving the exact limits of non-delegation was Wayman v. Southard 23 U.S. (10 Wheat.) 1, 42 (1825). Congress had delegated to the courts the power to prescribe judicial procedure; it was contended that Congress had thereby unconstitutionally clothed the judiciary with legislative powers. While Chief Justice John Marshall conceded that the determination of rules of procedure was a legislative function, he distinguished between "important '' subjects and mere details. Marshall wrote that "a general provision may be made, and power is given to those who are to act under such general provisions, to fill up the details. ''
Marshall 's words and future court decisions gave Congress much latitude in delegating powers. It was not until the 1930s that the Supreme Court held a delegation of authority unconstitutional. In A.L.A. Schechter Poultry Corp. v. United States, 295 U.S. 495 (1935), a case involving the creation of the National Recovery Administration, the Court held that Congress could not authorize the president to formulate codes of "fair competition ''; Congress must itself set some standards governing the actions of executive officers. The Court, however, has deemed that phrases such as "just and reasonable, '' "public interest '' and "public convenience '' suffice.
Executive power is vested, with exceptions and qualifications, in the President. Under Article II, Section 2, the president is Commander in Chief of the Army and Navy, and of the Militia of the several states when called into federal service, and has the power to make treaties and appointments to office "with the Advice and Consent of the Senate. '' Under Section 3, the president receives Ambassadors and Public Ministers and must "take care that the laws be faithfully executed. '' By using these words, the Constitution does not require the president to personally enforce the law; rather, officers subordinate to the president may perform such duties. The Constitution empowers the president to ensure the faithful execution of the laws made by Congress and approved by the President. Congress may itself terminate such appointments by impeachment, and may restrict the president. Bodies such as the War Claims Commission (created by the War Claims Act of 1948), the Interstate Commerce Commission, and the Federal Trade Commission -- all quasi-judicial -- often have direct Congressional oversight.
Congress often writes legislation to restrain executive officials to the performance of their duties, as laid out by the laws Congress passes. In Immigration and Naturalization Service v. Chadha (1983), the Supreme Court decided (a) The prescription for legislative action in Art. I, § 1 -- requiring all legislative powers to be vested in a Congress consisting of a Senate and a House of Representatives -- and § 7 -- requiring every bill passed by the House and Senate, before becoming law, to be presented to the president, and, if he disapproves, to be repassed by two - thirds of the Senate and House -- represents the Framers ' decision that the legislative power of the Federal Government be exercised in accord with a single, finely wrought and exhaustively considered procedure. This procedure is an integral part of the constitutional design for the separation of powers. Further rulings clarified the case; even both Houses acting together can not override Executive vetoes without a two - thirds majority. Legislation may always prescribe regulations governing executive officers.
Judicial power -- the power to decide cases and controversies -- is vested in the Supreme Court and inferior courts established by Congress. The judges must be appointed by the president with the advice and consent of the Senate, hold office during good behavior and receive compensations that may not be diminished during their continuance in office. If a court 's judges do not have such attributes, the court may not exercise the judicial power of the United States. Courts exercising the judicial power are called "constitutional courts. ''
Congress may establish "legislative courts, '' which do not take the form of judicial agencies or commissions, whose members do not have the same security of tenure or compensation as the constitutional court judges. Legislative courts may not exercise the judicial power of the United States. In Murray 's Lessee v. Hoboken Land & Improvement Co. (1856), the Supreme Court held that a legislative court may not decide "a suit at the common law, or in equity, or admiralty, '' as such a suit is inherently judicial. Legislative courts may only adjudicate "public rights '' questions (cases between the government and an individual and political determinations).
The president exercises a check over Congress through his power to veto bills, but Congress may override any veto (excluding the so - called "pocket veto '') by a two - thirds majority in each house. When the two houses of Congress can not agree on a date for adjournment, the president may settle the dispute. Either house or both houses may be called into emergency session by the president. The Vice President serves as president of the Senate, but he may only vote to break a tie.
The president, as noted above, appoints judges with the Senate 's advice and consent. He also has the power to issue pardons and reprieves. Such pardons are not subject to confirmation by either the House of Representatives or the Senate, or even to acceptance by the recipient. The President is not mandated to carry out the orders of the Supreme Court. The Supreme Court does not have any enforcement power; the enforcement power lies solely with the executive branch. Thus, the executive branch can place a check on the Supreme Court through refusal to execute the orders of the court. For example, in Worcester v. Georgia, President Jackson refused to execute the orders of the Supreme Court.
The president is the civilian Commander in Chief of the Army and Navy of the United States. He has the authority to command them to take appropriate military action in the event of a sudden crisis. However, only the Congress is explicitly granted the power to declare war per se, as well as to raise, fund and maintain the armed forces. Congress also has the duty and authority to prescribe the laws and regulations under which the armed forces operate, such as the Uniform Code of Military Justice, and requires that all Generals and Admirals appointed by the president be confirmed by a majority vote of the Senate before they can assume their office.
Courts check both the executive branch and the legislative branch through judicial review. This concept is not written into the Constitution, but was envisioned by many of the Constitution 's Framers (for example, The Federalist Papers mention it). The Supreme Court established a precedent for judicial review in Marbury v. Madison. There were protests by some at this decision, born chiefly of political expediency, but political realities in the particular case paradoxically restrained opposing views from asserting themselves. For this reason, precedent alone established the principle that a court may strike down a law it deems unconstitutional.
A common misperception is that the Supreme Court is the only court that may determine constitutionality; the power is exercised even by the inferior courts. But only Supreme Court decisions are binding across the nation. Decisions of a Court of Appeals, for instance, are binding only in the circuit over which the court has jurisdiction.
The power to review the constitutionality of laws may be limited by Congress, which has the power to set the jurisdiction of the courts. The only constitutional limit on Congress ' power to set the jurisdiction of the judiciary relates to the Supreme Court; the Supreme Court may exercise only appellate jurisdiction except in cases involving states and cases affecting foreign ambassadors, ministers or consuls.
The Chief Justice presides in the Senate during a president 's impeachment trial. The rules of the Senate, however, generally do not grant much authority to the presiding officer. Thus, the Chief Justice 's role in this regard is a limited one.
McCulloch v. Maryland, decided in 1819, established two important principles: states may not take actions that impede valid constitutional exercises of power by the federal government, and Congress has implied powers to implement the express powers written in the Constitution in order to create a functional national government. All three branches of the US government have certain powers, and those powers relate to the other branches of government. Express powers are those expressly given to each branch in the Constitution. Implied powers are those necessary to carry out the express powers. There are also inherent and concurrent powers. Inherent powers are those not found in the Constitution that the branches of government may nevertheless exercise. Concurrent powers are those given to both state and federal governments. Powers not assigned to the federal government by the Constitution are reserved to the states, under the system called federalism.
Congress, as one of the branches of government, has many powers of its own that it uses to pass laws and establish regulations. These include express, implied, and concurrent powers. It uses its express powers to regulate bankruptcies, commerce between states and with other nations, the armed forces, and the National Guard or militia, to establish all laws necessary and proper for carrying out its other powers, and to make laws governing naturalization. Its implied powers are used to regulate taxes, the draft, immigration, protection of those with disabilities, and the minimum wage, and to outlaw discrimination. Congress 's inherent powers are used to control national borders, deal with foreign affairs, acquire new territories, defend the state from revolution, and decide the exclusion or admission of aliens. Its concurrent powers allow both the federal and state governments to create laws, deal with environmental protection, maintain national parks and prisons, and provide a police force.
The judicial branch of government holds powers as well. It uses express and concurrent powers to interpret laws and establish regulations: express powers to interpret laws and perform judicial review, and implied powers to declare unconstitutional laws that were previously upheld by lower courts. It can also use express powers to declare unconstitutional laws that are in the process of being passed. Concurrent powers allow state courts to conduct trials and interpret laws without the approval of federal courts, and allow federal courts to hear appeals from lower state courts.
The executive branch also has powers of its own that it uses to make laws and establish regulations. The powers used by this branch are express, implied, and inherent. The President uses express powers to approve and veto bills and to make treaties. The President is constitutionally obligated to make sure that laws are faithfully executed and uses these powers to do just that. He uses implied powers to issue executive orders and enter into agreements with foreign nations. The executive branch uses inherent powers to establish executive privilege and to enforce statutes and laws already passed by Congress. It can also enforce the Constitution and treaties previously made by other branches of government.
The system of checks and balances ensures that no one branch of government has more power than another or can overthrow another. It creates a balance of power that is necessary for a government to function well, and in most situations it holds each branch to a certain standard of conduct. If one branch believes that what another branch is doing is unconstitutional, it can call that branch out, and each branch is able to examine the others ' wrongdoing and correct it to meet the needs of the people they serve. Humans have a history of abusing positions of power, but the system of checks and balances makes doing so much more difficult. The fact that more than one person runs each branch also leaves room for debate and discussion before decisions are made within a single branch. Even so, some laws have been made and then retracted because they were an abuse of the power given to a particular branch, created by people serving a selfish agenda rather than looking out for the welfare of those they were supposed to protect. While this is a troubling scenario, it can be corrected by another branch of government stepping up to right the wrongs that were done.
The federal government is fully capable of intervening in the affairs of Native Americans on reservations to some extent. Its ability to create and enforce treaties allows it to interact with Native Americans, build treaties that work for both parties, and establish reservations on which Native Americans may live undisturbed by the outside world. This responsibility also falls on the states: the federal government creates the treaties, but the reservations are then placed in the jurisdiction of the states, which are responsible for maintaining relationships with the Native Americans on those reservations and for honoring the treaties previously made by the federal government.
The Constitution does not explicitly indicate the pre-eminence of any particular branch of government. However, James Madison wrote in Federalist 51, regarding the ability of each branch to defend itself from actions by the others, that "it is not possible to give to each department an equal power of self - defense. In republican government, the legislative authority necessarily predominates. ''
One may claim that the judiciary has historically been the weakest of the three branches. In fact, its power to exercise judicial review -- its sole meaningful check on the other two branches -- is not explicitly granted by the U.S. Constitution. The U.S. Supreme Court exercised its power to strike down congressional acts as unconstitutional only twice prior to the Civil War: in Marbury v. Madison (1803) and Dred Scott v. Sandford (1857). The Supreme Court has since then made more extensive use of judicial review.
Throughout America 's history, dominance among the three branches has essentially been a see - saw struggle between Congress and the president. Both have had periods of great power and weakness, such as immediately after the Civil War, when Republicans had a majority in Congress and were able to pass major legislation and override most of the president 's vetoes. They also passed acts to make the president essentially subordinate to Congress, such as the Tenure of Office Act, and Johnson 's later impeachment cost the presidency much political power. The president, however, has also exercised greater power, largely during the 20th century: both Roosevelts greatly expanded the powers of the presidency and wielded great power during their terms.
The first six presidents of the United States did not make extensive use of the veto power: George Washington only vetoed two bills, James Monroe one, and John Adams, Thomas Jefferson and John Quincy Adams none. James Madison, a firm believer in a strong executive, vetoed seven bills. None of the first six Presidents, however, used the veto to direct national policy. It was Andrew Jackson, the seventh President, who was the first to use the veto as a political weapon. During his two terms in office, he vetoed 12 bills -- more than all of his predecessors combined. Furthermore, he defied the Supreme Court in enforcing the policy of ethnically cleansing Native American tribes ("Indian Removal ''); he stated (perhaps apocryphally), "John Marshall has made his decision. Now let him enforce it! ''
Some of Jackson 's successors made no use of the veto power, while others used it intermittently. It was only after the Civil War that presidents began to use the power to truly counterbalance Congress. Andrew Johnson, a Democrat, vetoed several Reconstruction bills passed by the "Radical Republicans. '' Congress, however, managed to override fifteen of Johnson 's twenty - nine vetoes. Furthermore, it attempted to curb the power of the presidency by passing the Tenure of Office Act. The Act required Senate approval for the dismissal of senior Cabinet officials. When Johnson deliberately violated the Act, which he felt was unconstitutional (Supreme Court decisions later vindicated such a position), the House of Representatives impeached him; he was acquitted in the Senate by one vote.
Johnson 's impeachment was perceived to have done great damage to the presidency, which came to be almost subordinate to Congress. Some believed that the president would become a mere figurehead, with the Speaker of the House of Representatives becoming a de facto prime minister. Grover Cleveland, the first Democratic President following Johnson, attempted to restore the power of his office. During his first term, he vetoed over 400 bills -- twice as many bills as his 21 predecessors combined. He also began to suspend bureaucrats who were appointed as a result of the patronage system, replacing them with more "deserving '' individuals. The Senate, however, refused to confirm many new nominations, instead demanding that Cleveland turn over the confidential records relating to the suspensions. Cleveland steadfastly refused, asserting, "These suspensions are my executive acts... I am not responsible to the Senate, and I am unwilling to submit my actions to them for judgment. '' Cleveland 's popular support forced the Senate to back down and confirm the nominees. Furthermore, Congress finally repealed the controversial Tenure of Office Act that had been passed during the Johnson Administration. Overall, this meant that Cleveland 's Administration marked the end of presidential subordination.
Several 20th - century presidents have attempted to greatly expand the power of the presidency. Theodore Roosevelt, for instance, claimed that the president was permitted to do whatever was not explicitly prohibited by the law -- in direct contrast to his immediate successor, William Howard Taft. Franklin Delano Roosevelt held considerable power during the Great Depression. Congress had granted Franklin Roosevelt sweeping authority; in Panama Refining v. Ryan, the Court for the first time struck down a Congressional delegation of power as violative of the doctrine of separation of powers. The aforementioned Schechter Poultry Corp. v. United States, another separation of powers case, was also decided during Franklin Roosevelt 's presidency. In response to many unfavorable Supreme Court decisions, Roosevelt introduced a "Court Packing '' plan, under which more seats would be added to the Supreme Court for the president to fill. Such a plan (which was defeated in Congress) would have seriously undermined the judiciary 's independence and power.
Richard Nixon used national security as a basis for his expansion of power. He asserted, for example, that "the inherent power of the President to safeguard the security of the nation '' authorized him to order a wiretap without a judge 's warrant. Nixon also asserted that "executive privilege '' shielded him from all legislative oversight; furthermore, he impounded federal funds (that is to say, he refused to spend money that Congress had appropriated for government programs). In the specific cases mentioned, however, the Supreme Court ruled against Nixon, partly because of an ongoing criminal investigation into the Watergate tapes, even as it acknowledged the general need for executive privilege. Since then, Nixon 's successors have sometimes asserted that they may act in the interests of national security or that executive privilege shields them from Congressional oversight. Though such claims have in general been more limited than Nixon 's, one may still conclude that the presidency 's power has been greatly augmented since the 18th and 19th centuries.
Many political scientists believe that separation of powers is a decisive factor in what they see as a limited degree of American exceptionalism. In particular, John W. Kingdon made this argument, claiming that separation of powers contributed to the development of a unique political structure in the United States. He attributes the unusually large number of interest groups active in the United States, in part, to the separation of powers; it gives groups more places to try to influence, and creates more potential group activity. He also cites its complexity as one of the reasons for lower citizen participation.
Separation of powers has again become a current issue of some controversy concerning debates about judicial independence and political efforts to increase the accountability of judges for the quality of their work, avoiding conflicts of interest, and charges that some judges allegedly disregard procedural rules, statutes, and higher court precedents.
Many legislators hold the view that separation of powers means that powers are shared among different branches; no one branch may act unilaterally on issues (other than perhaps minor questions), but must obtain some form of agreement across branches. That is, it is argued that "checks and balances '' apply to the Judicial branch as well as to the other branches -- for example, in the regulation of attorneys and judges, and the establishment by Congress of rules for the conduct of federal courts, and by state legislatures for state courts. In practice these matters are delegated to the Supreme Court, but Congress holds these powers, delegates them to the Supreme Court only for convenience in light of the Supreme Court 's expertise, and can withdraw that delegation at any time.
On the other side of this debate, many judges hold the view that separation of powers means that the Judiciary is independent and untouchable within the judicial sphere. In this view, separation of powers means that the Judiciary alone holds all powers relative to the judicial function and that the Legislative and Executive branches may not interfere in any aspect of the Judicial branch. An example of the second view at the state level is found in the Florida Supreme Court holding that only the Florida Supreme Court may license and regulate attorneys appearing before the courts of Florida, and only the Florida Supreme Court may set rules for procedures in the Florida courts. The State of New Hampshire also follows this system.
|
the watsons go to birmingham who is mr robert | The Watsons go to Birmingham -- 1963 - wikipedia
The Watsons Go to Birmingham -- 1963 (1995) is a historical - fiction novel by Christopher Paul Curtis. First published in 1995, it was reprinted in 1997. It tells the story of a loving African - American family living in the town of Flint, Michigan, in 1963. When the oldest son (Byron) begins to get into a bit of trouble, the parents decide he should spend the summer and possibly the next school year with Grandma Sands in Birmingham, Alabama. The entire family travels there together by car, and during their visit, tragic events take place.
The book was adapted for a TV movie of the same name, produced in 2013 and aired on the Hallmark Channel.
The book takes place from approximately January to October 1963, a turbulent time during the Civil Rights Movement. The Watson family is fictional, but the characters are based on members of the author 's family, including himself, and many of the events in the first half of the book are based on events from the author 's childhood.
Events later in the story center on the historic 1963 16th Street Baptist Church bombing in Birmingham, which occurred soon after civil rights protests had won negotiations with white city leaders over integration. KKK members bombed the church on September 15, 1963, killing four girls and injuring many more. In the novel, the incident is depicted as occurring a bit earlier than the historical date, allowing the Watson family to still be on summer vacation in Birmingham when it took place.
The bombing was a catalyst for increased activity in the Civil Rights Movement and work on voter registration in Mississippi, during Freedom Summer of 1964.
The novel is a first - person account narrated by Kenny Watson, who lives in Flint, Michigan with his parents, Daniel and Wilona Watson, his older brother Byron, and younger sister Joetta. The opening chapters establish Kenny as a very bright and shy 4th grader who has difficulty making true friends until Rufus Fry arrives in town from Arkansas. Rufus is also bullied by the students at Clark Elementary for his "country '' clothes and accent, making Kenny reluctant to befriend him at first, but they are soon inseparable. Kenny is alternately bullied and protected by his 13 - year - old brother Byron, whom he calls "an official teenage juvenile delinquent ''. Byron has been retained twice because he often skips school and is still in 6th grade. He invents a series of "fantastic adventures '' which constantly get him into trouble and include playing with matches in the house, abusing his parents ' credit at the corner grocery store to buy himself treats, and getting a "conk '' hairstyle against his parents ' orders.
Daniel and Wilona eventually become so frustrated with their inability to "straighten out '' Byron that they decide to send him to Birmingham, Alabama to live with Grandma Sands (Wilona 's mother) for at least the summer and possibly an entire year. As soon as the school year concludes, the Watsons ready their car ("the Brown Bomber '') and embark on a road trip from Flint to Birmingham to deliver Byron. Kenny had been looking forward to the "battle royal '' between his grandma and Byron, but is disappointed when just a few sharp words from the "old, old '' lady have Byron speaking respectfully and generally behaving himself, causing Kenny to seek out his own "adventures ''. Grandma Sands had warned the children to avoid a particular local swimming hole because of a dangerous "whirlpool '', which Kenny misheard as "Wool Pooh '' due to her thick Alabama accent. Kenny wants to swim there anyway, and is frustrated when Byron and Joetta refuse to go along. Ignoring the warnings of both Grandma Sands and Byron, Kenny jumps into the seemingly tranquil pool and edges into deeper and deeper water until he loses his footing and almost drowns. Remembering his grandmother 's words, he imagines that a strange monster he thinks is the mysterious "Wool Pooh '' swam up from below to grab his ankle and pull him under. Byron returns and jumps in to save Kenny just in time. He later insists that there was nothing else in the water, but Kenny is convinced that the Wool Pooh actually exists.
Shortly afterwards, a bomb goes off at a nearby church where Joetta is attending Sunday school. (This was the 16th Street Baptist Church bombing, a real historical event.) Kenny wanders into the still - smoking church in the immediate aftermath looking for his sister, but instead sees the Wool Pooh in the smoke clinging to a torn girl 's shoe that looks like Joetta 's. In shock, he walks back to Grandma Sands ' house without anyone noticing that he had been at the church, and he 's again shocked and confused to find Joetta already there. She claims that it was Kenny who had called her away from the church and led her home, and she does not even know that a bombing had taken place right after she 'd left Sunday school.
As soon as they realize that Joetta is safe, the Watsons decide to immediately return home to Flint, trying to avoid explaining the full implications of what has happened to the children. Kenny is unable to process the events in Birmingham and avoids his family and friends over the ensuing weeks, instead spending many hours hiding behind the sofa. Byron eventually coaxes him out and gets Kenny to talk about what happened, which finally brings a flood of tears from Kenny. Encouraging his little brother to "keep on stepping '', Byron explains that although the world is not perfect, he has to keep moving on.
Kenneth Bernard "Kenny '' Watson - The main character and narrator of the story, the younger of the two sons of the Watsons. Kenny is ten years old. He is an excellent, intelligent student with many capabilities, which makes him the target of bullying at Clark Elementary School.
Wilona Sands Watson - Usually called "Momma ''. She 's the wife of Daniel and the mother of the three children. A native of Birmingham, she slips into a thick Southern accent when mad or excited, and she complains about Flint 's harsh winters. She is strict and overprotective but loves her kids.
Daniel Watson - The husband of Wilona and father of the three children. He 's known for having a good sense of humor and is referred to as "Dad ''.
Byron "Daddy Cool '' Watson - Older brother of Kenny and Joey. He is considered the "God '' of Clark Elementary School. He bullies smaller kids along with his best friend Buphead. He is known for being a terrible student and is also known for breaking the rules and being a rebel. Byron is thirteen.
Joetta "Joey '' Watson - Younger sister of Byron and Kenny. She follows the rules and is very religious. Joey has a special relationship with Byron. She is five.
Buphead - Byron 's best friend, who is also an "official delinquent, '' helps Byron bully many kids, including Kenny, although Byron and Buphead stand up for Kenny and Rufus when they 're being bullied by Larry.
Grandma Sands - Grandmother of Kenny, Byron, and Joey, mother of Wilona, and mother - in - law of Daniel. She is supposed to be very strict, and the family meets her when they arrive in Birmingham. She laughs like an evil witch and walks with a cane due to having had a stroke. Her husband died before the book began.
Rufus Fry - Kenny 's new best friend and Cody 's big brother. His family moves to Flint from the South. He and his little brother Cody befriend Kenny.
Cody Fry - Rufus 's little brother. Rufus and Cody come from a poor Southern African - American family.
Lawrence "Larry '' Dunn - the school bully in Kenny and Rufus 's class, until Byron teaches him a lesson for stealing Kenny 's winter gloves.
Mr. Robert - a dear friend of Grandma Sands. Mr. Robert started helping Grandma Sands out around the house after her husband died. It 's hinted that Grandma Sands has a crush on Mr. Robert.
Mrs. Davidson - is the religious next - door neighbor of the Watsons. Joey goes to church with Mrs. Davidson three times a week. Sometimes Wilona makes Kenny go to Sunday School with Joey.
LJ Jones - a former playmate of Kenny who stole lots of Kenny 's toy dinosaurs.
The Watsons Go to Birmingham -- 1963 was Christopher Paul Curtis ' first novel, earning him a Newbery Honor, a Coretta Scott King (wife of Dr. Martin Luther King, Jr.) Honor, and the Golden Kite Award. Curtis also wrote the Newbery Award - winning novel Bud, Not Buddy; Elijah of Buxton, and The Mighty Miss Malone.
In 2013, a television film based on the book premiered on the Hallmark Channel, starring Anika Noni Rose, Wood Harris, Latanya Richardson, Skai Jackson and David Alan Grier. The movie adapted the story by condensing and trimming events and characters from Flint in the first half of the novel and adding new scenes showing Kenny and Byron helping local youths organize Civil Rights events in Birmingham.
|
how many goals did shevchenko scored for chelsea | Andriy Shevchenko - wikipedia
Andriy Mykolayovych Shevchenko (Ukrainian: Андрій Миколайович Шевченко, pronounced (ɑndˈrij mɪkoˈlɑjovɪtʃ ʃɛwˈtʃɛnko); born 29 September 1976) is a politician, football manager and retired Ukrainian footballer who played for Dynamo Kyiv, Milan, Chelsea and the Ukraine national team as a striker. From February to July 2016, he was an assistant coach of the Ukraine national team, at the time led by Mykhailo Fomenko. On 15 July 2016, shortly following the nation 's elimination from UEFA Euro 2016, Shevchenko was appointed as Ukraine 's head coach.
Shevchenko is ranked as the fifth top goalscorer in all European competitions with 67 goals. With a tally of 175 goals scored for Milan, Shevchenko is the second most prolific player in the history of the club, and is also the all - time top scorer of the Derby della Madonnina (the derby between Milan and their local rivals Internazionale) with 14 goals. Furthermore, he is the all - time top scorer for the Ukrainian national team with 48 goals. In 2012, he quit football and joined Ukraine -- Forward! to take part in elections.
Shevchenko 's career has been highlighted by many awards, the most prestigious of which was the Ballon d'Or in 2004 (becoming the third Ukrainian, after Oleh Blokhin and Igor Belanov, to receive it). He won the UEFA Champions League in 2003 with Milan, and he has also won various league and cup titles in Ukraine, Italy and England. He was also a UEFA Champions League runner - up in 2005 and 2008.
In his illustrious international career, the striker led Ukraine as captain to the quarter - finals in their first ever FIFA World Cup appearance in 2006, and also took part at UEFA Euro 2012.
On 28 July 2012, Shevchenko announced that he was quitting football for politics. He was standing for election to the Ukrainian Parliament in the October 2012 Ukrainian parliamentary election, but his party failed to win parliamentary representation.
Shevchenko was born in the family of Praporshchik Mykola Hryhorovych Shevchenko in 1976. In 1979, his family moved to the newly built neighborhood in Kiev -- Obolon (Minsk District was created in 1975). In Kiev, Shevchenko went to the 216th City School and in 1986 (aged 9) enrolled into the football section coached by Oleksandr Shpakov. Because of the accident at the Chernobyl Nuclear Power Plant, together with his sport group he was evacuated temporarily from the city. At an early age, he also was a competitive boxer in the LLWI Ukrainian junior league, but eventually he elected to move on to football.
In 1986, Shevchenko failed a dribbling test for entrance to a specialist sports school in Kiev, but happened to catch the eye of a Dynamo Kyiv scout while playing in a youth tournament, and was thus brought to the club. Four years later, Shevchenko was on the Dynamo under - 14 team for the Ian Rush Cup (now the Welsh Super Cup); he finished as the tournament 's top scorer and was awarded a pair of Rush 's boots as a prize by the then - Liverpool player.
Shevchenko started his professional career at age 16, when he came on as a substitute for just 12 minutes in a 0 -- 2 home loss to Chornomorets - 2 Odessa, the second team of the Odessa club, on 5 May 1993. He was a substitute for the last six home games of the 1992 -- 93 Ukrainian First League and did not score any goals. The next season, 1993 -- 94, at the second tier, Shevchenko was the top goal scorer for Dynamo - 2 with 12 goals, and he made his first appearance in the starting XI. He scored his first goal against Krystal Chortkiv in a 1 -- 1 home draw on 7 October 1993. During the same season he recorded his first hat - trick in a home game against Artania Ochakiv on 21 November 1993, which Dynamo - 2 won 4 -- 1. Shevchenko stayed with Dynamo - 2 until the end of 1994 and was called up once more for one game in late 1996.
He made his Premier League debut for the Dynamo senior squad on 8 November 1994, at age 18, in an away game against Shakhtar Donetsk. It was actually his second game for the senior squad overall, after a home National Cup game on 5 November 1994 against Skala Stryi. That year Shevchenko became a national champion and a cup winner with Dynamo. He won his second league title the next season, scoring 6 goals in 20 matches, and scored a hat - trick in the first half of a 1997 -- 98 UEFA Champions League road match against Barcelona, which Dynamo won 4 -- 0. His 19 goals in 23 league matches and six goals in ten Champions League matches (including a hat - trick over two legs against Real Madrid) were followed by 28 total goals in all competitions in 1998 -- 99. He won the domestic league title with Dynamo in each of his five seasons with the club.
In 1999, Shevchenko joined Italian club Milan for a then - record transfer fee of $25 million. He made his league debut on 28 August 1999 in a 2 -- 2 draw with Lecce. Alongside five other players -- Michel Platini, John Charles, Gunnar Nordahl, Istvan Nyers, and Ferenc Hirzer -- he managed, as a foreign player, to win the Serie A scoring title in his debut season, finishing with 24 goals in 32 matches. Shevchenko maintained his excellent form into the 2000 -- 01 season, scoring 24 goals in 34 matches. Shevchenko also managed to score nine goals in 14 matches in the Champions League. Milan, however, failed to get past the second group stage.
Despite netting only five times in 24 matches, mainly due to injuries, Shevchenko became the first Ukrainian - born player to win the Champions League after Milan lifted their sixth trophy in 2002 -- 03. He scored the winning penalty in the shoot out against arch - rivals Juventus in the final, which had ended goalless after extra time. Following Milan winning the Champions League, Shevchenko flew to Kiev to put his medal by the grave of Valeriy Lobanovskyi (who he was managed by when he was at Dynamo), who died in 2002. He finished top goalscorer in Serie A in 2003 -- 04 for the second time in his career, scoring 24 goals in 32 matches as Milan won the Scudetto for the first time in five years. He also scored the winning goal in the UEFA Super Cup victory over Porto, leading to Milan 's second trophy of the season. In August 2004, he scored three goals against Lazio as Milan won the Supercoppa Italiana. Shevchenko capped off the year by being named the 2004 European Player of the Year, becoming the third Ukrainian player ever to win the award after Oleg Blokhin and Igor Belanov. In the same year, Shevchenko was also inducted into the FIFA 100.
He scored 17 goals in the 2004 -- 05 season after missing several games with a fractured cheekbone. Shevchenko made Champions League history the following season; on 23 November 2005, he scored all four goals in Milan 's 4 -- 0 group stage drubbing of Fenerbahçe, becoming only the fifth player to accomplish this feat; his company includes Marco van Basten, Simone Inzaghi, Dado Pršo and Ruud van Nistelrooy (while Lionel Messi joined that group in the 2009 -- 10 season and Robert Lewandowski in 2012 -- 13), and the only one to have done it in an away game. Milan eventually lost the tournament when Shevchenko missed the crucial penalty in the final against Liverpool. In the 2005 -- 06 season, he scored his last Milan goal in the second leg of the quarter - finals as they eliminated Lyon after a last - minute comeback in a 3 -- 1 victory. In the semi-finals, Milan lost to eventual winners Barcelona 1 -- 0, a match where Shevchenko controversially had a last minute equalizer denied by the referee. Despite this, he still ended up being the top scorer of the whole competition with 9 goals in 12 games.
On 8 February 2006, Shevchenko became Milan 's second highest all - time goalscorer, behind Gunnar Nordahl, after netting against Treviso. He finished the season as joint fourth - top scorer with 19 goals in 28 games. Shevchenko ended his seven - year stint with Milan with 175 goals in 296 games.
During the summer of 2005, there were persistent reports that Chelsea owner Roman Abramovich had offered Milan a record sum of € 73 million plus striker Hernán Crespo in exchange for Shevchenko. Milan refused the monetary offer but took Crespo on loan. Chelsea chief executive Peter Kenyon was quoted as saying, "I think Shevchenko is the type of player we would like. At the end of the day to improve what we have got, it has to be a great player and Shevchenko certainly comes into that class. '' Shevchenko cited the persistence of Abramovich as a key factor in his move. Milan, desperate to keep the striker, offered Shevchenko a six - year contract extension.
On 28 May 2006, Shevchenko left Milan for Chelsea for £ 30.8 million (€ 43.875 million), topping Michael Essien 's transfer fee from the previous year and also breaking the record for a player signed by an English club. He received the number seven shirt, as Chelsea manager José Mourinho said that Shevchenko could continue wearing it.
Shevchenko made his debut for Chelsea on 13 August 2006 in the FA Community Shield, scoring his side 's goal in a 2 -- 1 loss to Liverpool. On 23 August, he scored his first Premier League goal -- and his 300th in top - flight and international football -- in a 2 -- 1 loss to Middlesbrough. He scored goals sporadically throughout the season, including equalisers against Porto and Valencia in the Champions League and another against Tottenham Hotspur to help take his side into the FA Cup semi-finals. He finished with a total of 14 from 51 games. During the campaign, he netted his 57th career goal in European competitions, leaving him second behind Gerd Müller on the all - time European goalscorers list, before Filippo Inzaghi made the record his own in the 2007 -- 08 season. Shevchenko 's 2006 -- 07 season was cut short due to injury and a hernia operation. He missed the Champions League semi-finals against Liverpool and the FA Cup Final against Manchester United at the new Wembley Stadium on 19 May 2007. He did, however, start for Chelsea in the 2007 League Cup final victory over Arsenal, in which he hit the bar with a shot that would have given Chelsea a 3 -- 1 lead.
Shevchenko was handed his first start of the 2007 -- 08 season against Blackburn Rovers at home to cover for the injured Didier Drogba, but the game finished goalless. His first goal of the season came three days later, equalising for Chelsea in a match against Rosenborg, which turned out to be José Mourinho 's last game as manager of Chelsea. Throughout the season, Shevchenko was in and out of the starting lineup because of injuries and the appointment of Avram Grant following the departure of Mourinho. During the Christmas period, however, Shevchenko enjoyed a good run of form. He scored the first goal in Chelsea 's 2 -- 0 win over Sunderland, and he was named Man of the Match in Chelsea 's 4 -- 4 draw against Aston Villa at Stamford Bridge, scoring twice (including a stunning 25 - yard shot into the top left hand corner) and assisting Alex to make the score 3 -- 2 in Chelsea 's favour. Shevchenko scored his last goal in the 2007 -- 08 season in a 1 -- 1 draw with Bolton Wanderers. He finished the season with five league goals in 17 games. Shevchenko also played a part in a pre-season match which was against his former team, Milan.
Shevchenko was not used very often in the starting lineup at Chelsea, and with the appointment of Luiz Felipe Scolari, he was deemed surplus to requirements. Due to this, Milan vice-president Adriano Galliani offered to take Shevchenko back to the San Siro and Shevchenko was loaned back to his old club for the 2008 -- 09 season.
Shevchenko 's second spell was considered unsuccessful, as he failed to score any league goals and only scored 2 goals in 26 appearances, starting only nine of those games. At the end of the season, Milan confirmed that Shevchenko would be returning to Chelsea for the final year of his four - year contract. At the end of that season, it was also announced that Milan 's manager, Carlo Ancelotti, would also be leaving to join Chelsea.
Shevchenko was not even on the bench for Chelsea 's season - opening Community Shield penalty shoot - out victory over Manchester United at Wembley. After making a late appearance for Chelsea in their second game of the 2009 -- 10 season, Ancelotti announced that Shevchenko would be likely to leave Chelsea before the summer transfer window closed. Despite this, Ancelotti said it had nothing to do with his decision to leave Shevchenko out of Chelsea 's 2009 -- 10 Champions League squad, but was just so that he could continue playing first - team football.
On 28 August 2009, Shevchenko signed a two - year deal at his former club Dynamo Kyiv and scored a penalty in his first game back, a 3 -- 1 victory over Metalurh Donetsk on 31 August 2009. He was mostly used as a left winger, and was named left winger in the 2009 team of the season. On 16 September 2009, Shevchenko played his first Champions League match after returning to Dynamo, against Rubin Kazan, in Dynamo 's first game of the 2009 -- 10 Champions League season. In October 2009, he was named the best player of the Ukrainian Premier League. On 4 November 2009, he scored a goal in the game against Internazionale, cross-city rivals of his former club Milan, in the fourth game of the Champions League season. It was the 15th goal he had scored against Inter in his career.
On 25 August 2010, he scored a penalty against Ajax in the second leg of the teams ' qualifying tie for the 2010 -- 11 Champions League.
On 28 July 2012, Shevchenko announced that he was quitting football for politics.
Shevchenko achieved 111 caps and scored 48 goals for the Ukrainian national team, whom he represented at the 2006 FIFA World Cup and UEFA Euro 2012. He earned his first cap in 1995 and scored his first international goal in May 1996 in a friendly against Turkey.
During qualification for the 1998 World Cup, Shevchenko scored three times as Ukraine finished second in Group G to earn a place in the play - offs. Ukraine were knocked out 3 -- 1 on aggregate by Croatia, the team who would go on to finish third in the finals, with Shevchenko scoring Ukraine 's goal in the home leg.
Ukraine performed similarly impressively in UEFA Euro 2000 qualifying, again making the play - offs after finishing one point behind World Champions France in Group 4. However, the team again failed at the play - off stage, losing to underdogs Slovenia. Overall, Shevchenko scored four times for Ukraine during their Euro 2000 qualifying campaign.
In March 2000, Dynamo manager Valeri Lobanovsky became Ukraine 's coach, with the aim to qualify for the 2002 World Cup finals. Shevchenko scored ten goals in the qualifiers, but Ukraine again failed to qualify after losing a play - off, this time against Germany. He then scored a total of three goals in Ukraine 's Euro 2004 qualifying round, but the team failed to qualify for the play - offs, finishing below Greece and Spain in third place in Group 6.
Shevchenko scored six goals in qualifying for the 2006 World Cup, to take his country to its first ever major tournament. He captained the team at the finals and scored in Ukraine 's first ever World Cup win, a 4 -- 0 defeat of Saudi Arabia. He then scored the winning goal from a penalty kick as Ukraine beat Tunisia 1 -- 0 to qualify for the second round where, despite Shevchenko failing with their first kick, Ukraine knocked out Switzerland on penalties. Ukraine were then beaten 3 -- 0 by eventual champions Italy at the quarter - final stage.
After playing only two games for Milan in the 2008 -- 09 season, some critics suggested Shevchenko was past his best, but he silenced them after scoring an equaliser in a 2010 World Cup qualifying match against England at Wembley Stadium. Ukraine, however, went on to lose the game 2 -- 1 after his former Chelsea teammate John Terry scored from a free kick delivered by David Beckham.
In a 21 December 2009 interview with UEFA, Shevchenko declared that he was keen to play in his home country at Euro 2012. "After a disappointing 2010 World Cup qualifying campaign, that is my new challenge, or even dream. I will do everything to achieve that. ''
In May 2012, Shevchenko was named in the Ukrainian squad for Euro 2012. In Ukraine 's opening game, Shevchenko scored two headers to beat Sweden 2 -- 1 in Group D. In Ukraine 's final game, against England, Marko Dević scored a "ghost goal '' in the second half, with Ukraine losing 1 -- 0 to a Wayne Rooney goal. Dević 's shot was hooked clear from behind the England goal - line by Shevchenko 's former Chelsea teammate John Terry under the eyes of the additional assistant referee standing beside the goal (as confirmed by video replays). The incident reopened football 's goal - line technology debate. Replays, however, also showed Artem Milevskiy should have been ruled offside before Dević 's shot. After this game, Shevchenko announced he would retire from international football, having been Ukraine 's youngest and oldest goalscorer and record marksman with 48 goals in 111 appearances.
In November 2012, Shevchenko refused to accept Football Federation of Ukraine 's proposal to become head coach of the Ukraine national team.
A fast, hardworking, energetic, opportunistic and prolific goalscorer, Shevchenko was usually deployed as a centre - forward, although he was capable of attacking from the left wing as well, a position which he occupied at the beginning of his career and during his second stint with Dynamo Kyiv; he was also effective from set - pieces and was an accurate penalty taker. A strong and physical striker with an eye for goal, he was primarily known for his excellent positional sense and his powerful and accurate shot with either foot, although he also possessed good technique and aerial ability, and was capable of playing off of his teammates.
On 15 July 2016, Shevchenko was appointed as manager of the Ukraine national team. The 39 - year - old replaced Mykhaylo Fomenko, whose four - year spell ended with elimination at the group stage of Euro 2016. Prior to his appointment, Shevchenko served as the assistant manager of the national team. He signed a two - year contract with the possibility of another two - year extension. Former Italy and Milan defender Mauro Tassotti, who was assistant coach when Shevchenko was at Milan, joined his coaching staff, as did former Dynamo coach Raúl Riancho, and former Milan Youth System coach Andrea Maldera.
In the late 1990s, Shevchenko and other teammates of Dynamo Kyiv publicly backed the Social Democratic Party of Ukraine (united). During the 2004 Ukrainian presidential election, Shevchenko publicly endorsed candidate Viktor Yanukovych.
After his retirement in June 2012, Shevchenko immediately joined Ukraine -- Forward! (formerly known as Ukrainian Social Democratic Party) and took second place on the party list for the October 2012 Ukrainian parliamentary election. This was in spite of him stating a month earlier that he wanted to coach after his playing career: "This is the world I understand, the world I want to stay in. '' In the election his party won 1.58 % of the national votes and no constituencies and thus failed to win parliamentary representation.
Shevchenko is married to American model Kristen Pazik. The couple met at a Giorgio Armani afterparty in 2002, and married on 14 July 2004 in a private ceremony on a golf course in Washington, D.C. They communicate with each other in Italian, though Shevchenko had previously made public his desire to learn English. After his return to Dynamo Kyiv in August 2009, the couple declared that they want their children to learn Ukrainian.
The couple have four sons: Jordan, born on 29 October 2004, Christian, born on 10 November 2006, Alexander, born on 1 October 2012 and Rider Gabriel, born on 6 April 2014. Shevchenko commemorated Jordan 's birth by scoring against Sampdoria the following day (Milan won 1 -- 0). Milan owner and former Prime Minister of Italy Silvio Berlusconi is the godfather of Shevchenko 's first son, Jordan. The day after Christian 's birth, Shevchenko scored in a 4 -- 0 Chelsea victory over Watford and he and several of his teammates gathered and performed the popular "rock - the - baby '' goal celebration as a tribute.
Shevchenko is a close friend of fashion designer Giorgio Armani, and has modelled for Armani and opened two boutiques with him in Kiev. With his wife, he has started an e-commerce Web site called Ikkon.com, dedicated to men 's fashion and lifestyle.
In June 2005, he became an ambassador for the SOS Children 's Villages charity. Shevchenko also has a foundation to support orphaned children.
Shevchenko, an avid golfer, participated in his first professional golf tournament, the Kharkov Superior Cup, in September 2013.
Shevchenko represented the Rest of the World team against England for Soccer Aid on 8 June 2014.
Shevchenko 's first name (Андрій in Ukrainian) has multiple ways of being transliterated from its original spelling in the Ukrainian Cyrillic alphabet into the Latin alphabet. "Andriy '' is the spelling used throughout the player 's official web site. It has also been adopted by UEFA and FIFA and is the preferred spelling in most English publications (although "Andrii '' is used by World Soccer magazine and "Andrei '' by Sky Sports). The Ukrainian pronunciation is (anˈdrij).
Shevchenko features in EA Sports ' FIFA video game series; he was on the cover of FIFA 2005, and was introduced as one of the Ultimate Team Legends in FIFA 14 and has been an Ultimate Team Legend in every FIFA game since.
* Other tournaments include Supercoppa Italiana, Community Shield, Football League Cup and Intercontinental Cup. Honours were won with Dynamo Kyiv, Milan and Chelsea.
|
list of countries who have won the world cup | List of FIFA World Cup finals - wikipedia
The FIFA World Cup is an international association football competition established in 1930. It is contested by the men 's national teams of the members of Fédération Internationale de Football Association (FIFA), the sport 's global governing body. The tournament has taken place every four years, except in 1942 and 1946, when the competition was cancelled due to World War II. The most recent World Cup, hosted by Brazil in 2014, was won by Germany, who beat Argentina 1 -- 0 after extra time.
The World Cup final match is the last of the competition, and the result determines which country is declared world champions. If after 90 minutes of regular play the score is a draw, an additional 30 - minute period of play, called extra time, is added. If such a game is still tied after extra time it is decided by kicks from the penalty shoot - out. The team winning the penalty shoot - out are then declared champions. The tournament has been decided by a one - off match on every occasion except 1950, when the tournament winner was decided by a final round - robin group contested by four teams (Uruguay, Brazil, Sweden, and Spain). Uruguay 's 2 -- 1 victory over Brazil was the decisive match (and one of the last two matches of the tournament) which put them ahead on points and ensured that they finished top of the group as world champions. Therefore, this match is regarded by FIFA as the de facto final of the 1950 World Cup.
In the 20 tournaments held, 77 nations have appeared at least once. Of these, 12 have made it to the final match, and eight have won. With five titles, Brazil is the most successful World Cup team and also the only nation to have participated in every World Cup finals tournament. Italy and Germany have four titles. The other former champions are Uruguay and Argentina with two titles each, and England, France, and Spain with one each. The current champions, Germany, took their fourth title in 2014, and it is the first title for the reunified German team. The 2014 German team also became the first European team to win in South America. The team that wins the final receives the FIFA World Cup Trophy, and its name is engraved on the bottom side of the trophy.
The 1970 and 1994 finals, along with the 1986, 1990 and 2014 finals, are to date the only ones contested by the same pairings of teams (Brazil -- Italy and Argentina -- Germany respectively). As of 2014, the 1934 final remains the latest final to have been between two teams playing their first final. The final match of the upcoming 2018 World Cup in Russia is scheduled to take place at the country 's biggest sports complex, the Luzhniki Stadium in Moscow.
The 1930 and the 1966 finals are the only ones that did not take place on a Sunday; the former took place on a Wednesday and the latter on a Saturday.
As of 2014, only nations from Europe and South America have competed in a World Cup Final.
|
a trail slope or course for winter sport | Mogul skiing - wikipedia
Mogul skiing is a freestyle skiing competition consisting of one timed run of free skiing on a steep, heavily moguled course, stressing technical turns, aerial maneuvers and speed. Internationally, the sport is contested at the FIS Freestyle World Ski Championships, and at the Winter Olympic Games.
Moguls are a series of bumps on a piste formed when skiers push snow into mounds as they do sharp turns. This tends to happen naturally as skiers use the slope but they can also be constructed artificially. Once formed, a naturally occurring mogul tends to grow as skiers follow similar paths around it, further deepening the surrounding grooves known as troughs. Since skiing tends to be a series of linked turns, moguls form together to create a bump field.
The term "mogul '' is from the Bavarian / Austrian German word Mugel, meaning "mound, hillock ''.
The first competition involving mogul skiing occurred in 1971. The FIS created the Freestyle World Cup Circuit in 1980. The first World Championships were held in 1986, and are currently held in odd - numbered years. It was a demonstration sport in freestyle skiing at the 1988 Winter Olympics in Calgary. It has been a medal event in the Winter Olympics since 1992.
Mogul courses are between 200 and 270 metres with an average slope grade of 26 degrees. The moguls themselves are set approximately 3.5 metres apart. The course includes two small jumps which are used as a take - off for aerial maneuvers. Athletes can perform upright or inverted tricks off these jumps in the course of a competition run. Dual Mogul competition consists of elimination rounds where pairs of competitors compete against each other. Each loser is eliminated and each winner advances to the next round until a final result is achieved.
|
what are the common law states in the united states | Common - law marriage in the United States - wikipedia
Common - law marriage, also known as sui juris marriage, informal marriage, marriage by habit and repute, or marriage in fact is a legal framework in a limited number of jurisdictions where a couple is legally considered married, without that couple having formally registered their relation as a civil or religious marriage. The original concept of a "common - law marriage '' is a marriage that is considered valid by both partners, but has not been formally recorded with a state or religious registry, or celebrated in a formal religious service. In effect, the act of the couple representing themselves to others as being married, and organizing their relation as if they were married, acts as the evidence that they are married.
The term common - law marriage has wide informal use, often to denote relations which are not legally recognized as common - law marriages. The term common - law marriage is often used colloquially or by the media to refer to cohabiting couples, regardless of any legal rights that these couples may or may not have, which can create public confusion both in regard to the term and in regard to the legal rights of unmarried partners.
The requirements for a common - law marriage to be validly contracted differ from state to state.
All states, however, recognize common - law marriages that were validly contracted in other states under their laws of comity and choice of law / conflict of laws. (The Full Faith and Credit Clause of the United States Constitution does not apply to common law marriages because they are not public acts (i.e. statutes, ordinances, general laws, etc.), not public records, and not judicial proceedings.)
Common - law marriage also exists among Native American tribes. The Navajo Nation, for example, permits common - law marriage and allows its members to marry through tribal ceremonial processes and traditional processes. Otherwise, common - law marriages can no longer be contracted in any of the other states.
A common - law marriage is recognized for federal tax purposes if it is recognized by the state or jurisdiction where the taxpayers currently live, or in the state where the common - law marriage began. If the marriage is recognized under the law and customs of the state or jurisdiction in which the marriage takes place (even in a foreign country), the marriage is valid for tax purposes (Rev. Rul. 58 - 66). Practitioners should be alert to the specific state or jurisdiction requirements in order for their clients who are contemplating filing joint returns to satisfy the requirements of the state or jurisdiction to be considered common - law married.
In February 2015, the United States Department of Labor issued its final rule amending the definition of "spouse '' under the Family and Medical Leave Act of 1993 "FMLA '' in response to the United States v. Windsor decision recognizing same - sex marriage. The new DOL rule became effective March 27, 2015. The revised definition of "spouse '' extends FMLA leave rights and job protections to eligible employees in a same - sex marriage or a common - law marriage entered into in a state or jurisdiction where those statuses are legally recognized, regardless of the state in which the employee currently works or resides. Accordingly, even if an employer has employees working where same - sex or common - law marriage is not recognized, those employees ' spouses would trigger FMLA coverage if an employee was married in one of the many states that recognize same - sex marriage or common - law marriage.
Common - law marriage in the United States can still be contracted in the District of Columbia as well as in the following nine states: Colorado, Iowa, Kansas, Montana, Oklahoma, Rhode Island, South Carolina, Texas, and Utah.
Common - law marriages can no longer be contracted in the following twenty - seven states, as of the dates given: Alabama (2016), Alaska (1917), Arizona (1913), California (1895), Florida (1968), Georgia (1997), Hawaii (1920), Idaho (1996), Illinois (1905), Indiana (1958), Kentucky (1852), Maine (1652, when it became part of Massachusetts; then a state, 1820), Massachusetts (1646), Michigan (1957), Minnesota (1941), Mississippi (1956), Missouri (1921), Nebraska (1923), Nevada (1943), New Jersey (1939), New Mexico (1860), New York (1933, also 1902 -- 1908), North Dakota (1890), Ohio (1991), Pennsylvania (2005), South Dakota (1959) and Wisconsin (1917).
Common - law marriages have never been permitted to be contracted in the following thirteen states: Arkansas, Connecticut, Delaware, Louisiana, Maryland, North Carolina, Oregon, Tennessee, Vermont, Virginia, Washington, West Virginia, and Wyoming.
One state recognizes common - law marriage only through death for probate purposes: New Hampshire.
All states -- including those that have abolished the contract of common - law marriage within their boundaries -- recognize common - law marriages lawfully contracted in jurisdictions that permit it. Some states that do not recognize common - law marriage also afford legal rights to parties to a putative marriage (i.e. in circumstances when someone who was not actually married, e.g. due to a failure to obtain or complete a valid marriage license from the proper jurisdiction, believed in good faith that he or she was married) that arise before a marriage 's invalidity is discovered. This is because all states provide that validity of foreign marriage is determined per lex loci celebrationis -- that is, "by law of the place of celebration. '' In addition, the full faith and credit clause of the U.S. Constitution, discussed below, requires all U.S. states to recognize the validity of official acts of other U.S. states. Thus, a marriage validly contracted in Ohio, including common - law marriages entered into before that state abolished new common - law marriages in 1991, is valid in Indiana, even though the common - law marriage could not have been legally contracted in Indiana, because Ohio law is the basis of its validity. However, by the same principle, a marriage that was not lawfully contracted in Ohio would not be valid in Indiana even if it could have been lawfully contracted there. Additionally, some courts have held that all marriages performed within the U.S. must be valid in all states under the Full Faith and Credit Clause of the U.S. Constitution. However, none of the cases to date has actually used the Clause to validate a sister - state marriage, and there is currently no known appellate case on the issue, working its way through U.S. courts, that is likely to reach the U.S. Supreme Court -- whose decision would apply nationally, not just locally or within a particular state or a federal circuit.
Note there is no such thing as "common - law divorce '' in the United States -- that is, a married couple can not terminate a common - law marriage as easily as they got into one. Only the contract of the marriage is irregular; everything else about the marriage is the same as a regularly licensed and solemnized marriage. Divorce or dissolution of marriage requires filing a petition for divorce or dissolution in the appropriate court in their state.
Rulings on the validity of a particular common - law marriage frequently refrain from identifying a specific date of marriage when this is not necessary, because often, there is no one event or marriage ceremony that establishes this date. Even when a relationship begins in a state that does not recognize common - law marriage, if the couple relocates to a state that recognizes common - law marriage, a common - law marriage between the parties is often recognized if the couple 's relationship continues in the new state. It is not uncommon for someone to claim to be a spouse based upon time the couple spent together in a common - law marriage state even after the couple leaves that state. The case law does not definitively establish whether a couple who are briefly present in a common - law marriage state, and who otherwise are eligible to be considered common - law married, but who do not establish domicile in that state, will be recognized as common - law married in a state that does not itself have common - law marriage.
The requirements for a common - law marriage to be validly contracted differ in the eleven U.S. jurisdictions which still permit them.
The elements of a common - law marriage are, with respect to both spouses: (1) holding themselves out as husband and wife; (2) consenting to the marriage; (3) cohabitation; and (4) having the reputation in the community as being married. Different sources disagree regarding the requirement of cohabitation and some indicate that consummation (i.e. post-marital sexual intercourse) is also an element of common - law marriage. Colorado, by statute, no longer recognizes common - law marriages entered by minors in Colorado, and also does not recognize foreign common - law marriages entered into by minors, even if that marriage would have been valid where entered into under local law. See Section 14 - 2 - 109.5, Colorado Revised Statutes. The constitutionality of this limitation as applied to foreign marriages has not been tested in litigation.
Colorado, Montana, and Texas are the only U.S. states to recognize both putative marriage and common - law marriage.
According to the District of Columbia Department of Human Services, a common - law marriage is "A marriage that is legally recognized even though there has been no ceremony and there is no certification of marriage. A common - law marriage exists if the two persons are legally free to marry, if it is the intent of the two persons to establish a marriage, and if the two are known to the community as husband and wife. ''
Common - law marriages have been recognized in the District of Columbia since 1931. Holding common - law marriages legal, District Court of Appeals Justice D. Laurence Groner said,
"We think it can not now be controverted that an agreement between a man and woman to be husband and wife, consummated by cohabitation as husband and wife, constitutes a valid marriage unless there be in existence in the State in which the agreement is made, a statute declaring the marriage to be invalid unless solemnized in a prescribed manner. We think it equally true that the rule now generally recognized is that statutes requiring a marriage to be preceded by a license or to be solemnized by a religious ceremony without words of nullity as to marriages contracted otherwise are directory merely and failure to procure the license or to go through a religious ceremony does not invalidate the marriage... There is nothing in the statute which declares that a marriage shall not be valid unless solemnized in the prescribed manner, nor does it declare any particular thing requisite to the validity of the marriage. The act confines itself wholly with providing the mode of solemnizing the marriage and to the persons authorized to perform the ceremony. Indeed, the statue itself declares the purpose underlying the requirements to be secure registration and evidence of the marriage rather than to deny validity to marriages not performed according to its terms. ''
The three elements of a common - law marriage are: (1) the present intent and agreement to be married; (2) continuous cohabitation; and (3) public declaration that the parties are husband and wife. The public declaration or holding out to the public is considered to be the acid test of a common - law marriage.
Adm. Rule 701 -- 73.25 (425) of the Iowa Administrative Code, titled Common Law Marriage, states:
A common law marriage is a social relationship that meets all the necessary requisites of a marriage except that it was not solemnized, performed or witnessed by an official authorized by law to perform marriages. The necessary elements of a common law marriage are: (a) a present intent of both parties freely given to become married, (b) a public declaration by the parties or a holding out to the public that they are husband and wife, (c) continuous cohabitation together as husband and wife (this means consummation of the marriage), and (d) both parties must be capable of entering into the marriage relationship. No special time limit is necessary to establish a common law marriage.
701 -- 73.26 Rescinded, effective October 2, 1985.
This rule is intended to implement Iowa Code section 425.17.
Under Kansas Statute 23 - 2502, both parties to a common - law marriage must be 18 years old. The three requirements that must coexist to establish a common - law marriage in Kansas are: (1) capacity to marry; (2) a present marriage agreement; and (3) a holding out of each other as husband and wife to the public.
A common - law marriage is established when a couple: "(1) is competent to enter into a marriage, (2) mutually consents and agrees to a common - law marriage, and (3) cohabits and is reputed in the community to be husband and wife. ''
New Hampshire recognizes common - law marriage for purposes of probate only. In New Hampshire "(P) ersons cohabiting and acknowledging each other as husband and wife, and generally reputed to be such, for the period of 3 years, and until the decease of one of them, shall thereafter be deemed to have been legally married. '' Thus, the state posthumously recognizes common - law marriages to ensure that a surviving spouse inherits without any difficulty.
The situation in Oklahoma has been unclear since the mid-1990s, with legal scholars reporting each of 1994, 1998, 2005, and 2010 as the year common - law marriage was abolished in the state. However, as of September 12, 2016, the Oklahoma Tax Commission continues to represent common - law marriage as legal there, and the Department of Corrections continues to reference common - law marriage, though that could refer to older marriages. No reference to the ban appears in the relevant statutes; the 2010 bill that attempted to abolish common - law marriage passed the state Senate, but died in a House committee. Multiple legal websites continue to represent common - law marriage as legal in Oklahoma.
Oklahoma common - law status is much controverted, but as of February 19, 2014, several Oklahoma executive agencies continue to represent it as legal, and a reputed ban in 2010 can not be found in its statutes.
The criteria for a common - law marriage are: (1) the parties seriously intended to enter into the husband - wife relationship; (2) the parties ' conduct is of such a character as to lead to a belief in the community that they were married.
The criteria for a common - law marriage are: (1) when two parties have a present intent (usually, but not necessarily, evidenced by a public and unequivocal declaration) to enter into a marriage contract; and (2) "a mutual agreement between the parties to assume toward each other the relation of husband and wife. '' Common law marriages can dissolve in legal divorce and alimony.
The Texas Family Code, Sections 2.401 through 2.405, define how a common - law marriage (which is known as both "marriage without formalities '' and "informal marriage '' in the text) can be established in one of two ways. Both parties must be at least age 18 to enter into a common - law marriage.
First, a couple can file a legal "Declaration of Informal Marriage '', which is a legally binding document. The form must be completed by both marriage partners and sworn or affirmed in presence of the County Clerk. The Declaration is formally recorded as part of the Official County Records by Volume and Page number, and is then forwarded by the County Clerk to the Texas Bureau of Vital Statistics, where it is again legally recorded as formal evidence of marriage. This is the same procedure that is used when a marriage license is issued and filed; the term "Informal '' refers only to the fact that no formal wedding ceremony (whether civil or religious) was conducted.
Second, a couple can meet a three - prong test, showing evidence of all of the following: (1) an agreement to be married; (2) cohabitation in Texas as a married couple after the agreement; and (3) representation ("holding out '') to others that they are married.
Regarding the second prong, in the actual text of the Texas Family Code, there is no specification on the length of time that a couple must cohabitate to meet this requirement. As such, an informal marriage can occur under Texas law if the couple lives together for as little as one day, if the other requirements (an agreement to be married and holding out as married to the public) can be shown.
Likewise, a couple can cohabit for 50 years, but if they never have an agreement to be married, or hold themselves out to the public as married, their 50 - year cohabitation will not make them informally married under Texas law.
Dissolution of this type of marriage requires formal Annulment or Divorce Proceedings, the same as with the other more recognized forms of 'ceremonial' marriages. However, if a couple does not commence a proceeding to prove their relationship was a marriage within two years of the end of their cohabitation and relationship, there is a legal presumption that they were never informally married, but this presumption is rebuttable.
Utah 's status with common - law marriage is mixed. Government websites claim that common - law marriage does not exist in Utah. However, other legal websites state that non-matrimonial relationships may be recognized as marriage within one year after the relationship ends. This is very similar to common - law marriage.
Utah recognizes common - law marriages only if they have been validated by a court or administrative order. For a common - law marriage to be legal and valid, "a court or administrative order must establish that '' the parties: (1) "are of legal age and capable of giving consent ''; (2) "are legally capable of entering a solemnized marriage under the provisions of Title 30, Chap. 1 of the Utah Code ''; (3) "have cohabited ''; (4) "mutually assume marital rights, duties, and obligations ''; and (5) "hold themselves out as and have acquired a uniform and general reputation as husband and wife ''. In Utah, the fact that two parties are legally incapable of entering into a common - law marriage, because they are already married, does not preclude criminal liability for bigamy or polygamy.
Also, non-matrimonial relationships may be recognized as marriage within one year after the relationship ends, via validation by the above mentioned court or administrative order.
Since January 1, 2017, Alabama has abolished common - law marriage. Common - law marriages contracted before this date are still valid. A valid common - law marriage exists when there is capacity to enter into a marriage (the parties must be at least 16, with legal parental consent), a present agreement or consent to be married, public recognition of the existence of the marriage, and consummation.
California Family Code Section 308 provides that a marriage validly contracted in another jurisdiction is valid in California. Thus, a common - law marriage validly contracted in another jurisdiction is valid in California notwithstanding it could not be legally contracted within California; and a common - law marriage that was not validly contracted in another U.S. jurisdiction is not valid in California. All other states have similar statutory provisions. Exceptions to this rule are marriages deemed by the jurisdiction to be "odious to public policy ''. In general, states which have abolished common - law marriage continue to recognize such marriages contracted in the past (i.e. before the date when they were abolished).
Nevada does not recognize common - law marriage. However, in Williams v. Williams, 120 Nev. 559, 97 P. 3d 1124, 2004 Nev. LEXIS 84, 120 Nev. Adv. Rep. 64 (Nev. 2004), the Nevada Supreme Court adopted the majority view of the putative spouse doctrine. This doctrine, accepted by a majority of states, applies when a marriage is found to be void because of a prior legal impediment. In the Williams case, the wife and husband filed for a marriage license, held a ceremony, and both believed they were married. However, Mr. Williams ' divorce from a previous wife was found to have been dismissed by the court without Mr. Williams ' knowledge. With his previous divorce not valid, under Nevada law his marriage to the new Mrs. Williams was void; he could not get married if he was still married. The Nevada courts ruled that Mrs. Williams was a putative spouse and, for the purposes of the new divorce against Mr. Williams, would allow her to plead for community property rights as much as any other spouse.
Pennsylvania 's domestic relations marriage statute now reads: "No common - law marriage contracted after January 1, 2005, shall be valid. Nothing in this part shall be deemed or taken to render any common - law marriage otherwise lawful and contracted on or before January 1, 2005, invalid. '' The situation in Pennsylvania became unclear in 2003 when an intermediate appellate court purported to abolish common - law marriage even though the state Supreme Court had recognized (albeit somewhat reluctantly) the validity of common - law marriages only five years before. The Pennsylvania legislature resolved most of the uncertainty by abolishing common - law marriages entered into after January 1, 2005. However, it is still not certain whether Pennsylvania courts will recognize common - law marriages entered into after the date of the Stamos decision and before the effective date of the statute (i.e., after September 17, 2003, and on or before January 1, 2005), because the other intermediate appellate court has suggested that it might not follow the Stamos decision.
|
what is the bitrate of google play music | Google Play Music - wikipedia
Google Play Music is a music and podcast streaming service and online music locker operated by Google. The service was announced on May 10, 2011, and after a six - month, invitation - only beta period, it was publicly launched on November 16, 2011.
Users with standard accounts can upload and listen to up to 50,000 songs from their personal libraries at no cost. A paid Google Play Music subscription entitles users to on - demand streaming of any song in the Google Play Music catalog, as well as access to YouTube Music Premium. Users in the United States, Australia, New Zealand and Mexico also have access to YouTube Premium. Users can purchase additional tracks for their library through the music store section of Google Play. In addition to offering music streaming for Internet - connected devices, the Google Play Music mobile apps allow music to be stored and listened to offline.
Google Play Music offers all users storage of up to 50,000 files for free. Users can listen to songs through the service 's web player and mobile apps. The service scans the user 's collection and matches the files to tracks in Google 's catalog, which can then be streamed or downloaded in up to 320 kbps quality. Any files that are not matched are uploaded to Google 's servers for streaming or re-download. Songs purchased through the Google Play Store do not count against the 50,000 - song upload limit.
Supported file formats for upload include MP3, AAC, WMA, FLAC, Ogg, and ALAC. Non-MP3 uploads will be converted to MP3. Files can be up to 300 MB after conversion.
Songs can be downloaded on the mobile apps for offline playback, and on computers through the Music Manager app.
Standard users located in the United States, Canada, and India can also listen to curated radio stations, supported by video and banner advertisements. Stations are based on "an activity, your mood, or your favorite popular music ''. Up to six songs per hour can be skipped when listening to curated radio.
With a paid subscription to Google Play Music, in addition to the standard features users get access to on - demand streaming of 40 million songs, without advertisements during listening, no limit on number of skips, and offline music playback on the mobile apps. A one - time 30 - day free trial for a subscription to Google Play Music is offered for new users. Paid subscribers also receive access to YouTube Premium (including YouTube Music) in eligible countries.
On computers, music can be listened to from a dedicated Google Play Music section of the Google Play website.
On smartphones and tablets, music can be listened to through the Google Play Music mobile app for the Android and iOS operating systems. Up to five smartphones can be used to access the library in Google Play Music, and up to ten devices total. Listening is limited to one device at a time.
In April 2017, reports surfaced that the default music player on the then - new Samsung Galaxy S8 would be Google Play Music, continuing a trend that started with the S7 in 2016. However, for the S8, Samsung partnered with Google to incorporate additional exclusive features into the app, including the ability to upload up to 100,000 tracks, an increase from the 50,000 tracks users are normally allowed to upload. Google also stated that it would develop other "special features in Google Play Music just for Samsung customers ''. In June, Google Play Music on the S8 was updated to exclusively feature "New Release Radio '', a daily, personalized playlist of new music releases. In July, the playlist was made available to all users, with Google noting in a press release that the exclusivity on Samsung devices was part of an "early access program '' for testing and feedback purposes.
Google first hinted at releasing a cloud media player during their 2010 Google I / O developer conference, when Google 's then - Senior Vice President of Social Vic Gundotra showed a "Music '' section of the then - called Android Market during a presentation. A music service was officially announced at the following year 's I / O conference on May 10, 2011, under the name "Music Beta ''. Initially, it was only available by invitation to residents of the United States, and had limited functionality; the service featured a no - cost "music locker '' for storage of up to 20,000 songs, but no music store was present during the beta period, as Google was not yet able to reach licensing deals with major record labels.
After a six - month beta period, Google publicly launched the service in the US on November 16, 2011, as "Google Music '' with its "These Go to Eleven '' announcement event. The event introduced several features of the service, including a music store integrated into the then - named Android Market, music sharing via the Google+ social network, "Artist Hub '' pages for musicians to self - publish music, and song purchasing reflected on T - Mobile phone bills. At launch, Google had partnerships with three major labels - Universal Music Group, EMI, and Sony Music Entertainment - along with other, smaller labels, although no agreement had been reached with Warner Music Group; in total, 13 million tracks were covered by these deals, 8 million of which were available for purchase on launch date. To promote the launch, several artists released free songs and exclusive albums through the store; The Rolling Stones debuted the live recording Brussels Affair (Live 1973), and Pearl Jam released a live concert recorded in Toronto as 9.11. 2011 Toronto, Canada.
In January 2012, a feature was added to Google Music that allows users to download 320 kbps MP3 copies of any file in their library, with a two - download limit per track via the web, or unlimited downloads via the Music Manager app.
According to a February 2012 report from CNET, Google executives were displeased with Google Music 's adoption rate and revenues in its first three months.
In March 2012, the company rebranded the Android Market and its digital content services as "Google Play ''; the music service was renamed "Google Play Music ''.
Google announced in October 2012 that they had signed deals with Warner Music Group that would bring "their full music catalog '' to the service.
At the Google I / O developer conference in May 2013, Google announced that Google Play Music would be expanded to include a paid on - demand music streaming service called "All Access '', allowing users to stream any song in the Google Play catalog. It debuted immediately in the United States for $9.99 per month ($7.99 per month if the users signed up before June 30). The service allows users to combine the All Access catalog with their own library of songs.
Google Play Music was one of the first four apps compatible with Google 's Chromecast digital media player that launched in July 2013.
In October 2014, a new "Listen Now '' feature was introduced, providing contextual and curated recommendations and playlists. The feature was adapted from technology by Songza, which Google acquired earlier in the year.
On November 12, 2014, Google subsidiary YouTube announced "Music Key '', a new premium service succeeding All Access that included the Google Play Music streaming service, along with advertising - free access to streaming music videos on YouTube. Additionally, aspects of the two platforms were integrated; Google Play Music recommendations and YouTube music videos are available across both services. The service was re-launched in a revised form as YouTube Red (now YouTube Premium) on October 28, 2015, expanding its scope to offer ad - free access to all YouTube videos, as opposed to just music videos, as well as premium content produced in collaboration with notable YouTube producers and personalities.
In December 2015, Google started offering a Google Play Music family plan, that allows unlimited access for up to six family members for US $14.99 / month. The family plan is currently only available in Australia, Belgium, Brazil, Canada, Chile, the Czech Republic, France, Germany, Ireland, Italy, Japan, Mexico, the Netherlands, New Zealand, Norway, Russia, South Africa, Spain, Ukraine, the United Kingdom, and the United States.
In April 2016, Google announced that podcasts would be coming to Google Play Music. Its first original podcast series, "City Soundtracks '', was announced in March 2017, and "will feature interviews with various musicians about how their hometowns influenced their work, including the people and the moments that had an impact ''.
In November 2016, Google introduced the Google Home smart speaker system, with built - in support for Google Play Music.
In June 2018, Google announced that YouTube Red would be replaced by YouTube Premium along with YouTube Music. As a result, users subscribed to Google Play Music in the United States, Australia, New Zealand and Mexico are now given access to YouTube Premium -- which includes YouTube Music Premium. Users outside of those four countries are still required to pay the regular YouTube Premium price to access Premium features, but are given free access to YouTube Music Premium. Google plans to transition Google Play Music subscribers to the YouTube Music service no earlier than 2019.
Standard accounts on Google Play Music are available in 63 countries. The full list includes: Argentina, Australia, Austria, Belarus, Belgium, Bolivia, Bosnia - Herzegovina, Brazil, Bulgaria, Canada, Chile, Colombia, Costa Rica, Croatia, Cyprus, Czech Republic, Denmark, Dominican Republic, Ecuador, El Salvador, Estonia, Finland, France, Germany, Greece, Guatemala, Honduras, Hungary, Iceland, India, Ireland, Italy, Japan, Latvia, Liechtenstein, Lithuania, Luxembourg, Macedonia, Malta, Mexico, Netherlands, New Zealand, Nicaragua, Norway, Panama, Paraguay, Peru, Poland, Portugal, Romania, Russia, Serbia, Slovakia, Slovenia, South Africa, Spain, Sweden, Switzerland, Ukraine, United Kingdom, United States, Uruguay, and Venezuela.
Premium subscriptions are available in the same countries as Standard accounts.
Availability of music was introduced in the United Kingdom, France, Germany, Italy, and Spain in October 2012, Czech Republic, Finland, Hungary, Liechtenstein, Netherlands, Russia, and Switzerland in September 2013, Mexico in October 2013, Germany in December 2013, Greece, Norway, Sweden, and Slovakia in March 2014, Canada, Poland and Denmark in May 2014, Bolivia, Chile, Colombia, Costa Rica, Peru, and Ukraine in July 2014, Dominican Republic, Ecuador, Guatemala, Honduras, Nicaragua, Panama, Paraguay, El Salvador, and Venezuela in August 2014, Brazil and Uruguay in September 2014, 13 new countries in November 2014, Brazil in November 2014, Argentina in June 2015, Japan in September 2015, South Africa and Serbia in December 2015, and India in September 2016, where only purchasing of music was offered. The All Access subscription service launched in India in April 2017.
In 2013, Entertainment Weekly compared a number of music services and gave Google Play Music All Access a "B + '' score, writing, "The addition of uploading to augment the huge streaming archive fills in some huge gaps. ''
|
markov chains with finite and countable state space | Markov chain - wikipedia
A Markov chain is "a stochastic model describing a sequence of possible events in which the probability of each event depends only on the state attained in the previous event ''.
In probability theory and related fields, a Markov process, named after the Russian mathematician Andrey Markov, is a stochastic process that satisfies the Markov property (sometimes characterized as "memorylessness ''). Roughly speaking, a process satisfies the Markov property if one can make predictions for the future of the process based solely on its present state just as well as one could knowing the process 's full history, hence independently of such history; i.e., conditional on the present state of the system, its future and past states are independent.
A Markov chain is a type of Markov process that has either discrete state space or discrete index set (often representing time), but the precise definition of a Markov chain varies. For example, it is common to define a Markov chain as a Markov process in either discrete or continuous time with a countable state space (thus regardless of the nature of time), but it is also common to define a Markov chain as having discrete time in either countable or continuous state space (thus regardless of the state space).
Markov studied Markov processes in the early 20th century, publishing his first paper on the topic in 1906. Random walks on integers and the gambler 's ruin problem are examples of Markov processes. Some variations of these processes were studied hundreds of years earlier in the context of independent variables. Two important examples of Markov processes are the Wiener process, also known as the Brownian motion process, and the Poisson process, which are considered the most important and central stochastic processes in the theory of stochastic processes, and were discovered repeatedly and independently, both before and after 1906, in various settings. These two processes are Markov processes in continuous time, while random walks on the integers and the gambler 's ruin problem are examples of Markov processes in discrete time.
Markov chains have many applications as statistical models of real - world processes, such as studying cruise control systems in motor vehicles, queues or lines of customers arriving at an airport, exchange rates of currencies, storage systems such as dams, and population growths of certain animal species. The algorithm known as PageRank, which was originally proposed for the internet search engine Google, is based on a Markov process. Furthermore, Markov processes are the basis for general stochastic simulation methods known as Gibbs sampling and Markov Chain Monte Carlo, are used for simulating random objects with specific probability distributions, and have found extensive application in Bayesian statistics.
The adjective Markovian is used to describe something that is related to a Markov process.
A Markov chain is a stochastic process with the Markov property. The term "Markov chain '' refers to the sequence of random variables such a process moves through, with the Markov property defining serial dependence only between adjacent periods (as in a "chain ''). It can thus be used for describing systems that follow a chain of linked events, where what happens next depends only on the current state of the system.
The system 's state space and time parameter index need to be specified. The following table gives an overview of the different instances of Markov processes for different levels of state space generality and for discrete time v. continuous time:
Note that there is no definitive agreement in the literature on the use of some of the terms that signify special cases of Markov processes. Usually the term "Markov chain '' is reserved for a process with a discrete set of times, i.e. a discrete - time Markov chain (DTMC), but a few authors use the term "Markov process '' to refer to a continuous - time Markov chain (CTMC) without explicit mention. In addition, there are other extensions of Markov processes that are referred to as such but do not necessarily fall within any of these four categories (see Markov model). Moreover, the time index need not necessarily be real - valued; like with the state space, there are conceivable processes that move through index sets with other mathematical constructs. Notice that the general state space continuous - time Markov chain is general to such a degree that it has no designated term.
While the time parameter is usually discrete, the state space of a Markov chain does not have any generally agreed - on restrictions: the term may refer to a process on an arbitrary state space. However, many applications of Markov chains employ finite or countably infinite state spaces, which have a more straightforward statistical analysis. Besides time - index and state - space parameters, there are many other variations, extensions and generalizations (see Variations). For simplicity, most of this article concentrates on the discrete - time, discrete state - space case, unless mentioned otherwise.
The changes of state of the system are called transitions. The probabilities associated with various state changes are called transition probabilities. The process is characterized by a state space, a transition matrix describing the probabilities of particular transitions, and an initial state (or initial distribution) across the state space. By convention, we assume all possible states and transitions have been included in the definition of the process, so there is always a next state, and the process does not terminate.
A discrete - time random process involves a system which is in a certain state at each step, with the state changing randomly between steps. The steps are often thought of as moments in time, but they can equally well refer to physical distance or any other discrete measurement. Formally, the steps are the integers or natural numbers, and the random process is a mapping of these to states. The Markov property states that the conditional probability distribution for the system at the next step (and in fact at all future steps) depends only on the current state of the system, and not additionally on the state of the system at previous steps.
Since the system changes randomly, it is generally impossible to predict with certainty the state of a Markov chain at a given point in the future. However, the statistical properties of the system 's future can be predicted. In many applications, it is these statistical properties that are important.
A famous Markov chain is the so - called "drunkard 's walk '', a random walk on the number line where, at each step, the position may change by + 1 or − 1 with equal probability. From any position there are two possible transitions, to the next or previous integer. The transition probabilities depend only on the current position, not on the manner in which the position was reached. For example, the transition probabilities from 5 to 4 and 5 to 6 are both 0.5, and all other transition probabilities from 5 are 0. These probabilities are independent of whether the system was previously in 4 or 6.
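As a rough illustration (not part of the original article), the drunkard 's walk can be simulated in a few lines of Python; the number of steps below is an arbitrary choice for the sketch.

import random

def drunkards_walk(n_steps, start=0):
    """Simulate a simple random walk on the integers: +1 or -1 with equal probability."""
    position = start
    path = [position]
    for _ in range(n_steps):
        position += random.choice([+1, -1])  # the next step depends only on the current position
        path.append(position)
    return path

print(drunkards_walk(10))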
Another example is the dietary habits of a creature who eats only grapes, cheese, or lettuce, and whose dietary habits conform to the following rules:
This creature 's eating habits can be modeled with a Markov chain since its choice tomorrow depends solely on what it ate today, not what it ate yesterday or any other time in the past. One statistical property that could be calculated is the expected percentage, over a long period, of the days on which the creature will eat grapes.
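The kind of long - run calculation described above can be sketched in Python. The transition probabilities below are illustrative assumptions, since the creature 's exact rules are not reproduced here; only the structure of the computation is the point.

import random

# Illustrative (assumed) transition probabilities: the key is today's food,
# the weights give tomorrow's food in the order (grapes, cheese, lettuce).
foods = ["grapes", "cheese", "lettuce"]
P = {
    "grapes":  [0.1, 0.4, 0.5],
    "cheese":  [0.5, 0.0, 0.5],
    "lettuce": [0.4, 0.6, 0.0],
}

def simulate_days(n_days, start="cheese"):
    state, grape_days = start, 0
    for _ in range(n_days):
        state = random.choices(foods, weights=P[state])[0]  # tomorrow depends only on today
        grape_days += (state == "grapes")
    return grape_days / n_days

print(simulate_days(100_000))  # long-run fraction of days on which the creature eats grapes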
A series of independent events (for example, a series of coin flips) satisfies the formal definition of a Markov chain. However, the theory is usually applied only when the probability distribution of the next step depends non-trivially on the current state.
Andrey Markov studied Markov chains in the early 20th century. Markov was interested in studying an extension of independent random sequences, motivated by a disagreement with Pavel Nekrasov who claimed independence was necessary for the weak law of large numbers to hold. In his first paper on Markov chains, published in 1906, Markov showed that under certain conditions the average outcomes of the Markov chain would converge to a fixed vector of values, so proving a weak law of large numbers without the independence assumption, which had been commonly regarded as a requirement for such mathematical laws to hold. Markov later used Markov chains to study the distribution of vowels in Eugene Onegin, written by Alexander Pushkin, and proved a central limit theorem for such chains.
In 1912 Poincaré studied Markov chains on finite groups with an aim to study card shuffling. Other early uses of Markov chains include a diffusion model, introduced by Paul and Tatyana Ehrenfest in 1907, and a branching process, introduced by Francis Galton and Henry William Watson in 1873, preceding the work of Markov. After the work of Galton and Watson, it was later revealed that their branching process had been independently discovered and studied around three decades earlier by Irénée - Jules Bienaymé. Starting in 1928, Maurice Fréchet became interested in Markov chains, eventually resulting in him publishing in 1938 a detailed study on Markov chains.
Andrei Kolmogorov developed in a 1931 paper a large part of the early theory of continuous - time Markov processes. Kolmogorov was partly inspired by Louis Bachelier 's 1900 work on fluctuations in the stock market as well as Norbert Wiener 's work on Einstein 's model of Brownian movement. He introduced and studied a particular set of Markov processes known as diffusion processes, where he derived a set of differential equations describing the processes. Independent of Kolmogorov 's work, Sydney Chapman derived in a 1928 paper an equation, now called the Chapman -- Kolmogorov equation, in a less mathematically rigorous way than Kolmogorov, while studying Brownian movement. The differential equations are now called the Kolmogorov equations or the Kolmogorov -- Chapman equations. Other mathematicians who contributed significantly to the foundations of Markov processes include William Feller, starting in 1930s, and then later Eugene Dynkin, starting in the 1950s.
Suppose that you start with $10, and you wager $1 on an unending, fair coin toss indefinitely, or until you lose all of your money. If X_n represents the number of dollars you have after n tosses, with X_0 = 10, then the sequence (X_n : n ∈ N) is a Markov process. If I know that you have $12 now, then it would be expected that with even odds, you will either have $11 or $13 after the next toss. This guess is not improved by the added knowledge that you started with $10, then went up to $11, down to $10, up to $11, and then to $12.
The process described here is a Markov chain on a countable state space that follows a random walk.
If one pops one hundred kernels of popcorn in an oven, each kernel popping at an independent exponentially - distributed time, then this would be a continuous - time Markov process. If X_t denotes the number of kernels which have popped up to time t, the problem can be defined as finding the number of kernels that will pop in some later time. The only thing one needs to know is the number of kernels that have popped prior to the time "t ''. It is not necessary to know when they popped, so knowing X_t for previous times "t '' is not relevant.
The process described here is an approximation of a Poisson point process - Poisson processes are also Markov processes.
Suppose that there is a coin purse containing five quarters (each worth 25 ¢), five dimes (each worth 10 ¢), and five nickels (each worth 5 ¢), and one - by - one, coins are randomly drawn from the purse and are set on a table. If X_n represents the total value of the coins set on the table after n draws, with X_0 = 0, then the sequence (X_n : n ∈ N) is not a Markov process.
To see why this is the case, suppose that in the first six draws, all five nickels and a quarter are drawn. Thus X_6 = $0.50. If we know not just X_6, but the earlier values as well, then we can determine which coins have been drawn, and we know that the next coin will not be a nickel; so we can determine that X_7 ≥ $0.60 with probability 1. But if we do not know the earlier values, then based only on the value X_6 we might guess that we had drawn four dimes and two nickels, in which case it would certainly be possible to draw another nickel next. Thus, our guesses about X_7 are impacted by our knowledge of values prior to X_6.
However, it is possible to model this scenario as a Markov process. Instead of defining X_n to represent the total value of the coins on the table, one could define X_n to represent the count of the various coin types on the table. For instance, X_6 = 1, 0, 5 could be defined to represent the state where there is one quarter, zero dimes, and five nickels on the table. This new model would be represented by 216 possible states (that is, 6 × 6 × 6 states, since each of the three coin types could have zero to five coins on the table), and is more complex than the previous (total value) model which has 41 possible states (that is, the values ranging from 0 ¢ to 200 ¢, counting by 5 ¢).
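The distinction can be made concrete with a short Python sketch (illustrative, not from the article) that compares the distribution of the next total for two different draw histories that both give X_6 = $0.50:

from collections import Counter
from fractions import Fraction

def next_value_distribution(drawn):
    """Distribution of the running total after one more draw, given the exact coins drawn so far."""
    purse = Counter({25: 5, 10: 5, 5: 5})   # five quarters, five dimes, five nickels (values in cents)
    purse.subtract(Counter(drawn))
    remaining = [c for c, k in purse.items() for _ in range(k)]
    total = sum(drawn)
    dist = Counter(total + c for c in remaining)
    n = len(remaining)
    return {cents: Fraction(k, n) for cents, k in sorted(dist.items())}

# Two histories with the same total value X_6 = 50 cents:
print(next_value_distribution([5, 5, 5, 5, 5, 25]))     # five nickels and a quarter: X_7 is at least 60 cents
print(next_value_distribution([10, 10, 10, 10, 5, 5]))  # four dimes and two nickels: X_7 = 55 cents is possible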
A discrete - time Markov chain is a sequence of random variables X_1, X_2, X_3,... with the Markov property, namely that the probability of moving to the next state depends only on the present state and not on the previous states:

Pr (X_{n+1} = x ∣ X_1 = x_1, X_2 = x_2,..., X_n = x_n) = Pr (X_{n+1} = x ∣ X_n = x_n).
The possible values of X_i form a countable set S called the state space of the chain.
Markov chains are often described by a sequence of directed graphs, where the edges of graph n are labeled by the probabilities of going from one state at time n to the other states at time n + 1, Pr (X_{n+1} = x ∣ X_n = x_n). The same information is represented by the transition matrix from time n to time n + 1. However, Markov chains are frequently assumed to be time - homogeneous (see variations below), in which case the graph and matrix are independent of n and are thus not presented as sequences.
These descriptions highlight the structure of the Markov chain that is independent of the initial distribution Pr (X_1 = x_1). When time - homogeneous, the chain can be interpreted as a state machine assigning a probability of hopping from each vertex or state to an adjacent one. The probability Pr (X_n = x ∣ X_1 = x_1) of the machine 's state can be analyzed as the statistical behavior of the machine with an element x_1 of the state space as input, or as the behavior of the machine with the initial distribution Pr (X_1 = y) = [x_1 = y] of states as input, where [P] is the Iverson bracket.
The fact that some sequences of states might have zero probability of occurring corresponds to a graph with multiple connected components, where we omit edges that would carry a zero transition probability. For example, if a has a nonzero probability of going to b, but a and x lie in different connected components of the graph, then Pr (X_{n+1} = b ∣ X_n = a) is defined, while Pr (X_{n+1} = b ∣ X_1 = x,..., X_n = a) is not.
A Markov chain of order m (or a Markov chain with memory m), where m is finite, is a process satisfying

Pr (X_n = x_n ∣ X_{n−1} = x_{n−1}, X_{n−2} = x_{n−2},..., X_1 = x_1) = Pr (X_n = x_n ∣ X_{n−1} = x_{n−1}, X_{n−2} = x_{n−2},..., X_{n−m} = x_{n−m}) for n > m.

In other words, the future state depends on the past m states.
A state diagram for a simple example is shown in the figure on the right, using a directed graph to picture the state transitions. The states represent whether a hypothetical stock market is exhibiting a bull market, bear market, or stagnant market trend during a given week. According to the figure, a bull week is followed by another bull week 90 % of the time, a bear week 7.5 % of the time, and a stagnant week the other 2.5 % of the time. Labelling the state space (1 = bull, 2 = bear, 3 = stagnant) the transition matrix for this example is

P =
( 0.90  0.075  0.025 )
( 0.15  0.80   0.05  )
( 0.25  0.25   0.50  )

where row i gives the probabilities of moving from state i to each of the states in the following week.
The distribution over states can be written as a stochastic row vector x with the relation x^{(n+1)} = x^{(n)} P. So if at time n the system is in state x^{(n)}, then three time periods later, at time n + 3 the distribution is

x^{(n+3)} = x^{(n)} P^3.
In particular, if at time n the system is in state 2 (bear), then at time n + 3 the distribution is

x^{(n+3)} = (0, 1, 0) P^3 = (0.3575, 0.56825, 0.07425).
Using the transition matrix it is possible to calculate, for example, the long - term fraction of weeks during which the market is stagnant, or the average number of weeks it will take to go from a stagnant to a bull market. Using the transition probabilities, the steady - state probabilities indicate that 62.5 % of weeks will be in a bull market, 31.25 % of weeks will be in a bear market and 6.25 % of weeks will be stagnant, since the row vector π = (0.625, 0.3125, 0.0625) satisfies π = π P.
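A short numerical sketch of these calculations, assuming numpy is available (the matrix is the one given above):

import numpy as np

# Transition matrix for the states (1 = bull, 2 = bear, 3 = stagnant), as given above.
P = np.array([[0.90, 0.075, 0.025],
              [0.15, 0.80,  0.05 ],
              [0.25, 0.25,  0.50 ]])

# Distribution three weeks after a bear week: x(n) = [0, 1, 0], x(n+3) = x(n) P^3.
x_bear = np.array([0.0, 1.0, 0.0])
print(x_bear @ np.linalg.matrix_power(P, 3))   # [0.3575, 0.56825, 0.07425]

# Steady state: left eigenvector of P for eigenvalue 1, normalized to sum to 1.
eigenvalues, eigenvectors = np.linalg.eig(P.T)
pi = np.real(eigenvectors[:, np.argmin(np.abs(eigenvalues - 1.0))])
pi = pi / pi.sum()
print(pi)                                       # approximately [0.625, 0.3125, 0.0625]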
A thorough development and many examples can be found in the on - line monograph Meyn & Tweedie 2005.
A finite state machine can be used as a representation of a Markov chain. Assuming a sequence of independent and identically distributed input signals (for example, symbols from a binary alphabet chosen by coin tosses), if the machine is in state y at time n, then the probability that it moves to state x at time n + 1 depends only on the current state.
A continuous - time Markov chain (X_t)_{t ≥ 0} is defined by a finite or countable state space S, a transition rate matrix Q with dimensions equal to that of the state space, and an initial probability distribution defined on the state space. For i ≠ j, the elements q_{ij} are non-negative and describe the rate of the process 's transitions from state i to state j. The diagonal elements q_{ii} are chosen such that each row of the transition rate matrix sums to zero, while the row - sums of a probability transition matrix in a (discrete) Markov chain are all equal to one.
There are three equivalent definitions of the process.
Let X_t be the random variable describing the state of the process at time t, and assume that the process is in a state i at time t.
Then, knowing X_t = i, X_{t+h} is independent of previous values (X_s : s < t), and as h → 0 for all j and for all t:

Pr (X_{t+h} = j ∣ X_t = i) = δ_{ij} + q_{ij} h + o(h),

using the little - o notation, where δ_{ij} is the Kronecker delta. The q_{ij} can be seen as measuring how quickly the transition from i to j happens.
Define a discrete - time Markov chain Y_n to describe the nth jump of the process and variables S_1, S_2, S_3,... to describe holding times in each of the states, where S_i follows the exponential distribution with rate parameter − q_{Y_i Y_i}.
For any value n = 0, 1, 2, 3,... and times indexed up to this value of n: t_0, t_1, t_2,..., t_n, and all states recorded at these times i_0, i_1, i_2,..., i_n, it holds that

Pr (X_{t_{n+1}} = i_{n+1} ∣ X_{t_0} = i_0, X_{t_1} = i_1,..., X_{t_n} = i_n) = p_{i_n i_{n+1}} (t_{n+1} − t_n),

where p_{ij} (t) is the solution of the forward equation (a first - order differential equation)

P ′ (t) = P (t) Q,

with initial condition P (0) equal to the identity matrix.
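A minimal Python sketch of the second characterization (jump chain plus exponential holding times); the 3 - state rate matrix Q below is an arbitrary illustrative assumption:

import random

# Assumed transition rate matrix Q for states 0, 1, 2 (off-diagonal rates are illustrative;
# each diagonal entry makes its row sum to zero).
Q = [[-3.0,  2.0,  1.0],
     [ 1.0, -1.5,  0.5],
     [ 0.5,  0.5, -1.0]]

def simulate_ctmc(Q, start, t_end):
    """Simulate a continuous-time Markov chain up to time t_end via jump chain + exponential holding times."""
    t, state, path = 0.0, start, [(0.0, start)]
    while True:
        rate_out = -Q[state][state]                 # total rate of leaving the current state
        t += random.expovariate(rate_out)           # holding time ~ Exponential(-q_ii)
        if t >= t_end:
            return path
        rates = [Q[state][j] if j != state else 0.0 for j in range(len(Q))]
        state = random.choices(range(len(Q)), weights=rates)[0]   # jump chain step
        path.append((t, state))

print(simulate_ctmc(Q, start=0, t_end=5.0))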
The probability of going from state i to state j in n time steps is

p_{ij}^{(n)} = Pr (X_n = j ∣ X_0 = i),

and the single - step transition is

p_{ij} = Pr (X_1 = j ∣ X_0 = i).

For a time - homogeneous Markov chain:

p_{ij}^{(n)} = Pr (X_{k+n} = j ∣ X_k = i)

and

p_{ij} = Pr (X_{k+1} = j ∣ X_k = i).

The n - step transition probabilities satisfy the Chapman -- Kolmogorov equation, that for any k such that 0 < k < n,

p_{ij}^{(n)} = ∑_{r ∈ S} p_{ir}^{(k)} p_{rj}^{(n−k)},
where S is the state space of the Markov chain.
The marginal distribution Pr(X_n = x) is the distribution over states at time n. The initial distribution is Pr(X_0 = x). The evolution of the process through one time step is described by
Pr(X_{n+1} = x) = \sum_{r ∈ S} p_rx Pr(X_n = r).
Note: The superscript (n) is an index and not an exponent.
A Markov chain is said to be irreducible if it is possible to get to any state from any state. The following explains this definition more formally.
A state j is said to be accessible from a state i (written i → j) if a system started in state i has a non-zero probability of transitioning into state j at some point. Formally, state j is accessible from state i if there exists an integer n_ij ≥ 0 such that
Pr(X_{n_ij} = j | X_0 = i) > 0.
This integer is allowed to be different for each pair of states, hence the subscripts in n_ij. Allowing n to be zero means that every state is accessible from itself by definition. The accessibility relation is reflexive and transitive, but not necessarily symmetric.
A state i is said to communicate with state j (written i ↔ j) if both i → j and j → i. A communicating class is a maximal set of states C such that every pair of states in C communicates with each other. Communication is an equivalence relation, and communicating classes are the equivalence classes of this relation.
A communicating class is closed if the probability of leaving the class is zero, namely if i is in C but j is not, then j is not accessible from i. The set of communicating classes forms a directed, acyclic graph by inheriting the arrows from the original state space. A communicating class is closed if and only if it has no outgoing arrows in this graph.
A state i is said to be essential or final if for all j such that i → j it is also true that j → i. A state i is inessential if it is not essential. A state is final if and only if its communicating class is closed.
A Markov chain is said to be irreducible if its state space is a single communicating class; in other words, if it is possible to get to any state from any state.
A state i has period k if any return to state i must occur in multiples of k time steps. Formally, the period of a state is defined as
k = gcd { n > 0 : Pr(X_n = i | X_0 = i) > 0 }
(where "gcd '' is the greatest common divisor) provided that this set is not empty. Otherwise the period is not defined. Note that even though a state has period k, it may not be possible to reach the state in k steps. For example, suppose it is possible to return to the state in (6, 8, 10, 12,...) time steps; k would be 2, even though 2 does not appear in this list.
If k = 1, then the state is said to be aperiodic. Otherwise (k > 1), the state is said to be periodic with period k. A Markov chain is aperiodic if every state is aperiodic. An irreducible Markov chain only needs one aperiodic state to imply all states are aperiodic.
Every state of a Markov chain whose transition graph is bipartite has an even period.
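The gcd definition of the period can be evaluated directly from powers of the transition matrix. The sketch below (illustrative, not from the article) computes gcd{ n : (P^n)_ii > 0 } over the first few powers for a two-state chain that flips deterministically between its states, so every state has period 2.

```python
import numpy as np
from math import gcd
from functools import reduce

def period(P, i, max_n=50):
    """Period of state i: gcd of all n <= max_n with (P^n)_{ii} > 0."""
    returns = [n for n in range(1, max_n + 1)
               if np.linalg.matrix_power(P, n)[i, i] > 0]
    return reduce(gcd, returns) if returns else None

P = np.array([[0.0, 1.0],
              [1.0, 0.0]])      # deterministic flip: periodic with period 2
print(period(P, 0))             # 2
```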
A state i is said to be transient if, given that we start in state i, there is a non-zero probability that we will never return to i. Formally, let the random variable T_i be the first return time to state i (the "hitting time ''):
T_i = inf { n ≥ 1 : X_n = i | X_0 = i }.
The number
f_ii^(n) = Pr(T_i = n)
is the probability that we return to state i for the first time after n steps. Therefore, state i is transient if
Pr(T_i < ∞) = \sum_{n=1}^{∞} f_ii^(n) < 1.
State i is recurrent (or persistent) if it is not transient. Recurrent states are guaranteed (with probability 1) to have a finite hitting time. Recurrence and transience are class properties, that is, they either hold or do not hold equally for all members of a communicating class.
Even if the hitting time is finite with probability 1, it need not have a finite expectation. The mean recurrence time at state i is the expected return time M_i:
M_i = E[T_i] = \sum_{n=1}^{∞} n · f_ii^(n).
State i is positive recurrent (or non-null persistent) if M_i is finite; otherwise, state i is null recurrent (or null persistent).
It can be shown that a state i is recurrent if and only if the expected number of visits to this state is infinite, i.e.,
\sum_{n=0}^{∞} p_ii^(n) = ∞.
A state i is called absorbing if it is impossible to leave this state. Therefore, the state i is absorbing if and only if
p_ii = 1 and p_ij = 0 for i ≠ j.
If every state can reach an absorbing state, then the Markov chain is an absorbing Markov chain.
A state i is said to be ergodic if it is aperiodic and positive recurrent. In other words, a state i is ergodic if it is recurrent, has a period of 1, and has finite mean recurrence time. If all states in an irreducible Markov chain are ergodic, then the chain is said to be ergodic.
It can be shown that a finite state irreducible Markov chain is ergodic if it has an aperiodic state. More generally, a Markov chain is ergodic if there is a number N such that any state can be reached from any other state in any number of steps greater than or equal to N. In the case of a fully connected transition matrix, where all transitions have a non-zero probability, this condition is fulfilled with N = 1.
A Markov chain with more than one state and just one out - going transition per state is either not irreducible or not aperiodic, hence can not be ergodic.
If the Markov chain is a time-homogeneous Markov chain, so that the process is described by a single, time-independent matrix p_ij, then the vector π is called a stationary distribution (or invariant measure) if for every j ∈ S it satisfies
0 ≤ π_j ≤ 1, \sum_{j ∈ S} π_j = 1, and π_j = \sum_{i ∈ S} π_i p_ij.
An irreducible chain has a positive stationary distribution (a stationary distribution such that π_i > 0 for all i) if and only if all of its states are positive recurrent. In that case, π is unique and is related to the expected return time:
π_i = C / M_i,
where C is the normalizing constant. Further, if the positive recurrent chain is both irreducible and aperiodic, it is said to have a limiting distribution; for any i and j,
lim_{n→∞} p_ij^(n) = C / M_j.
Note that there is no assumption on the starting distribution; the chain converges to the stationary distribution regardless of where it begins. Such a π is called the equilibrium distribution of the chain.
If a chain has more than one closed communicating class, its stationary distributions will not be unique (consider any closed communicating class C_i in the chain; each one will have its own unique stationary distribution π_i. Extending these distributions to the overall chain, setting all values to zero outside the communicating class, yields that the set of invariant measures of the original chain is the set of all convex combinations of the π_i's). However, if a state j is aperiodic, then
lim_{n→∞} p_jj^(n) = C / M_j,
and for any other state i, letting f_ij be the probability that the chain ever visits state j if it starts at i,
lim_{n→∞} p_ij^(n) = C f_ij / M_j.
If a state i is periodic with period k > 1 then the limit
lim_{n→∞} p_ii^(n)
does not exist, although the limit
lim_{n→∞} p_ii^(kn+r)
does exist for every integer r.
A Markov chain need not necessarily be time-homogeneous to have an equilibrium distribution. If there is a probability distribution over states π such that
π_j = \sum_{i ∈ S} π_i Pr(X_{n+1} = j | X_n = i)
for every state j and every time n, then π is an equilibrium distribution of the Markov chain. Such can occur in Markov chain Monte Carlo (MCMC) methods in situations where a number of different transition matrices are used, because each is efficient for a particular kind of mixing, but each matrix respects a shared equilibrium distribution.
If the state space is finite, the transition probability distribution can be represented by a matrix, called the transition matrix, with the (i, j)th element of P equal to
p_ij = Pr(X_{n+1} = j | X_n = i).
Since each row of P sums to one and all elements are non-negative, P is a right stochastic matrix.
A stationary distribution π is a (row) vector whose entries are non-negative and sum to 1, and which is unchanged by the operation of the transition matrix P on it; it is therefore defined by
π = π P.
By comparing this definition with that of an eigenvector we see that the two concepts are related and that π is a normalized (\sum_i π_i = 1) multiple of a left eigenvector e of the transition matrix P with an eigenvalue of 1. If there is more than one unit eigenvector then a weighted sum of the corresponding stationary states is also a stationary state. But for a Markov chain one is usually more interested in a stationary state that is the limit of the sequence of distributions for some initial distribution.
The values of a stationary distribution π_i are associated with the state space of P and its eigenvectors have their relative proportions preserved. Since the components of π are positive and the constraint that their sum is unity can be rewritten as \sum_i 1 · π_i = 1, we see that the dot product of π with a vector whose components are all 1 is unity and that π lies on a simplex.
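In practice the stationary distribution can be obtained as the left eigenvector of P for eigenvalue 1, that is, the right eigenvector of the transposed matrix, normalized so that its entries sum to one. The sketch below is illustrative and reuses the transition matrix assumed for the stock-market example above.

```python
import numpy as np

P = np.array([[0.90, 0.075, 0.025],
              [0.15, 0.80,  0.05 ],
              [0.25, 0.25,  0.50 ]])

# Left eigenvectors of P are right eigenvectors of P.T.
eigvals, eigvecs = np.linalg.eig(P.T)
idx = np.argmin(np.abs(eigvals - 1.0))      # eigenvalue closest to 1
pi = np.real(eigvecs[:, idx])
pi /= pi.sum()                              # normalize onto the simplex
print(pi)                                   # ~ [0.625, 0.3125, 0.0625]
```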
If the Markov chain is time-homogeneous, then the transition matrix P is the same after each step, so the k-step transition probability can be computed as the k-th power of the transition matrix, P^k.
If the Markov chain is irreducible and aperiodic, then there is a unique stationary distribution π. Additionally, in this case P^k converges to a rank-one matrix in which each row is the stationary distribution π, that is,
lim_{k→∞} P^k = 1 π,
where 1 is the column vector with all entries equal to 1. This is stated by the Perron–Frobenius theorem. If, by whatever means, lim_{k→∞} P^k is found, then the stationary distribution of the Markov chain in question can be easily determined for any starting distribution, as will be explained below.
For some stochastic matrices P, the limit lim_{k→∞} P^k does not exist while the stationary distribution does, as shown by this example:
Note that this example illustrates a periodic Markov chain.
Because there are a number of different special cases to consider, the process of finding this limit if it exists can be a lengthy task. However, there are many techniques that can assist in finding this limit. Let P be an n × n matrix, and define Q = lim_{k→∞} P^k.
It is always true that
Q P = Q.
Subtracting Q from both sides and factoring then yields
Q (P − I) = 0,
where I is the identity matrix of size n, and 0 is the zero matrix of size n × n. Multiplying together stochastic matrices always yields another stochastic matrix, so Q must be a stochastic matrix (see the definition above). It is sometimes sufficient to use the matrix equation above together with the fact that Q is a stochastic matrix to solve for Q. Including the fact that each row of P sums to 1, there are n + 1 equations for determining n unknowns. In practice, one can replace one column of (P − I) by a column of ones, replace the corresponding element in the zero row vector by one, and then multiply this row vector by the inverse of the transformed matrix to obtain the rows of Q.
Here is one method for doing so: first, define the function f(A) to return the matrix A with its right-most column replaced with all 1's. If [f(P − I)]^{-1} exists then
Q = f(0_{n,n}) [f(P − I)]^{-1},
where 0_{n,n} is the n × n zero matrix; every row of Q then equals the stationary distribution.
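A direct transcription of this method, as a sketch under the assumption that the formula Q = f(0)[f(P − I)]^{-1} stated above is the intended one; the stock-market matrix assumed earlier is reused as a test case.

```python
import numpy as np

def f(A):
    """Return A with its right-most column replaced by all 1's."""
    B = A.copy()
    B[:, -1] = 1.0
    return B

P = np.array([[0.90, 0.075, 0.025],
              [0.15, 0.80,  0.05 ],
              [0.25, 0.25,  0.50 ]])
n = P.shape[0]

Q = f(np.zeros((n, n))) @ np.linalg.inv(f(P - np.eye(n)))
print(Q)   # every row is the stationary distribution [0.625, 0.3125, 0.0625]
```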
One thing to notice is that if P has an element P_ii on its main diagonal that is equal to 1 and the ith row or column is otherwise filled with 0's, then that row or column will remain unchanged in all of the subsequent powers P^k. Hence, the ith row or column of Q will have the 1 and the 0's in the same positions as in P.
As stated earlier, from the equation π = π P (if it exists), the stationary (or steady state) distribution π is a left eigenvector of the row stochastic matrix P. Then, assuming that P is diagonalizable, or equivalently that P has n linearly independent eigenvectors, the speed of convergence is elaborated as follows. (For non-diagonalizable, i.e. defective, matrices, one may start with the Jordan normal form of P and proceed with a somewhat more involved set of arguments in a similar way.)
Let U be the matrix of eigenvectors (each normalized to having an L2 norm equal to 1) where each column is a left eigenvector of P, and let Σ be the diagonal matrix of left eigenvalues of P, that is, Σ = diag(λ_1, λ_2, λ_3, ..., λ_n). Then by eigendecomposition
Let the eigenvalues be enumerated such that:
1 = |λ_1| > |λ_2| ≥ |λ_3| ≥ ⋯ ≥ |λ_n|.
Since P is a row stochastic matrix, its largest left eigenvalue is 1. If there is a unique stationary distribution, then the largest eigenvalue and the corresponding eigenvector is unique too (because there is no other π which solves the stationary distribution equation above). Let u_i be the i-th column of the U matrix, that is, u_i is the left eigenvector of P corresponding to λ_i. Also let x be a length n row vector that represents a valid probability distribution; since the eigenvectors u_i span R^n, we can write
If we multiply x with P from the right and continue this operation with the results, in the end we get the stationary distribution π. In other words, π = u_1 ← x P P ⋯ P = x P^k as k → ∞. That means
Since π is parallel to u_1, x P^k approaches π as k → ∞ with a speed on the order of λ_2/λ_1 exponentially. This follows because |λ_2| ≥ ⋯ ≥ |λ_n|, hence λ_2/λ_1 is the dominant term. Random noise in the state distribution π can also speed up this convergence to the stationary distribution.
A Markov chain is said to be reversible if there is a probability distribution π over its states such that
π_i Pr(X_{n+1} = j | X_n = i) = π_j Pr(X_{n+1} = i | X_n = j)
for all times n and all states i and j. This condition is known as the detailed balance condition (some books call it the local balance equation).
Considering a fixed arbitrary time n and using the shorthand
p_ij = Pr(X_{n+1} = j | X_n = i),
the detailed balance equation can be written more compactly as
π_i p_ij = π_j p_ji.
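Checking detailed balance numerically is straightforward: compute π and test whether π_i p_ij = π_j p_ji for every pair of states. The sketch below is illustrative and not from the article; it tests a hypothetical cyclic chain (which fails detailed balance) and the stock-market matrix assumed earlier (which, as it happens, satisfies it).

```python
import numpy as np

def stationary(P):
    """Stationary distribution via the left eigenvector for eigenvalue 1."""
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return v / v.sum()

def is_reversible(P, tol=1e-10):
    pi = stationary(P)
    # Detailed balance: pi_i p_ij == pi_j p_ji for all i, j.
    flow = pi[:, None] * P
    return np.allclose(flow, flow.T, atol=tol)

# Hypothetical chain that only moves "clockwise": not reversible.
P_cyclic = np.array([[0.5, 0.5, 0.0],
                     [0.0, 0.5, 0.5],
                     [0.5, 0.0, 0.5]])
print(is_reversible(P_cyclic))   # False

P_stock = np.array([[0.90, 0.075, 0.025],
                    [0.15, 0.80,  0.05 ],
                    [0.25, 0.25,  0.50 ]])
print(is_reversible(P_stock))    # True: this example satisfies detailed balance
```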
The single time-step from n to n + 1 can be thought of as each person i having π_i dollars initially and paying each person j a fraction p_ij of it. The detailed balance condition states that upon each payment, the other person pays exactly the same amount of money back. Clearly the total amount of money π_i each person has remains the same after the time-step, since every dollar spent is balanced by a corresponding dollar received. This can be shown more formally by the equality
\sum_i π_i p_ij = \sum_i π_j p_ji = π_j \sum_i p_ji = π_j,
which essentially states that the total amount of money person j receives (including from himself) during the time-step equals the amount of money he pays others, which equals all the money he initially had because it was assumed that all money is spent (that is, p_ji sums to 1 over i). The assumption is a technical one, because the money not really used is simply thought of as being paid from person j to himself (that is, p_jj is not necessarily zero).
As n was arbitrary, this reasoning holds for any n, and therefore for reversible Markov chains π is always a steady-state distribution of Pr(X_{n+1} = j | X_n = i) for every n.
If the Markov chain begins in the steady-state distribution, that is, if Pr(X_0 = i) = π_i, then Pr(X_n = i) = π_i for all n and the detailed balance equation can be written as
Pr(X_n = i, X_{n+1} = j) = Pr(X_{n+1} = i, X_n = j).
The left - and right - hand sides of this last equation are identical except for a reversing of the time indices n and n + 1.
Kolmogorov 's criterion gives a necessary and sufficient condition for a Markov chain to be reversible directly from the transition matrix probabilities. The criterion requires that the products of probabilities around every closed loop are the same in both directions around the loop.
Reversible Markov chains are common in Markov chain Monte Carlo (MCMC) approaches because the detailed balance equation for a desired distribution π necessarily implies that the Markov chain has been constructed so that π is a steady - state distribution. Even with time - inhomogeneous Markov chains, where multiple transition matrices are used, if each such transition matrix exhibits detailed balance with the desired π distribution, this necessarily implies that π is a steady - state distribution of the Markov chain.
For any time-homogeneous Markov chain given by a transition matrix P ∈ R^{n×n}, any norm ||·|| on R^{n×n} which is induced by a scalar product, and any probability vector π, there exists a unique transition matrix P* which is reversible according to π and which is closest to P according to the norm ||·||. The matrix P* can be computed by solving a quadratic-convex optimization problem. A GNU licensed Matlab script that computes the nearest reversible Markov chain can be found here.
For example, consider the following Markov chain:
This Markov chain is not reversible. According to the Frobenius norm, the closest reversible Markov chain according to π = (1/3, 1/3, 1/3) can be computed as
If we choose the probability vector randomly as π = (1/4, 1/4, 1/2), then the closest reversible Markov chain according to the Frobenius norm is approximately given by
A Bernoulli scheme is a special case of a Markov chain where the transition probability matrix has identical rows, which means that the next state is even independent of the current state (in addition to being independent of the past states). A Bernoulli scheme with only two possible states is known as a Bernoulli process.
For an overview of Markov chains on a general state space, see the article Markov chains on a measurable state space.
Many results for Markov chains with finite state space can be generalized to chains with uncountable state space through Harris chains. The main idea is to see if there is a point in the state space that the chain hits with probability one. Generally, it is not true for continuous state space, however, we can define sets A and B along with a positive number ε and a probability measure ρ, such that
Then we could collapse the sets into an auxiliary point α, and a recurrent Harris chain can be modified to contain α. Lastly, the collection of Harris chains is a comfortable level of generality, which is broad enough to contain a large number of interesting examples, yet restrictive enough to allow for a rich theory.
The use of Markov chains in Markov chain Monte Carlo methods covers cases where the process follows a continuous state space.
A collection of Markov chains whose evolution takes into account the state of other Markov chains is related to the notion of locally interacting Markov chains. This corresponds to the situation when the state space has a (Cartesian-) product form. See interacting particle system and stochastic cellular automata (probabilistic cellular automata); see, for instance, Interaction of Markov Processes.
In some cases, apparently non-Markovian processes may still have Markovian representations, constructed by expanding the concept of the ' current ' and ' future ' states. For example, let X be a non-Markovian process. Then define a process Y, such that each state of Y represents a time - interval of states of X. Mathematically, this takes the form:
If Y has the Markov property, then it is a Markovian representation of X.
An example of a non-Markovian process with a Markovian representation is an autoregressive time series of order greater than one.
Write P(t) for the matrix with entries p_ij = P(X_t = j | X_0 = i). Then the matrix P(t) satisfies the forward equation, a first-order differential equation
P'(t) = P(t) Q,
where the prime denotes differentiation with respect to t. The solution to this equation is given by a matrix exponential
P(t) = e^{tQ}.
In a simple case such as a CTMC on the state space {1, 2}, the general Q matrix for such a process is the following 2 × 2 matrix with α, β > 0:
Q = \begin{pmatrix} -\alpha & \alpha \\ \beta & -\beta \end{pmatrix}.
The above relation for the forward matrix can be solved explicitly in this case to give
P(t) = \frac{1}{\alpha+\beta} \begin{pmatrix} \beta + \alpha e^{-(\alpha+\beta)t} & \alpha - \alpha e^{-(\alpha+\beta)t} \\ \beta - \beta e^{-(\alpha+\beta)t} & \alpha + \beta e^{-(\alpha+\beta)t} \end{pmatrix}.
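This closed form can be cross-checked against a numerical matrix exponential. The sketch below is illustrative, uses SciPy's expm, and picks arbitrary example rates α = 0.5 and β = 0.3.

```python
import numpy as np
from scipy.linalg import expm

alpha, beta = 0.5, 0.3          # arbitrary example rates
Q = np.array([[-alpha,  alpha],
              [  beta, -beta ]])

t = 2.0
s = alpha + beta
closed_form = (1.0 / s) * np.array(
    [[beta + alpha * np.exp(-s * t), alpha - alpha * np.exp(-s * t)],
     [beta - beta  * np.exp(-s * t), alpha + beta  * np.exp(-s * t)]])

print(np.allclose(expm(t * Q), closed_form))   # True
```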
However, direct solutions are complicated to compute for larger matrices. The fact that Q is the generator for a semigroup of matrices,
P(t) = e^{tQ},
is used.
The stationary distribution for an irreducible recurrent CTMC is the probability distribution to which the process converges for large values of t. Observe that for the two-state process considered earlier, with P(t) given by the explicit expression above, as t → ∞ the distribution tends to
\begin{pmatrix} \frac{\beta}{\alpha+\beta} & \frac{\alpha}{\alpha+\beta} \\ \frac{\beta}{\alpha+\beta} & \frac{\alpha}{\alpha+\beta} \end{pmatrix}.
Observe that each row has the same distribution, as this does not depend on the starting state. The row vector π may be found by solving
π Q = 0
with the additional constraint that
\sum_{i ∈ S} π_i = 1.
The image to the right describes a continuous - time Markov chain with state - space (Bull market, Bear market, Stagnant market) and transition rate matrix
The stationary distribution of this chain can be found by solving π Q = 0 subject to the constraint that elements must sum to 1 to obtain
The image to the right describes a discrete - time Markov chain with state - space (1, 2, 3, 4, 5, 6, 7, 8, 9). The player controls Pac - Man through a maze, eating pac - dots. Meanwhile, he is being hunted by ghosts. For convenience, the maze shall be a small 3x3 - grid and the monsters move randomly in horizontal and vertical directions. A secret passageway between states 2 and 8 can be used in both directions. Entries with probability zero are removed in the following transition matrix:
Q = \begin{pmatrix}
 & \frac{1}{2} &  & \frac{1}{2} &  &  &  &  & \\
\frac{1}{4} &  & \frac{1}{4} &  & \frac{1}{4} &  &  & \frac{1}{4} & \\
 & \frac{1}{2} &  &  &  & \frac{1}{2} &  &  & \\
\frac{1}{3} &  &  &  & \frac{1}{3} &  & \frac{1}{3} &  & \\
 & \frac{1}{4} &  & \frac{1}{4} &  & \frac{1}{4} &  & \frac{1}{4} & \\
 &  & \frac{1}{3} &  & \frac{1}{3} &  &  &  & \frac{1}{3} \\
 &  &  & \frac{1}{2} &  &  &  & \frac{1}{2} & \\
 & \frac{1}{4} &  &  & \frac{1}{4} &  & \frac{1}{4} &  & \frac{1}{4} \\
 &  &  &  &  & \frac{1}{2} &  & \frac{1}{2} &
\end{pmatrix}
This Markov chain is irreducible, because the ghosts can fly from every state to every state in a finite amount of time. Due to the secret passageway, the Markov chain is also aperiodic, because the monsters can move from any state to any state both in an even and in an odd number of state transitions. Therefore, a unique stationary distribution exists and can be found by solving π Q = π subject to the constraint that elements must sum to 1. The solution of this linear equation subject to the constraint is π = (7.7, 15.4, 7.7, 11.5, 15.4, 11.5, 7.7, 15.4, 7.7) %. The central state and the border states 2 and 8 of the adjacent secret passageway are visited most and the corner states are visited least.
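The percentages quoted above can be reproduced by computing the stationary distribution of the 9 × 9 matrix directly; because the ghost performs a random walk on the maze graph, the stationary probability of each square is proportional to its number of neighbours (2, 4, 2, 3, 4, 3, 2, 4, 2, summing to 26). The following sketch is illustrative and not part of the article.

```python
import numpy as np

# Neighbours in the 3x3 maze (states 1..9), including the secret
# passageway between 2 and 8.
neighbours = {1: [2, 4], 2: [1, 3, 5, 8], 3: [2, 6],
              4: [1, 5, 7], 5: [2, 4, 6, 8], 6: [3, 5, 9],
              7: [4, 8], 8: [2, 5, 7, 9], 9: [6, 8]}

P = np.zeros((9, 9))
for i, nbrs in neighbours.items():
    for j in nbrs:
        P[i - 1, j - 1] = 1.0 / len(nbrs)

# Stationary distribution: left eigenvector of P for eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
pi /= pi.sum()
print(np.round(100 * pi, 1))   # [ 7.7 15.4  7.7 11.5 15.4 11.5  7.7 15.4  7.7]
```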
The hitting time is the time, starting from a given set of states, until the chain arrives in a given state or set of states. The distribution of such a time period has a phase type distribution. The simplest such distribution is that of a single exponentially distributed transition.
For a subset of states A ⊆ S, the vector k^A of hitting times (where element k_i^A represents the expected time, starting in state i, until the chain enters one of the states in the set A) is the minimal non-negative solution to
k_i^A = 0 for i ∈ A, and −\sum_{j ∈ S} q_ij k_j^A = 1 for i ∉ A.
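For a finite chain these equations are just a linear system: set k_i^A = 0 on A and solve −Σ_j q_ij k_j^A = 1 on the remaining states. The sketch below is illustrative and reuses the hypothetical two-state rate matrix introduced earlier; the expected time to reach state 2 from state 1 should be 1/α.

```python
import numpy as np

alpha, beta = 0.5, 0.3
Q = np.array([[-alpha,  alpha],
              [  beta, -beta ]])

def expected_hitting_times(Q, A):
    """Expected hitting times of the set A for a CTMC with rate matrix Q."""
    n = len(Q)
    others = [i for i in range(n) if i not in A]
    # Solve  -sum_j q_ij k_j = 1  for i outside A, with k_j = 0 on A.
    M = -Q[np.ix_(others, others)]
    k = np.zeros(n)
    k[others] = np.linalg.solve(M, np.ones(len(others)))
    return k

print(expected_hitting_times(Q, A={1}))   # [2., 0.]  i.e. 1/alpha from state 1
```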
For a CTMC X_t, the time-reversed process is defined to be \hat{X}_t = X_{T−t}. By Kelly's lemma this process has the same stationary distribution as the forward process.
A chain is said to be reversible if the reversed process is the same as the forward process. Kolmogorov 's criterion states that the necessary and sufficient condition for a process to be reversible is that the product of transition rates around a closed loop must be the same in both directions.
One method of finding the stationary probability distribution, π, of an ergodic continuous-time Markov chain, Q, is by first finding its embedded Markov chain (EMC). Strictly speaking, the EMC is a regular discrete-time Markov chain, sometimes referred to as a jump process. Each element of the one-step transition probability matrix of the EMC, S, is denoted by s_ij, and represents the conditional probability of transitioning from state i into state j. These conditional probabilities may be found by
s_ij = q_ij / (−q_ii) for i ≠ j, and s_ii = 0.
From this, S may be written as
S = I − (diag(Q))^{-1} Q,
where I is the identity matrix and diag (Q) is the diagonal matrix formed by selecting the main diagonal from the matrix Q and setting all other elements to zero.
To find the stationary probability distribution vector, we must next find φ such that
φ S = φ,
with φ being a row vector such that all elements in φ are greater than 0 and ||φ||_1 = 1. From this, π may be found as
π = − φ (diag(Q))^{-1} / || φ (diag(Q))^{-1} ||_1.
Note that S may be periodic, even if Q is not. Once π is found, it must be normalized to a unit vector.
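The whole procedure fits in a few lines: build S from Q, find the fixed point φ of S, and rescale by the holding rates. The sketch below is illustrative and reuses the hypothetical two-state rate matrix from above; for that matrix the stationary distribution should be (β, α)/(α + β) = (0.375, 0.625), and its EMC is in fact periodic, as the note above warns.

```python
import numpy as np

alpha, beta = 0.5, 0.3
Q = np.array([[-alpha,  alpha],
              [  beta, -beta ]])

# Embedded (jump) chain: S = I - diag(Q)^(-1) Q
D = np.diag(np.diag(Q))
S = np.eye(len(Q)) - np.linalg.inv(D) @ Q

# Fixed point of the EMC: phi S = phi, normalized to sum to 1.
vals, vecs = np.linalg.eig(S.T)
phi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
phi /= phi.sum()

# Rescale by the holding rates and renormalize to get pi.
pi = phi / (-np.diag(Q))
pi /= pi.sum()
print(pi)   # [0.375 0.625]
```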
Another discrete - time process that may be derived from a continuous - time Markov chain is a δ - skeleton -- the (discrete - time) Markov chain formed by observing X (t) at intervals of δ units of time. The random variables X (0), X (δ), X (2δ),... give the sequence of states visited by the δ - skeleton.
Research has reported the application and usefulness of Markov chains in a wide range of topics such as physics, chemistry, medicine, music, game theory and sports.
Markovian systems appear extensively in thermodynamics and statistical mechanics, whenever probabilities are used to represent unknown or unmodelled details of the system, if it can be assumed that the dynamics are time - invariant, and that no relevant history need be considered which is not already included in the state description.
The paths, in the path integral formulation of quantum mechanics, are Markov chains.
Markov chains are used in lattice QCD simulations.
Markov chains and continuous - time Markov processes are useful in chemistry when physical systems closely approximate the Markov property. For example, imagine a large number n of molecules in solution in state A, each of which can undergo a chemical reaction to state B with a certain average rate. Perhaps the molecule is an enzyme, and the states refer to how it is folded. The state of any single enzyme follows a Markov chain, and since the molecules are essentially independent of each other, the number of molecules in state A or B at a time is n times the probability a given molecule is in that state.
The classical model of enzyme activity, Michaelis -- Menten kinetics, can be viewed as a Markov chain, where at each time step the reaction proceeds in some direction. While Michaelis - Menten is fairly straightforward, far more complicated reaction networks can also be modeled with Markov chains.
An algorithm based on a Markov chain was also used to focus the fragment - based growth of chemicals in silico towards a desired class of compounds such as drugs or natural products. As a molecule is grown, a fragment is selected from the nascent molecule as the "current '' state. It is not aware of its past (i.e., it is not aware of what is already bonded to it). It then transitions to the next state when a fragment is attached to it. The transition probabilities are trained on databases of authentic classes of compounds.
Also, the growth (and composition) of copolymers may be modeled using Markov chains. Based on the reactivity ratios of the monomers that make up the growing polymer chain, the chain 's composition may be calculated (e.g., whether monomers tend to add in alternating fashion or in long runs of the same monomer). Due to steric effects, second - order Markov effects may also play a role in the growth of some polymer chains.
Similarly, it has been suggested that the crystallization and growth of some epitaxial superlattice oxide materials can be accurately described by Markov chains.
Several theorists have proposed the idea of the Markov chain statistical test (MCST), a method of conjoining Markov chains to form a "Markov blanket '', arranging these chains in several recursive layers ("wafering '') and producing more efficient test sets -- samples -- as a replacement for exhaustive testing. MCSTs also have uses in temporal state - based networks; Chilukuri et al. 's paper entitled "Temporal Uncertainty Reasoning Networks for Evidence Fusion with Applications to Object Detection and Tracking '' (ScienceDirect) gives a background and case study for applying MCSTs to a wider range of applications.
Hidden Markov models are the basis for most modern automatic speech recognition systems.
Markov chains are used throughout information processing. Claude Shannon 's famous 1948 paper A Mathematical Theory of Communication, which in a single step created the field of information theory, opens by introducing the concept of entropy through Markov modeling of the English language. Such idealized models can capture many of the statistical regularities of systems. Even without describing the full structure of the system perfectly, such signal models can make possible very effective data compression through entropy encoding techniques such as arithmetic coding. They also allow effective state estimation and pattern recognition. Markov chains also play an important role in reinforcement learning.
Markov chains are also the basis for hidden Markov models, which are an important tool in such diverse fields as telephone networks (which use the Viterbi algorithm for error correction), speech recognition and bioinformatics (such as in rearrangements detection).
The LZMA lossless data compression algorithm combines Markov chains with Lempel - Ziv compression to achieve very high compression ratios.
Markov chains are the basis for the analytical treatment of queues (queueing theory). Agner Krarup Erlang initiated the subject in 1917. This makes them critical for optimizing the performance of telecommunications networks, where messages must often compete for limited resources (such as bandwidth).
Numerous queueing models use continuous-time Markov chains. For example, an M/M/1 queue is a CTMC on the non-negative integers where upward transitions from i to i + 1 occur at rate λ according to a Poisson process and describe job arrivals, while transitions from i to i − 1 (for i ≥ 1) occur at rate μ (job service times are exponentially distributed) and describe completed services (departures) from the queue.
The PageRank of a webpage as used by Google is defined by a Markov chain. It is the probability to be at page i in the stationary distribution of the following Markov chain on all (known) webpages. If N is the number of known webpages, and a page i has k_i outgoing links, then it has transition probability α/k_i + (1 − α)/N to each page that it links to and (1 − α)/N to each page that it does not link to. The parameter α is taken to be about 0.85.
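A small power-iteration sketch of the chain just described (illustrative; the four-page link graph is hypothetical): from a page with k outgoing links, move to a linked page with probability α/k + (1 − α)/N and to any other page with probability (1 − α)/N.

```python
import numpy as np

alpha = 0.85
# Hypothetical link graph: page -> pages it links to (0-indexed).
links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}
N = 4

P = np.full((N, N), (1 - alpha) / N)
for i, outgoing in links.items():
    for j in outgoing:
        P[i, j] += alpha / len(outgoing)

# Power iteration: repeatedly multiply a distribution by P.
rank = np.full(N, 1.0 / N)
for _ in range(100):
    rank = rank @ P
print(rank)   # PageRank values; they sum to 1
```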
Markov models have also been used to analyze web navigation behavior of users. A user 's web link transition on a particular website can be modeled using first - or second - order Markov models and can be used to make predictions regarding future navigation and to personalize the web page for an individual user.
Markov chain methods have also become very important for generating sequences of random numbers to accurately reflect very complicated desired probability distributions, via a process called Markov chain Monte Carlo (MCMC). In recent years this has revolutionized the practicability of Bayesian inference methods, allowing a wide range of posterior distributions to be simulated and their parameters found numerically.
Markov chains are used in finance and economics to model a variety of different phenomena, including asset prices and market crashes. The first financial model to use a Markov chain was from Prasad et al. in 1974. Another was the regime-switching model of James D. Hamilton (1989), in which a Markov chain is used to model switches between periods of high and low GDP growth (or, alternatively, economic expansions and recessions). A more recent example is the Markov Switching Multifractal model of Laurent E. Calvet and Adlai J. Fisher, which builds upon the convenience of earlier regime-switching models. It uses an arbitrarily large Markov chain to drive the level of volatility of asset returns.
Dynamic macroeconomics heavily uses Markov chains. An example is using Markov chains to exogenously model prices of equity (stock) in a general equilibrium setting.
Credit rating agencies produce annual tables of the transition probabilities for bonds of different credit ratings.
Markov chains are generally used in describing path - dependent arguments, where current structural configurations condition future outcomes. An example is the reformulation of the idea, originally due to Karl Marx 's Das Kapital, tying economic development to the rise of capitalism. In current research, it is common to use a Markov chain to model how once a country reaches a specific level of economic development, the configuration of structural factors, such as size of the middle class, the ratio of urban to rural residence, the rate of political mobilization, etc., will generate a higher probability of transitioning from authoritarian to democratic regime.
Markov chains also have many applications in biological modelling, particularly population processes, which are useful in modelling processes that are (at least) analogous to biological populations. The Leslie matrix is one such example, used to describe the population dynamics of many species, though some of its entries are not probabilities (they may be greater than 1). Another example is the modeling of cell shape in dividing sheets of epithelial cells. Yet another example is the state of ion channels in cell membranes.
Markov chains are also used in simulations of brain function, such as the simulation of the mammalian neocortex. Markov chains have also been used to model viral infection of single cells.
Markov chains have been used in population genetics in order to describe the change in gene frequencies in small populations affected by genetic drift, for example in diffusion equation method described by Motoo Kimura.
Markov chains can be used to model many games of chance. The children 's games Snakes and Ladders and "Hi Ho! Cherry - O '', for example, are represented exactly by Markov chains. At each turn, the player starts in a given state (on a given square) and from there has fixed odds of moving to certain other states (squares).
Markov chains are employed in algorithmic music composition, particularly in software such as CSound, Max and SuperCollider. In a first - order chain, the states of the system become note or pitch values, and a probability vector for each note is constructed, completing a transition probability matrix (see below). An algorithm is constructed to produce output note values based on the transition matrix weightings, which could be MIDI note values, frequency (Hz), or any other desirable metric.
A second - order Markov chain can be introduced by considering the current state and also the previous state, as indicated in the second table. Higher, nth - order chains tend to "group '' particular notes together, while ' breaking off ' into other patterns and sequences occasionally. These higher - order chains tend to generate results with a sense of phrasal structure, rather than the ' aimless wandering ' produced by a first - order system.
Markov chains can be used structurally, as in Xenakis 's Analogique A and B. Markov chains are also used in systems which use a Markov model to react interactively to music input.
Usually musical systems need to enforce specific control constraints on the finite - length sequences they generate, but control constraints are not compatible with Markov models, since they induce long - range dependencies that violate the Markov hypothesis of limited memory. In order to overcome this limitation, a new approach has been proposed.
Markov chain models have been used in advanced baseball analysis since 1960, although their use is still rare. Each half - inning of a baseball game fits the Markov chain state when the number of runners and outs are considered. During any at - bat, there are 24 possible combinations of number of outs and position of the runners. Mark Pankin shows that Markov chain models can be used to evaluate runs created for both individual players as well as a team. He also discusses various kinds of strategies and play conditions: how Markov chain models have been used to analyze statistics for game situations such as bunting and base stealing and differences when playing on grass vs. astroturf.
Markov processes can also be used to generate superficially real - looking text given a sample document. Markov processes are used in a variety of recreational "parody generator '' software (see dissociated press, Jeff Harrison, Mark V Shaney)
In the bioinformatics field, they can be used to simulate DNA sequences.
|
who plays morgan le fay in the librarians | Alicia Witt - wikipedia
Alicia Roanne Witt (born August 21, 1975) is an American actress, singer - songwriter, and pianist. She first came to fame as a child actress after being discovered by David Lynch, who cast her as Alia Atreides in his film Dune (1984) and in a guest role in his television series Twin Peaks (1990).
Witt later had a critically acclaimed role as a disturbed teenager in Fun (1994), appeared as a music student in Mr. Holland 's Opus (1995), and as a college coed in the horror film Urban Legend (1998). She then appeared in Cameron Crowe 's Vanilla Sky (2001), Last Holiday (2006), and the thriller 88 Minutes (2007). Witt has made television appearances in The Walking Dead, The Sopranos, Nashville, Two and a Half Men, Friday Night Lights, Law & Order: Criminal Intent, Cybill, Justified, and Twin Peaks: The Return. In addition to acting, Witt has been described as a musical prodigy and is an accomplished pianist, singer, and songwriter. She released her self-titled debut album in 2009. Witt has also appeared in ten Hallmark Channel movies.
Witt was born on August 21, 1975, in Worcester, Massachusetts, to Diane (née Pietro), a junior high school reading teacher, and Robert Witt, a science teacher and photographer. She has a brother, Ian. "Talking by age two and reading by the age of four, '' Witt has been described as a child prodigy. Her acting talent was recognized by director David Lynch in 1980, when he heard her recite Shakespeare 's Romeo and Juliet on the television show That 's Incredible! at age five. He would begin working with her in film and television even before Witt earned her high school equivalency credential (at age 14). She did undergraduate work in piano at Boston University and competed nationally.
Witt 's discovery by Lynch led to his casting of the "flame - haired '' child in the movie Dune (1984) as Paul Atreides 's sister Alia; she turned eight during filming. She worked with Lynch again when she appeared in an episode of Twin Peaks, playing the younger sister of Lara Flynn Boyle 's character Donna.
These experiences led to Witt 's having small parts in Mike Figgis 's Liebestraum (1991), in which her brother Ian also appears, the Gen - X drama Bodies, Rest & Motion (1993), and the TV movie The Disappearance of Vonnie (1994). In 1994, Witt landed her first lead role in a film, playing a disturbed, murderous teenager in Fun, and receiving the Special Jury Recognition Award at the Sundance Festival. Witt was then cast in the "desultory '' Four Rooms, as Madonna 's lover in the episode, "The Missing Ingredient ''.
Witt was introduced to a larger audience in the role of Cybill Shepherd 's daughter, Zoey Woodbine, in the sitcom Cybill. While playing that part, from 1995 to 1998, she also had film roles in Stephen Herek 's Mr. Holland 's Opus, Alexander Payne 's "delightful '' satirical comedy Citizen Ruth, Robert Allan Ackerman 's Passion 's Way (based on the Edith Wharton novel, The Reef), and Richard Sears ' comedy Bongwater. After the Cybill series was cancelled, Witt went on to leading roles in Jamie Blanks ' horror film Urban Legend (alongside "other nascently twinkling stars '' Jared Leto, Joshua Jackson, and Rebecca Gayheart), and in Kevin Altieri / Touchstone Pictures ' limited - release animated feature, Gen13. In 2000, Witt had starring roles on episodes of the television shows Ally McBeal and The Sopranos, the lead role in the Matthew Huffman comedy Playing Mona Lisa (featuring Harvey Fierstein, Elliott Gould, and Marlo Thomas), and a "memorable '' part in John Waters ' "scathing satire '' Cecil B. Demented.
She then went on to a small part in Cameron Crowe 's Vanilla Sky (2001). Witt played "Two '', the college graduate discussing loss of her virginity, in Rodrigo Garcia 's Ten Tiny Love Stories (a series of 10 monologues, alongside ones by Radha Mitchell, Lisa Gay Hamilton, Debi Mazar, Elizabeth Peña, and others), and played the role of promiscuous Barbie, half - sister of the title character, in American Girl (starring Jena Malone). Witt also appeared in Marc Lawrence 's romantic comedy Two Weeks Notice (2002), starring Hugh Grant and Sandra Bullock.
In 2003 -- 04, she took up residence in the United Kingdom, though she portrayed Joan Allen 's daughter in the US-based comic drama The Upside of Anger (with Kevin Costner, Keri Russell, Evan Rachel Wood, and others). Between these two projects, Witt went to South Africa to shoot a film interpretation of the epic poem "Das Nibelungenlied '', playing one of the central characters, Kriemhild, in the German TV movie Kingdom in Twilight (with the title Dark Kingdom: The Dragon King in the U.S., and The Ring of the Nibelungs and The Sword of Xanten elsewhere).
Witt filmed the Last Holiday (2006) and the thriller 88 Minutes (2007, alongside Al Pacino), and joined the cast of Law & Order: Criminal Intent for the 2007 -- 2008 season. In the latter she played Detective Nola Falacci, a character temporarily replacing Megan Wheeler as Chris Noth 's partner (otherwise played by Julianne Nicholson, who was away on maternity leave), and was a recurring character in the 2007 -- 2008 season. Witt appeared in the role of Amy in the film Peep World (2010).
Witt appeared as the character Elaine Clayton in Cowgirls n ' Angels (2012), and in 2013 co-starred in the independent film Cold Turkey opposite Peter Bogdanovich and Cheryl Hines; there, she additionally performed an original musical piece over the end credits (see below). Her dramatic performance in this film was critically acclaimed, with New York Magazine 's David Edelstein proclaiming her turn one of the top performances of 2013. Witt also appeared in four Christmas films in 2013: the feature film Tyler Perry 's A Madea Christmas, A Snow Globe Christmas for Lifetime Channel, and A Very Merry Mix - Up for Hallmark Channel, and in 2014 Hallmark Channel 's Christmas at Cartwright 's.
Also in 2014, Witt appeared in a major guest starring role on the DirecTV series Kingdom, which aired that October. Witt was a regular part of the cast of Justified in Season 5, which began airing in January 2014. In April 2016, Witt appeared in two episodes of the hit series The Walking Dead; the same month, it was announced that she would also be reprising her role as Gersten Hayward in the 2017 Twin Peaks series. Witt filmed a major guest starring role on Season 12 of Supernatural as Lily Sunder, a former enemy of Castiel 's, due to air in 2017.
Witt made her stage debut in 2001, at Los Angeles ' historic Tiffany Theater, in Robbie Fox 's musical The Gift, in which she played a high - priced, though diseased, stripper.
While in residence in the U.K. in 2004, she starred as Evelyn in a stage production of Neil LaBute 's The Shape of Things at the New Ambassadors Theatre. In September 2006, Witt returned to the London stage at the Royal Court Theatre, in the critically well - received Piano / Forte, wherein she was "well - cast '' in portraying the stammering, emotionally damaged pianist Abigail, sister to "unloved attention - seeker '' Louise (Kelly Reilly).
Witt performed alongside Amber Tamblyn in Neil LaBute 's play, Reasons To Be Pretty, at the Geffen Playhouse, which ran until 31 August 2014.
In addition to acting, Witt is a professional singer - songwriter and pianist, and is reported to have been a musical prodigy. During her work with David Lynch, she supported herself by playing piano at the Beverly Wilshire Hotel.
Responses to her 2006 stage portrayal of Abigail in Piano / Forte included recognition of her skill as an "outstanding pianist '' of "formidable skill ''.
In 2009, Witt released her self - titled extended play album, followed by Live At Rockwood in 2012 and Revisionary History in 2015. In 2013, Witt and Ben Folds performed a song they had co-written on the soundtrack for the independent film Cold Turkey.
In 2016, Witt joined the cast of ABC 's Nashville in a recurring capacity playing established country singer Autumn Chase. Witt performed several songs throughout season 4.
In August 2018, Witt released a five-song EP entitled 15,000 Days (a reference to the length of time she had lived when she recorded the album), working with Grammy-winning producer Jacquire King.
On June 14, 2004, Witt modeled what is believed to be the most expensive hat ever made, for Christie 's auction house in London. The Chapeau d'Amour, designed by Louis Mariette, is valued at $2.7 million (US) and is encrusted in diamonds.
In September 1990, Witt competed on Wheel of Fortune.
|
list the four main areas of responsibility compartmentalized by the homeland security act of 2002 | Homeland Security Act - wikipedia
The Homeland Security Act (HSA) of 2002, (Pub. L. 107 -- 296, 116 Stat. 2135, enacted November 25, 2002) was introduced in the aftermath of the September 11 attacks and subsequent mailings of anthrax spores. The HSA was cosponsored by 118 members of Congress. It was signed into law by President George W. Bush in November 2002.
HSA created the United States Department of Homeland Security and the new cabinet - level position of Secretary of Homeland Security. It is the largest federal government reorganization since the Department of Defense was created via the National Security Act of 1947 (as amended in 1949). It also includes many of the organizations under which the powers of the USA PATRIOT Act are exercised.
The new department assumed a large number of services, offices and other organizations previously conducted in other departments, such as the Customs Service, Coast Guard, and U.S. Secret Service. It superseded, but did not replace, the Office of Homeland Security, which retained an advisory role. The Homeland Security Appropriations Act of 2004 provided the new department its first funding. A major reason for the implementation of HSA is to ensure that the border function remains strong within the new Department.
The Act is similar to the Intelligence Reform and Terrorism Prevention Act (IRTPA) in reorganizing and centralizing Federal security functions to meet post -- Cold War threats and challenges. Like IRTPA, there are some inherent contradictions in the bill not solved by reorganization. These reflect compromises with other committees needed to secure passage, but the result is at times inconsistent or conflicting authorities. For example, the Act identifies the Department of Homeland Security 's (DHS) first responsibility as preventing terrorist attacks in the United States; but, the law 's language makes clear that investigation and prosecution of terrorism remains with the Federal Bureau of Investigation and assigns DHS only an analytical and advisory role in intelligence activities. Similarly, with Critical Infrastructure Protection (CIP), which relates to the preparedness and response to serious incidents, the Act gave DHS broad responsibility to minimize damage but only limited authority to share information and to coordinate the development of private sector best practices.
The Homeland Security Act of 2002 is the foundation for many other establishments, including:
The Homeland Security Act of 2002, documented under Public Law, is divided into 17 titles that establish the Department of Homeland Security and address other purposes. Each title is broken down into several sections, summarized below.
The United States Department of Homeland Security (DHS), formed November 25, 2002 through the Homeland Security Act, is a Cabinet department composed of several different divisions that work to protect the United States from terrorists and natural disasters. It was created as a response to the September 11 attacks in 2001. The Department of Homeland Security manages the Emergency Preparedness and Response (EP&R) Directorate. The directorate helps fulfill the Department 's overarching goal: to keep America safe from terrorist attacks. The Department also works to enhance preparedness and response efforts and to integrate these efforts with prevention work.
The Homeland Security Act contains several provisions that identify specific duties for the EP&R Directorate. Title V and Title II outline how the department ensures the following: that intelligence and the department 's own threat analysis of terrorist capabilities are used to distribute funds to those areas where the terrorist threat is greatest; and that states provide the Federal Government with their Emergency Response Plans so that the department can coordinate priorities regionally and nationally.
|
when was the history of the peloponnesian war written | History of the Peloponnesian War - wikipedia
The History of the Peloponnesian War (Greek: Ἱστορίαι, "Histories '') is a historical account of the Peloponnesian War (431 -- 404 BC), which was fought between the Peloponnesian League (led by Sparta) and the Delian League (led by Athens). It was written by Thucydides, an Athenian historian who also happened to serve as an Athenian general during the war. His account of the conflict is widely considered to be a classic and regarded as one of the earliest scholarly works of history. The History is divided into eight books.
Analyses of the History generally occur in one of two camps. On the one hand, some scholars view the work as an objective and scientific piece of history. The judgment of J.B. Bury reflects this traditional interpretation of the work: "(The History is) severe in its detachment, written from a purely intellectual point of view, unencumbered with platitudes and moral judgments, cold and critical. ''
On the other hand, in keeping with more recent interpretations that are associated with reader - response criticism, the History can be read as a piece of literature rather than an objective record of the historical events. This view is embodied in the words of W.R. Connor, who describes Thucydides as "an artist who responds to, selects and skillfully arranges his material, and develops its symbolic and emotional potential. ''
Thucydides is considered to be one of the great "fathers '' of Western history, thus making his methodology the subject of much analysis in area of historiography.
Thucydides is one of the first western historians to employ a strict standard of chronology, recording events by year, with each year consisting of the summer campaign season and a less active winter season. This method contrasts sharply with that of Herodotus.
Thucydides also makes extensive use of speeches in order to elaborate on the event in question. While the inclusion of long first - person speeches is somewhat alien to modern historical method, in the context of ancient Greek oral culture speeches are expected. These include addresses given to troops by their generals before battles and numerous political speeches, both by Athenian and Spartan leaders, as well as debates between various parties. Of the speeches, the most famous is the funeral oration of Pericles, which is found in Book Two. Thucydides undoubtedly heard some of these speeches himself while for others he relied on eyewitness accounts.
These speeches are suspect in the eyes of Classicists, however, inasmuch as it is not certain to what degree Thucydides altered these speeches in order to most clearly elucidate the crux of the argument presented. Some of the speeches are probably fabricated according to his expectations of, as he puts it, "what was called for in each situation '' (1.22. 1).
Despite being an Athenian and a participant in the conflict, Thucydides is often regarded as having written a generally unbiased account of the conflict with respect to the sides involved in it. In the introduction to the piece he states, "my work is not a piece of writing designed to meet the taste of an immediate public, but was done to last for ever '' (1.22. 4).
There are scholars, however, who doubt this. Ernst Badian, for example has argued that Thucydides has a strong pro-Athenian bias. In keeping with this sort of doubt, other scholars claim that Thucydides had an ulterior motive in his Histories, specifically to create an epic comparable to those of the past such as the works of Homer, and that this led him to create a nonobjective dualism favoring the Athenians. The work does display a clear bias against certain people involved in the conflict, such as Cleon.
The gods play no active role in Thucydides ' work. This is very different from Herodotus, who frequently mentions the role of the gods, as well as a nearly ubiquitous divine presence in the centuries - earlier poems of Homer. Instead, Thucydides regards history as being caused by the choices and actions of human beings.
Despite the absence of actions of the gods, religion and piety play critical roles in the actions of the Spartans, and to a lesser degree, the Athenians. Thus natural occurrences such as earthquakes and eclipses were viewed as religiously significant (1.23. 3; 7.50. 4).
Despite the absence of the gods from Thucydides ' work, he still draws heavily from the Greek mythos, especially from Homer, whose works are prominent in Greek mythology. Thucydides references Homer frequently as a source of information, but always adds a distancing clause, such as "Homer shows this, if that is sufficient evidence, '' and "assuming we should trust Homer 's poetry in this case too. ''
However, despite Thucydides ' lack of trust in information that was not experienced firsthand, such as Homer 's, he does use the poet 's epics to infer facts about the Trojan War. For instance, while Thucydides considered the number of over 1,000 Greek ships sent to Troy to be a poetic exaggeration, he uses Homer 's Catalog of Ships to determine the approximate number of Greek soldiers who were present. Later, Thucydides claims that since Homer never makes reference to a united Greek state, the pre-Hellenic nations must have been so disjointed that they could not organize properly to launch an effective campaign. In fact, Thucydides claims that Troy could have been conquered in half the time had the Greek leaders allocated resources properly and not sent a large portion of the army on raids for supplies.
Thucydides makes sure to inform his reader that he, unlike Homer, is not a poet prone to exaggeration, but instead a historian, whose stories may not give "momentary pleasure, '' but "whose intended meaning will be challenged by the truth of the facts. '' By distancing himself from the storytelling practices of Homer, Thucydides makes it clear that while he does consider mythology and epics to be evidence, these works can not be given much credibility, and that it takes an impartial and empirically minded historian, such as himself, to accurately portray the events of the past.
The first book of the History, after a brief review of early Greek history and some programmatic historiographical commentary, seeks to explain why the Peloponnesian War broke out when it did and what its causes were. Except for a few short excursuses (notably 6.54 -- 58 on the Tyrant Slayers), the remainder of the History (books 2 through 8) rigidly maintains its focus on the Peloponnesian War to the exclusion of other topics.
While the History concentrates on the military aspects of the Peloponnesian War, it uses these events as a medium to suggest several other themes closely related to the war. It specifically discusses in several passages the socially and culturally degenerative effects of war on humanity itself. The History is especially concerned with the lawlessness and atrocities committed by Greek citizens against each other in the name of one side or another in the war. Some events depicted in the History, such as the Melian dialogue, describe early instances of realpolitik or power politics.
The History is preoccupied with the interplay of justice and power in political and military decision - making. Thucydides ' presentation is decidedly ambivalent on this theme. While the History seems to suggest that considerations of justice are artificial and necessarily capitulate to power, it sometimes also shows a significant degree of empathy with those who suffer from the exigencies of the war.
For the most part, the History does not discuss topics such as the art and architecture of Greece.
The History emphasizes the development of military technologies. In several passages (1.14. 3, 2.75 -- 76, 7.36. 2 -- 3), Thucydides describes in detail various innovations in the conduct of siegeworks or naval warfare. The History places great importance upon naval supremacy, arguing that a modern empire is impossible without a strong navy. He states that this is the result of the development of piracy and coastal settlements in earlier Greece.
Important in this regard was the development, at the beginning of the classical period (c. 500 BC), of the trireme, the supreme naval ship for the next several hundred years. In his emphasis on sea power, Thucydides resembles the modern naval theorist Alfred Thayer Mahan, whose influential work The Influence of Sea Power upon History helped set in motion the naval arms race prior to World War I.
The History explains that the primary cause of the Peloponnesian War was the "growth in power of Athens, and the alarm which this inspired in Sparta '' (1.23. 6). Thucydides traces the development of Athenian power through the growth of the Athenian empire in the years 479 BC to 432 BC in book one of the History (1.89 -- 118). The legitimacy of the empire is explored in several passages, notably in the speech at 1.73 -- 78, where an anonymous Athenian legation defends the empire on the grounds that it was freely given to the Athenians and not taken by force. The subsequent expansion of the empire is defended by these Athenians, "... the nature of the case first compelled us to advance our empire to its present height; fear being our principal motive, though honor and interest came afterward. '' (1.75. 3)
The Athenians also argue that, "We have done nothing extraordinary, nothing contrary to human nature in accepting an empire when it was offered to us and then in refusing to give it up. '' (1.76) They claim that anyone in their position would act in the same fashion. The Spartans represent a more traditional, circumspect, and less expansive power. Indeed, the Athenians are nearly destroyed by their greatest act of imperial overreach, the Sicilian expedition, described in books six and seven of the History.
In his description of the 426 BC Malian Gulf tsunami, Thucydides correlates quakes and waves in terms of cause and effect, the first such connection in the recorded history of natural science.
Thucydides ' History is extraordinarily dense and complex. His particular ancient Greek prose is also very challenging, grammatically, syntactically, and semantically. This has resulted in much scholarly disagreement on a cluster of issues of interpretation.
It is commonly thought that Thucydides died while still working on the History, since it ends in mid-sentence and only goes up to 410 BC, leaving six years of war uncovered. Furthermore, there is a great deal of uncertainty whether he intended to revise the sections he had already written. Since there appear to be some contradictions between certain passages in the History, it has been proposed that the conflicting passages were written at different times and that Thucydides ' opinion on the conflicting matter had changed. Those who argue that the History can be divided into various levels of composition are usually called "analysts '' and those who argue that the passages must be made to reconcile with one another are called "unitarians ''. This conflict is called the "strata of composition '' debate. The lack of progress in this debate over the course of the twentieth century has caused many Thucydidean scholars to declare the debate insoluble and to side - step the issue in their work.
The History is notoriously reticent about its sources. Thucydides almost never names his informants and alludes to competing versions of events only a handful of times. This is in marked contrast to Herodotus, who frequently mentions multiple versions of his stories and allows the reader to decide which is true. Instead, Thucydides strives to create the impression of a seamless and irrefutable narrative. Nevertheless, scholars have sought to detect the sources behind the various sections of the History. For example, the narrative after Thucydides ' exile (4.108ff.) seems to focus on Peloponnesian events more than the first four books, leading to the conclusion that he had greater access to Peloponnesian sources at that time.
Frequently, Thucydides appears to assert knowledge of the thoughts of individuals at key moments in the narrative. Scholars have asserted that these moments are evidence that he interviewed these individuals after the fact. However, the evidence of the Sicilian Expedition argues against this, since Thucydides discusses the thoughts of the generals who died there and whom he would have had no chance to interview. Instead it seems likely that, as with the speeches, Thucydides is looser than previously thought in inferring the thoughts, feelings, and motives of principal characters in his History from their actions, as well as his own sense of what would be appropriate or likely in such a situation.
The historian J.B. Bury writes that the work of Thucydides "marks the longest and most decisive step that has ever been taken by a single man towards making history what it is today. ''
Historian H.D. Kitto feels that Thucydides wrote about the Peloponnesian War not because it was the most significant war in antiquity but because it caused the most suffering. Indeed, several passages of Thucydides ' book are written "with an intensity of feeling hardly exceeded by Sappho herself. ''
In his Open Society and Its Enemies, Karl R. Popper writes that Thucydides was the "greatest historian, perhaps, who ever lived. '' Thucydides ' work, however, Popper goes on to say, represents "an interpretation, a point of view; and in this we need not agree with him. '' In the war between Athenian democracy and the "arrested oligarchic tribalism of Sparta, '' we must never forget Thucydides ' "involuntary bias, '' and that "his heart was not with Athens, his native city: ''
"Although he apparently did not belong to the extreme wing of the Athenian oligarchic clubs who conspired throughout the war with the enemy, he was certainly a member of the oligarchic party, and a friend neither of the Athenian people, the demos, who had exiled him, nor of its imperialist policy. ''
Thucydides ' History has been enormously influential in both ancient and modern historiography. It was embraced by many of the author 's contemporaries and immediate successors with enthusiasm; indeed, many authors sought to complete the unfinished history. For example, Xenophon wrote his Hellenica as a continuation of Thucydides ' work, beginning at the exact moment that Thucydides ' History leaves off. Xenophon 's work, however, is generally considered inferior in style and accuracy compared with Thucydides '. In later antiquity, Thucydides ' reputation suffered somewhat, with critics such as Dionysius of Halicarnassus rejecting the History as turgid and excessively austere. Lucian also parodies it (among others) in his satire The True Histories. Woodrow Wilson read the History on his voyage across the Atlantic to the Versailles Peace Conference.
Most critics writing about the History, and this article, use a standard format to direct readers to passages in the text: book. chapter. section. For example, the notation that Pericles ' last speech runs from 2.60. 1 to 2.64. 6 means that it can be found in the second book, from the sixtieth chapter through the sixty - fourth. Most modern editions and translations of the History include the chapter numbers in the margins.
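The reference format lends itself to simple mechanical parsing. As a minimal, illustrative sketch only (the Passage type and the parse_reference function are invented for this example and are not part of any standard edition or tool), a "book. chapter. section '' string can be split into its three numeric parts:

from typing import NamedTuple


class Passage(NamedTuple):
    book: int
    chapter: int
    section: int


def parse_reference(ref: str) -> Passage:
    # Split a reference such as '2.60.1' into (book, chapter, section).
    parts = ref.split(".")
    if len(parts) != 3:
        raise ValueError("expected 'book.chapter.section', got " + repr(ref))
    book, chapter, section = (int(p.strip()) for p in parts)
    return Passage(book, chapter, section)


# Pericles' last speech, as cited above, spans 2.60.1 to 2.64.6.
start = parse_reference("2.60.1")
end = parse_reference("2.64.6")
print(start, end)

Such a helper could, for instance, let an electronic text jump from a citation like 1.23. 6 to the corresponding chapter, which the marginal chapter numbers in modern printed editions serve the same purpose for.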
The most important manuscripts include: Codex Parisinus suppl. Gr. 255, Codex Vaticanus 126, Codex Laurentianus LXIX. 2, Codex Palatinus 252, Codex Monacensis 430, Codex Monacensis 228, and Codex Britannicus II, 727.
Grenfell and Hunt discovered about 20 papyrus fragments copied some time between the 1st and 6th centuries AD in Oxyrhynchus.
|
who plays the pirate in game of thrones | Lucian Msamati - Wikipedia
Lucian Gabriel Wiina Msamati, sometimes credited as Wiina Msamati, is a British - Tanzanian film, television and theatre actor. He played Salladhor Saan in the HBO series Game of Thrones and was the first black actor to play Iago at the Royal Shakespeare Company (in 2015).
Lucian Gabriel Wiina Msamati was born in the United Kingdom and brought up in Zimbabwe by his Tanzanian parents, a doctor and a nurse; he is the eldest of four siblings. His primary education began at Olympio Primary School in Dar - es - Salaam, Tanzania, and continued at Avondale Primary School in Harare, Zimbabwe. After secondary education at Prince Edward School in Harare, he studied towards a BA Honours Degree in French and Portuguese at the University of Zimbabwe from 1995 -- 97.
After university he took a day - job as an advertising copywriter and freelance radio presenter. He also worked as a voice - over artist, compere and after - dinner speaker.
In 1994 Msamati and school friends Shaheen Jassat (deceased), Craig and Gavin Peter, Kevin Hanssen, Roy Chizivano and Sarah Norman founded what would become Zimbabwe 's acclaimed Over the Edge Theatre Company in Harare, later joined by Erica Glyn - Jones, Zane E Lucas, Chipo Chung, Karin Alexander, Rob Hollands and Michael Pearce. The company celebrated its 10th anniversary in December 2004, having flown the Zimbabwe flag across Europe, the US and South Africa. The last few years have seen individual members pursuing other interests. Though not officially disbanded, there are no immediate plans for an Over the Edge reunion. From 1998 to 2001, the company performed at the Edinburgh Festival Fringe in Edinburgh, Scotland; some plays were written by Msamati.
He is currently starring in Amadeus at the Royal National Theatre (2016 / 17).
He has appeared in several theatrical productions in London, UK.
In November 2010 Msamati was appointed Artistic Director of British - African theatre company Tiata Fahodzi, until being succeeded in 2014 by Natalie Ibu. He has continued to work with Tiata Fahodzi, directing Boi Boi is Dead in February -- March 2015.
In spring 2015, Msamati became the first black actor ever to play Iago in a Royal Shakespeare Company production of Othello.
He has also appeared in several television productions, including episodes of the television series Ultimate Force and Spooks. In 2008 he took on his most prominent role, playing JLB Matekoni in the BBC / HBO - produced series The No. 1 Ladies ' Detective Agency. He has guest starred in episodes of the BBC television series Luther, Ashes to Ashes, Doctor Who and Death in Paradise, as well as playing the part of the pirate Salladhor Saan in the HBO series Game of Thrones.
Msamati appeared in the film The International (2009). Other film credits include Lumumba (1999), directed by Raoul Peck; the animated feature The Legend of the Sky Kingdom; Dr. Juju (2000), directed by Roger Hawkins; and Richard II, directed by Rupert Goold.
He permanently moved to the UK in 2003, and now resides in London.
|
where did the term mike linebacker come from | Linebacker - wikipedia
A linebacker (LB or backer) is a playing position in American football and Canadian football. Linebackers are members of the defensive team, and line up approximately three to five yards (4 m) behind the line of scrimmage, behind the defensive linemen, and therefore "back up the line. '' Linebackers generally align themselves before the ball is snapped by standing upright in a "two - point stance '' (as opposed to the defensive linemen, who put one or two hands on the ground for a "three - point stance '' or "four - point stance '' before the ball is snapped).
The goal of the linebacker is to provide either extra run protection or extra pass protection based on the particular defensive play being executed. Another key play of the linebacker position is blitzing. A blitz occurs when a linebacker acts as an extra pass rusher running into any exposed gap. When a blitz is called by the defense, it is mainly to sack or hurry the opposing offense 's quarterback.
Linebackers are often regarded as the most important position in defense, due to their versatility in providing hard hits on running plays or an additional layer of pass protection, when required. Similar to the "free safety '' position, linebackers are required to use their judgment on every snap, to determine their role during that particular play.
Before the advent of the two - platoon system with separate units for offense and defense, the player who was the team 's center on offense was often, though not always, the team 's linebacker on defense. Hence today one usually sees four defensive linemen to the offense 's five or more. Most sources claim coach Fielding H. Yost and center Germany Schulz of the University of Michigan invented the position. Schulz was Yost 's first linebacker in 1904 when he stood up from his usual position on the line. Yost was horrified at first, but came to see the wisdom in Schulz 's innovation. William Dunn of Penn St. was another Western linebacker soon after Schulz.
However, there are various historical claims tied to the linebacker position, including some before 1904. For example, Percy Given of Georgetown is another center with a claim to the title "first linebacker, '' supposedly standing up behind the line well before Schulz in a game against Navy in 1902. Despite Given, most sources have the first linebacker in the South as Frank Juhan of Sewanee.
In the East, Ernest Cozens of Penn was "one of the first of the roving centers, '' an archaic term for the position supposedly coined by Hank Ketcham of Yale. Walter E. Bachman of Lafayette was said to be "the developer of the "roving center '' concept. '' Edgar Garbisch of Army was credited with developing the "roving center method '' of playing defensive football in 1921.
In professional football, Cal Hubbard is credited with pioneering the linebacker position. He starred as a tackle and end, playing off the line in a style similar to that of a modern linebacker.
The middle or inside linebacker (MLB or ILB), sometimes called the "Mike '' or "Mack '', is often referred to as the "quarterback of the defense. '' Often it is the middle linebacker who receives the defensive play calls from the sideline and relays that play to the rest of the team, and in the NFL he is usually the defensive player with the electronic sideline communicator. A jack - of - all - trades, the middle linebacker can be asked to blitz (though they often blitz less than the outside linebacker), cover, spy the quarterback, or even have a deep middle - of - the - field responsibility in the Tampa 2 defense. In standard defenses, middle linebackers commonly lead the team in tackles. The terms middle and inside linebacker are often used interchangeably; they are also used to distinguish between a single middle linebacker playing in a 4 -- 3 defense, and two inside linebackers playing in a 3 -- 4 defense. In a 3 -- 4 defense, the larger, more run - stopping - oriented linebacker is usually still called "Mike '', while the smaller, more pass protection / route coverage - oriented player is called "Will ''. "Mikes '' usually line up towards the strong side or on the side the offense is more likely to run on (based on personnel matchups) while "Wills '' may line up on the other side or even a little farther back between the defensive line and the secondary.
The outside linebacker (OLB), sometimes called the "Buck, Sam, and Rebel '' is usually responsible for outside containment. This includes the strongside and weakside designations below. They are also responsible for blitzing the quarterback.
The strongside linebacker (SLB) is often nicknamed the "Sam '' for purposes of calling a blitz. Since the strong side of the offensive team is the side on which the tight end lines up, or whichever side contains the most personnel, the strongside linebacker usually lines up across from the tight end. Often the strongside linebacker will be called upon to tackle the running back on a play, because the back will be following the tight end 's block. He is most often the strongest linebacker; at the least he possesses the ability to withstand, shed, and fight off blocks from a tight end or fullback blocking the backside of a pass play. The linebacker should also have strong safety abilities in pass situations to cover the tight end in man - to - man situations. He should also have considerable quickness to read and get into coverage in zone situations. The strongside linebacker is also commonly known as the left outside linebacker (LOLB).
The weakside linebacker (WLB), or the "Will '' in the 4 -- 3 defense, sometimes called the backside linebacker, "Buck '', or other names like Jack or Bandit, must be the fastest of the three, because he is often the one called into pass coverage. He is also usually chasing the play from the backside, so the ability to maneuver through traffic is a necessity for the Will. The Will usually aligns off the line of scrimmage at the same depth as Mike. Because of his position on the weakside, the Will does not often have to face large interior linemen one on one unless one is pulling. In coverage, the Will often covers the back that attacks his side of the field first in man coverage, while covering the weak flat in Texas Loop or hook / curl areas in zone coverage. The weakside linebacker is also commonly known as the right outside linebacker (ROLB).
The number of linebackers is dependent upon the formation called for in the play; formations can call for as few as none, or as many as seven. Most defensive schemes call for three or four, and they are generally named for the number of linemen, followed by the number of linebackers (with the 46 defense being an exception). For example, the 4 -- 3 defense has four defensive linemen and three linebackers; conversely, the 3 -- 4 defense has three linemen and four linebackers.
In the 4 -- 3 defense there are four down linemen and three linebackers. The middle linebacker is designated "Mike '' (or "Mac '') and two outside linebackers are designated "Sam '' and "Will '' according to how they line up against the offensive formation. If there is a strong call, the linebacker on the strongside is called "Sam '', while the linebacker on the weakside is called "Will ''. The outside linebacker 's job is to cover the end to make sure a run does n't escape, and to watch the pass and protect from it. The middle linebacker 's job is to stop runs between the tackles and watch the entire field to see the play develop. On pass plays, the linebackers ' responsibilities vary based upon whether a man or zone coverage is called. In a zone coverage, the linebackers will generally drop into hook zones across the middle of the field. However, some zones will send the outside linebackers into the flats (area directly to the left and right of the hash marks, extending 4 -- 5 yards downfield). In a man - to - man call, the "Sam '' will often cover the tight end with help from a safety over the top, while at other times, the "Sam '' and "Will '' will be responsible for the first man out of the backfield on their side of the center, with the "Mike '' covering if a second man exits on that side of the field.
In the "Tampa 2 '' zone defense the middle linebacker is required to drop quickly into a deep middle zone pass coverage thus requiring a quick player at this position.
In the 3 -- 4 defense there are three linemen playing the line of scrimmage with four linebackers backing them up, typically two outside linebackers and two inside linebackers. The weak side inside linebacker is typically called the "Will, '' while the strong side or middle inside linebacker is called the "Mike ''. "Sam '' is a common designation for strong outside linebacker, while the other position is usually called "Jack '' and is often a hybrid DE / LB. Usually, teams that run a 3 -- 4 defense look for college defensive ends that are too small to play the position in the pros and not quite fluid enough to play outside linebacker in a 4 -- 3 defense as their "Jack '' linebacker.
The idea behind the 3 -- 4 defense is to disguise where the fourth rusher will come from. Instead of the standard four down - linemen in the 4 -- 3, only three players are clearly attacking on nearly every play. A key for running this defense successfully is having a defensive front of three large defensive linemen who command constant double teams. In particular, the nose tackle, who plays over the offensive center, must be able to hold ground and to occupy several offensive blockers to allow the linebackers to make plays. The focus of the 3 -- 4 defensive line is to occupy offensive linemen thus freeing the linebackers to tackle the running back or to rush the passer or otherwise drop into pass coverage.
Generally, the primary responsibilities for both outside linebackers are to stop the run and rush the quarterback in passing situations, in which they line in front of the tackles like true defensive ends. The outside linebackers in a 3 -- 4 defense are players who are very skilled at rushing the quarterback and they would be playing defensive end in a 4 -- 3 defense. When it comes to the inside linebackers, one is generally a run - stuffing player who is better able to handle offensive linemen and stop running backs when the offense features a running play, while the other is often a smaller, faster player who excels in pass coverage. However, the smaller or cover LB should also be able to scrape and plug running lanes decently.
The design concept of the 3 -- 4 defense is to confuse the offensive line in their blocking assignments, particularly in pass blocking, and to create a more complex read for the quarterback. Many 3 -- 4 defenses have the ability to quickly hybrid into a 4 -- 3 on the field.
In the 46 defense there are four linemen, three linebackers and a safety who is moved up behind the line of scrimmage. Thus, it appears as if there are 4 linebackers, but it is really 3 linebackers with one safety playing up with the other linebackers.
Three of the defensive linemen are over both of the offensive guards and the center, thereby making it difficult to double team any one of the three interior defensive linemen. This can also take away the ability of the offense to pull the guards on a running play, because this would leave one of the defenders unblocked, or, at best, give another lineman a very difficult block to make on one of the defenders. The safety, like the linebacker, can blitz, play man - on - man, play zone, or drop back into deep coverage like a normal safety would do. The 46 is used in heavy run situations to stop the run, when a team wants to bring lots of pressure, or merely to confuse the quarterback and offensive line.
This defense is effective at run - stopping but is weaker than a 4 -- 3 defense at pass coverage because it uses only three defensive backs. One of the outside linebackers is usually called into either blitz or pass coverage to make up for the missing DB. In the NFL and college football, this alignment is used mainly in short yardage situations or near the goal line. It is commonly used in high school football.
|
who is the tallest basketball player in the nba 2018 | List of tallest players in National Basketball Association history - wikipedia
This is a list of the tallest players in National Basketball Association history. Twenty - five of them have been listed at 7 feet, 3 inches or taller. Two are active as of the 2017 -- 18 season: Kristaps Porziņģis of the New York Knicks and Boban Marjanović of the Los Angeles Clippers. The tallest player inducted into the Naismith Memorial Basketball Hall of Fame is the 7 - foot - 6 - inch Yao Ming. In addition to Yao, Ralph Sampson and Arvydas Sabonis were the only other players 7'3 '' or taller selected to the Hall of Fame.
In the seventh round of the 1981 NBA Draft, the Golden State Warriors used the 171st pick to select Yasutaka Okayama, a Japanese basketball player who was measured at 7 feet 8 inches (2.34 m) and 330 pounds (150 kg). Okayama, who attended and played junior varsity basketball at the University of Portland for one and a half years in 1976 as an exchange student, declined to try out for the Warriors and never played in the NBA. He is the tallest person ever drafted and would have been the tallest player in the NBA had he played in the league.
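The metric figures quoted above follow from straightforward unit arithmetic: 7 feet 8 inches is 92 inches, and at 0.0254 m per inch this rounds to 2.34 m. A minimal sketch of that conversion (the function name is illustrative only, not drawn from any NBA data source):

def to_metres(feet, inches):
    # 12 inches per foot, 0.0254 metres per inch, rounded to two decimals.
    total_inches = feet * 12 + inches
    return round(total_inches * 0.0254, 2)

print(to_metres(7, 8))  # 2.34, matching Yasutaka Okayama's listed height
print(to_metres(7, 6))  # 2.29, the conversion for a 7-foot-6 player such as Yao Ming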
|
who is the best alchemist in fullmetal alchemist | Fullmetal Alchemist - wikipedia
Fullmetal Alchemist (Japanese: 鋼 の 錬金術 師, Hepburn: Hagane no Renkinjutsushi, lit. "Alchemist of Steel '') is a Japanese shōnen manga series written and illustrated by Hiromu Arakawa. It was serialized in Square Enix 's Monthly Shōnen Gangan magazine between August 2001 and June 2010; the publisher later collected the individual chapters into twenty - seven tankōbon volumes. The world of Fullmetal Alchemist is styled after the European Industrial Revolution. Set in a fictional universe in which alchemy is one of the most advanced scientific techniques, the story follows two alchemist brothers named Edward and Alphonse Elric, who are searching for the philosopher 's stone to restore their bodies after a failed attempt to bring their mother back to life using alchemy.
The manga was published and localized in English by Viz Media in North America, Madman Entertainment in Australasia, and Chuang Yi in Singapore. Yen Press also has the rights for the digital release of the volumes in North America due to the series being a Square Enix title. It has been adapted into two anime television series, two animated films -- all animated by Bones studio -- and light novels. Funimation dubbed the television series, films and video games. The series has generated original video animations, video games, supplementary books, a collectible card game, and a variety of action figures and other merchandise. A live action film based on the series was also released in 2017.
The manga has sold over 70 million volumes worldwide, making it one of the best - selling manga series. The English release of the manga 's first volume was the top - selling graphic novel during 2005. In two TV Asahi web polls, the anime was voted the most popular anime of all time in Japan. At the American Anime Awards in February 2007, it was eligible for eight awards, nominated for six, and won five. Reviewers from several media conglomerations had positive comments on the series, particularly for its character development, action scenes, symbolism and philosophical references.
Fullmetal Alchemist takes place in an alternate history, in the fictional country of Amestris (アメストリス, Amesutorisu). In this world, alchemy is one of the most - practiced sciences; alchemists who work for the government are known as State Alchemists (国家 錬金術 師, Kokka Renkinjutsushi) and are automatically given the rank of Major in the military. Alchemists have the ability, with the help of patterns called Transmutation Circles, to create almost anything they desire. However, when they do so, they must provide something of equal value in accordance with the Law of Equivalent Exchange. The only things Alchemists are forbidden from transmuting are humans and gold. There has never been a successful human transmutation; those who attempt it lose a part of their body and the result is a horrific inhuman mass. Those who make the attempt are confronted by Truth (真理, Shinri), a pantheistic and semi-cerebral God - like being who tauntingly regulates all alchemy use and whose nigh - featureless appearance is relative to the person with whom Truth is conversing; the series ' antagonist, Father, and some other characters claim and believe that Truth is a personal God who punishes the arrogant, a belief that Edward denies, citing a flaw in Father 's interpretation of Truth 's works.
Those who attempt Human Transmutation are also thrown into the Gate of Truth (真理 の 扉, Shinri no Tobira), where they receive an overwhelming dose of information, which also allows them to transmute without a circle. All living things possess their own Gate of Truth, and per the Gaea hypothesis heavenly bodies like planets also have their own Gates of Truth. It is possible to bypass the Law of Equivalent Exchange (to an extent) using a Philosopher 's Stone, a red, enigmatic substance. Philosopher 's Stones can be used to create Homunculi, artificial humans of proud nature. Homunculi have numerous superhuman abilities unique amongst each other and look down upon all humanity. With the exception of one, they do not age and can only be killed via the destruction of their Philosopher 's Stones.
There are several cities throughout Amestris. The main setting is the capital of Central City (セントラル シティ, Sentoraru Shiti), along with other military cities such as the northern city of Briggs (ブリッグズ, Burigguzu). Towns featured include Resembool (リゼンブール, Rizenbūru), the rural hometown of the Elrics; Liore (リオール, Riōru), a city tricked into following a cult; Rush Valley (ラッシュ バレー, Rasshu Barē), a town that specializes in automail manufacturing; and Ishbal, a conservative - religion region that rejects alchemy and was destroyed in the Ishbalan Civil War instigated after a soldier shot an Ishbalan child. Outside of Amestris, there are few named countries, and none are seen in the main story. The main foreign country is Xing. Heavily reminiscent of China, Xing has a complex system of clans and emperors, as opposed to Amestris 's government - controlled election of a Führer. It also has its own system of alchemy, called Alkahestry (錬 丹 術, Rentanjutsu), which is more medical and can be bi-located using kunai; in turn, it is implied that all countries have different forms of alchemy.
Edward and Alphonse Elric live in the rural town of Resembool with their mother Trisha, their father Van Hohenheim having left without a reason. Edward bears a grudge against their father. He and Alphonse show a talent for alchemy before Trisha dies of the plague. After finishing their alchemy training under Izumi Curtis, the brothers attempt to bring their mother back with alchemy. But the transmutation backfires and, in accordance with the law of equivalent exchange, Edward loses his left leg while Alphonse is dragged into the Gate of Truth. Edward sacrifices his right arm to retrieve Alphonse 's soul, binding it to a suit of armor with a blood seal. Edward is invited by Roy Mustang to become a State Alchemist to research a way to restore Alphonse 's body; he passes the exam and is given the title of Fullmetal Alchemist based on his prosthetic automail limbs and use of metal in his alchemy. The Elrics spend the next three years searching for the mythical Philosopher 's Stone to achieve their goals. One such lead results in them exposing a corrupt religious leader in the city of Liore while unaware of events occurring behind the scenes involving the mysterious Homunculi.
Their time with the State Alchemist Shou Tucker teaches them a horrific lesson, after which the Elric brothers have a near - death experience when they encounter an Ishbalan serial killer known as Scar, who targets State Alchemists in retaliation for his people 's genocide in the Ishbalan civil war. After returning to Resembool to have Edward 's limbs repaired by their childhood friend and mechanic, Winry Rockbell, the Elrics meet the guilt - ridden former State Alchemist Dr. Marcoh, who provides them with clues to learn that a Philosopher 's Stone is created from human souls. After the Homunculi hinder them by destroying the hidden laboratory, the brothers are joined by Winry as they attempt to find an alternate means to restore themselves. At the same time, Mustang 's friend Maes Hughes continues the Elrics ' research and ends up murdered by a disguised homunculus, Envy, when he learns of the Homunculi 's plan.
After their defeat at the hands of Scar, the Elric brothers decide to visit their teacher Izumi Curtis in the city of Dublith, hoping that she might be able to train them in higher forms of alchemy. This backfires when she discovers their failed attempt at Human Transmutation, with Izumi telling the Elrics how she committed human transmutation on her stillborn child. Izumi expels them as her apprentices, though following a determined argument, she allows them to stay with her for extra training. Following this, Alphonse is captured by the rogue homunculus Greed, who in turn is attacked by Amestris ' leader King Bradley, revealed to be the homunculus Wrath. When Greed refuses to rejoin his fellow Homunculi, he is melted down and reabsorbed by the Homunculi 's creator, Father.
After running into the Xingese prince Lin Yao, who is also after a Philosopher 's Stone to cement his position as heir to his country 's throne, the Elrics and Winry return to Central City, where they learn of Hughes 's death, with Lieutenant Maria Ross framed for the murder. Mustang fakes Maria 's death and smuggles her out of the country with Lin 's help so he can focus on the Homunculi. The events that follow result in the death of the homunculus Lust, revealing that a Philosopher 's Stone forms a Homunculus 's core and that the Homunculi are working towards an upcoming event. Meanwhile, Scar forms a small band with the Xingese princess May Chang, who also seeks the stone, and a former military officer named Yoki whom the Elrics exposed as a corrupt official.
Following an attempt to capture the homunculus Gluttony using Lin 's sensory skills, Gluttony ends up accidentally swallowing Edward, Lin, and Envy into his void - like stomach, with the two humans learning that the Homunculi have orchestrated Amestris 's history over the centuries. Gluttony takes Alphonse to meet Father whilst the others manage to escape from Gluttony 's stomach, eventually meeting Father. Father considers killing Lin for not being one of the human sacrifices like the Elrics. Instead, he makes Lin the vessel of a new incarnation of Greed, with the Elrics attempting to escape upon seeing Scar. Edward has Envy admit to having caused the Ishbalan civil war, whilst Scar meets Winry and realises the evil nature of his desire for revenge. Soon after, with Winry used against them as a hostage, the Elrics are allowed to continue their quest as long as they no longer oppose Father. Mustang receives a similar threat, with his subordinates scattered to the other military branches. At the same time, finding Dr. Marcoh held captive, Scar spirits him out of Central as his group heads north.
The Elrics eventually reach Fort Briggs, under the command of General Olivier Armstrong, revealing what they know following the discovery of an underground tunnel beneath Briggs made by the Homunculus Sloth. The brothers soon learn from Hughes 's research that Father created Amestris to amass a large enough population to create a massive Philosopher 's Stone. Forced to work with Solf J. Kimblee, a murderous former State Alchemist and willing ally of the Homunculi, in tracking down Scar, the Elrics make their move to save Winry and split up, with Kimblee 's chimera subordinates joining them. As Edward is joined by Lin / Greed, who regained his former self 's memories, Alphonse encounters Hohenheim in Liore. Hohenheim reveals he was made an immortal when Father, once simply known as ' Homunculus ', arranged the fall of Cselkcess four centuries ago to create his body while giving half of the sacrificed souls to Hohenheim. Hohenheim also explains he left his family to try to stop Father from sacrificing the Amestrian people to create a massive philosopher 's stone and from achieving godhood by absorbing the being beyond the Gate of Truth on the ' Promised Day '.
The Promised Day arrives, with Father preparing to initiate his plan using an eclipse and his desired ' human sacrifices ' in order to trigger the transmutation. The protagonists, having assembled days prior, orchestrate an all - out attack on Central, with Sloth, Envy, and Wrath killed in the process while Gluttony is devoured by Pride. Despite the opposition, Father manages to activate the nationwide transmutation once the Elrics, Izumi, and Hohenheim are gathered along with Mustang, who is forced by Pride to perform Human Transmutation. Hohenheim and Scar activate the countermeasures put in place by Hohenheim to save the Amestrians, causing Father to become unstable from housing the absorbed superior being within him without the souls needed to subdue it. Father is confronted above ground, where the protagonists battle him to wear down his Philosopher 's Stone while he attempts to replenish himself, Edward managing to defeat the gravely weakened Pride before joining the fray.
Alphonse, whose armor is all but destroyed, sacrifices his soul to restore Edward 's right arm while Greed leaves Lin 's body and sacrifices himself to weaken Father 's body enough for Edward to destroy Father 's Philosopher 's Stone. This causes Father to implode out of reality while dragged into the Gate of Truth from which he was created. Edward sacrifices his ability to perform alchemy to retrieve a fully restored Alphonse, Lin receiving a Philosopher 's Stone while promising May to be a just ruler. Hohenheim takes his leave and visits Trisha 's grave where he dies with a smile on his face. The Elrics return home months later, still motivated by those they failed to save in learning new forms of alchemy to prevent repeated tragedies. This leads to the Elrics leaving Amestris two years later to study other cultures and their knowledge, with Alphonse leaving for Xing in the east while Edward heads westward. The epilogue finishes with a family photo of Alphonse, May, Edward, Winry, and Ed 's and Winry 's son and daughter.
After reading about the concept of the Philosopher 's Stone, Arakawa became attracted to the idea of her characters using alchemy in the manga. She started reading books about alchemy, which she found complicated because some books contradict others. Arakawa was attracted more by the philosophical aspects than the practical ones. For the Equivalent Exchange (等価 交換, Tōka Kōkan) concept, she was inspired by the work of her parents, who had a farm in Hokkaido and worked hard to earn the money to eat.
Arakawa wanted to integrate social problems into the story. Her research involved watching television news programs and talking to refugees, war veterans and former yakuza. Several plot elements, such as Pinako Rockbell caring for the Elric brothers after their mother dies, and the brothers helping people to understand the meaning of family, expand on these themes. When creating the fictional world of Fullmetal Alchemist, Arakawa was inspired after reading about the Industrial Revolution in Europe; she was amazed by differences in the culture, architecture, and clothes of the era and those of her own culture. She was especially interested in England during this period and incorporated these ideas into the manga.
When the manga began serialization, Arakawa was considering several major plot points, including the ending. She wanted the Elric brothers to recover their bodies -- at least partly. As the plot continued, she thought that some characters were maturing and decided to change some scenes. Arakawa said the manga authors Suihō Tagawa and Hiroyuki Eto are her main inspirations for her character designs; she describes her artwork as a mix of both of them. She found that the easiest of the series 's characters to draw were Alex Louis Armstrong, and the little animals. Arakawa likes dogs so she included several of them in the story. Arakawa made comedy central to the manga 's story because she thinks it is intended for entertainment, and tried to minimize sad scenes.
When around forty manga chapters had been published, Arakawa said that as the series was nearing its end and she would try to increase the pace of the narrative. To avoid making some chapters less entertaining than others, unnecessary details from each of them were removed and a climax was developed. The removal of minor details was also necessary because Arakawa had too few pages in Monthly Shōnen Gangan to include all the story content she wanted to add. Some characters ' appearances were limited in some chapters. At first, Arakawa thought the series would last twenty - one volumes but the length increased to twenty - seven. Serialization finished after nine years, and Arakawa was satisfied with her work because she had told everything she wanted with the manga.
During the development of the first anime, Arakawa allowed the anime staff to work independently from her, and requested a different ending from that of the manga. She said that she would not like to repeat the same ending in both media, and wanted to make the manga longer so she could develop the characters. When watching the ending of the anime, she was amazed about how different the homunculi creatures were from the manga and enjoyed how the staff speculated about the origins of the villains. Because Arakawa helped the Bones staff in the making of the series, she was kept from focusing on the manga 's cover illustrations and had little time to make them.
The series explores social problems, including discrimination, scientific advancement, political greed, brotherhood, family, and war. Scar 's backstory and his hatred of the state military reference the Ainu people, who had their land taken by other people. This includes the consequences of guerrilla warfare and the number of violent soldiers a military can have. Some of the people who took the Ainus ' land were originally Ainu; this irony is referenced in Scar 's use of alchemy to kill alchemists even though it was forbidden in his own religion. The Elrics being orphans and adopted by Pinako Rockbell reflects Arakawa 's beliefs about the ways society should treat orphans. The characters ' dedication to their occupations references the need to work for food. The series also explores the concept of equivalent exchange; to obtain something new, one must pay with something of equal value. This is applied by alchemists when creating new materials and is also a philosophical belief the Elric brothers follow.
Written and drawn by Hiromu Arakawa, Fullmetal Alchemist was serialized in Square Enix 's monthly manga magazine Monthly Shōnen Gangan. Its first installment was published in the magazine 's August 2001 issue on July 12, 2001; publication continued until the series concluded in June 2010 with the 108th installment. A side - story to the series was published in the October 2010 issue of Monthly Shōnen Gangan on September 11, 2010. In the July 2011 issue of the same magazine, the prototype version of the manga was published. Square Enix compiled the chapters into twenty - seven tankōbon volumes. The first volume was released on January 22, 2002, and the last on November 22, 2010. A few chapters have been re-released in Japan in two "Extra number '' magazines and Fullmetal Alchemist, The First Attack, which features the first nine chapters of the manga and other side stories. On July 22, 2011, Square Enix started republishing the series in kanzenban format.
Viz Media localized the tankōbon volumes in English in North America between May 3, 2005, and December 20, 2011. On June 7, 2011, Viz started publishing the series in omnibus format, featuring three volumes in one. Yen Press has held the rights for the digital release of the volumes in North America since 2014 and released the series on the ComiXology website on December 12, 2016. Other English localizations were done by Madman Entertainment for Australasia and Chuang Yi in Singapore. The series has also been localized in Polish, French, Portuguese, Italian, and Korean.
Fullmetal Alchemist was adapted into two anime series for television: a loose adaptation titled Fullmetal Alchemist in 2003 -- 2004, and a more faithful 2009 -- 2010 retelling titled Fullmetal Alchemist: Brotherhood.
Two feature - length anime films were produced; Fullmetal Alchemist the Movie: Conqueror of Shamballa, a sequel / conclusion to the 2003 series, and Fullmetal Alchemist: The Sacred Star of Milos, set during the time period of Brotherhood.
A live - action film based on the manga was released on November 19, 2017. Fumihiko Sori directed the film. The film stars Ryosuke Yamada as Edward Elric, Tsubasa Honda as Winry Rockbell and Dean Fujioka as Roy Mustang.
Square Enix has published a series of six Fullmetal Alchemist Japanese light novels, written by Makoto Inoue. The novels were licensed for an English - language release by Viz Media in North America, with translations by Alexander O. Smith and illustrations -- including covers and frontispieces -- by Arakawa. The novels are spin - offs of the manga series and follow the Elric brothers on their continued quest for the philosopher 's stone. The first novel, Fullmetal Alchemist: The Land of Sand, was animated as the episodes eleven and twelve of the first anime series. The fourth novel contains an extra story about the military called "Roy 's Holiday ''. Novelizations of the PlayStation 2 games Fullmetal Alchemist and the Broken Angel, Curse of the Crimson Elixir, and The Girl Who Succeeds God have also been written, the first by Makoto Inoue and the rest by Jun Eishima.
There have been two series of Fullmetal Alchemist audio dramas. The first volume of the first series, Fullmetal Alchemist Vol. 1: The Land of Sand (砂礫 の 大地, Sareki no Daichi), was released before the anime and tells a similar story to the first novel. The Tringham brothers reprised their anime roles. Fullmetal Alchemist Vol. 2: False Light, Truth 's Shadow (偽り の 光 真実 の 影, Itsuwari no Hikari, Shinjitsu no Kage) and Fullmetal Alchemist Vol. 3: Criminals ' Scar (咎人 たち の 傷跡, Togabitotachi no Kizuato) are stories based on different manga chapters; their State Military characters are different from those in the anime. The second series of audio dramas, available only with purchases of Shōnen Gangan, consists of two stories, each with two parts. The first, Fullmetal Alchemist: Ogutāre of the Fog (霧 の オグターレ, Kiri no Ogutāre), was included in Shōnen Gangan 's April and May 2004 issues; the second story, Fullmetal Alchemist: Crown of Heaven (天上 の 宝冠, Tenjō no Hōkan), was issued in the November and December 2004 issues.
Video games based on Fullmetal Alchemist have been released. The storylines of the games often diverge from those of the anime and manga, and feature original characters. Square Enix has released three role - playing games (RPG) -- Fullmetal Alchemist and the Broken Angel, Curse of the Crimson Elixir, and Kami o Tsugu Shōjo. Bandai has released two RPG titles, Fullmetal Alchemist: Stray Rondo (鋼 の 錬金術 師 迷走 の 輪 舞曲, Hagane no Renkinjutsushi Meisō no Rondo) and Fullmetal Alchemist: Sonata of Memory (鋼 の 錬金術 師 想い出 の 奏鳴曲, Hagane no Renkinjutsushi Omoide no Sonata), for the Game Boy Advance and one, Dual Sympathy, for the Nintendo DS. In Japan, Bandai released an RPG Fullmetal Alchemist: To the Promised Day (鋼 の 錬金術 師 Fullmetal Alchemist 約束 の 日 へ, Hagane no Renkinjutsushi Fullmetal Alchemist Yakusoku no Hi e) for the PlayStation Portable on May 20, 2010. Bandai also released a fighting game, Dream Carnival, for the PlayStation 2. Destineer released a game based on the trading card game in North America for the Nintendo DS. Of the seven games made in Japan, Broken Angel, Curse of the Crimson Elixir, and Dual Sympathy have seen international releases. For the Wii, Akatsuki no Ōji (暁 の 王子, lit. Fullmetal Alchemist: Prince of the Dawn) was released in Japan on August 13, 2009. A direct sequel of the game, Tasogare no Shōjo (黄昏 の 少女, lit. Fullmetal Alchemist: Daughter of the Dusk), was released on December 10, 2009, for the same console.
Funimation licensed the franchise to create a new series of Fullmetal Alchemist - related video games to be published by Destineer Publishing Corporation in the United States. Destineer released its first Fullmetal Alchemist game for the Nintendo DS, a translation of Bandai 's Dual Sympathy, on December 15, 2006, and said that it planned to release further titles. On February 19, 2007, Destineer announced the second game in its Fullmetal Alchemist series, the Fullmetal Alchemist Trading Card Game, which was released on October 15, 2007. A third game for the PlayStation Portable titled Fullmetal Alchemist: Senka wo Takuseshi Mono (背中 を 託せ し 者) was released in Japan on October 15, 2009. A European release of the game, published with Namco Bandai, was announced on March 4, 2010. The massively multiplayer online role - playing game MapleStory also received special in - game items based on the anime series.
Arakawa oversaw the story and designed the characters for the RPG games, while Bones -- the studio responsible for the anime series -- produced several animation sequences. The developers looked at other titles -- specifically Square Enix 's action role - playing game Kingdom Hearts and other games based on manga series, such as Dragon Ball, Naruto or One Piece games -- for inspiration. The biggest challenge was to make a "full - fledged '' game rather than a simple character - based one. Tomoya Asano, the assistant producer for the games, said that development took more than a year, unlike most character - based games.
The Fullmetal Alchemist franchise has received several artbooks. Three artbooks called The Art of Fullmetal Alchemist (イラスト 集 FULLMETAL ALCHEMIST, Irasuto Shū Fullmetal Alchemist) were released by Square Enix; two of those were released in the US by Viz Media. The first artbook contains illustrations made between May 2001 and April 2003, spanning the first six manga volumes, while the second has illustrations from September 2003 to October 2005, spanning the next six volumes. The last one includes illustrations from the remaining volumes.
The manga also has three guidebooks; each of them contains timelines, guides to the Elric brothers ' journey, and gaiden chapters that were never released in manga volumes. Only the first guidebook was released by Viz Media, titled Fullmetal Alchemist Profiles. A guidebook titled "Fullmetal Alchemist Chronicle '' (鋼 の 錬金術 師 CHRONICLE), which contains post-manga story information, was released in Japan on July 29, 2011.
Action figures, busts, and statues from the Fullmetal Alchemist anime and manga have been produced by toy companies, including Medicom and Southern Island. Medicom has created high end deluxe vinyl figures of the characters from the anime. These figures are exclusively distributed in the United States and UK by Southern Island. Southern Island released its own action figures of the main characters in 2007, and a 12 '' statuette was scheduled for release the same year. Southern Island has since gone bankrupt, putting the statuette 's release in doubt. A trading card game was first published in 2005 in the United States by Joyride Entertainment. Since then, six expansions have been released. The card game was withdrawn on July 11, 2007. Destineer released a Nintendo DS adaptation of the game on October 15, 2007.
Overall, the franchise has received widespread critical acclaim and commercial success.
Along with Yakitate!! Japan, the series won the forty - ninth Shogakukan Manga Award for shōnen in 2004. It won the public voting for Eagle Award 's "Favourite Manga '' in 2010 and 2011. The manga also received the Seiun Award for best science fiction comic in 2011.
In a survey from Oricon in 2009, Fullmetal Alchemist ranked ninth as the manga that fans wanted to be turned into a live - action film. The series is also popular with amateur writers who produce dōjinshi (fan fiction) that borrows characters from the series. In the Japanese market Super Comic City, there have been over 1,100 dōjinshi based on Fullmetal Alchemist, some of which focused on romantic interactions between Edward Elric and Roy Mustang. Anime News Network said the series had the same impact in Comiket 2004 as several female fans were seen there writing dōjinshi.
The series has become one of Square Enix 's best - performing properties, along with Final Fantasy and Dragon Quest. With the release of volume 27, the manga sold over 50 million copies in Japan. As of January 10, 2010, every volume of the manga has sold over a million copies each in Japan. Square Enix reported that the series had sold 70.3 million copies worldwide as of April 25, 2018, 16.4 million of those outside Japan. The series is also one of Viz Media 's best sellers, appearing in "BookScan 's Top 20 Graphic Novels '' and the "USA Today Booklist ''. It was featured in the Diamond Comic Distributors ' polls of graphic novels and The New York Times Best Seller Manga list. The English release of the manga 's first volume was the top - selling graphic novel during 2005.
During 2008, volumes 19 and 20 sold over a million copies, ranking as the 10th and 11th best seller comics in Japan respectively. In the first half of 2009, it ranked as the seventh best - seller in Japan, having sold over 3 million copies. Volume 21 ranked fourth, with more than a million copies sold and volume 22 ranked sixth with a similar number of sold copies. Producer Kouji Taguchi of Square Enix said that Volume 1 's initial sales were 150,000 copies; this grew to 1.5 million copies after the first anime aired. Prior to the second anime 's premiere, each volume sold about 1.9 million copies, and then it changed to 2.1 million copies.
Fullmetal Alchemist has generally been well received by critics. Though the first volumes were thought to be formulaic, critics said that the series grows in complexity as it progresses. Jason Thompson called Arakawa one of the best at creating action scenes and praised the series for having great female characters despite being a boys ' manga. He also noted how the story gets dark by including real - world issues such as government corruption, war and genocide. Thompson finished by stating that Fullmetal Alchemist "will be remembered as one of the classic shonen manga series of the 2000s. '' Melissa Harper of Anime News Network praised Arakawa for keeping all of her character designs unique and distinguishable, despite many of them wearing the same basic uniforms. IGN 's Hilary Goldstein wrote that the characterization of the protagonist Edward balances between being a "typical clever kid '' and a "stubborn kid '', allowing him to float between the comical moments and the underlying drama without seeming false. Holly Ellingwood for Active Anime praised the development of the characters in the manga and their beliefs changing during the story, forcing them to mature. Mania Entertainment 's Jarred Pine said that the manga can be enjoyed by anybody who has watched the first anime, despite the similarities in the first chapters. Like other reviewers, Pine praised the dark mood of the series and the way it balances the humor and action scenes. Pine also praised the development of characters who have few appearances in the first anime. In a review of volume 14, Sakura Eries -- also of Mania Entertainment -- liked the revelations, despite the need to resolve several story arcs. She also praised the development of the homunculi, such as the return of Greed, as well as their fights.
The first Fullmetal Alchemist novel, The Land of the Sand, was well received by Jarred Pine of Mania Entertainment as a self - contained novelization that remained true to the characterizations of the manga series. He said that while the lack of backstory aims it more towards fans of the franchise than new readers, it was an impressive debut piece for the Viz Fiction line. Ai n't It Cool News also found the novel to be true to its roots, and said that while it added nothing new, it was compelling enough for followers of the series to enjoy a retelling. The reviewer said it was a "work for young - ish readers that 's pretty clear about some darker sides of politics, economics and human nature ''. Charles Solomon of the Los Angeles Times said that the novel has a different focus than the anime series; The Land of Sand "created a stronger, sympathetic bond '' between the younger brothers than is seen in its two - episode anime counterpart.
|
who has won premier league and champions league | Premier League - wikipedia
The Premier League is the top level of the English football league system. Contested by twenty clubs, it operates on a system of promotion and relegation with the English Football League (EFL).
The Premier League is a corporation in which the member clubs act as shareholders. Seasons run from August to May with each team playing 38 matches (playing each other home and away). Most games are played on Saturday and Sunday afternoons. It is known outside the UK as the English Premier League (EPL).
The competition was formed as the FA Premier League on 20 February 1992 following the decision of clubs in the Football League First Division to break away from the Football League, founded in 1888, and take advantage of a lucrative television rights deal. The deal was worth £ 1 billion a year domestically as of 2013 -- 14, with BSkyB and BT Group securing the domestic rights to broadcast 116 and 38 games respectively. The league generates € 2.2 billion per year in domestic and international television rights. In 2014 -- 15, teams were apportioned revenues of £ 1,600 million, rising sharply to £ 2,400 million in 2016 -- 17.
The Premier League is the most - watched sports league in the world, broadcast in 212 territories to 643 million homes and a potential TV audience of 4.7 billion people. In the 2014 -- 15 season, the average Premier League match attendance exceeded 36,000, second highest of any professional football league behind the Bundesliga 's 43,500. Most stadium occupancies are near capacity. The Premier League ranks third in the UEFA coefficients of leagues based on performances in European competitions over the past five seasons.
Forty - nine clubs have competed since the inception of the Premier League in 1992. Six of them have won the title: Manchester United (13), Chelsea (5), Arsenal (3), Manchester City (2), Blackburn Rovers (1) and Leicester City (1).
Despite significant European success in the 1970s and early 1980s, the late ' 80s marked a low point for English football. Stadiums were crumbling, supporters endured poor facilities, hooliganism was rife, and English clubs were banned from European competition for five years following the Heysel Stadium disaster in 1985. The Football League First Division, the top level of English football since 1888, was behind leagues such as Italy 's Serie A and Spain 's La Liga in attendances and revenues, and several top English players had moved abroad.
By the turn of the 1990s the downward trend was starting to reverse. At the 1990 FIFA World Cup, England reached the semi-finals; UEFA, European football 's governing body, lifted the five - year ban on English clubs playing in European competitions in 1990, resulting in Manchester United lifting the UEFA Cup Winners ' Cup in 1991; and the Taylor Report on stadium safety standards, which proposed expensive upgrades to create all - seater stadiums in the aftermath of the Hillsborough disaster, was published in January 1990.
The 1980s also saw the major English clubs, led by the likes of Martin Edwards of Manchester United, Irving Scholar of Tottenham Hotspur and David Dein of Arsenal, beginning to be transformed into business ventures that applied commercial principles to the running of the clubs, which led to the increasing power of the elite clubs. By threatening to break away, the top clubs from Division One managed to increase their voting power, and took a 50 % share of all television and sponsorship income in 1986. Revenue from television also became more important: the Football League received £ 6.3 million for a two - year agreement in 1986, but by 1988, in a deal agreed with ITV, the price rose to £ 44 million over four years with the leading clubs taking 75 % of the cash. The 1988 negotiations were conducted under the threat of ten clubs leaving to form a "super league '', but they were eventually persuaded to stay, with the top clubs taking the lion 's share of the deal. As stadiums improved and match attendance and revenues rose, the country 's top teams again considered leaving the Football League in order to capitalise on the influx of money into the sport.
In 1990, the managing director of London Weekend Television (LWT), Greg Dyke, met with the representatives of the "big five '' football clubs in England (Manchester United, Liverpool, Tottenham, Everton and Arsenal) over a dinner. The meeting was to pave the way for a break away from The Football League. Dyke believed that it would be more lucrative for LWT if only the larger clubs in the country were featured on national television and wanted to establish whether the clubs would be interested in a larger share of television rights money. The five clubs agreed it was a good idea and decided to press ahead with it; however, the league would have no credibility without the backing of The Football Association, and so David Dein of Arsenal held talks to see whether the FA were receptive to the idea. The FA did not enjoy an amicable relationship with the Football League at the time and saw the proposal as a way to weaken the Football League 's position.
At the close of the 1991 season, a proposal was tabled for the establishment of a new league that would bring more money into the game overall. The Founder Members Agreement, signed on 17 July 1991 by the game 's top - flight clubs, established the basic principles for setting up the FA Premier League. The newly formed top division would have commercial independence from The Football Association and the Football League, giving the FA Premier League licence to negotiate its own broadcast and sponsorship agreements. The argument given at the time was that the extra income would allow English clubs to compete with teams across Europe. Although Dyke played a significant role in the creation of the Premier League, Dyke and ITV would lose out in the bidding for broadcast rights as BSkyB won with a bid of £ 304 million over five years with the BBC awarded the highlights package broadcast on Match of the Day.
In 1992, the First Division clubs resigned from the Football League en masse and on 27 May 1992 the FA Premier League was formed as a limited company working out of an office at the Football Association 's then headquarters in Lancaster Gate. This meant a break - up of the 104 - year - old Football League that had operated until then with four divisions; the Premier League would operate with a single division and the Football League with three. There was no change in competition format; the same number of teams competed in the top flight, and promotion and relegation between the Premier League and the new First Division remained the same as the old First and Second Divisions with three teams relegated from the league and three promoted.
The league held its first season in 1992 -- 93. It was composed of 22 clubs for that season. The first Premier League goal was scored by Brian Deane of Sheffield United in a 2 -- 1 win against Manchester United. The 22 inaugural members of the new Premier League were Arsenal, Aston Villa, Blackburn Rovers, Chelsea, Coventry City, Crystal Palace, Everton, Ipswich Town, Leeds United, Liverpool, Manchester City, Manchester United, Middlesbrough, Norwich City, Nottingham Forest, Oldham Athletic, Queens Park Rangers, Sheffield United, Sheffield Wednesday, Southampton, Tottenham Hotspur, and Wimbledon. Luton Town, Notts County, and West Ham United were the three teams relegated from the old first division at the end of the 1991 -- 92 season, and did not take part in the inaugural Premier League season.
One significant feature of the Premier League in the mid-2000s was the dominance of the so - called "Big Four '' clubs: Arsenal, Chelsea, Liverpool and Manchester United. During this decade, they dominated the top four spots, which came with UEFA Champions League qualification, taking all top - four places in 5 out of 6 seasons from 2003 -- 04 to 2008 -- 09 inclusive. Arsenal went as far as winning the league without losing a single game in 2003 -- 04, the only time it has ever happened in the Premier League. In May 2008 Kevin Keegan stated that "Big Four '' dominance threatened the division, "This league is in danger of becoming one of the most boring but great leagues in the world. '' Premier League chief executive Richard Scudamore said in defence: "There are a lot of different tussles that go on in the Premier League depending on whether you 're at the top, in the middle or at the bottom that make it interesting. ''
The years following 2009 marked a shift in the structure of the "Big Four '' with Tottenham Hotspur and Manchester City both breaking into the top four. In the 2009 -- 10 season, Tottenham finished fourth and became the first team to break the top four since Everton in 2005. Criticism of the gap between an elite group of "super clubs '' and the majority of the Premier League has continued, nevertheless, due to their increasing ability to spend more than the other Premier League clubs.
Manchester City won the title in the 2011 -- 12 season, becoming the first club outside the "Big Four '' to win since 1994 -- 95. That season also saw two of the Big Four (Chelsea and Liverpool) finish outside the top four places for the first time since 1994 -- 95. The arrival of Manchester City as a new force in English football combined with Tottenham Hotspur finishing third and then second in the 2015 -- 16 and 2016 -- 17 seasons respectively meant that many viewed the "Big Four '' as proliferating into the "Big Six ''.
With only four UEFA Champions League qualifying places available in the league, greater competition for qualification now exists, albeit from a narrow base of six clubs. In the following five seasons after the 2011 -- 12 campaign, Manchester United and Liverpool both found themselves outside of the top four three times while Chelsea finished 10th in the 2015 -- 16 season. Arsenal finished 5th in 2016 -- 17, ending their record of 20 consecutive top - four finishes.
Off the pitch, the "Big Six '' wield financial power and influence, with these clubs arguing that they should be entitled to a greater share of revenue due to the greater stature of their clubs globally and the attractive football they aim to play. Objectors argue that the egalitarian revenue structure in the Premier League helps to maintain a competitive league which is vital for its future success.
Due to insistence by the International Federation of Association Football (FIFA), the international governing body of football, that domestic leagues reduce the number of games clubs played, the number of clubs was reduced to 20 in 1995 when four teams were relegated from the league and only two teams promoted. On 8 June 2006, FIFA requested that all major European leagues, including Italy 's Serie A and Spain 's La Liga be reduced to 18 teams by the start of the 2007 -- 08 season. The Premier League responded by announcing their intention to resist such a reduction. Ultimately, the 2007 -- 08 season kicked off again with 20 teams.
The league changed its name from the FA Premier League to simply the Premier League in 2007.
The Football Association Premier League Ltd (FAPL) is operated as a corporation and is owned by the 20 member clubs. Each club is a shareholder, with one vote each on issues such as rule changes and contracts. The clubs elect a chairman, chief executive, and board of directors to oversee the daily operations of the league. The current chairman is Sir Dave Richards, who was appointed in April 1999, and the chief executive is Richard Scudamore, appointed in November 1999. The former chairman and chief executive, John Quinton and Peter Leaver, were forced to resign in March 1999 after awarding consultancy contracts to former Sky executives Sam Chisholm and David Chance. The Football Association is not directly involved in the day - to - day operations of the Premier League, but has veto power as a special shareholder during the election of the chairman and chief executive and when new rules are adopted by the league.
The Premier League sends representatives to UEFA 's European Club Association, the number of clubs and the clubs themselves chosen according to UEFA coefficients. For the 2012 -- 13 season the Premier League has 10 representatives in the Association: Arsenal, Aston Villa, Chelsea, Everton, Fulham, Liverpool, Manchester City, Manchester United, Newcastle United and Tottenham Hotspur. The European Club Association is responsible for electing three members to UEFA 's Club Competitions Committee, which is involved in the operations of UEFA competitions such as the Champions League and UEFA Europa League.
There are 20 clubs in the Premier League. During the course of a season (from August to May) each club plays the others twice (a double round - robin system), once at their home stadium and once at that of their opponents ', for a total of 38 games. Teams receive three points for a win and one point for a draw. No points are awarded for a loss. Teams are ranked by total points, then goal difference, and then goals scored. If still equal, teams are deemed to occupy the same position. If there is a tie for the championship, for relegation, or for qualification to other competitions, a play - off match at a neutral venue decides rank. The three lowest placed teams are relegated into the EFL Championship, and the top two teams from the Championship, together with the winner of play - offs involving the third to sixth placed Championship clubs, are promoted in their place.
Qualification for the UEFA Champions League changed as of the 2009 -- 10 season: the top four teams in the Premier League now qualify, with the top three teams directly entering the group stage. Previously only the top two teams qualified automatically. The fourth - placed team enters the Champions League at the play - off round for non-champions and must win a two - legged knockout tie in order to enter the group stage.
The team placed fifth in the Premier League automatically qualifies for the UEFA Europa League, and the sixth and seventh - placed teams can also qualify, depending on the winners of the two domestic cup competitions i.e. the FA Cup and the EFL Cup. Two Europa League places are reserved for the winners of each tournament; if the winner of either the FA Cup or EFL Cup qualifies for the Champions League, then that place will go to the next - best placed finisher in the Premier League.
An exception to the usual European qualification system happened in 2005, after Liverpool won the Champions League the year before, but did not finish in a Champions League qualification place in the Premier League that season. UEFA gave special dispensation for Liverpool to enter the Champions League, giving England five qualifiers. UEFA subsequently ruled that the defending champions qualify for the competition the following year regardless of their domestic league placing. However, for those leagues with four entrants in the Champions League, this meant that if the Champions League winner finished outside the top four in its domestic league, it would qualify at the expense of the fourth - placed team in the league. At that time, no association could have more than four entrants in the Champions League. This occurred in 2012, when Chelsea -- who had won the Champions League that summer, but finished sixth in the league -- qualified for the Champions League in place of Tottenham Hotspur, who went into the Europa League.
Starting with the 2015 -- 16 season, the Europa League champion automatically qualifies for the following season 's Champions League, and the maximum number of Champions League places for any single association has increased to five. An association with four Champions League places, such as The FA, will only earn a fifth place if a club from that association that does not qualify for the Champions League through its league wins either the Champions League or Europa League.
In 2007, the Premier League became the highest ranking European League based on the performances of English teams in European competitions over a five - year period. This broke the eight - year dominance of the Spanish league, La Liga.
Between the 1992 -- 93 and the 2016 -- 17 seasons, Premier League clubs won the UEFA Champions League four times (and had five runners - up), behind Spain 's La Liga with ten wins, and Italy 's Serie A with five wins; ahead of, among others, Germany 's Bundesliga with three wins. The FIFA Club World Cup (originally called the FIFA Club World Championship) has been won once by a Premier League club (Manchester United in 2008), with two runners - up (Liverpool in 2005, Chelsea in 2012), behind Spain 's La Liga with five wins, Brazil 's Brasileirão with four wins, and Italy 's Serie A with two wins (see table here).
A system of promotion and relegation exists between the Premier League and the EFL Championship. The three lowest placed teams in the Premier League are relegated to the Championship, and the top two teams from the Championship promoted to the Premier League, with an additional team promoted after a series of play - offs involving the third, fourth, fifth and sixth placed clubs. The Premier League had 22 teams when it began in 1992, but this was reduced to the present 20 - team format in 1995.
A total of 49 clubs have played in the Premier League from its inception in 1992, up to and including the 2017 -- 18 season.
Twenty clubs are competing in the Premier League during the 2017 -- 18 season.
In 2011, a Welsh club participated in the Premier League for the first time after Swansea City gained promotion. The first Premier League match to be played outside England was Swansea City 's home match at the Liberty Stadium against Wigan Athletic on 20 August 2011. In 2012 -- 13, Swansea qualified for the Europa League by winning the League Cup. The number of Welsh clubs in the Premier League increased to two for the first time in 2013 -- 14, as Cardiff City gained promotion, but they were relegated after their maiden season.
Because they are members of the Football Association of Wales (FAW), the question of whether clubs like Swansea should represent England or Wales in European competitions has caused long - running discussions in UEFA. Swansea took one of England 's three available places in the Europa League in 2013 -- 14 by winning the League Cup in 2012 -- 13. The right of Welsh clubs to take up such English places was in doubt until UEFA clarified the matter in March 2012, allowing them to participate.
Participation in the Premier League by some Scottish or Irish clubs has sometimes been discussed, but without result. The idea came closest to reality in 1998, when Wimbledon received Premier League approval to relocate to Dublin, Ireland, but the move was blocked by the Football Association of Ireland. Additionally, the media occasionally discusses the idea that Scotland 's two biggest teams, Celtic and Rangers, should or will take part in the Premier League, but nothing has come of these discussions.
From 1993 to 2016, the Premier League sold title sponsorship rights to two companies: Carling Breweries and Barclays Bank PLC. Barclays was the most recent title sponsor, having sponsored the Premier League from 2001 through 2016; until 2004 the sponsorship was held through its Barclaycard brand before shifting to its main banking brand.
Barclays ' deal with the Premier League expired at the end of the 2015 -- 16 season. The Premier League announced on 4 June 2015 that it would not pursue any further title sponsorship deals, arguing that it wanted to build a "clean '' brand for the competition more in line with those of major U.S. sports leagues.
As well as sponsorship for the league itself, the Premier League has a number of official partners and suppliers. The official ball supplier for the league is Nike who have had the contract since the 2000 -- 01 season when they took over from Mitre.
The Premier League has the highest revenue of any football league in the world, with total club revenues of € 2.48 billion in 2009 -- 10. In 2013 -- 14, due to improved television revenues and cost controls, the Premier League had net profits in excess of £ 78 million, exceeding all other football leagues. In 2010 the Premier League was awarded the Queen 's Award for Enterprise in the International Trade category for its outstanding contribution to international trade and the value it brings to English football and the United Kingdom 's broadcasting industry.
The Premier League includes some of the richest football clubs in the world. Deloitte 's "Football Money League '' listed seven Premier League clubs in the top 20 for the 2009 -- 10 season, and all 20 clubs were in the top 40 globally by the end of the 2013 -- 14 season, largely as a result of increased broadcasting revenue. From 2013, the league generates € 2.2 billion per year in domestic and international television rights.
Premier League clubs agreed in principle in December 2012 to radical new cost controls. The two proposals consist of a break - even rule and a cap on the amount by which clubs can increase their wage bill each season. With the new television deals on the horizon, momentum has been growing to find ways of preventing the majority of the cash going straight to players and agents.
Central payments for the 2016 -- 17 season amounted to £ 2,398,515,773 across the 20 clubs, with each team receiving a flat participation fee of £ 35,301,989 and additional payments for TV broadcasts (£ 1,016,690 for general UK rights to match highlights, £ 1,136,083 for each live UK broadcast of their games and £ 39,090,596 for all overseas rights), commercial rights (a flat fee of £ 4,759,404) and a notional measure of "merit '' which was based upon final league position. The merit component was a nominal sum of £ 1,941,609 multiplied by each finishing place, counted from the foot of the table (e.g., Burnley finished 16th in May 2017, 5 places counting upwards, and received 5 x £ 1,941,609 = £ 9,708,045 merit payment).
Television has played a major role in the history of the Premier League. The League 's decision to assign broadcasting rights to BSkyB in 1992 was at the time a radical decision, but one that has paid off. At the time pay television was an almost untested proposition in the UK market, as was charging fans to watch live televised football. However, a combination of Sky 's strategy, the quality of Premier League football and the public 's appetite for the game has seen the value of the Premier League 's TV rights soar.
The Premier League sells its television rights on a collective basis. This is in contrast to some other European Leagues, including La Liga, in which each club sells its rights individually, leading to a much higher share of the total income going to the top few clubs. The money is divided into three parts: half is divided equally between the clubs; one quarter is awarded on a merit basis based on final league position, the top club getting twenty times as much as the bottom club, and equal steps all the way down the table; the final quarter is paid out as facilities fees for games that are shown on television, with the top clubs generally receiving the largest shares of this. The income from overseas rights is divided equally between the twenty clubs.
The first Sky television rights agreement was worth £ 304 million over five seasons. The next contract, negotiated to start from the 1997 -- 98 season, rose to £ 670 million over four seasons. The third contract was a £ 1.024 billion deal with BSkyB for the three seasons from 2001 -- 02 to 2003 -- 04. The league brought in £ 320 million from the sale of its international rights for the three - year period from 2004 -- 05 to 2006 -- 07. It sold the rights itself on a territory - by - territory basis. Sky 's monopoly was broken from August 2006 when Setanta Sports was awarded rights to show two out of the six packages of matches available. This occurred following an insistence by the European Commission that exclusive rights should not be sold to one television company. Sky and Setanta paid a total of £ 1.7 billion, a two - thirds increase which took many commentators by surprise as it had been widely assumed that the value of the rights had levelled off following many years of rapid growth. Setanta also hold rights to a live 3 pm match solely for Irish viewers. The BBC has retained the rights to show highlights for the same three seasons (on Match of the Day) for £ 171.6 million, a 63 per cent increase on the £ 105 million it paid for the previous three - year period. Sky and BT have agreed to jointly pay £ 84.3 million for delayed television rights to 242 games (that is the right to broadcast them in full on television and over the internet) in most cases for a period of 50 hours after 10 pm on matchday. Overseas television rights fetched £ 625 million, nearly double the previous contract. The total raised from these deals is more than £ 2.7 billion, giving Premier League clubs an average media income from league games of around £ 40 million - a-year from 2007 to 2010.
The TV rights agreement between the Premier League and Sky has faced accusations of being a cartel, and a number of court cases have arisen as a result. An investigation by the Office of Fair Trading in 2002 found BSkyB to be dominant within the pay TV sports market, but concluded that there were insufficient grounds for the claim that BSkyB had abused its dominant position. In July 1999 the Premier League 's method of selling rights collectively for all member clubs was investigated by the UK Restrictive Practices Court, who concluded that the agreement was not contrary to the public interest.
The BBC 's highlights package on Saturday and Sunday nights, as well as other evenings when fixtures justify, will run until 2016. Television rights alone for the period 2010 to 2013 have been purchased for £ 1.782 billion. On 22 June 2009, due to troubles encountered by Setanta Sports after it failed to meet a final deadline over a £ 30 million payment to the Premier League, ESPN was awarded two packages of UK rights containing a total of 46 matches that were available for the 2009 -- 10 season as well as a package of 23 matches per season from 2010 -- 11 to 2012 -- 13. On 13 June 2012, the Premier League announced that BT had been awarded 38 games a season for the 2013 -- 14 through 2015 -- 16 seasons at £ 246 million - a-year. The remaining 116 games were retained by Sky who paid £ 760 million - a-year. The total domestic rights have raised £ 3.018 billion, an increase of 70.2 % over the 2010 -- 11 to 2012 -- 13 rights. The value of the licensing deal rose by another 70.2 % in 2015, when Sky and BT paid a total of £ 5.136 billion to renew their contracts with the Premier League for another three years up to the 2018 -- 19 season.
Between the 1998 -- 99 season and the 2012 -- 13 season, RTÉ broadcast highlights on Premier Soccer Saturday and occasionally Premier Soccer Sunday. Between the 2004 -- 05 and 2006 -- 07 seasons, RTÉ also broadcast 15 live matches per season on Saturday afternoons, with each match being called Premiership Live.
In August 2016, it was announced that the BBC would be creating a new magazine - style show for the Premier League entitled The Premier League Show.
The Premier League is the most - watched football league in the world, broadcast in 212 territories to 643 million homes and a potential TV audience of 4.7 billion people. The Premier League 's production arm, Premier League Productions, is operated by IMG Productions and produces all content for its international television partners.
The Premier League is particularly popular in Asia, where it is the most widely distributed sports programme. In Australia, Optus telecommunications holds exclusive rights to the Premier League, providing live broadcasts and online access (Fox Sports formerly held rights). In India, the matches are broadcast live on STAR Sports. In China, the broadcast rights were awarded to Super Sports in a six - year agreement that began in the 2013 -- 14 season. As of the 2013 -- 14 season, Canadian broadcast rights to the Premier League are jointly owned by Sportsnet and TSN, with both rival networks holding rights to 190 matches per season.
The Premier League is broadcast in the United States through NBC Sports. Premier League viewership has increased rapidly, with NBC and NBCSN averaging a record 479,000 viewers in the 2014 -- 15 season, up 118 % from 2012 -- 13 when coverage still aired on Fox Soccer and ESPN / ESPN2 (220,000 viewers), and NBC Sports has been widely praised for its coverage. NBC Sports reached a six - year extension with the Premier League in 2015 to broadcast the league through the 2021 -- 22 season in a deal valued at $1 billion (£ 640 million).
There has been an increasing gulf between the Premier League and the Football League. Since its split with the Football League, many established clubs in the Premier League have managed to distance themselves from their counterparts in lower leagues. Owing in large part to the disparity in revenue from television rights between the leagues, many newly promoted teams have found it difficult to avoid relegation in their first season in the Premier League. In every season except 2001 -- 02 and 2011 -- 12, at least one Premier League newcomer has been relegated back to the Football League. In 1997 -- 98 all three promoted clubs were relegated at the end of the season.
The Premier League distributes a portion of its television revenue to clubs that are relegated from the league in the form of "parachute payments ''. Starting with the 2013 -- 14 season, these payments are in excess of £ 60 million over four seasons. Though designed to help teams adjust to the loss of television revenues (the average Premier League team receives £ 55 million while the average Football League Championship club receives £ 2 million), critics maintain that the payments actually widen the gap between teams that have reached the Premier League and those that have not, leading to the common occurrence of teams "bouncing back '' soon after their relegation. For some clubs who have failed to win immediate promotion back to the Premier League, financial problems, including in some cases administration or even liquidation, have followed. Further relegations down the footballing ladder have ensued for several clubs unable to cope with the gap.
As of the 2017 -- 18 season, Premier League football has been played in 58 stadiums since the formation of the division. The Hillsborough disaster in 1989 and the subsequent Taylor Report saw a recommendation that standing terraces should be abolished; as a result all stadiums in the Premier League are all - seater. Since the formation of the Premier League, football grounds in England have seen constant improvements to capacity and facilities, with some clubs moving to new - build stadiums. Nine stadiums that have seen Premier League football have now been demolished. The stadiums for the 2017 -- 18 season show a large disparity in capacity: Wembley Stadium, the temporary home of Tottenham Hotspur, has a capacity of 90,000 with Dean Court, the home of Bournemouth, having a capacity of 11,360. The combined total capacity of the Premier League in the 2017 -- 18 season is 806,033 with an average capacity of 40,302.
Stadium attendances are a significant source of regular income for Premier League clubs. For the 2016 -- 17 season, average attendances across the league clubs were 35,838 for Premier League matches with a total aggregate attendance figure of 13,618,596. This represents an increase of 14,712 from the average attendance of 21,126 recorded in the league 's first season (1992 -- 93). However, during the 1992 -- 93 season the capacities of most stadiums were reduced as clubs replaced terraces with seats in order to meet the Taylor Report 's 1994 -- 95 deadline for all - seater stadiums. The Premier League 's record average attendance of 36,144 was set during the 2007 -- 08 season. This record was then beaten in the 2013 -- 14 season recording an average attendance of 36,695 with a total attendance of just under 14 million, the highest average in England 's top flight since 1950.
Managers in the Premier League are involved in the day - to - day running of the team, including the training, team selection, and player acquisition. Their influence varies from club - to - club and is related to the ownership of the club and the relationship of the manager with fans. Managers are required to have a UEFA Pro Licence which is the final coaching qualification available, and follows the completion of the UEFA ' B ' and ' A ' Licences. The UEFA Pro Licence is required by every person who wishes to manage a club in the Premier League on a permanent basis (i.e. more than 12 weeks -- the amount of time an unqualified caretaker manager is allowed to take control). Caretaker appointments are managers that fill the gap between a managerial departure and a new appointment. Several caretaker managers have gone on to secure a permanent managerial post after performing well as a caretaker; examples include Paul Hart at Portsmouth and David Pleat at Tottenham Hotspur.
The league 's longest - serving manager was Alex Ferguson, who was in charge of Manchester United from November 1986 until his retirement at the end of the 2012 -- 13 season, meaning that he was manager for all of the first 21 seasons of the Premier League. Arsène Wenger is the league 's longest - serving current manager, having been in charge of Arsenal in the Premier League since 1996. As of the 22nd game - week of the 2017 / 2018 season, 7 managers have been sacked, the most recent being Mark Hughes of Stoke City.
There have been several studies into the reasoning behind, and effects of, managerial sackings. Most famously, Professor Sue Bridgewater of the University of Liverpool and Dr. Bas ter Weel of the University of Amsterdam, performed two separate studies which helped to explain the statistics behind managerial sackings. Bridgewater 's study found that clubs generally sack their managers upon dropping below an average of 1 point - per - game.
At the inception of the Premier League in 1992 -- 93, just eleven players named in the starting line - ups for the first round of matches hailed from outside of the United Kingdom or Ireland. By 2000 -- 01, the number of foreign players participating in the Premier League was 36 per cent of the total. In the 2004 -- 05 season the figure had increased to 45 per cent. On 26 December 1999, Chelsea became the first Premier League side to field an entirely foreign starting line - up, and on 14 February 2005 Arsenal were the first to name a completely foreign 16 - man squad for a match. By 2009, under 40 % of the players in the Premier League were English.
In response to concerns that clubs were increasingly passing over young English players in favour of foreign players, in 1999, the Home Office tightened its rules for granting work permits to players from countries outside of the European Union. A non-EU player applying for the permit must have played for his country in at least 75 per cent of its competitive ' A ' team matches for which he was available for selection during the previous two years, and his country must have averaged at least 70th place in the official FIFA world rankings over the previous two years. If a player does not meet those criteria, the club wishing to sign him may appeal.
Players may only be transferred during transfer windows that are set by the Football Association. The two transfer windows run from the last day of the season to 31 August and from 31 December to 31 January. Player registrations can not be exchanged outside these windows except under specific licence from the FA, usually on an emergency basis. As of the 2010 -- 11 season, the Premier League introduced new rules mandating that each club must register a maximum 25 - man squad of players aged over 21, with the squad list only allowed to be changed in transfer windows or in exceptional circumstances. This was to enable the ' home grown ' rule to be enacted, whereby the League would also from 2010 require at least 8 of the named 25 man squad to be made up of ' home - grown players '.
There is no team or individual salary cap in the Premier League. As a result of the increasingly lucrative television deals, player wages rose sharply following the formation of the Premier League when the average player wage was £ 75,000 per year. The average salary stands at £ 1.1 million as of the 2008 -- 09 season. As of 2015, average salaries in the Premier League are higher than for any other football league in the world.
The record transfer fee for a Premier League player has risen steadily over the lifetime of the competition. Prior to the start of the first Premier League season Alan Shearer became the first British player to command a transfer fee of more than £ 3 million. The record rose steadily in the Premier League 's first few seasons, until Alan Shearer made a record breaking £ 15 million move to Newcastle United in 1996. Four of the most expensive transfers in the sport 's history have had a Premier League club on the selling or buying end, with Juventus selling Paul Pogba to Manchester United in August 2016 for a fee of £ 89 million, Tottenham Hotspur selling Gareth Bale to Real Madrid for £ 85 million in 2013, Manchester United 's sale of Cristiano Ronaldo to Real Madrid for £ 80 million in 2009, and Liverpool selling Luis Suárez to Barcelona for £ 75 million in 2014.
The Golden Boot is awarded to the top Premier League scorer at the end of each season. Former Blackburn Rovers and Newcastle United striker Alan Shearer holds the record for most Premier League goals with 260. Twenty - five players have reached the 100 - goal mark. Since the first Premier League season in 1992 -- 93, 14 different players from 10 different clubs have won or shared the top scorers title. Thierry Henry won his fourth overall scoring title by scoring 27 goals in the 2005 -- 06 season. Andrew Cole and Alan Shearer hold the record for most goals in a season (34) -- for Newcastle and Blackburn respectively. Ryan Giggs of Manchester United holds the record for scoring goals in consecutive seasons, having scored in the first 21 seasons of the league.
The Premier League maintains two trophies -- the genuine trophy (held by the reigning champions) and a spare replica. Two trophies are held in the event that two different clubs could win the League on the final day of the season. In the rare event that more than two clubs are vying for the title on the final day of the season, a replica won by a previous champion is used.
The current Premier League trophy was created by Royal Jewellers Asprey of London. It consists of a trophy with a golden crown and a malachite plinth base. The plinth weighs 33 pounds (15 kg) and the trophy weighs 22 pounds (10.0 kg). The trophy and plinth are 76 cm (30 in) tall, 43 cm (17 in) wide and 25 cm (9.8 in) deep.
Its main body is solid sterling silver and silver gilt, while its plinth is made of malachite, a semi-precious stone. The plinth has a silver band around its circumference, upon which the names of the title - winning clubs are listed. Malachite 's green colour is also representative of the green field of play. The design of the trophy is based on the heraldry of Three Lions that is associated with English football. Two of the lions are found above the handles on either side of the trophy -- the third is symbolised by the captain of the title - winning team as he raises the trophy, and its gold crown, above his head at the end of the season. The ribbons that drape the handles are presented in the team colours of the league champions that year.
In 2004, a special gold version of the trophy was commissioned to commemorate Arsenal winning the title without a single defeat.
In addition to the winner 's trophy and the individual winner 's medals awarded to players, the Premier League also awards the monthly Manager of the Month and Player of the Month awards, as well as annual awards for Manager of the Season, Player of the Season, Golden Boot and the Golden Glove awards.
In 2012, the Premier League celebrated its second decade by holding the 20 Seasons Awards.
|
who plays davey jones in pirates of the caribbean | Davy Jones (Pirates of the Caribbean) - wikipedia
Davy Jones is a fictional character and one of the main antagonists of the Pirates of the Caribbean film series, portrayed by Bill Nighy. He debuts in the second film, Dead Man 's Chest, as the main antagonist and returns in the third film, At World 's End, as one of the two main antagonists (the other being Cutler Beckett). He also appears at the end of the series ' fifth installment, Dead Men Tell No Tales, in a scene suggesting he may return in a possible sixth film. He is the captain of the Flying Dutchman (based on the ghost ship of the same name).
The computer - generated imagery used to complete Jones led Entertainment Weekly in 2007 to name him the tenth favorite computer - generated film character in film history, behind King Kong. The work on Davy Jones by Industrial Light and Magic earned them the 2006 Academy Award for Visual Effects for Dead Man 's Chest.
The character is based on the superstition of Davy Jones ' Locker. In contrast to the historical legends, the films ' Davy Jones is a villain.
Before officially casting Bill Nighy, producers also met with Jim Broadbent, Iain Glen and Richard E. Grant for the role.
Like that of the entire crew of the Flying Dutchman (except "Bootstrap Bill ''), Davy Jones 's physical appearance is completely 3 - D computer - generated. Nighy 's performance was recorded using motion capture during actual filming on the set, with Nighy wearing markers on a grey suit and on his face, rather than in a studio during post-production. Nighy also wore make - up around his eyes, since the original plan was to use his real eyes in the digital character if necessary to get the proper lighting; he also wore make - up on his lips and around his mouth to assist in capturing the mouth movements of his character 's Scottish - accented speech. Briefly during the third film, Jones appears as a human for a single scene, played by Nighy in costume. Several reviewers have in fact mistakenly identified Nighy as wearing prosthetic makeup due to the computer - generated character 's photorealism.
Davy Jones ' physique was designed by the films ' producers to be a mixture of various aquatic flora and fauna features. Jones ' most striking feature is his cephalopod - like head, with octopus - like appendages giving the illusion of a thick beard. The major features of Davy Jones ' physique bear a strong resemblance to the mythical Cthulhu created by H.P. Lovecraft. In Lovecraft 's short story "The Call of Cthulhu '' he describes the creature as "... a monster of vaguely anthropoid outline, but with an octopus - like head whose face was a mass of feelers, a scaly, rubbery - looking body, prodigious claws on hind and fore feet... ''
It is revealed in the bonus features of the Special Edition DVD that the face 's color was partly inspired by a coffee - stained styrofoam cup which was then scanned into ILM 's computers to be used as the skin. The character of Davy Jones also has a crustacean - style claw for his left arm, a long tentacle in place of the index finger on his right hand, and the right leg of a crab (resembling a pegleg). He also speaks with a clearly distinguishable, albeit thick, Scottish accent that 's slightly altered to account for his lack of a nose, and presumably, a nasal cavity and / or sinuses. Originally, director Gore Verbinski wanted Jones to be Dutch, as he is the captain of the "Dutch - man ''. Nighy however responded, "I do n't do Dutch. So I decided on Scottish. '' Nighy later revealed that Scottish sitcom Still Game influenced his choice of accent, stating: "I had to find an accent no one else had. Although Alex Norton is Scottish, mine was slightly different. We wanted something that was distinctive and authoritative... I have seen Still Game and I am a fan. The sort of extremity of the accent was inspired in that area. ''
Davy Jones, a human and a great sailor, fell in love with Calypso, a sea goddess. She entrusted him with the task of ferrying the souls of those who died at sea to the next world. Calypso gave him the Flying Dutchman to accomplish this task. She swore that after ten years, she would meet him and they would spend one day together before he returned to his duties. However, when Jones returned to shore after ten years, Calypso failed to appear. Believing Calypso had betrayed him, a heartbroken and enraged Davy Jones turned the Pirate Brethren against her, saying that if she were removed from the world, they would be able to claim the seas for themselves. They assembled in the First Brethren Court and Jones taught them how to imprison her into her human form.
Despite betraying her, Jones still loved Calypso, and in despair and guilt for what he had done, he carved out his own heart and placed it in the "Dead Man 's Chest ''. The Chest was sealed and placed within a larger wooden chest, along with Jones ' numerous love letters to Calypso and all other items having to do with her, except his matching musical locket. The chest was then buried on Isla Cruces. Jones kept the chest 's key with him at all times. With Calypso gone, Jones abandoned his duties and returned to the Seven Seas. As a result of this, Jones gradually became monstrous, his physical appearance merging with various aquatic fauna. Sailors everywhere would fear him to the death, for Davy Jones had turned fierce and cruel, with an insatiable taste for all things brutal. Jones recruits dying sailors by promising them a reprieve from death in exchange for 100 years of service aboard the Dutchman. He comes to command the Kraken, a feared mythological sea monster.
In the book series about Jack Sparrow 's earlier adventures, Davy Jones shows interest in the Sword of Cortes, also sought by Jack. He is a minor character, but appears in the seventh book as Jack and his crew encounter the Flying Dutchman.
Jones also appears in the prequel book about Jack 's first years as a captain. He helps the Brethren Court to identify the traitor among them, who turns out to be Borya Palachnik, the Pirate Lord of the Caspian Sea.
Before the events of the first film, Davy Jones approaches Sparrow with a deal: Jones will raise the Black Pearl back from Davy Jones ' Locker, allowing Sparrow to be captain for 13 years if Sparrow agrees to serve on the Dutchman for 100 years. This event, referenced in the films, also appears in the book series.
Davy Jones first appears in the second film, Dead Man 's Chest, in which he attempts to collect on his bargain with Jack Sparrow. Sparrow argues that he was only captain for two years before one of his crew members, Hector Barbossa, committed mutiny, but Jones rejects this explanation. Sparrow then attempts to escape the deal by providing Will Turner as a substitute for himself. Jack strikes a new deal with Jones; Jack will be spared enslavement on the Dutchman if he brings Jones one hundred souls to replace his own within the next three days. Jones accepts, removes the black spot from Jack 's hand, and retains Will, keeping him as a "good faith payment. ''
While on the Dutchman, Will challenges Jones at a game of liar 's dice. They wager Will 's soul for an eternity of service against the key to the Dead Man 's Chest. Bootstrap Bill joins the game and purposefully loses to save Will. During the game, Will learns where Jones keeps the key. The next morning, Jones realizes the key is gone and summons the Kraken to destroy the ship carrying Turner, who actually survives. The Dutchman then sails to Isla Cruces to stop Sparrow from getting the Chest.
Upon arriving, Jones sends his crew to retrieve the Chest; they return to him with it. The Dutchman then chases after the Black Pearl, but is outrun. Jones summons the Kraken, which drags Jack Sparrow and the Pearl to Davy Jones 's Locker. Jones afterwards opens the Chest only to find his heart missing; it has been taken by James Norrington, who gives it to Cutler Beckett of the East India Trading Company.
In the third film At World 's End, Jones is under the control of Cutler Beckett and the East India Trading Company. Beckett possesses the heart, and threatens to have soldiers shoot it if Jones disobeys. Beckett orders Jones to sink pirate ships, but is infuriated when Jones leaves no survivors; Beckett wants prisoners to interrogate about the Brethren Court. Beckett orders Jones to kill the Kraken. Later, he orders Jones to attack the Pirate Lord Sao Feng; Jones subsequently kills Sao Feng and captures Elizabeth Swann, who had been named captain by Sao Feng upon his death. When Admiral James Norrington dies on board the Dutchman helping Elizabeth escape, Jones claims Norrington 's sword (originally crafted by Will Turner). Jones then attempts mutiny against the EITC. However, Mercer successfully defends the Chest, forcing Jones to continue under Beckett 's service.
Beckett later summons Jones to his ship, the Endeavour. Jones confronts Will Turner and divulges his past with Calypso, while learning of Jack Sparrow 's escape from the Locker. The three men then arrive at Shipwreck Cove.
Jones confronts Calypso, locked in the brig of the Black Pearl. The two former lovers discuss Calypso 's betrayal and Jones 's curse. Calypso temporarily lifts his curse, allowing him to be seen briefly in his original human form. Jones tells her that his heart will always belong to her. Calypso, unaware that Jones betrayed her to the first Brethren Court, says that after her release, she will fully give her love to him.
Jones participates in a parley in which the EITC trades Turner for Sparrow. After Calypso is freed, Will reveals that Jones betrayed her. She escapes, refusing to aid either the pirates or Jones. Her fury creates a monstrous maelstrom. The Dutchman and the Pearl enter it and battle.
During the battle Jones kills Mercer and retrieves the key to the Chest. Sparrow and Jones fight for control of the chest in the rigging of the Dutchman. Jack acquires both the Chest and the key while Jones battles Will and Elizabeth. Jones quickly overpowers Elizabeth, and is subsequently impaled through the back by Will. Jones, unharmed, holds Will at sword - point. Jack threatens to stab the heart, and Jones cruelly stabs Will. Remembering Will as his son, Bootstrap Bill briefly fights and overpowers Jones, but is quickly defeated. Jones attempts to kill Bootstrap, but Jack helps Will stab the heart. Jones then calls out for Calypso, before tumbling to his death in the maelstrom.
In the post-credits scene of the fifth film Dead Men Tell No Tales, Will (no longer bound to the Flying Dutchman after the destruction of the Trident of Poseidon) and Elizabeth are sleeping in their bed together, when their room is entered by the silhouette of an apparently resurrected Davy Jones. Just as Jones raises his clawed arm to strike at the couple, Will awakens and the room is empty. Assuming Jones 's appearance to be a nightmare, Will goes back to sleep, oblivious to the presence of barnacles on the floor amid a small puddle of seawater, suggesting it was no dream.
In the films, Jones possesses a locket that plays a distinct melody, and he is known to play the same melody on his pipe organ. This melody is also his character 's theme, and can be heard throughout the film 's score. It comes in two variations: the soundtrack version and the film version. The melody of the soundtrack version is heard only in Dead Man 's Chest. The film version is played in both films multiple times, and is heard last during the climax of the film. Because Jones and Calypso own matching lockets, Tia Dalma 's theme is similar to that of Davy Jones, albeit in a different arrangement. The theme is also heard briefly after Jones ' appearance in Dead Men Tell No Tales.
Davy Jones possesses a large number of supernatural abilities. Jones is capable of teleportation on board the Flying Dutchman and the Black Pearl, and he can pass through solid objects.
Jones is immortal, capable of surviving injuries that would be fatal to mortals. However, he is not impervious to pain, as demonstrated when Jack was able to cut off some of his facial tentacles during their battle. Jones can also track any soul that is owed to him using the black spot, with which any member of his crew can mark a victim.
Jones also has the power to control and call forth the Kraken, a sea monster which can destroy ships upon command.
In a physical confrontation, Jones ' physiology also gives him various advantages; his facial tentacles allow him to manipulate objects while leaving his hands free, such that he is able to restrain Mercer 's arms with his hands while smothering him with his tentacles. His tentacle finger allows him to exert a much stronger grip and control his sword more quickly and precisely than a normal hand could, and his crab claw hand possesses enough strength to bend or sever sword blades. He also demonstrates more general superhuman strength when he throws Jack off the crossbeam using only one arm.
Davy Jones was part of Series One of the Pirates of the Caribbean: Dead Man 's Chest action figure set produced by NECA. Although the initial run of figures had a sticker on the box that proclaimed that the figure came with the Dead Man 's Chest and Jones ' heart, both props (as well as the key) were released with the Bootstrap Bill figure in Series Two. Jones also made an appearance as a smaller figure with crew members Angler, Wheelback and Penrod. Jones was issued as a plush toy as part of Sega 's "Dead Man 's Chest '' plush assortment. Jones was also part of a 3 figure pack as a 3.75 inch figure with Hector Barbossa and a limited edition gold Jack Sparrow for At World 's End. Davy Jones and his ship, the Flying Dutchman, were produced as a Mega Bloks set for the movies Dead Man 's Chest and Pirates of the Caribbean: At World 's End. His minifigure counterpart in the Dead Man 's Chest set has more bluish tentacles than his counterpart in the At World 's End set, which has more greenish tentacles.
He was made as a Lego minifigure in November 2011, with 4184 Black Pearl.
Children 's and adult Halloween costumes were released for Halloween 2007.
Davy Jones was released as a PEZ dispenser, along with Jack Sparrow and Will Turner.
Hot Toys also announced plans to make a 1: 6 scale version of Davy Jones, which became available in Q2 2008 and is widely regarded as more detailed than those produced by NECA.
|
what kinds of african problems did the petition address | First Pan-African Conference - wikipedia
The First Pan-African Conference was held in London from 23 to 25 July 1900 (just prior to the Paris Exhibition of 1900 "in order to allow tourists of African descent to attend both events ''). Organized primarily by the Trinidadian barrister Henry Sylvester Williams, it took place in Westminster Town Hall (now Caxton Hall) and was attended by 37 delegates and about 10 other participants and observers from Africa, the West Indies, the US and the UK, including Samuel Coleridge - Taylor (the youngest delegate), John Alcindor, Dadabhai Naoroji, John Archer, Henry Francis Downing, and W.E.B. Du Bois, with Bishop Alexander Walters of the AME Zion Church taking the chair. Du Bois played a leading role, drafting a letter ("Address to the Nations of the World '') to European leaders appealing to them to struggle against racism and to grant colonies in Africa and the West Indies the right to self - government, and demanding political and other rights for African Americans.
On 24 September 1897, Henry Sylvester Williams had been instrumental in founding the African Association (not to be confused with the Association for Promoting the Discovery of the Interior Parts of Africa), in response to the European partition of Africa that followed the 1884 - 5 Congress of Berlin. The formation of the association marked an early stage in the development of the anti-colonialist movement, and was established to encourage the unity of Africans and people of African descent, particularly in territories of the British empire, concerning itself with injustices in Britain 's African and Caribbean colonies. In March 1898 the association issued a circular calling for a pan-African conference. Booker T. Washington, who had been travelling in the UK in the summer of 1899, wrote in a letter to African - American newspapers:
"In connection with the assembling of so many Negroes in London from different parts of the world, a very important movement has just been put upon foot. It is known as the Pan-African Conference. Representatives from Africa, the West Indian Islands and other parts of the world, asked me to meet them a few days ago with a view to making a preliminary program for this conference, and we had a most interesting meeting. It is surprising to see the strong intellectual mould which many of these Africans and West Indians possess. The object and character of the Pan-African Conference is best told in the words of the resolution, which was adopted at the meeting referred to, viz: ' In view of the widespread ignorance which is prevalent in England about the treatment of native races under European and American rule, the African Association, which consists of members of the race resident in England and which has been in existence for nearly two years, have resolved during the Paris Exposition of 1900 (which many representatives of the race may be visiting) to hold a conference in London in the month of May of the said year, in order to take steps to influence public opinion on existing proceedings and conditions affecting the welfare of the natives in various parts of Africa, the West Indies and the United States. ' The resolution is signed by Mr H. Mason Joseph, President, and Mr H. Sylvester Williams as Honourable Secretary. The Honourable Secretary will be pleased to hear from representative natives who are desirous of attending at an early date. He may be addressed, Common Room, Grey 's (sic) Inn, London, W.C. ''
When the First Pan-African Conference opened on Monday, 23 July 1900, in London 's Westminster Hall, Bishop Alexander Walters in his opening address, "The Trials and Tribulations of the Coloured Race in America '', noted that "for the first time in history black people had gathered from all parts of the globe to discuss and improve the condition of their race, to assert their rights and organize so that they might take an equal place among nations. '' The Bishop of London, Mandell Creighton, gave a speech of welcome "referring to ' the benefits of self - government ' which Britain must confer on ' other races... as soon as possible '. ''
Speakers over the three days addressed a variety of aspects of racial discrimination. Among the papers delivered were: "Conditions Favouring a High Standard of African Humanity '' (C.W. French of St. Kitts), "The Preservation of Racial Equality '' (Anna H. Jones, from Kansas City, Missouri), "The Necessary Concord to be Established between Native Races and European Colonists '' (Benito Sylvain, Haitian aide - de-camp to the Ethiopian emperor), "The Negro Problem in America '' (Anna J. Cooper, from Washington), "The Progress of our People '' (John E. Quinlan of St. Lucia) and "Africa, the Sphinx of History, in the Light of Unsolved Problems '' (D.E. Tobias from the USA). Other topics included Richard Phipps ' complaint of discrimination against black people in the Trinidadian civil service and an attack by William Meyer, a medical student at Edinburgh University, on pseudo-scientific racism. Discussions followed the presentation of the papers, and on the last day George James Christian, a law student from Dominica, led a discussion on the subject "Organized Plunder and Human Progress Have Made Our Race Their Battlefield '', saying that in the past "Africans had been kidnapped from their land, and in South Africa and Rhodesia slavery was being revived in the form of forced labour. ''
The conference culminated in the conversion of the African Association (formed by Sylvester Williams in 1897) into the Pan-African Association, and the implementation of a unanimously adopted "Address to the Nations of the World '', sent to various heads of state where people of African descent were living and suffering oppression. The address implored the United States and the imperial European nations to "acknowledge and protect the rights of people of African descent '' and to respect the integrity and independence of "the free Negro States of Abyssinia, Liberia, Haiti, etc. '' Signed by Walters (President of the Pan-African Association), the Canadian Rev. Henry B. Brown (Vice-President), Williams (General Secretary) and Du Bois (Chairman of the Committee on the Address), the document contained the phrase "The problem of the Twentieth Century is the problem of the colour - line '', which Du Bois would use three years later in the "Forethought '' of his book The Souls of Black Folk.
In September, the delegates petitioned Queen Victoria through the British government to look into the treatment of Africans in South Africa and Rhodesia, including specified acts of injustice perpetrated by whites there.
The response eventually received by Sylvester - Williams on 17 January 1901 stated:
"Sir. I am directed by Mr Secretary Chamberlain to state that he has received the Queen 's commands to inform you that the Memorial of the Pan-African Conference requesting the situation of the native races in South Africa, has been laid before Her Majesty, and that she was graciously pleased to command him to return an answer to it on behalf of her government.
2. Mr. Chamberlain accordingly desires to assure the members of the Pan-African Conference that, in settling the lines on which the administration of the conquered territories is to be conducted, Her Majesty 's Government will not overlook the interests and welfare of the native races.
Days later, Victoria responded more personally, instructing her private secretary, Arthur Bigge, to write, which he did on 21 January - the day before the Queen died. Although the specific injustices in South Africa continued for some time, the conference brought them to the attention of the world.
The conference was reported in major British newspapers, including The Times and the Westminster Gazette, which commented that it "marks the initiation of a remarkable movement in history: the negro is at last awake to the potentialities of his future '' and quoted Williams as saying: "Our object now is to secure throughout the world the same facilities and privileges for the black as the white man enjoys. ''
Du Bois recorded in his report,
"On Monday, the 23d of July, the conference was invited to a five o'clock tea given by the Reform Cobden Club of London in honor of the delegates, at its headquarters in the St. Ermin Hotel, one of the most elegant in the city. Several members of Parliament and other notables were present. A splendid repast was served, and for two hours the delegates were delightfully entertained by the members and friends of the club.
At 5 o'clock on Tuesday a tea was given in our honor by the late Dr. Creighton, Lord Bishop of London, at his stately palace at Fulham, which has been occupied by the Bishops of London since the fifteenth century. On our arrival at the palace we found his Lordship and one or two other Bishops, with their wives and daughters, waiting to greet us. After a magnificent repast had been served we were conducted through the extensive grounds which surround the palace...
Through the kindness of Mr. Clark, a member of Parliament, we were invited to tea on Wednesday, at 5 o'clock, on the Terrace of Parliament. After the tea the male members of our party were admitted to the House of Commons, which is considered quite an honor; indeed, the visit to the House of Parliament and tea on the Terrace was the crowning honor of the series. Great credit is due our genial secretary, Mr. H. Sylvester Williams, for these social functions.
Miss Catherine Impey, of London, said she was glad to come in contact with the class of Negroes that composed the Pan-African Conference, and wished that the best and most cultured would visit England and meet her citizens of noble birth, that the adverse opinion which had been created against them in some quarters of late by their enemies might be changed. ''
After the conference ended, Williams set up branches of the Pan-African Association in Jamaica, Trinidad and the USA. He also launched a short - lived journal, The Pan-African, in October 1901. Although plans for the association to meet every two years failed, the 1900 conference encouraged the development of the Pan-African Congress.
As Tony Martin noted, "At least three of the Caribbean delegates later emigrated to Africa. George Christian of Dominica became a successful lawyer and legislator in the Gold Coast (Ghana) where he was a member of the Legislative Council from 1920 to 1940. Richard E. Phipps, the Trinidad barrister, returned home after the conference and emigrated to the Gold Coast in 1911. He remained there until his death around 1926. Williams himself lived in South Africa from 1903 to 1905, and died in Trinidad in 1911. ''
Under the Pan-African Congress banner, a series of gatherings subsequently took place -- in 1919 in Paris, 1921 in London, 1923 in London, 1927 in New York City, 1945 in Manchester, 1974 in Dar es Salaam and 1994 in Kampala -- to address the issues facing Africa as a result of European colonization.
A centenary commemorative event was held in London on 25 July 2000, attended by descendants of some of the delegates at the original conference, as well as descendants of delegates at the 1945 5th Pan-African Congress in Manchester.
|
steps in longitudinal bone growth at the epiphyseal plate | Epiphyseal plate - wikipedia
The epiphyseal plate (or epiphysial plate, physis, or growth plate) is a hyaline cartilage plate in the metaphysis at each end of a long bone. It is the part of a long bone where new bone growth takes place; that is, the whole bone is alive, with maintenance remodeling throughout its existing bone tissue, but the growth plate is the place where the long bone grows longer (adds length).
The plate is found in children and adolescents; in adults, who have stopped growing, the plate is replaced by an epiphyseal line. This replacement is known as epiphyseal closure.
Endochondral ossification is responsible for the initial bone development from cartilage in utero and in infants, and for the longitudinal growth of long bones at the epiphyseal plate. The plate 's chondrocytes are under constant division by mitosis. These daughter cells stack facing the epiphysis while the older cells are pushed towards the diaphysis. As the older chondrocytes degenerate, osteoblasts ossify the remains to form new bone. In puberty, increasing levels of estrogen, in both females and males, lead to increased apoptosis of chondrocytes in the epiphyseal plate. Depletion of chondrocytes due to apoptosis leads to less ossification; growth slows and later stops when the entire cartilage has been replaced by bone, leaving only a thin epiphyseal scar which later disappears.
The growth plate has a very specific morphology in having a zonal arrangement.
A mnemonic for remembering the names of the epiphyseal plate growth zones is "Real People Have Career Options, '' standing for: Resting zone, Proliferative zone, Hypertrophic cartilage zone, Calcified cartilage zone, Ossification zone. The growth plate is clinically relevant in that it is often the primary site for infection, metastasis, fractures and the effects of endocrine bone disorders.
Defects in the development and continued division of epiphyseal plates can lead to growth disorders. The most common defect is achondroplasia, where there is a defect in cartilage formation. Achondroplasia is the most common cause of dwarfism.
Salter -- Harris fractures are fractures involving epiphyseal plates and hence tend to interfere with growth, height or physiologic functions.
Osgood - Schlatter disease results from stress on the epiphyseal plate in the tibia, leading to excess bone growth and a painful lump at the knee.
John Hunter studied growing chickens. He observed bones grew at the ends and thus demonstrated the existence of the epiphyseal plates. Hunter is considered the "father of the growth plate ''.
|
which tiny insect can reach speeds of up to 37 mph | Fastest animals - wikipedia
This is a list of the fastest animals in the world, grouped by types of animal.
The fastest land animal is the cheetah, which has a recorded speed of 109.4 -- 120.7 km / h (68.0 -- 75.0 mph). The peregrine falcon is the fastest bird and the fastest member of the animal kingdom with a diving speed of 389 km / h (242 mph). The fastest animal in the sea is the black marlin, which has a recorded speed of 129 km / h (80 mph), although newer studies suggest that its top speed is only 58 mph and that the previously accepted figure of 70 + mph was in error.
When comparing animals across different classes, a different unit is used: body lengths per second. The fastest organism on earth, relative to body length, is the South Californian mite Paratarsotomus macropalpis, which has a speed of 322 body lengths per second. The equivalent speed for a human running as fast as this mite would be 1,300 mph (2,092 km / h). This is far in excess of the previous record holder, the Australian tiger beetle, Cicindela eburneola, the fastest insect in the world relative to body size, which has been recorded at 1.86 metres per second (6.7 km / h; 4.2 mph) or 171 body lengths per second. The cheetah, the fastest land mammal, scores at only 16 body lengths per second, while Anna 's hummingbird has the highest known length - specific velocity attained by any vertebrate.
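To make the length - specific comparison concrete, converting body lengths per second into an absolute speed is simple arithmetic. The short Python sketch below re-derives the figures quoted above; the human body length of 1.8 m is an illustrative assumption, not a value given in the text.

```python
# Convert a length-specific speed (body lengths per second) into an
# absolute speed, and sanity-check the article's human-equivalent figure.
# Assumption (not stated in the text): a human body length of ~1.8 m.

MPH_PER_MPS = 2.23694  # miles per hour in one metre per second

def absolute_speed_mps(body_lengths_per_s, body_length_m):
    """Return the absolute speed in metres per second."""
    return body_lengths_per_s * body_length_m

# Paratarsotomus macropalpis runs at 322 body lengths per second.
human_equiv = absolute_speed_mps(322, 1.8)
print(f"Human equivalent: {human_equiv:.0f} m/s = "
      f"{human_equiv * MPH_PER_MPS:.0f} mph")   # ~580 m/s, roughly 1,300 mph

# Cicindela eburneola: 1.86 m/s at 171 body lengths per second implies
# a body length of about 1.86 / 171 = 0.011 m (11 mm).
print(f"Tiger beetle body length: {1.86 / 171 * 1000:.1f} mm")
```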
For humans, the average maximum speed over the fastest 10 -- 20 m stretch of a sprint is about 45 km / h (28 mph). Compared to other land animals, humans are exceptionally capable of endurance, but exceptionally incapable of great speed.
In the absence of significant external factors, non-athletic humans tend to walk at about 1.4 m / s (5.0 km / h; 3.1 mph) and run at about 5.1 m / s (18 km / h; 11 mph). Although humans are capable of walking at speeds from nearly 0 m / s to upwards of 2.5 m / s (9.0 km / h; 5.6 mph) and running 1 mile (1.6 kilometers) in 6.5 minutes, humans typically choose to use only a small range within these speeds.
|
what is apple's intelligence voice assistance called | Siri - wikipedia
Siri (pronounced / ˈsɪəri /) is an intelligent personal assistant, part of Apple Inc. 's iOS, watchOS, macOS, and tvOS operating systems. The assistant uses voice queries and a natural language user interface to answer questions, make recommendations, and perform actions by delegating requests to a set of Internet services. The software adapts to users ' individual language usages, searches, and preferences, with continuing use. Returned results are individualized.
Siri is a spin - off from a project originally developed by the SRI International Artificial Intelligence Center. Its speech recognition engine is provided by Nuance Communications, and Siri uses advanced machine learning technologies to function. Its original American, British, and Australian voice actors recorded their respective voices around 2005, unaware of the recordings ' eventual usage in Siri. The voice assistant was released as an app for iOS in February 2010, and it was acquired by Apple two months later. Siri was then integrated into iPhone 4S at its release in October 2011. At that time, the separate app was also removed from the iOS App Store. Siri has become an integral part of Apple 's products, having been adapted into other hardware devices over the years, including newer iPhone models, as well as iPad, iPod Touch, Mac, and Apple TV.
Siri supports a wide range of user commands, including performing phone actions, checking basic information, scheduling events and reminders, handling device settings, searching the Internet, navigating areas, finding information on entertainment, and is able to engage with iOS - integrated apps. With the release of iOS 10 in 2016, Apple opened up limited third - party access to Siri, including third - party messaging apps, as well as payments, ride - sharing, and Internet calling apps. With the release of iOS 11, Apple has updated Siri 's voices for more clear, human voices, supports follow - up questions and language translation, and additional third - party actions.
Siri 's original release on iPhone 4S in 2011 received mixed reviews. It received praise for its voice recognition and contextual knowledge of user information, including calendar appointments, but was criticized for requiring stiff user commands and having a lack of flexibility. It was also criticized for lacking information on certain nearby places, and for its inability to understand certain English accents. In 2016 and 2017, a number of media reports indicated that Siri was lacking in innovation, particularly against new competing voice assistants from other technology companies. The reports blamed Siri 's limited set of features, "bad '' voice recognition, and undeveloped service integrations for causing trouble for Apple in the field of artificial intelligence and cloud - based services; the complaints were reportedly rooted in stifled development, caused by Apple 's prioritization of user privacy and by executive power struggles within the company.
Siri is a spin - out from the SRI International Artificial Intelligence Center, and is an offshoot of the DARPA - funded CALO project. It was co-founded by Dag Kittlaus, Adam Cheyer, and Tom Gruber.
Siri 's speech recognition engine is provided by Nuance Communications, a speech technology company. This was not officially acknowledged by Apple nor Nuance for years, until Nuance CEO Paul Ricci confirmed the information at a 2011 technology conference. The speech recognition system makes use of sophisticated machine learning techniques, including convolutional neural networks and long short - term memory.
Apple 's first notion of a digital personal assistant was a 1987 concept video called the "Knowledge Navigator ''.
The original American voice of Siri was provided by Susan Bennett in July 2005, unaware that it would eventually be used for the voice assistant. A report from The Verge in September 2013 about voice actors, their work, and machine learning developments, made hints that Allison Dufty was the voice behind Siri, though this was disproven when Dufty wrote on her website that she was "absolutely, positively NOT the voice of Siri ''. Citing growing pressure, Bennett revealed her role as Siri in October, and her claim was proven by Ed Primeau, an American audio forensics expert. Apple has never confirmed the information.
The original British male voice was provided by Jon Briggs, a former technology journalist. Having discovered that he was the voice of Siri by watching television, he first spoke only about his role in November 2011, also acknowledging his voice work was done "five or six years ago '' without knowing the recordings ' final usage form.
The original Australian voice was provided by Karen Jacobsen, a voice - over artist known in Australia for her work as the "GPS girl ''.
As part of an interview between all three voice actors and The Guardian, Briggs stated that "The original system was recorded for a US company called Scansoft, who were then bought by Nuance. Apple simply licensed it. ''
Siri was originally released as a stand - alone application for the iOS operating system in February 2010, and at the time, the developers were also intending to release Siri for Android and BlackBerry devices. Two months later, Apple acquired Siri. On October 4, 2011, Apple introduced the iPhone 4S with their implementation of a beta version of Siri. After the announcement, Apple removed the existing standalone Siri app from App Store. TechCrunch wrote that, despite the Siri app 's support for iPhone 4, its removal from App Store might also have had a financial aspect for the company, in providing an incentive for customers to upgrade devices. Third - party developer Steven Troughton - Smith, however, managed to port Siri to iPhone 4, though without being able to communicate with Apple 's servers. A few days later, Troughton - Smith, working with an anonymous person nicknamed "Chpwn '', managed to fully hack Siri, enabling its full functionalities on iPhone 4 and iPod Touch devices. Additionally, developers were also able to successfully create and distribute legal ports of Siri to any device capable of running iOS 5, though a proxy server was required for Apple server interaction.
Over the years, Apple has expanded the line of officially supported products, including newer iPhone models, as well as iPad support in June 2012, iPod Touch support in September 2012, Apple TV support, and the standalone Siri Remote, in September 2015, Mac support in September 2016 alongside macOS Sierra, and HomePod support in June 2017.
Apple offers a wide range of voice commands to interact with Siri.
Siri also offers numerous pre-programmed responses to amusing questions. Such questions include "What is the meaning of life? '', to which Siri may reply "All evidence to date suggests it 's chocolate ''; "Why am I here? '', to which it may reply "I do n't know. Frankly, I 've wondered that myself ''; and "Will you marry me? '', to which it may respond with "My End User Licensing Agreement does not cover marriage. My apologies ''.
Siri was initially limited to female voices; in June 2013, Apple announced that Siri would gain a gender option, adding a male voice counterpart.
In September 2014, Apple added the ability for users to speak "Hey, Siri '' to enable the assistant without the requirement of physically handling the device.
In September 2015, the "Hey Siri '' feature was updated to include individualized voice recognition, a presumed effort to prevent non-owner activation.
With the announcement of iOS 10 in June 2016, Apple opened up limited third - party developer access to Siri through a dedicated application programming interface (API). The API restricts usage of Siri to engaging with third - party messaging apps, payment apps, ride - sharing apps, and Internet calling apps.
In iOS 11, Siri supports more clear, human voices, is able to handle follow - up questions, supports language translation, and opens up to more third - party actions, including task management. Additionally, users are able to type to Siri, and a new, privacy - minded "on - device learning '' technique improves Siri 's suggestions by privately analyzing personal usage of different iOS applications.
Siri received mixed reviews during its "beta '' release as an integrated part of iPhone 4S in October 2011.
MG Siegler of TechCrunch wrote that Siri was "great '', adding that "The amount of times Siri has n't been able to understand and execute my request is astonishingly low ''. Siegler further praised the potential for Siri after losing the "beta '' tag, writing "Just imagine what will happen when Apple partners with other services to expand Siri further. And imagine when they have an API that any developer can use. This really could alter the mobile landscape ''.
Writing for The New York Times, David Pogue also praised Siri, writing that it "thinks for a few seconds, displays a beautifully formatted response and speaks in a calm female voice ''. Pogue praised Siri 's language understanding, stating that "It 's mind - blowing how inexact your utterances can be. Siri understands everything from, "What 's the weather going to be like in Tucson this weekend? '' to "Will I need an umbrella tonight? '' ". Pogue also equally praised Siri 's ability to understand context, using the example of "Once, I tried saying, "Make an appointment with Patrick for Thursday at 3. '' Siri responded, "Note that you already have an all - day appointment about ' Boston Trip ' for this Thursday. Shall I schedule this anyway? '' Unbelievable ".
Jacqui Cheng of Ars Technica wrote that Apple 's claims of what Siri could do were "bold '', and the early demos "even bolder ''. Cheng wrote that "Though Siri shows real potential, these kinds of high expectations are bound to be disappointed '', adding that "Apple makes clear that the product is still in beta -- an appropriate label, in our opinion ''. While praising its ability to "decipher our casual language '' and deliver "very specific and accurate result '', sometimes even providing additional information, Cheng noted and criticized its restrictions, particularly when the language moved away from "stiffer commands '' into more human interactions. One example included the phrase "Send a text to Jason, Clint, Sam, and Lee saying we 're having dinner at Silver Cloud '', which Siri interpreted as sending a message to Jason only, containing the text "Clint Sam and Lee saying we 're having dinner at Silver Cloud ''. She also noted a lack of proper editability, as saying "Edit message to say: we 're at Silver Cloud and you should come find us '', generated "Clint Sam and Lee saying we 're having dinner at Silver Cloud to say we 're at Silver Cloud and you should come find us ''.
Google 's executive chairman and former chief, Eric Schmidt, conceded that Siri could pose a "competitive threat '' to the company 's core search business.
Siri was criticized by pro-choice abortion organizations, including the American Civil Liberties Union (ACLU) and NARAL Pro-Choice America, after users found that Siri could not provide information about the location of birth control or abortion providers nearby, sometimes directing users to pro-life crisis pregnancy centers instead. Natalie Kerris, a spokeswoman for Apple, told The New York Times that "Our customers want to use Siri to find out all types of information, and while it can find a lot, it does n't always find what you want '', adding "These are not intentional omissions meant to offend anyone. It simply means that as we bring Siri from beta to a final product, we find places where we can do better, and we will in the coming weeks ''. In January 2016, Fast Company reported that, in then - recent months, Siri had begun to confuse the word "abortion '' with "adoption '', citing "health experts '' who stated that the situation had "gotten worse ''. However, at the time of Fast Company 's report, the situation had changed slightly, with Siri offering "a more comprehensive list of Planned Parenthood facilities '', although "Adoption clinics continue to pop up, but near the bottom of the list. ''
Siri has also not been well received by some English speakers with distinctive accents, including Scottish and Americans from Boston or the South.
In March 2012, Frank M. Fazio filed a class action lawsuit against Apple on behalf of the people who bought iPhone 4S and felt misled about the capabilities of Siri, alleging its failing to function as depicted in Apple 's Siri commercials. Fazio filed the lawsuit in California and claimed that the iPhone 4S was merely a "more expensive iPhone 4 '' if Siri fails to function as advertised. On July 22, 2013, U.S. District Judge Claudia Wilken in San Francisco dismissed the suit but said the plaintiffs could amend at a later time. The reason given for dismissal was that plaintiffs did not sufficiently document enough misrepresentations by Apple for the trial to proceed.
In June 2016, The Verge 's Sean O'Kane wrote about the then - upcoming major iOS 10 updates, with a headline stating "Siri 's big upgrades wo n't matter if it ca n't understand its users ''. O'Kane wrote that "What Apple did n't talk about was solving Siri 's biggest, most basic flaws: it 's still not very good at voice recognition, and when it gets it right, the results are often clunky. And these problems look even worse when you consider that Apple now has full - fledged competitors in this space: Amazon 's Alexa, Microsoft 's Cortana, and Google 's Assistant. '' Also writing for The Verge, Walt Mossberg had previously questioned Apple 's efforts in cloud - based services, writing:
(...) perhaps the biggest disappointment among Apple 's cloud - based services is the one it needs most today, right now: Siri. Before Apple bought it, Siri was on the road to being a robust digital assistant that could do many things, and integrate with many services -- even though it was being built by a startup with limited funds and people. After Apple bought Siri, the giant company seemed to treat it as a backwater, restricting it to doing only a few, slowly increasing number of tasks, like telling you the weather, sports scores, movie and restaurant listings, and controlling the device 's functions. Its unhappy founders have left Apple to build a new AI service called Viv. And, on too many occasions, Siri either gets things wrong, does n't know the answer, or ca n't verbalize it. Instead, it shows you a web search result, even when you 're not in a position to read it.
In October 2016, Bloomberg reported that Apple had plans to unify the teams behind its various cloud - based services, including a single campus and reorganized cloud computing resources aimed at improving the processing of Siri 's queries, though another report from The Verge in June 2017 once again called Siri 's voice recognition "bad ''.
In June 2017, The Wall Street Journal published an extensive report on the lack of innovation with Siri following competitors ' advancement in the field of voice assistants. Citing the announcement of Amazon 's Alexa as the moment when Apple workers ' anxiety levels "went up a notch '', the Journal wrote that "Today, Apple is playing catch - up in a product category it invented, increasing worries about whether the technology giant has lost some of its innovation edge ''. The report cites the primary causes as being Apple 's prioritization of user privacy, including randomly - tagged six - month Siri searches, whereas Google and Amazon keep data until actively discarded by the user, and executive power struggles with some employees leaving. Apple declined to comment on the report, while Eddy Cue said that "Apple often uses generic data rather than user data to train its systems and has the ability to improve Siri 's performance for individual users with information kept on their iPhones ''.
|
when white light is cast upon cyan toner. which color wavelengths are reflected | Cyan - wikipedia
Cyan (/ ˈsaɪ. ən / or / ˈsaɪ. æn /) is a greenish - blue color. It is evoked by light with a predominant wavelength between 490 and 520 nm, between the wavelengths of blue and green.
In the subtractive color system, or CMYK color model, whose primaries can be overlaid to produce all colors in paint and color printing, cyan is one of the primary colors, along with magenta, yellow, and black. In the additive color system, or RGB color model, used to create all the colors on a computer or television display, cyan is made by mixing equal amounts of green and blue light. Cyan is the complement of red; it can be made by the removal of red from white light. Mixing red light and cyan light at the right intensity will make white light.
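As a minimal illustration of the additive relationships described above, the following Python sketch treats colors as 8 - bit RGB triples (an sRGB - style convention assumed here for illustration, not anything defined by this article). It shows that mixing green and blue light, or removing red from white light, yields the same cyan value, and that adding red back to cyan recovers white.

```python
# Additive (RGB) view of cyan: mix full-intensity green and blue light,
# or equivalently remove the red component from white light.

WHITE = (255, 255, 255)
RED = (255, 0, 0)
GREEN = (0, 255, 0)
BLUE = (0, 0, 255)

def add_light(a, b):
    """Additive mixing of two RGB colors, clipped to the 8-bit range."""
    return tuple(min(255, x + y) for x, y in zip(a, b))

def remove_light(a, b):
    """Remove one light component from another (floored at zero)."""
    return tuple(max(0, x - y) for x, y in zip(a, b))

cyan_from_mixing = add_light(GREEN, BLUE)          # (0, 255, 255)
cyan_from_white = remove_light(WHITE, RED)         # (0, 255, 255)
white_again = add_light(cyan_from_white, RED)      # (255, 255, 255)

print(cyan_from_mixing, cyan_from_white, white_again)
```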
The web color cyan is synonymous with aqua. Other colors in the cyan color range are teal, turquoise, electric blue, aquamarine, and others described as blue - green.
In the RGB color model, used to make colors on computer and TV displays, cyan is created by the combination of green and blue light.
In the RGB color wheel of additive colors, cyan is midway between blue and green.
In the CMYK color model, used in color printing, cyan, magenta and yellow combined make black. In practice, since the inks are not perfect, some black ink is added.
Color printers today use magenta, cyan, and yellow ink to produce the full range of colors.
Cyan and red are complementary colors. They have strong contrast and harmony, and if combined, they make either white, black or grey, depending upon the color system used.
Cyan is the color of shallow water over a sandy beach. The water absorbs the color red from the sunlight, leaving a greenish - blue color.
The dome of the Tilla Kari Mosque in Samarkand, Uzbekistan (1660) is cyan. The color is widely used in architecture in Turkey and Central Asia.
The planet Uranus, seen from the Voyager 2 spacecraft. The cyan color comes from clouds of methane gas in the planet 's atmosphere.
A surgical team in Germany. Surgeons and nurses often wear gowns colored cyan, and operating rooms are often painted that color, because it is the complement of red and thus reduces the emotional response to blood red that occurs when doing surgery on internal organs.
Its name is derived from the Ancient Greek κύανος, transliterated kyanos, meaning "dark blue, dark blue enamel, lapis lazuli ''. It was formerly known as "cyan blue '' or cyan - blue, and its first recorded use as a color name in English was in 1879. Further origins of the color name can be traced back to a dye produced from the cornflower (Centaurea cyanus).
In most languages, ' cyan ' is not a basic color term and it phenomenologically appears as a greenish vibrant hue of blue to most English speakers. Reasons for why cyan is not linguistically acknowledged as a basic color term can be found in the frequent lack of distinction between blue and green in many languages.
The web color cyan is a secondary color in the RGB color model, which uses combinations of red, green and blue light to create all the colors on computer and television displays. In X11 colors, this color is called both cyan and aqua. In the HTML color list, this same color is called aqua.
The web colors are more vivid than the cyan used in the CMYK color system, and the web colors can not be accurately reproduced on a printed page. To reproduce the web color cyan in inks, it is necessary to add some white ink to the printer 's cyan, so when it is reproduced in printing, it is not a primary subtractive color. It is called aqua (a name in use since 1598) because it is a color commonly associated with water, such as the appearance of the water at a tropical beach.
Cyan is also one of the common inks used in four - color printing, along with magenta, yellow, and black; this set of colors is referred to as CMYK.
While both the additive secondary and the subtractive primary are called cyan, they can be substantially different from one another. Cyan printing ink can be more saturated or less saturated than the RGB secondary cyan, depending on what RGB color space and ink are considered.
Process cyan is not an RGB color, and there is no fixed conversion from CMYK primaries to RGB. Different formulations are used for printer 's ink, so there can be variations in the printed color that is pure cyan ink. This is because real - world subtractive (unlike additive) color mixing does not consistently produce the same result when mixing apparently identical colors, since the specific frequencies filtered out to produce that color affect how it interacts with other colors.
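For illustration only, the sketch below applies the common naive nominal CMYK - to - RGB formula. As the paragraph above notes, there is no fixed conversion, so real inks and color - managed workflows will give different, usually less saturated, results; the formula is a standard textbook approximation rather than something defined by this article.

```python
# Naive nominal CMYK -> RGB conversion (all inputs in the range 0.0-1.0).
# It ignores ink impurities and color profiles, which is precisely why a
# printed "process cyan" need not match the RGB secondary cyan on screen.

def cmyk_to_rgb(c, m, y, k):
    """Return an (R, G, B) tuple of 8-bit values for nominal CMYK inks."""
    r = round(255 * (1 - c) * (1 - k))
    g = round(255 * (1 - m) * (1 - k))
    b = round(255 * (1 - y) * (1 - k))
    return (r, g, b)

# Pure process cyan ink, under the naive model, maps to the RGB secondary:
print(cmyk_to_rgb(1.0, 0.0, 0.0, 0.0))   # (0, 255, 255)
# A measured swatch of real cyan ink would typically read as a somewhat
# less saturated blue-green than this nominal value.
```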
|
who sang the song there is a rose in spanish harlem | Spanish Harlem (song) - wikipedia
"Spanish Harlem '' is a song released by Ben E. King in 1960 on Atco Records, written by Jerry Leiber and Phil Spector, and produced by Jerry Leiber and Mike Stoller. During a 1968 interview, Leiber credited Stoller with the arrangement; similarly, in a 2009 radio interview with Leiber and Stoller on the Bob Edwards Weekend talk show, Jerry Leiber said that Stoller, while uncredited, had written the key instrumental introduction to the record. In the team 's autobiography from the same year, Hound Dog, Stoller himself remarks that he had created this "fill '' while doing a piano accompaniment when the song was presented to Ahmet Ertegun and Jerry Wexler at Atlantic Records, with Spector playing guitar and Leiber doing the vocal. "Since then, I 've never heard the song played without that musical figure. I presumed my contribution was seminal to the composition, but I also knew that Phil did n't want to share credit with anyone but Jerry, so I kept quiet. ''
It was originally released as the B - side to "First Taste of Love ''. The song was King 's first hit away from The Drifters, a group he had led for several years. With an arrangement by Stan Applebaum featuring Spanish guitar, marimba, drum - beats, soprano saxophone, strings, and a male chorus, it climbed the Billboard charts, eventually peaking at # 15 R&B and # 10 Pop. It was ranked # 358 on Rolling Stone 's list of the 500 Greatest Songs of All Time. King 's version was not a hit in the UK: instead, the original A-side, "First Taste of Love '', was the side played on Radio Luxembourg, and it charted at # 27. In 1987, after "Stand by Me '' made # 1, the song was re-released and charted at # 92.
|
less i know the better music video girl | The Less I Know the Better - wikipedia
"The Less I Know the Better '' is a song by the Australian rock band Tame Impala, released on 29 November 2015 as the third and final single from the group 's third studio album Currents. The song 's accompanying music video takes place in a high school where a basketball player suffers a broken heart.
The song peaked at number 23 on the Belgian Flanders singles chart, number 66 on the ARIA Singles Chart and number 195 on the French Singles Chart. In the U.S., the song charted at number 36 on Billboard 's Hot Rock Songs chart. Along with "Let It Happen '', the song was one of two from Tame Impala 's Currents to reach the top five in the Triple J Hottest 100, 2015, coming in at # 4.
Parker stated that "The Less I Know the Better '' originated from his love of disco.
According to Parker, recording the song became obsessive. He recalled performing over 1,057 partial vocal takes for either "The Less I Know the Better '' or the album 's second single, "' Cause I 'm a Man '', though he could not recall which.
A music video for the song was uploaded on 26 November 2015 to the group 's Vevo channel. The video follows a high school basketball player in love with a cheerleader, who soon begins a relationship with the team 's gorilla mascot (named Trevor). Lars Brandle of Billboard described it thus: "the clip is a strange tale of high school lust and jealousy (and King Kong) played out in a technicolor trip. '' The clip was filmed by the Spanish creative collective and directors known as "Canada ''. The cheerleader is portrayed by Spanish actress Laia Manzanares and the basketball player is portrayed by Spanish actor Albert Baró.
|
funeral for a friend casually dressed & deep in conversation songs | Casually Dressed & Deep in Conversation - wikipedia
Casually Dressed & Deep in Conversation is the debut album by Welsh rock band Funeral for a Friend. It was released on 13 October 2003 through Atlantic Records and was produced by Colin Richardson with co-production by the band themselves. The cover of the album as well as its subsequent singles is based on a small series of paintings by Belgian artist René Magritte titled: "The Lovers ''.
Upon its release the album received positive reviews from critics and was a commercial success. The album was included on NME 's album of the year list for 2003 and Rock Sound 's 101 Modern Classics list. It debuted at number 12 on the UK Albums Chart and was certified Gold for sales of 100,000 copies on 29 October 2004, a year after its release.
The album featured newly recorded versions of "Red is the New Black '' and "Juno '' (renamed "Juneau '') from their "Between Order And Model '' EP.
It also features remastered versions of "She Drove Me To Daytime Television '' and "Escape Artists Never Die '' from Four Ways To Scream Your Name (the latter is slightly remixed).
"Waking Up '' was a largely rewritten version of "Summer 's Dead and Buried '', which was one of the band 's earliest songs. "Storytelling '' had previously been recorded in studio after Between Order And Model came out. The band liked this demo, but this version had n't been released at the time (it can be found on the "Between Order And Model '' 10th anniversary edition, as well as the out of print Digital 3CD edition of Your History Is Mine). "Moments Forever Faded '' was also previously demoed (featured on the 3CD Digital edition of ' Your History Is Mine).
"Rookie of the Year '' was featured on the soundtracks of WWE Wrestlemania 21 and Burnout 3: Takedown. "Red Is The New Black '' was used in the Torchwood episode "They Keep Killing Suzie ''.
In 2012 the band released a live album titled Casually Dressed and Deep in Conversation: Live and in Full at Shepherds Bush Empire. The live album was recorded at the Shepherds Bush Empire in London in July 2010. The live performance 's setlist featured the debut album in its entirety, plus additional tracks from other albums in its encore. The album was announced in January 2011 with a teaser trailer, though no release date was given at the time. The live recording featured guitarist Darran Smith before he left the band in late 2010. He was replaced by Richard Boucher, who had been a member of hardcore punk band Hondo Maclean. The album was released in early April 2012 and was made available to purchase through the band 's online store. To celebrate the 10th anniversary of the album, it was repressed on double white 180g vinyl as a limited edition release for Record Store Day 2013.
The album received very positive reviews from the British music press upon its release. NME writer Dan Martin gave the album 8 out of 10 and praised it considerably, calling it a "fearsome debut '' and stating: "It would be unfair on everyone else to call ' Casually Dressed and Deep in Conversation ' the first great album to come out of all this. But what is true is that this fiery, dynamic hulk of a record will be the first to break properly out into daylight. '' It was also featured at number 17 in New Musical Express 's albums of the year list for 2003.
In 2012, British publication Rock Sound added Casually Dressed & Deep in Conversation to their "101 Modern Classics '' list, placed at number 33. They considered the album more of a classic than Sum 41 's All Killer No Filler and Botch 's We Are the Romans, stating that "' Casually Dressed... ' evidences a remarkable young band who, on their debut full - length release, were already the finished article, establishing a sound that continues to define an entire generation. ''
All tracks written by Funeral for a Friend.
|
who fired the first shot in the civil war | Battle of Fort Sumter - wikipedia
The Battle of Fort Sumter (April 12 -- 13, 1861) was the bombardment of Fort Sumter near Charleston, South Carolina by the Confederate States Army, and the return gunfire and subsequent surrender by the United States Army, that started the American Civil War. Following the declaration of secession by South Carolina on December 20, 1860, its authorities demanded that the U.S. Army abandon its facilities in Charleston Harbor. On December 26, Major Robert Anderson of the U.S. Army surreptitiously moved his small command from the vulnerable Fort Moultrie on Sullivan 's Island to Fort Sumter, a substantial fortress built on an island controlling the entrance of Charleston Harbor. An attempt by U.S. President James Buchanan to reinforce and resupply Anderson using the unarmed merchant ship Star of the West failed when it was fired upon by shore batteries on January 9, 1861. South Carolina authorities then seized all Federal property in the Charleston area except for Fort Sumter.
During the early months of 1861, the situation around Fort Sumter increasingly began to resemble a siege. In March, Brigadier General P.G.T. Beauregard, the first general officer of the newly formed Confederate States Army, was placed in command of Confederate forces in Charleston. Beauregard energetically directed the strengthening of batteries around Charleston harbor aimed at Fort Sumter. Conditions in the fort, growing increasingly dire due to shortages of men, food, and supplies, deteriorated as the Union soldiers rushed to complete the installation of additional guns.
The resupply of Fort Sumter became the first crisis of the administration of the newly inaugurated U.S. President Abraham Lincoln following his victory in the election of November 6, 1860. He notified the Governor of South Carolina, Francis W. Pickens that he was sending supply ships, which resulted in an ultimatum from the Confederate government for the immediate evacuation of Fort Sumter, which Major Anderson refused. Beginning at 4: 30 a.m. on April 12, the Confederates bombarded the fort from artillery batteries surrounding the harbor. Although the Union garrison returned fire, they were significantly outgunned and, after 34 hours, Major Anderson agreed to evacuate. There were no deaths on either side as a direct result of this engagement, although a gun explosion during the surrender ceremonies on April 14 caused two Union deaths.
Following the battle, there was widespread support from both North and South for further military action. Lincoln 's immediate call for 75,000 volunteers to suppress the rebellion resulted in an additional four southern states also declaring their secession and joining the Confederacy. The battle is usually recognized as the first battle that opened the American Civil War.
On December 20, 1860, shortly after Abraham Lincoln 's victory in the presidential election of 1860, South Carolina adopted an ordinance declaring its secession from the United States of America and, by February 1861, six more Southern states had adopted similar ordinances of secession. On February 7, the seven states adopted a provisional constitution for the Confederate States of America and established their temporary capital at Montgomery, Alabama. A February peace conference met in Washington, D.C., but failed to resolve the crisis. The remaining eight states declined pleas to join the Confederacy.
The seceding states seized numerous Federal properties within their boundaries, including buildings, arsenals, and fortifications. President James Buchanan protested but took no military action in response. Buchanan was concerned that an overt action could cause the remaining slave states to leave the Union, and while he acknowledged there was no constitutional authority for a state to secede, he could find no constitutional authority for him to act to prevent it.
Several forts had been constructed in Charleston 's harbor, including Fort Sumter and Fort Moultrie, which were not among the sites seized initially. Fort Moultrie on Sullivan Island was the oldest -- it was the site of fortifications since 1776 -- and was the headquarters of the U.S. Army garrison. However, it had been designed as a gun platform for defending the harbor, and its defenses against land - based attacks were feeble; during the crisis, the Charleston newspapers commented that sand dunes had piled up against the walls in such a way that the wall could easily be scaled. When the garrison began clearing away the dunes, the papers objected.
Major Robert Anderson of the 1st U.S. Artillery regiment had been appointed to command the Charleston garrison that fall because of rising tensions. A native of Kentucky, he was a protégé of Winfield Scott, the general in chief of the Army, and was thought more capable of handling a crisis than the garrison 's previous commander, Col. John L. Gardner, who was nearing retirement. Anderson had served an earlier tour of duty at Fort Moultrie and his father had been a defender of the fort (then called Fort Sullivan) during the American Revolutionary War. Throughout the fall, South Carolina authorities considered both secession and the expropriation of federal property in the harbor to be inevitable. As tensions mounted, the environment around the fort increasingly resembled a siege, to the point that the South Carolina authorities placed picket ships to observe the movements of the troops and threatened to attack when forty rifles were transferred to one of the harbor forts from the U.S. arsenal in the city.
In contrast to Moultrie, Fort Sumter dominated the entrance to Charleston Harbor and, though unfinished, was designed to be one of the strongest fortresses in the world. In the fall of 1860 work on the fort was nearly completed, but the fortress was thus far garrisoned by a single soldier, who functioned as a lighthouse keeper, and a small party of civilian construction workers. Under the cover of darkness on December 26, six days after South Carolina declared its secession, Anderson abandoned the indefensible Fort Moultrie, ordering its guns spiked and its gun carriages burned, and surreptitiously relocated his command by small boats to Sumter.
South Carolina authorities considered Anderson 's move to be a breach of faith. Governor Francis W. Pickens believed that President Buchanan had made implicit promises to him to keep Sumter unoccupied and suffered political embarrassment as a result of his trust in those promises. Buchanan, a former U.S. Secretary of State and diplomat, had used carefully crafted ambiguous language to Pickens, promising that he would not "immediately '' occupy it. From Major Anderson 's standpoint, he was merely moving his existing garrison troops from one of the locations under his command to another. He had received instructions from the War Department on December 11, written by Major General Don Carlos Buell, Assistant Adjutant General of the Army, approved by Secretary of War John B. Floyd:
... you are to hold possession of the forts in this harbor, and if attacked you are to defend yourself to the last extremity. The smallness of your force will not permit you, perhaps, to occupy more than one of the three forts, but an attack on or attempt to take possession of any one of them will be regarded as an act of hostility, and you may then put your command into either of them which you may deem most proper to increase its power of resistance. You are also authorized to take similar steps whenever you have tangible evidence of a design to proceed to a hostile act.
Governor Pickens therefore ordered that all remaining Federal positions except Fort Sumter were to be seized. State troops quickly occupied Fort Moultrie (capturing 56 guns), Fort Johnson on James Island, and the battery on Morris Island. On December 27, an assault force of 150 men seized the Union - occupied Castle Pinckney fortification, in the harbor close to downtown Charleston, capturing 24 guns and mortars without bloodshed. On December 30, the Federal arsenal in Charleston was captured, resulting in the acquisition of more than 22,000 weapons by the militia. The Confederates promptly made repairs at Fort Moultrie and dozens of new batteries and defense positions were constructed throughout the Charleston harbor area, including an unusual floating battery, and armed with weapons captured from the arsenal.
President Buchanan was surprised and dismayed at Anderson 's move to Sumter, unaware of the authorization Anderson had received. Nevertheless, he refused Pickens 's demand to evacuate Charleston harbor. Since the garrison 's supplies were limited, Buchanan authorized a relief expedition of supplies, small arms, and 200 soldiers. The original intent was to send the Navy sloop - of - war USS Brooklyn, but it was discovered that Confederates had sunk some derelict ships to block the shipping channel into Charleston and there was concern that Brooklyn had too deep a draft to negotiate the obstacles. Instead, it seemed prudent to send an unarmed civilian merchant ship, Star of the West, which might be perceived as less provocative to the Confederates. As Star of the West approached the harbor entrance on January 9, 1861, it was fired upon by a battery on Morris Island, which was staffed by cadets from The Citadel, among them William Stewart Simkins, who were the only trained artillerymen in the service of South Carolina at the time. Batteries from Fort Moultrie joined in and Star of the West was forced to withdraw. Major Anderson prepared his guns at Sumter when he heard the Confederate fire, but the secrecy of the operation had kept him unaware that a relief expedition was in progress and he chose not to start a general engagement.
In a letter delivered January 31, 1861, Governor Pickens demanded of President Buchanan that he surrender Fort Sumter because, "I regard that possession is not consistent with the dignity or safety of the State of South Carolina. ''
Conditions at the fort were difficult during the winter of 1860 -- 61. Rations were short and fuel for heat was limited. The garrison scrambled to complete the defenses as best they could. Fort Sumter was designed to mount 135 guns, operated by 650 officers and men, but construction had met with numerous delays for decades and budget cuts had left it only about 90 percent finished in early 1861. Anderson 's garrison consisted of just 85 men, primarily made up of two small artillery companies: Company E, 1st U.S. Artillery, commanded by Capt. Abner Doubleday, and Company H, commanded by Capt. Truman Seymour. There were six other officers present: Surgeon Samuel W. Crawford, First Lt. Theodore Talbot of Company H, First Lt. Jefferson C. Davis of the 1st U.S. Artillery, and Second Lt. Norman J. Hall of Company H. Capt. John G. Foster and First Lt. George W. Snyder of the Corps of Engineers were responsible for construction of the Charleston forts, but they reported to their headquarters in Washington, not directly to Anderson. The remaining personnel were 68 noncommissioned officers and privates, eight musicians, and 43 noncombatant workmen.
By April the Union troops had positioned 60 guns, but they had insufficient men to operate them all. The fort consisted of three levels of enclosed gun positions, or casemates. The second level of casemates was unoccupied. The majority of the guns were on the first level of casemates, on the upper level (the parapet or barbette positions), and on the center parade field. Unfortunately for the defenders, the original mission of the fort -- harbor defense -- meant that it was designed so that the guns were primarily aimed at the Atlantic, with little capability of protecting from artillery fire from the surrounding land or from infantry conducting an amphibious assault.
In March, Brig. Gen. P.G.T. Beauregard took command of South Carolina forces in Charleston; on March 1, President Jefferson Davis had appointed him the first general officer in the armed forces of the new Confederacy, specifically to take command of the siege. Beauregard made repeated demands that the Union force either surrender or withdraw and took steps to ensure that no supplies from the city were available to the defenders, whose food was running low. He also increased drills amongst the South Carolina militia, training them to operate the guns they manned. Major Anderson had been Beauregard 's artillery instructor at West Point; the two had been especially close, and Beauregard had become Anderson 's assistant after graduation. Both sides spent March drilling and improving their fortifications to the best of their abilities.
Beauregard, a trained military engineer, built up overwhelming strength to challenge Fort Sumter. Fort Moultrie had three 8 - inch Columbiads, two 8 - inch howitzers, five 32 - pound smoothbores, and four 24 - pounders. Outside of Moultrie were five 10 - inch mortars, two 32 - pounders, two 24 - pounders, and a 9 - inch Dahlgren smoothbore. The floating battery next to Fort Moultrie had two 42 - pounders and two 32 - pounders on a raft protected by iron shielding. Fort Johnson on James Island had one 24 - pounder and four 10 - inch mortars. At Cummings Point on Morris Island, the Confederates had emplaced seven 10 - inch mortars, two 42 - pounders, an English Blakely rifled cannon, and three 8 - inch Columbiads, the latter in the so - called Iron Battery, protected by a wooden shield faced with iron bars. About 6,000 men were available to man the artillery and to assault the fort, if necessary, including the local militia, young boys and older men.
On March 4, 1861, Abraham Lincoln was inaugurated as president. He was almost immediately confronted with the surprise information that Major Anderson was reporting that only six weeks of rations remained at Fort Sumter. A crisis similar to the one at Fort Sumter had emerged at Pensacola, Florida, where Confederates threatened another U.S. fortification -- Fort Pickens. Lincoln and his new cabinet struggled with the decisions of whether to reinforce the forts, and how. They were also concerned about whether to take actions that might start open hostilities and which side would be perceived as the aggressor as a result. Similar discussions and concerns were occurring in the Confederacy.
After the formation of the Confederate States of America in early February, there was some debate among the secessionists whether the capture of the fort was rightly a matter for South Carolina or for the newly declared national government in Montgomery, Alabama. South Carolina governor Pickens was among the states ' rights advocates who thought that all property in Charleston harbor had reverted to South Carolina upon that state 's secession as an independent commonwealth. This debate ran alongside another discussion about how aggressively the installations -- including Forts Sumter and Pickens -- should be obtained. President Davis, like his counterpart in Washington, preferred that his side not be seen as the aggressor. Both sides believed that the first side to use force would lose precious political support in the border states, whose allegiance was undetermined; before Lincoln 's inauguration on March 4, five states had voted against secession, including Virginia, and Lincoln openly offered to evacuate Fort Sumter if it would guarantee Virginia 's loyalty.
The South sent delegations to Washington, D.C., and offered to pay for the Federal properties and enter into a peace treaty with the United States. Lincoln rejected any negotiations with the Confederate agents because he did not consider the Confederacy a legitimate nation and making any treaty with it would be tantamount to recognition of it as a sovereign government. However, Secretary of State William H. Seward, who wished to give up Sumter for political reasons -- as a gesture of good will -- engaged in unauthorized and indirect negotiations that failed.
On April 4, as the supply situation on Sumter became critical, President Lincoln ordered a relief expedition, to be commanded by former naval captain (and future Assistant Secretary of the Navy) Gustavus V. Fox, who had proposed a plan for nighttime landings of smaller vessels than the Star of the West. Fox 's orders were to land at Sumter with supplies only, and if he was opposed by the Confederates, to respond with the U.S. Navy vessels following and to then land both supplies and men. This time, Maj. Anderson was informed of the impending expedition, although the arrival date was not revealed to him. On April 6, Lincoln notified Governor Pickens that "an attempt will be made to supply Fort Sumter with provisions only, and that if such attempt be not resisted, no effort to throw in men, arms, or ammunition will be made without further notice, (except) in case of an attack on the fort. ''
Lincoln 's notification had been made to the governor of South Carolina, not the new Confederate government, which Lincoln did not recognize. Pickens consulted with Beauregard, the local Confederate commander. Soon President Davis ordered Beauregard to repeat the demand for Sumter 's surrender, and, if it was refused, to reduce the fort before the relief expedition arrived. The Confederate cabinet, meeting in Montgomery, endorsed Davis 's order on April 9. Only Secretary of State Robert Toombs opposed this decision: he reportedly told Jefferson Davis the attack "will lose us every friend at the North. You will only strike a hornet 's nest... Legions now quiet will swarm out and sting us to death. It is unnecessary. It puts us in the wrong. It is fatal. ''
Beauregard dispatched aides -- Col. James Chesnut, Col. James A. Chisholm, and Capt. Stephen D. Lee -- to Fort Sumter on April 11 to issue the ultimatum. Anderson refused, although he reportedly commented, "I shall await the first shot, and if you do not batter us to pieces, we shall be starved out in a few days. '' The aides returned to Charleston and reported this comment to Beauregard. At 1 a.m. on April 12, the aides brought Anderson a message from Beauregard: "If you will state the time at which you will evacuate Fort Sumter, and agree in the meantime that you will not use your guns against us unless ours shall be employed against Fort Sumter, we will abstain from opening fire upon you. '' After consulting with his senior officers, Maj. Anderson replied that he would evacuate Sumter by noon, April 15, unless he received new orders from his government or additional supplies. Col. Chesnut considered this reply to be too conditional and wrote a response, which he handed to Anderson at 3: 20 a.m.: "Sir: by authority of Brigadier General Beauregard, commanding the Provisional Forces of the Confederate States, we have the honor to notify you that he will open the fire of his batteries on Fort Sumter in one hour from this time. '' Anderson escorted the officers back to their boat, shook hands with each one, and said "If we never meet in this world again, God grant that we may meet in the next. ''
At 4: 30 a.m. on April 12, 1861, Lt. Henry S. Farley, acting upon the command of Capt. George S. James, fired a single 10 - inch mortar round from Fort Johnson. (James had offered the first shot to Roger Pryor, a noted Virginia secessionist, who declined, saying, "I could not fire the first gun of the war. '') The shell exploded over Fort Sumter as a signal to open the general bombardment from 43 guns and mortars at Fort Moultrie, Fort Johnson, the floating battery, and Cummings Point. Under orders from Beauregard, the guns fired in a counterclockwise sequence around the harbor, with 2 minutes between each shot; Beauregard wanted to conserve ammunition, which he calculated would last for only 48 hours. Edmund Ruffin, another noted Virginia secessionist, had traveled to Charleston to be present for the beginning of the war, and fired one of the first shots at Sumter after the signal round, a 64 - pound shell from the Iron Battery at Cummings Point. The shelling of Fort Sumter from the batteries ringing the harbor awakened Charleston 's residents (including diarist Mary Chesnut), who rushed out into the predawn darkness to watch the shells arc over the water and burst inside the fort.
Major Anderson held his fire, awaiting daylight. His troops reported for roll call at 6 a.m. and then had breakfast. At 7 a.m., Capt. Abner Doubleday fired a shot at the Ironclad Battery at Cummings Point. He missed. Given the available manpower, Anderson could not take advantage of all of his 60 guns. He deliberately avoided using guns that were situated in the fort where casualties were most likely. The fort 's best cannons were mounted on the uppermost of its three tiers -- the barbette tier -- where his troops were most exposed to incoming fire from overhead. The fort had been designed to withstand a naval assault, and naval warships of the time did not mount guns capable of elevating to shoot over the walls of the fort. However, the land - based cannons manned by the Confederates were capable of high - arcing ballistic trajectories and could therefore fire at parts of the fort that would have been out of naval guns ' reach. Fort Sumter 's garrison could only safely fire the 21 working guns on the lowest level, which themselves, because of the limited elevation allowed by their embrasures, were largely incapable of delivering fire with trajectories high enough to seriously threaten Fort Moultrie. Moreover, although the Federals had moved as many of their supplies to Fort Sumter as they could manage, the fort was quite low on ammunition, and was nearly out at the end of the 34 - hour bombardment. A more immediate problem was the scarcity of cloth gunpowder cartridges or bags; only 700 were available at the beginning of the battle and workmen sewed frantically to create more, in some cases using socks from Anderson 's personal wardrobe. Because of the shortages, Anderson reduced his firing to only six guns: two aimed at Cummings Point, two at Fort Moultrie, and two at the Sullivan 's Island batteries.
Ships from Fox 's relief expedition began to arrive on April 12. Although Fox himself arrived at 3 a.m. on his steamer Baltic, most of the rest of his fleet was delayed until 6 p.m., and one of the two warships, USS Powhatan, never did arrive. Unbeknownst to Fox, it had been ordered to the relief of Fort Pickens in Florida. As landing craft were sent toward the fort with supplies, the artillery fire deterred them and they pulled back. Fox decided to wait until after dark and for the arrival of his warships. The next day, heavy seas made it difficult to load the small boats with men and supplies and Fox was left with the hope that Anderson and his men could hold out until dark on April 13.
Although Sumter was a masonry fort, there were wooden buildings inside for barracks and officer quarters. The Confederates targeted these with heated shot (cannonballs heated red hot in a furnace), starting fires that could prove more dangerous to the men than explosive artillery shells. At 7 p.m. on April 12, a rain shower extinguished the flames and at the same time the Union gunners stopped firing for the night. They slept fitfully, concerned about a potential infantry assault against the fort. During the darkness, the Confederates reduced their fire to four shots each hour. The following morning, the full bombardment resumed and the Confederates continued firing hot shot against the wooden buildings. By noon most of the wooden buildings in the fort and the main gate were on fire. The flames moved toward the main ammunition magazine, where 300 barrels of gunpowder were stored. The Union soldiers frantically tried to move the barrels to safety, but two - thirds were left when Anderson judged it was too dangerous and ordered the magazine doors closed. He ordered the remaining barrels thrown into the sea, but the tide kept floating them back together into groups, some of which were ignited by incoming artillery rounds. He also ordered his crews to redouble their efforts at firing, but the Confederates did the same, firing the hot shots almost exclusively. Many of the Confederate soldiers admired the courage and determination of the Yankees. When the fort had to pause its firing, the Confederates often cheered and applauded when it resumed, and they shouted epithets at some of the nearby Union ships for failing to come to the fort 's aid.
The fort 's central flagpole was knocked down at 1 p.m. on April 13, raising doubts among the Confederates about whether the fort was ready to surrender. Col. Louis Wigfall, a former U.S. senator, had been observing the battle and decided that this indicated the fort had had enough punishment. He commandeered a small boat and proceeded from Morris Island, waving a white handkerchief from his sword, dodging incoming rounds from Sullivan 's Island. Meeting with Major Anderson, he said, "You have defended your flag nobly, Sir. You have done all that it is possible to do, and General Beauregard wants to stop this fight. On what terms, Major Anderson, will you evacuate this fort? '' Anderson was encouraged that Wigfall had said "evacuate, '' not "surrender. '' He was low on ammunition, fires were burning out of control, and his men were hungry and exhausted. Satisfied that they had defended their post with honor, enduring over 3,000 Confederate rounds without losing a man, Anderson agreed to a truce at 2: 00 p.m.
Fort Sumter raised Wigfall 's white handkerchief on its flagpole as Wigfall departed in his small boat back to Morris Island, where he was hailed as a hero. The handkerchief was spotted in Charleston and a delegation of officers representing Beauregard -- Stephen D. Lee, Porcher Miles, a former mayor of Charleston, and Roger Pryor -- sailed to Sumter, unaware of Wigfall 's visit. Anderson was outraged when these officers disavowed Wigfall 's authority, telling him that the former senator had not spoken with Beauregard for two days, and he threatened to resume firing. Meanwhile, General Beauregard himself had finally seen the handkerchief and sent a second set of officers, offering essentially the same terms that Wigfall had presented, so the agreement was reinstated.
The Union garrison formally surrendered the fort to Confederate personnel at 2: 30 p.m., April 13. No one from either side was killed during the bombardment. During the 100 - gun salute to the U.S. flag -- Anderson 's one condition for withdrawal -- a pile of cartridges blew up from a spark, mortally wounding privates Daniel Hough and Edward Galloway, and seriously wounding the other four members of the gun crew; these were the first military fatalities of the war. The salute was stopped at fifty shots. Hough was buried in the Fort Sumter parade ground within two hours after the explosion. Galloway and Private George Fielding were sent to the hospital in Charleston, where Galloway died a few days later; Fielding was released after six weeks. The other wounded men and the remaining Union troops were placed aboard a Confederate steamer, the Isabel, where they spent the night and were transported the next morning to Fox 's relief ship Baltic, resting outside the harbor bar.
Anderson carried the Fort Sumter Flag with him North, where it became a widely known symbol of the battle, and rallying point for supporters of the Union. This inspired Frederic Edwin Church to paint Our Banner in the Sky, described as a "symbolic landscape embodying the stars and stripes. '' A chromolithograph was then created and sold to benefit the families of Union soldiers.
The bombardment of Fort Sumter was the first military action of the American Civil War. Following the surrender, Northerners rallied behind Lincoln 's call for all states to send troops to recapture the forts and preserve the Union. With the scale of the rebellion apparently small so far, Lincoln called for 75,000 volunteers for 90 days. Some Northern states filled their quotas quickly. There were so many volunteers in Ohio that within 16 days they could have met the full call for 75,000 men by themselves. Governors of the border states, however, were undiplomatic in their responses. For example, Gov. Claiborne Jackson wrote, "Not one man will the state of Missouri furnish to carry on any such unholy crusade '', and Gov. Beriah Magoffin wrote, "Kentucky will furnish no troops for the wicked purpose of subduing her sister Southern states. '' The governors of other states still in the Union were equally unsupportive. The call for 75,000 troops triggered four additional slave states to declare their secession from the Union and join the Confederacy. The ensuing war lasted four years, effectively ending in April 1865 with the surrender of General Robert E. Lee 's Army of Northern Virginia at Appomattox Court House.
Charleston Harbor was completely in Confederate hands for almost the entire four - year duration of the war, leaving a hole in the Union naval blockade. Union forces conducted major operations in 1862 and 1863 to capture Charleston, first overland on James Island (the Battle of Secessionville, June 1862), then by naval assault against Fort Sumter (the First Battle of Charleston Harbor, April 1863), then by seizing the Confederate artillery positions on Morris Island (beginning with the Second Battle of Fort Wagner, July 1863, and followed by a siege until September). After Union artillery had pounded Sumter to rubble, a final amphibious operation attempted to occupy the fort (the Second Battle of Fort Sumter, September 1863), but it was repulsed and no further attempts were made. The Confederates evacuated Fort Sumter and Charleston in February 1865 as Union Maj. Gen. William T. Sherman outflanked the city in the Carolinas Campaign. On April 14, 1865, four years to the day after lowering the Fort Sumter Flag in surrender, Robert Anderson (by then a major general, although ill and in retired status) returned to the ruined fort to raise the flag he had lowered in 1861.
Two of the cannons used at Fort Sumter were later presented to Louisiana State University by General William Tecumseh Sherman, who was president of the university before the war began.
The U.S. Post Office Department released the Fort Sumter Centennial issue as the first in the series of five stamps marking the Civil War Centennial on April 12, 1961, at the Charleston post office.
The stamp was designed by Charles R. Chickering. It illustrates a seacoast gun from Fort Sumter aimed by an officer in a typical uniform of the time. The background features palmetto leaves resembling bursting shells; as the state tree of South Carolina, the palmetto points to the state where Civil War hostilities began.
This stamp was produced from an engraving and printed by the rotary process in panes of fifty stamps each. The Postal Department authorized an initial printing of 120 million stamps.
|
according to classical jurists what is the difference between shari'a and fiqh | Sharia - wikipedia
Sharia, Sharia law, or Islamic law (Arabic: شريعة, IPA: [ʃaˈriːʕa]) is the religious law forming part of the Islamic tradition. It is derived from the religious precepts of Islam, particularly the Quran and the Hadith. In Arabic, the term sharīʿah refers to God 's immutable divine law and is contrasted with fiqh, which refers to its human scholarly interpretations. The manner of its application in modern times has been a subject of dispute between Muslim traditionalists and reformists.
Traditional theory of Islamic jurisprudence recognizes four sources of sharia: the Quran, sunnah (authentic hadith), qiyas (analogical reasoning), and ijma (juridical consensus). Different legal schools -- of which the most prominent are Hanafi, Maliki, Shafi'i, Hanbali and Jafari -- developed methodologies for deriving sharia rulings from scriptural sources using a process known as ijtihad. Traditional jurisprudence distinguishes two principal branches of law, ʿibādāt (rituals) and muʿāmalāt (social relations), which together comprise a wide range of topics. Its rulings assign actions to one of five categories: mandatory, recommended, neutral, abhorred, and prohibited. Thus, some areas of sharia overlap with the Western notion of law while others correspond more broadly to living life in accordance with God 's will.
Historically, sharia was interpreted by independent jurists (muftis). Their legal opinions (fatwas) were taken into account by ruler - appointed judges who presided over qāḍī 's courts, and by maẓālim courts, which were controlled by the ruler 's council and administered criminal law. Ottoman rulers achieved additional control over the legal system by promulgating their own legal code (qanun) and turning muftis into state employees. Non-Muslim (dhimmi) communities had legal autonomy, except in cases of interconfessional disputes, which fell under jurisdiction of qadi 's courts.
In the modern era, sharia - based criminal laws were widely replaced by statutes inspired by European models. Judicial procedures and legal education in the Muslim world were likewise brought in line with European practice. While the constitutions of most Muslim - majority states contain references to sharia, its classical rules were largely retained only in personal status (family) laws. Legislative bodies which codified these laws sought to modernize them without abandoning their foundations in traditional jurisprudence. The Islamic revival of the late 20th century brought along calls by Islamist movements for full implementation of sharia, including reinstatement of hudud corporal punishments, such as stoning. In some cases, this resulted in traditionalist legal reform, while other countries witnessed juridical reinterpretation of sharia advocated by progressive reformers.
The role of sharia has become a contested topic around the world. Attempts to impose it on non-Muslims have caused intercommunal violence in Nigeria and may have contributed to the breakup of Sudan. Some Muslim - minority countries in Asia (such as Israel), Africa, and Europe recognize the use of sharia - based family laws for their Muslim populations. Some jurisdictions in North America have passed bans on use of sharia, framed as restrictions on religious or foreign laws. There are ongoing debates as to whether sharia is compatible with secular forms of government, human rights, freedom of thought, and women 's rights.
The word sharīʿah is used by Arabic - speaking peoples of the Middle East to designate a prophetic religion in its totality. For example, sharīʿat Mūsā means law or religion of Moses and sharīʿatu - nā can mean "our religion '' in reference to any monotheistic faith. Within Islamic discourse, šarīʿah refers to religious regulations governing the lives of Muslims. For many Muslims, the word means simply "justice '', and they will consider any law that promotes justice and social welfare to conform to sharia.
Jan Michiel Otto distinguishes four senses conveyed by the term sharia in religious, legal and political discourse:
A related term al - qānūn al - islāmī (القانون الإسلامي, Islamic law), which was borrowed from European usage in the late 19th century, is used in the Muslim world to refer to a legal system in the context of a modern state.
The primary range of meanings of the Arabic word šarīʿah, derived from the root š - r - ʕ, is related to religion and religious law. The lexicographical tradition records two major areas of use where the word šarīʿah can appear without religious connotation. In texts evoking a pastoral or nomadic environment, the word and its derivatives refer to watering animals at a permanent water - hole or to the seashore, with special reference to animals who come there. Another area of use relates to notions of stretched or lengthy. This range of meanings is cognate with the Hebrew saraʿ and is likely to be the origin of the meaning "way '' or "path ''. Both these areas have been claimed to have given rise to aspects of the religious meaning.
Some scholars describe the word šarīʿah as an archaic Arabic word denoting "pathway to be followed '' (analogous to the Hebrew term Halakhah ("The Way to Go '')), or "path to the water hole '' and argue that its adoption as a metaphor for a divinely ordained way of life arises from the importance of water in an arid desert environment.
In the Quran, šarīʿah and its cognate širʿah occur once each, with the meaning "way '' or "path ''. The word šarīʿah was widely used by Arabic - speaking Jews during the Middle Ages, being the most common translation for the word torah in the 10th - century Arabic translation of the Torah by Saʿadya Gaon. A similar use of the term can be found in Christian writers. The Arabic expression Sharīʿat Allāh (شريعة الله "God 's Law '') is a common translation for תורת אלוהים (' God 's Law ' in Hebrew) and νόμος τοῦ θεοῦ (' God 's Law ' in Greek in the New Testament (Rom. 7: 22)). In Muslim literature, šarīʿah designates the laws or message of a prophet or God, in contrast to fiqh, which refers to a scholar 's interpretation thereof.
According to the traditional Muslim view, the emergence of Islamic jurisprudence (fiqh) goes back to the lifetime of the Islamic prophet Muhammad. In this view, his companions and followers took what he did and approved of as a model (sunnah) and transmitted this information to the succeeding generations in the form of hadith. These reports led first to informal discussion and then systematic legal thought, articulated with greatest success in the eighth and ninth centuries by the master jurists Abu Hanifah, Malik ibn Anas, Al - Shafi'i, and Ahmad ibn Hanbal, who are viewed as the founders of the Hanafi, Maliki, Shafiʿi, and Hanbali legal schools (madhhabs) of Sunni jurisprudence.
Modern historians have presented alternative theories of the formation of fiqh. At first Western scholars accepted the general outlines of the traditional account. An influential revisionist hypothesis was advanced by Ignac Goldziher and elaborated by Joseph Schacht in the mid-20th century. Schacht argued that the hadith reflected local practices of early Muslim communities and their chains of transmission were extended back to Muhammad 's companions at a later date, when it became accepted that legal norms must be formally grounded in scriptural sources. In his view, the real architect of Islamic jurisprudence was al - Shafi'i, who formulated this and other elements of classical legal theory in his work al - risala. Both these accounts gave rise to objections, and modern historians generally adopt more cautious, intermediate positions.
While the origin of hadith remains a subject of scholarly controversy, it is generally accepted that early Islamic jurisprudence developed out of a combination of administrative and popular practices shaped by the religious and ethical precepts of Islam. It continued some aspects of pre-Islamic laws and customs of the lands that fell under Muslim rule in the aftermath of the early conquests and modified other aspects, aiming to meet the practical need of establishing Islamic norms of behavior and adjudicating disputes arising in the early Muslim communities. Juristic thought gradually developed in study circles, where independent scholars met to learn from a local master and discuss religious topics. At first, these circles were fluid in their membership, but with time distinct regional legal schools crystallized around shared sets of methodological principles. As the boundaries of the schools became clearly delineated, the authority of their doctrinal tenets came to be vested in a master jurist from earlier times, who was henceforth identified as the school 's founder. In the course of the first three centuries of Islam, all legal schools came to accept the broad outlines of classical legal theory, according to which Islamic law had to be firmly rooted in the Quran and hadith.
Fiqh is traditionally divided into the fields of uṣūl al - fiqh (lit. the roots of fiqh), which studies the theoretical principles of jurisprudence, and furūʿ al - fiqh (lit. the branches of fiqh), which is devoted to elaboration of rulings on the basis of these principles.
Classical jurists held that human reason is a gift from God which should be exercised to its fullest capacity. However, they believed that use of reason alone is insufficient to distinguish right from wrong, and that rational argumentation must draw its content from the body of transcendental knowledge revealed in the Quran and through the sunnah of Muhammad.
Traditional theory of Islamic jurisprudence elaborates how scriptures should be interpreted from the standpoint of linguistics and rhetoric. It also comprises methods for establishing authenticity of hadith and for determining when the legal force of a scriptural passage is abrogated by a passage revealed at a later date. In addition to the Quran and sunnah, the classical theory of Sunni fiqh recognizes two other sources of law: juristic consensus (ijmaʿ) and analogical reasoning (qiyas). It therefore studies the application and limits of analogy, as well as the value and limits of consensus, along with other methodological principles, some of which are accepted by only certain legal schools. This interpretive apparatus is brought together under the rubric of ijtihad, which refers to a jurist 's exertion in an attempt to arrive at a ruling on a particular question. The theory of Twelver Shia jurisprudence parallels that of Sunni schools with some differences, such as recognition of reason (ʿaql) as a source of law in place of qiyas and extension of the notion of sunnah to include traditions of the imams.
The classical process of ijtihad combined these generally recognized principles with other methods, which were not adopted by all legal schools, such as istihsan (juristic preference), istislah (consideration of public interest) and istishab (presumption of continuity). A jurist who is qualified to practice ijtihad is known as a mujtahid. The use of independent reasoning to arrive at a ruling is contrasted with taqlid (imitation), which refers to following the rulings of a mujtahid. By the beginning of the 10th century, development of Sunni jurisprudence prompted leading jurists to state that the main legal questions had been addressed and the scope of ijtihad was gradually restricted. From the 18th century on, leading Muslim reformers began calling for abandonment of taqlid and renewed emphasis on ijtihad, which they saw as a return to the vitality of early Islamic jurisprudence.
Sharia rulings fall into one of five categories known as "the five decisions '' (al - aḥkām al - khamsa): mandatory (farḍ or wājib), recommended (mandūb or mustaḥabb), neutral (mubāḥ), reprehensible (makrūh), and forbidden (ḥarām). It is a sin or a crime to perform a forbidden action or not to perform a mandatory action. Reprehensible acts should be avoided, but they are not considered to be sinful or punishable in court. Avoiding reprehensible acts and performing recommended acts is held to be subject to reward in the afterlife, while allowed actions entail no judgement from God. Jurists disagree on whether the term ḥalāl covers the first three or the first four categories. The legal and moral verdict depends on whether the action is committed out of necessity (ḍarūra).
Maqāṣid (aims or purposes) of sharia and maṣlaḥa (welfare or public interest) are two related classical doctrines which have come to play an increasingly prominent role in modern times. They were first clearly articulated by al - Ghazali (d. 1111), who argued that maslaha was God 's general purpose in revealing the divine law, and that its specific aim was the preservation of five essentials of human well - being: religion, life, intellect, offspring, and property. Although most classical - era jurists recognized maslaha and maqasid as important legal principles, they held different views regarding the role they should play in Islamic law. Some jurists viewed them as auxiliary rationales constrained by scriptural sources and analogical reasoning. Others regarded them as an independent source of law, whose general principles could override specific inferences based on the letter of scripture. While the latter view was held by a minority of classical jurists, in modern times it came to be championed in different forms by prominent scholars who sought to adapt Islamic law to changing social conditions by drawing on the intellectual heritage of traditional jurisprudence. These scholars expanded the inventory of maqasid to include such aims of sharia as reform and women 's rights (Rashid Rida); justice and freedom (Mohammed al - Ghazali); and human dignity and rights (Yusuf al - Qaradawi).
The domain of furūʿ al - fiqh (lit. branches of fiqh) is traditionally divided into ʿibādāt (rituals or acts of worship) and muʿāmalāt (social relations). Many jurists further divided the body of substantive jurisprudence into "the four quarters '', called rituals, sales, marriage and injuries. Each of these terms figuratively stood for a variety of subjects. For example, the quarter of sales would encompass partnerships, guaranty, gifts, and bequests, among other topics. Juristic works were arranged as a sequence of such smaller topics, each called a "book '' (kitab). The special significance of ritual was marked by always placing its discussion at the start of the work.
Some historians distinguish a field of Islamic criminal law, which combines several traditional categories. Several crimes with scripturally prescribed punishments are known as hudud. Jurists developed various restrictions which in many cases made them virtually impossible to apply. Other crimes involving intentional bodily harm are judged according to a version of lex talionis that prescribes a punishment analogous to the crime (qisas), but the victims or their heirs may accept a monetary compensation (diya) or pardon the perpetrator instead; only diya is imposed for non-intentional harm. Other criminal cases belong to the category of taʿzīr, where the goal of punishment is correction or rehabilitation of the culprit and its form is largely left to the judge 's discretion. In practice, since early on in Islamic history, criminal cases were usually handled by ruler - administered courts or local police using procedures which were only loosely related to sharia.
The two major genres of furūʿ literature are the mukhtasar (concise summary of law) and the mabsut (extensive commentary). Mukhtasars were short specialized treatises or general overviews that could be used in a classroom or consulted by judges. A mabsut, which usually provided a commentary on a mukhtasar and could stretch to dozens of large volumes, recorded alternative rulings with their justifications, often accompanied by a proliferation of cases and conceptual distinctions. The terminology of juristic literature was conservative and tended to preserve notions which had lost their practical relevance. At the same time, the cycle of abridgement and commentary allowed jurists of each generation to articulate a modified body of law to meet changing social conditions. Other juristic genres include the qawāʿid (succinct formulas meant to aid the student remember general principles) and collections of fatwas by a particular scholar.
The main Sunni schools of law (madhhabs) are the Hanafi, Maliki, Shafi'i and Hanbali madhhabs. They emerged in the ninth and tenth centuries and by the twelfth century almost all jurists aligned themselves with a particular madhhab. These four schools recognize each other 's validity and they have interacted in legal debate over the centuries. Rulings of these schools are followed across the Muslim world without exclusive regional restrictions, but they each came to dominate in different parts of the world. For example, the Maliki school is predominant in North and West Africa; the Hanafi school in South and Central Asia; the Shafi'i school in Lower Egypt, East Africa, and Southeast Asia; and the Hanbali school in North and Central Arabia. The first centuries of Islam also witnessed a number of short - lived Sunni madhhabs. The Zahiri school, which is commonly identified as extinct, continues to exert influence over legal thought. The development of Shia legal schools occurred along the lines of theological differences and resulted in formation of the Twelver, Zaidi and Ismaili madhhabs, whose differences from Sunni legal schools are roughly of the same order as the differences among Sunni schools. The Ibadi legal school, distinct from Sunni and Shia madhhabs, is predominant in Oman.
The transformations of Islamic legal institutions in the modern era have had profound implications for the madhhab system. Legal practice in most of the Muslim world has come to be controlled by government policy and state law, so that the influence of the madhhabs beyond personal ritual practice depends on the status accorded to them within the national legal system. State law codification commonly utilized the methods of takhayyur (selection of rulings without restriction to a particular madhhab) and talfiq (combining parts of different rulings on the same question). Legal professionals trained in modern law schools have largely replaced traditional ulema as interpreters of the resulting laws. Global Islamic movements have at times drawn on different madhhabs and at other times placed greater focus on the scriptural sources rather than classical jurisprudence. The Hanbali school, with its particularly strict adherence to the Quran and hadith, has inspired conservative currents of direct scriptural interpretation by the Salafi and Wahhabi movements. Other currents, such as networks of Indonesian ulema and Islamic scholars residing in Muslim - minority countries, have advanced liberal interpretations of Islamic law without focusing on traditions of a particular madhhab.
From the 9th century onward, the power to interpret law in traditional Islamic societies was in the hands of the scholars (ulema). This separation of powers served to limit the range of actions available to the ruler, who could not easily decree or reinterpret law independently and expect the continued support of the community. Through succeeding centuries and empires, the balance between the ulema and the rulers shifted and reformed, but the balance of power was never decisively changed. Over the course of many centuries, imperial, political and technological change, including the Industrial Revolution and the French Revolution, ushered in an era of European world hegemony that gradually included the domination of many of the lands which had previously been ruled by Islamic empires. At the end of the Second World War, the European powers found themselves too weakened to maintain their empires as before. The wide variety of forms of government, systems of law, attitudes toward modernity and interpretations of sharia are a result of the ensuing drives for independence and modernity in the Muslim world.
Most Muslim - majority countries incorporate sharia at some level in their legal framework, with many calling it the highest law or the source of law of the land in their constitution. Most use sharia for personal law (marriage, divorce, domestic violence, child support, family law, inheritance and such matters). Elements of sharia are present, to varying extents, in the criminal justice system of many Muslim - majority countries. Saudi Arabia, Yemen, Brunei, Qatar, Pakistan, United Arab Emirates, Iraq, Iran, Afghanistan, Sudan and Mauritania apply the code predominantly or entirely while it applies in some parts of Indonesia.
Most Muslim - majority countries with sharia - prescribed hudud punishments in their legal code do not apply them routinely and use other punishments instead. The harshest sharia penalties such as stoning, beheading and other forms of the death penalty are enforced with varying levels of consistency.
Since the 1970s, most Muslim - majority countries have faced vociferous demands from their religious groups and political parties for immediate adoption of sharia as the sole, or at least primary, legal framework. Some moderates and liberal scholars within these Muslim countries have argued for limited expansion of sharia.
With the growing Muslim immigrant communities in Europe, there have been reports in some media of "no - go zones '' being established where sharia law reigns supreme. However, there is no evidence of the existence of "no - go zones '', and these allegations are sourced from anti-immigrant groups falsely equating low - income neighborhoods predominantly inhabited by immigrants as "no - go zones ''. In England, the Muslim Arbitration Tribunal makes use of sharia family law to settle disputes, though this limited adoption of sharia is controversial.
Sharia is enforced in Islamic nations in a number of ways, including mutaween (police enforcement) and hisbah. Mutaween (Arabic: المطوعين ، مطوعية muṭawwiʿīn, muṭawwiʿiyyah) are the government - authorized or government - recognized religious police (or clerical police) of Saudi Arabia. Elsewhere, enforcement of Islamic values in accordance with sharia is the responsibility of the Polisi Perda Syariah Islam in Aceh province of Indonesia, the Committee for the Propagation of Virtue and the Prevention of Vice (Gaza Strip) in parts of Palestine, and the Basiji Force in Iran.
Hisbah (Arabic: حسبة ḥisb(ah), or hisba) is a historic Islamic doctrine which means "accountability ''. Hisbah doctrine holds that it is a religious obligation of every Muslim that he or she report to the ruler (Sultan, government authorities) any wrong behavior of a neighbor or relative that violates sharia or insults Islam. The doctrine states that it is the divinely sanctioned duty of the ruler to intervene when such charges are made, and coercively "command right and forbid wrong '' in order to keep everything in order according to sharia. Al - Jama'a al - Islamiyya (considered a terrorist organization) suggests that enforcement of sharia under the Hisbah doctrine is the sacred duty of all Muslims, not just rulers.
The doctrine of Hisbah in Islam may allow a Muslim to accuse another Muslim, ex-Muslim or non-Muslim of beliefs or behavior that harm Islamic society. This principle has been used in countries such as Egypt and Pakistan to bring blasphemy charges against apostates. For example, in Egypt, sharia was enforced on the Muslim scholar Nasr Abu Zayd through the doctrine of Hisbah for apostasy. Similarly, in Nigeria, after twelve northern Muslim - majority states such as Kano adopted a sharia - based penal code between 1999 and 2000, hisbah became the allowed method of sharia enforcement, under which Muslim citizens could police compliance with the moral order based on sharia. In Aceh province of Indonesia, Islamic vigilante activists have invoked Hisbah doctrine to enforce sharia on fellow Muslims and to demand that non-Muslims respect sharia. Hisbah has been used in many Muslim - majority countries to enforce sharia restrictions on blasphemy and criticism of Islam on the internet and social media.
Sharia judicial proceedings have significant differences from other legal traditions, including those in both common law and civil law. Sharia courts traditionally do not rely on lawyers; plaintiffs and defendants represent themselves. Trials are conducted solely by the judge, and there is no jury system. There is no pre-trial discovery process, and no cross-examination of witnesses. Unlike common law, judges ' verdicts do not set binding precedents under the principle of stare decisis, and unlike civil law, sharia is left to the interpretation in each case and has no formally codified universal statutes.
The rules of evidence in sharia courts also maintain a distinctive custom of prioritizing oral testimony. Witnesses, in a sharia court system, must be faithful, that is, Muslim. Male Muslim witnesses are deemed more reliable than female Muslim witnesses, and non-Muslim witnesses are considered unreliable and receive no priority in a sharia court. In civil cases in some countries, a Muslim woman witness is considered to have half the worth and reliability of a Muslim man witness. In criminal cases, women witnesses are unacceptable in stricter, traditional interpretations of sharia, such as those found in the Hanbali madhhab.
A confession, an oath, or the oral testimony of Muslim witnesses are the main evidence admissible, in sharia courts, for hudud crimes, that is the religious crimes of adultery, fornication, rape, accusing someone of illicit sex but failing to prove it, apostasy, drinking intoxicants and theft. Testimony must be from at least two free Muslim male witnesses, or one Muslim male and two Muslim females, who are not related parties and who are of sound mind and reliable character. Testimony to establish the crime of adultery, fornication or rape must be from four Muslim male witnesses, with some fiqhs allowing substitution of up to three male with six female witnesses; however, at least one must be a Muslim male. Forensic evidence (i.e., fingerprints, ballistics, blood samples, DNA etc.) and other circumstantial evidence is likewise rejected in hudud cases in favor of eyewitnesses, a practice which can cause severe difficulties for women plaintiffs in rape cases.
Muslim jurists have debated whether and when coerced confession and coerced witnesses are acceptable. In the Ottoman Criminal Code, the executive officials were allowed to use torture only if the accused had a bad reputation and there were already indications of his guilt, such as when stolen goods were found in his house, if he was accused of grievous bodily harm by the victim or if a criminal during investigation mentioned him as an accomplice. Confessions obtained under torture could not be used as a ground for awarding punishment unless they were corroborated by circumstantial evidence.
Quran 2: 282 recommends written financial contracts with reliable witnesses, although there is dispute about equality of female testimony.
Marriage is solemnized as a written financial contract, in the presence of two Muslim male witnesses, and it includes a brideprice (Mahr) payable from a Muslim man to a Muslim woman. The brideprice is considered by a sharia court as a form of debt. Written contracts are paramount in sharia courts in matters of dispute that are debt - related, which includes marriage contracts. Written contracts in debt - related cases, when notarized by a judge, are deemed more reliable.
In commercial and civil contracts, such as those relating to exchange of merchandise, agreement to supply or purchase goods or property, and others, oral contracts and the testimony of Muslim witnesses take precedence over written contracts. The sharia system has held that written commercial contracts may be forged. Timur Kuran states that the treatment of written evidence in religious courts in Islamic regions created an incentive for opaque transactions, and the avoidance of written contracts in economic relations. This led to a continuation of a "largely oral contracting culture '' in Muslim nations and communities.
In lieu of written evidence, oaths are accorded much greater weight; rather than being used simply to guarantee the truth of ensuing testimony, they are themselves used as evidence. Plaintiffs lacking other evidence to support their claims may demand that defendants take an oath swearing their innocence; refusal thereof can result in a verdict for the plaintiff. Taking an oath for Muslims can be a grave act; one study of courts in Morocco found that lying litigants would often "maintain their testimony right up to the moment of oath - taking and then to stop, refuse the oath, and surrender the case. '' Accordingly, defendants are not routinely required to swear before testifying, which would risk casually profaning the Quran should the defendant commit perjury; instead oaths are a solemn procedure performed as a final part of the evidence process.
In classical jurisprudence, monetary compensation for bodily harm (diya or blood money) is assessed differently for different classes of victims. For example, for Muslim women the amount was half that assessed for a Muslim man. Diya for the death of a free Muslim man is twice as high as for Jewish and Christian victims according to the Maliki and Hanbali madhhabs and three times as high according to Shafi'i rules. Several legal schools assessed diya for Magians (majus) at one - fifteenth the value of a free Muslim male.
Modern countries which incorporate classical diya rules into their legal system treat them in different ways. The Pakistan Penal Code modernized the Hanafi doctrine by eliminating distinctions between Muslims and non-Muslims. In Iran, diya for non-Muslim victims professing one of the faiths protected under the constitution (Jews, Christians, and Zoroastrians) was made equal to diya for Muslims in 2004, though according to a 2006 US State Department report, the penal code still discriminates against other religious minorities and women. According to Human Rights Watch and the US State Department, in Saudi Arabia Jewish or Christian male plaintiffs are entitled to half the amount a Muslim male would receive, while for all other non-Muslim males the proportion is one - sixteenth.
A 2013 survey based on interviews of 38,000 Muslims, randomly selected from urban and rural parts in 39 countries using area probability designs, by the Pew Forum on Religion and Public Life found that a majority -- in some cases "overwhelming '' majority -- of Muslims in a number of countries support making sharia the law of the land, including Afghanistan (99 %), Iraq (91 %), Niger (86 %), Malaysia (86 %), Pakistan (84 %), Morocco (83 %), Bangladesh (82 %), Egypt (74 %), Indonesia (72 %), Jordan (71 %), Uganda (66 %), Ethiopia (65 %), Mali (63 %), Ghana (58 %), and Tunisia (56 %). In Muslim regions of Southern - Eastern Europe and Central Asia, the support is less than 50 %: Russia (42 %), Kyrgyzstan (35 %), Tajikistan (27 %), Kosovo (20 %), Albania (12 %), Turkey (12 %), Kazakhstan (10 %), Azerbaijan (8 %). Regarding specific averages, in South Asia, Sharia had 84 % favorability rating among the respondents; in Southeast Asia 77 %; in the Middle - East / North Africa 74 %; in Sub-Saharan Africa 64 %; in Southern - Eastern Europe 18 %; and in Central Asia 12 %.
However, while most of those who support implementation of sharia favor using it in family and property disputes, fewer supported application of severe punishments such as whippings and cutting off hands, and interpretations of some aspects differed widely. According to the Pew poll, among Muslims who support making sharia the law of the land, most do not believe that it should be applied to non-Muslims. In the Muslim - majority countries surveyed, this proportion varied between 74 % (of 74 % in Egypt) and 19 % (of 10 % in Kazakhstan), as a percentage of those who favored making sharia the law of the land. Polls demonstrate that for Egyptians, the ' Shariah ' is associated with notions of political, social and gender justice.
In 2008, Rowan Williams, the Archbishop of Canterbury, suggested that Islamic and Orthodox Jewish courts should be integrated into the British legal system alongside ecclesiastical courts to handle marriage and divorce, subject to agreement of all parties and strict requirements for protection of equal rights for women. His reference to the sharia sparked a controversy. Later that year, Nicholas Phillips, then Lord Chief Justice of England and Wales, stated that there was "no reason why sharia principles (...) should not be the basis for mediation or other forms of alternative dispute resolution. '' A 2008 YouGov poll in the United Kingdom found 40 % of Muslim students interviewed supported the introduction of sharia into British law for Muslims. Michael Broyde, professor of law at Emory University specializing in alternative dispute resolution and Jewish law, has argued that sharia courts can be integrated into the American religious arbitration system, provided that they adopt appropriate institutional requirements as American rabbinical courts have done.
Fundamentalists, wishing to return to basic Islamic religious values and law, have in some instances imposed harsh sharia punishments for crimes, curtailed civil rights and violated human rights. Extremists have used the Quran and their own particular version of sharia to justify acts of war and terror against Muslim as well as non-Muslim individuals and governments, using alternate, conflicting interpretations of sharia and their notions of jihad.
The sharia basis of arguments advocating terrorism is controversial. According to Bernard Lewis, "[a]t no time did the classical jurists offer any approval or legitimacy to what we nowadays call terrorism '' and the terrorist practice of suicide bombing "has no justification in terms of Islamic theology, law or tradition ''. In the modern era the notion of jihad has lost its jurisprudential relevance and instead gave rise to an ideological and political discourse. For al - Qaeda ideologues, in jihad all means are legitimate, including targeting Muslim non-combatants and the mass killing of non-Muslim civilians. According to these interpretations, Islam does not discriminate between military and civilian targets, but rather between Muslims and nonbelievers, whose blood can be legitimately spilled.
Some scholars of Islam, such as Yusuf al - Qaradawi and Sulaiman Al - Alwan, have supported suicide attacks against Israeli civilians, arguing that they are army reservists and hence should be considered as soldiers, while Hamid bin Abdallah al - Ali declared that suicide attacks in Chechnya were justified as a "sacrifice ''. Many prominent Islamic scholars, including al - Qaradawi himself, have issued condemnations of terrorism in general terms. For example, Abdul - Aziz ibn Abdullah Al ash - Sheikh, the Grand Mufti of Saudi Arabia, has stated that "terrorizing innocent people (...) constitute[s] a form of injustice that can not be tolerated by Islam '', while Muhammad Sayyid Tantawy, Grand Imam of al - Azhar and former Grand Mufti of Egypt, has stated that "attacking innocent people is not courageous; it is stupid and will be punished on the Day of Judgment ''.
In the Western world, sharia has been called a source of "hysteria '', "more controversial than ever '', the one aspect of Islam that inspires "particular dread ''. On the Internet, "dozens of self - styled counter-jihadis '' emerged to campaign against sharia law, describing it in strict interpretations resembling those of Salafi Muslims. Also, fear of sharia law and of "the ideology of extremism '' among Muslims reportedly spread to mainstream conservative Republicans in the United States. Former House Speaker Newt Gingrich won ovations calling for a federal ban on sharia law. The issue of "liberty versus Sharia '' was called a "momentous civilisational debate '' by right - wing pundit Diana West. In 2008 in Britain, the future Prime Minister (David Cameron) declared his opposition to "any expansion of Sharia law in the UK. '' In Germany, in 2014, the Interior Minister (Thomas de Maizière) told a newspaper (Bild), "Sharia law is not tolerated on German soil. ''
Some countries and jurisdictions have explicit bans on sharia law. In Canada, for example, sharia law has been explicitly banned in Quebec by a 2005 unanimous vote of the National Assembly, while the province of Ontario allows family law disputes to be arbitrated only under Ontario law. In the U.S., opponents of Sharia have sought to ban it from being considered in courts, where it has been routinely used alongside traditional Jewish and Catholic laws to decide legal, business, and family disputes subject to contracts drafted with reference to such laws, as long as they do not violate secular law or the U.S. constitution. After failing to gather support for a federal law making observing Sharia a felony punishable by up to 20 years in prison, anti-Sharia activists have focused on state legislatures. By 2014, bills aimed against use of Sharia have been introduced in 34 states and passed in 11. These bills have generally referred to banning foreign or religious law in order to thwart legal challenges.
According to Jan Michiel Otto, Professor of Law and Governance in Developing Countries at Leiden University, "[a]nthropological research shows that people in local communities often do not distinguish clearly whether and to what extent their norms and practices are based on local tradition, tribal custom, or religion. Those who adhere to a confrontational view of sharia tend to ascribe many undesirable practices to sharia and religion, overlooking custom and culture, even if high - ranking religious authorities have stated the opposite. ''
Ali Khan states that "constitutional orders founded on the principles of sharia are fully compatible with democracy, provided that religious minorities are protected and the incumbent Islamic leadership remains committed to the right to recall ''. Other scholars say sharia is not compatible with democracy, particularly where the country 's constitution demands separation of religion and the democratic state.
Courts in non-Muslim majority nations have generally ruled against the implementation of sharia, both in jurisprudence and within a community context, based on sharia 's religious background. In Muslim nations, sharia has wide support with some exceptions. For example, in 1998 the Constitutional Court of Turkey banned and dissolved Turkey 's Refah Party on the grounds that "Democracy is the antithesis of Sharia '', the latter of which Refah sought to introduce.
On appeal by Refah the European Court of Human Rights determined that "sharia is incompatible with the fundamental principles of democracy ''. Refah 's sharia - based notion of a "plurality of legal systems, grounded on religion '' was ruled to contravene the European Convention for the Protection of Human Rights and Fundamental Freedoms. It was determined that it would "do away with the State 's role as the guarantor of individual rights and freedoms '' and "infringe the principle of non-discrimination between individuals as regards their enjoyment of public freedoms, which is one of the fundamental principles of democracy ''.
Several major, predominantly Muslim countries have criticized the Universal Declaration of Human Rights (UDHR) for its perceived failure to take into account the cultural and religious context of non-Western countries. Iran declared in the UN assembly that UDHR was "a secular understanding of the Judeo - Christian tradition '', which could not be implemented by Muslims without trespassing the Islamic law. Islamic scholars and Islamist political parties consider ' universal human rights ' arguments as imposition of a non-Muslim culture on Muslim people, a disrespect of customary cultural practices and of Islam. In 1990, the Organisation of Islamic Cooperation, a group representing all Muslim majority nations, met in Cairo to respond to the UDHR, then adopted the Cairo Declaration on Human Rights in Islam.
Ann Elizabeth Mayer points to notable absences from the Cairo Declaration: provisions for democratic principles, protection for religious freedom, freedom of association and freedom of the press, as well as equality in rights and equal protection under the law. Article 24 of the Cairo declaration states that "all the rights and freedoms stipulated in this Declaration are subject to the Islamic shari'a ''.
In 2009, the journal Free Inquiry summarized the criticism of the Cairo Declaration in an editorial: "We are deeply concerned with the changes to the Universal Declaration of Human Rights by a coalition of Islamic states within the United Nations that wishes to prohibit any criticism of religion and would thus protect Islam 's limited view of human rights. In view of the conditions inside the Islamic Republic of Iran, Egypt, Pakistan, Saudi Arabia, the Sudan, Syria, Bangladesh, Iraq, and Afghanistan, we should expect that at the top of their human rights agenda would be to rectify the legal inequality of women, the suppression of political dissent, the curtailment of free expression, the persecution of ethnic minorities and religious dissenters -- in short, protecting their citizens from egregious human rights violations. Instead, they are worrying about protecting Islam. ''
H. Patrick Glenn states that sharia is structured around the concept of mutual obligations of a collective, and it considers individual human rights as potentially disruptive and unnecessary to its revealed code of mutual obligations. In giving priority to this religious collective rather than individual liberty, the Islamic law justifies the formal inequality of individuals (women, non-Islamic people). Bassam Tibi states that sharia framework and human rights are incompatible. Abdel al - Hakeem Carney, in contrast, states that sharia is misunderstood from a failure to distinguish sharia from siyasah (politics).
The Cairo Declaration on Human Rights in Islam conditions free speech with sharia law: Article 22 (a) of the Declaration states that "Everyone shall have the right to express his opinion freely in such manner as would not be contrary to the principles of the Shariah. ''
Blasphemy in Islam is any form of cursing, questioning or annoying God, Muhammad or anything considered sacred in Islam. The sharia of the various Islamic schools of jurisprudence specifies different punishments for blasphemy against Islam, by Muslims and non-Muslims, ranging from imprisonment, fines, flogging, and amputation to hanging or beheading. In some cases, sharia allows non-Muslims to escape death by converting and becoming a devout follower of Islam.
Blasphemy, as interpreted under sharia, is controversial. Muslim nations have petitioned the United Nations to limit "freedom of speech '' because "unrestricted and disrespectful opinion against Islam creates hatred ''. Other nations, in contrast, consider blasphemy laws as violation of "freedom of speech '', stating that freedom of expression is essential to empowering both Muslims and non-Muslims, and point to the abuse of blasphemy laws, where hundreds, often members of religious minorities, are being lynched, killed and incarcerated in Muslim nations, on flimsy accusations of insulting Islam.
According to the United Nations ' Universal Declaration of Human Rights, every human has the right to freedom of thought, conscience and religion; this right includes freedom to change their religion or belief. Sharia has been criticized for not recognizing this human right. According to scholars of Islamic law, the applicable rules for religious conversion under sharia are as follows:
According to sharia theory, conversion of disbelievers and non-Muslims to Islam is encouraged as a religious duty for all Muslims, while leaving Islam (apostasy), expressing contempt for Islam (blasphemy), and religious conversion of Muslims are prohibited. Not all Islamic scholars agree with this interpretation of sharia theory. In practice, as of 2011, 20 Islamic nations had laws declaring apostasy from Islam illegal and a criminal offense. Such laws are incompatible with the UDHR 's requirement of freedom of thought, conscience and religion. In a 2013 report based on an international survey of religious attitudes, more than 50 % of the Muslim population in 6 out of 49 Islamic countries supported the death penalty for any Muslim who leaves Islam (apostasy). However, the same survey showed that the majority of Muslims in the other 43 nations surveyed did not agree with this interpretation of sharia.
Some scholars claim sharia allows religious freedom because a sharia verse teaches, "there is no compulsion in religion. '' Other scholars claim sharia recognizes only one proper religion, considers apostasy as sin punishable with death, and members of other religions as kafir (infidel); or hold that sharia demands that all apostates and kafir must be put to death, enslaved or be ransomed. Yet other scholars suggest that sharia has become a product of human interpretation and inevitably leads to disagreements about the "precise contents of the Shari'a. '' In the end, then, what is being applied is not sharia, but what a particular group of clerics and government decide is sharia. It is these differing interpretations of sharia that explain why many Islamic countries have laws that restrict and criminalize apostasy, proselytism and their citizens ' freedom of conscience and religion.
Homosexual intercourse is illegal under sharia law, though the prescribed penalties differ from one school of jurisprudence to another. Some Muslim - majority countries, such as Iran and Saudi Arabia, impose the death penalty for acts perceived as sodomy and homosexual activity; in other Muslim - majority countries, such as Egypt, Iraq, and the Indonesian province of Aceh, same - sex sexual acts are illegal, and LGBT people regularly face violence and discrimination.
Many claim sharia law encourages domestic violence against women, when a husband suspects nushuz (disobedience, disloyalty, rebellion, ill conduct) in his wife. Other scholars claim wife beating, for nashizah, is not consistent with modern perspectives of the Quran.
One of the verses of the Quran relating to permissibility of domestic violence is Surah 4: 34. Sharia has been criticized for ignoring women 's rights in domestic abuse cases. Musawah, CEDAW, KAFA and other organizations have proposed ways to modify sharia - inspired laws to improve women 's rights in Islamic nations, including women 's rights in domestic abuse cases.
Shari'a is the basis for personal status laws in most Islamic majority nations. These personal status laws determine rights of women in matters of marriage, divorce and child custody. A 2011 UNICEF report concludes that sharia law provisions are discriminatory against women from a human rights perspective. In legal proceedings under sharia law, a woman 's testimony is worth half of a man 's before a court.
Except for Iran, Lebanon and Bahrain, which allow child marriages, the civil codes of Islamic majority countries do not allow child marriage of girls. However, under sharia personal status laws, sharia courts in all these nations have the power to override the civil code, and the religious courts permit girls less than 18 years old to marry. As of 2011, child marriages are common in a few Middle Eastern countries, accounting for 1 in 6 of all marriages in Egypt and 1 in 3 marriages in Yemen. UNICEF and other studies state that the top five nations in the world with the highest observed child marriage rates -- Niger (75 %), Chad (72 %), Mali (71 %), Bangladesh (64 %), Guinea (63 %) -- are Islamic - majority countries where the personal laws for Muslims are sharia - based. In his Cairo speech, President Obama spoke out against child marriage.
Rape is considered a crime in all countries, but sharia courts in Bahrain, Iraq, Jordan, Libya, Morocco, Syria and Tunisia in some cases allow a rapist to escape punishment by marrying his victim, while in other cases the victim who complains is often prosecuted with the crime of Zina (adultery).
Sharia grants women the right to inherit property from other family members, and these rights are detailed in the Quran. A woman 's inheritance is unequal and less than a man 's, and dependent on many factors. For instance, a daughter 's inheritance is usually half that of her brother 's.
Until the 20th century, Islamic law granted Muslim women certain legal rights, such as the right to own property received as Mahr (brideprice) at marriage. However, Islamic law does not grant non-Muslim women the same legal rights as the few it granted Muslim women. Sharia recognizes the basic inequality between master and female slave, between free women and slave women, between Believers and non-Believers, as well as their unequal rights. Sharia authorized the institution of slavery, using the words abd (slave) and the phrase ma malakat aymanukum ("that which your right hand owns '') to refer to women slaves, seized as captives of war. Under Islamic law, Muslim men could have sexual relations with female captives and slaves.
Slave women under sharia did not have a right to own property or to move freely. Sharia, in Islam 's history, provided a religious foundation for enslaving non-Muslim women (and men), but allowed for the manumission of slaves. However, manumission required that the non-Muslim slave first convert to Islam. A non-Muslim slave woman who bore children to her Muslim master became legally free upon her master 's death, and her children were presumed to be Muslims like their father, in Africa and elsewhere.
Starting with the 20th century, Western legal systems evolved to expand women 's rights, but women 's rights under Islamic law have remained tied to the Quran, hadiths and their fundamentalist interpretation as sharia by Islamic jurists.
Elements of Islamic law have parallels in Western legal systems. For example, the influence of Islam on the development of an international law of the sea can be discerned alongside that of Roman law.
Makdisi states Islamic law also parallels the legal scholastic system in the West, which gave rise to the modern university system. He writes that the triple status of faqih ("master of law ''), mufti ("professor of legal opinions '') and mudarris ("teacher ''), conferred by the classical Islamic legal degree, had its equivalents in the medieval Latin terms magister, professor and doctor, respectively, although they all came to be used synonymously in both East and West. Makdisi suggests that the medieval European doctorate, licentia docendi was modeled on the Islamic degree ijazat al - tadris wa - l - ifta ', of which it is a word - for - word translation, with the term ifta ' (issuing of fatwas) omitted. He also argues that these systems shared fundamental freedoms: the freedom of a professor to profess his personal opinion and the freedom of a student to pass judgement on what he is learning.
There are differences between Islamic and Western legal systems. For example, sharia classically recognizes only natural persons, and never developed the concept of a legal person, or corporation, i.e., a legal entity that limits the liabilities of its managers, shareholders, and employees; exists beyond the lifetimes of its founders; and that can own assets, sign contracts, and appear in court through representatives. Interest prohibitions imposed secondary costs by discouraging record keeping and delaying the introduction of modern accounting. Such factors, according to Timur Kuran, have played a significant role in retarding economic development in the Middle East.
|
who is honoured with bharat ratna padma bhushan and padma vibhushan | Padma Vibhushan - Wikipedia
The Padma Vibhushan is the second - highest civilian award of the Republic of India, preceded by Bharat Ratna and followed by Padma Bhushan. Instituted on 2 January 1954, the award is given for "exceptional and distinguished service '', without distinction of race, occupation, position, or sex. The award criteria include "service in any field including service rendered by Government servants '' including doctors and scientists, but excludes those working with the public sector undertakings. As of 2017, the award has been bestowed on 300 individuals, including twelve posthumous and 19 non-citizen recipients.
Between 1 May and 15 September of every year, the recommendations for the award are submitted to the Padma Awards Committee, constituted by the Prime Minister of India. The recommendations are received from all the state and union territory governments, the Ministries of the Government of India, the Bharat Ratna and previous Padma Vibhushan award recipients, the Institutes of Excellence, the Ministers, the Chief Ministers and the Governors of State, and the Members of Parliament, including private individuals. The committee later submits its recommendations to the Prime Minister and the President of India for further approval. The award recipients are announced on Republic Day.
The first recipients of the award were Satyendra Nath Bose, Nand Lal Bose, Zakir Hussain, Balasaheb Gangadhar Kher, Jigme Dorji Wangchuk, and V.K. Krishna Menon, who were honoured in 1954. The 1954 statutes did not allow posthumous awards but this was subsequently modified in the January 1955 statute. The "Padma Vibhushan '', along with other personal civil honours, was briefly suspended twice, from July 1977 to January 1980 and from August 1992 to December 1995. Some of the recipients have refused or returned their conferments. Vilayat Khan, Swami Ranganathananda, and Manikonda Chalapathi Rau refused the award, the family members of Lakshmi Chand Jain (2011) and Sharad Anantrao Joshi (2016) declined their posthumous conferments, and Baba Amte returned his 1986 conferment in 1991. On 25 January 2017, the award was conferred upon seven recipients; Murli Manohar Joshi, Sharad Pawar, Udupi Ramachandra Rao, Jaggi Vasudev, K.J. Yesudas and posthumously on Sunder Lal Patwa and P.A. Sangma.
On 2 January 1954, a press release was published from the office of the secretary to the President of India announcing the creation of two civilian awards -- Bharat Ratna, the highest civilian award, and the three - tier Padma Vibhushan, classified into "Pahela Warg '' (Class I), "Dusra Warg '' (Class II), and "Tisra Warg '' (Class III), which rank below the Bharat Ratna. On 15 January 1955, the Padma Vibhushan was reclassified into three different awards: the Padma Vibhushan, the highest of the three, followed by the Padma Bhushan and the Padma Shri.
The award, along with other personal civilian honours, was briefly suspended twice in its history: first in July 1977, when Morarji Desai was sworn in as the fourth Prime Minister of India and the awards were deemed "worthless and politicized ''. The suspension was rescinded on 25 January 1980 after Indira Gandhi became the Prime Minister. The civilian awards were suspended again in mid-1992, when two Public - Interest Litigations were filed in the High Courts of India, one in the Kerala High Court on 13 February 1992 by Balaji Raghavan and another in the Madhya Pradesh High Court (Indore Bench) on 24 August 1992 by Satya Pal Anand. Both petitioners questioned the civilian awards being "titles '' per an interpretation of Article 18 (1) of the Constitution of India. On 25 August 1992, the Madhya Pradesh High Court issued a notice temporarily suspending all civilian awards. A Special Division Bench of the Supreme Court of India was formed comprising five judges: A.M. Ahmadi C.J., Kuldip Singh, B.P. Jeevan Reddy, N.P. Singh, and S. Saghir Ahmad. On 15 December 1995, the Special Division Bench restored the awards and delivered a judgment that the "Bharat Ratna and Padma awards are not titles under Article 18 of the Constitution of India ''.
The award is conferred for "exceptional and distinguished service '', without distinction of race, occupation, position, or sex. The criteria include "service in any field including service rendered by Government servants '', but excludes those working with the public sector undertakings, with the exception of doctors and scientists. The 1954 statutes did not allow posthumous awards, but this was subsequently modified in the January 1955 statute; Aditya Nath Jha, Ghulam Mohammed Sadiq, and Vikram Sarabhai became the first recipients to be honoured posthumously in 1972.
The recommendations are received from all state and union territory governments, the Ministries of the Government of India, the Bharat Ratna and previous Padma Vibhushan award recipients, the Institutes of Excellence, the Ministers, the Chief Ministers, the Governors of State, and the Members of Parliament, including private individuals. The recommendations received between 1 May and 15 September of every year are submitted to the Padma Awards Committee, convened by the Prime Minister of India. The Awards Committee later submits its recommendations to the Prime Minister and the President of India for further approval.
The Padma Vibhushan award recipients are announced every year on Republic Day of India and registered in The Gazette of India -- a publication released weekly by the Department of Publication, Ministry of Urban Development used for official government notices. The conferral of the award is not considered official without its publication in the Gazette. Recipients whose awards have been revoked or restored, both of which actions require the authority of the President, are also registered in the Gazette and are required to surrender their medals when their names are struck from the register.
The original 1954 specifications of the award called for a circle made of gold gilt 1 3⁄8 inches (35 mm) in diameter, with rims on both sides. A centrally located lotus flower was embossed on the obverse side of the medal and the text "Padma Vibhushan '' written in Devanagari script was inscribed above the lotus along the upper edge of the medal. A floral wreath was embossed along the lower edge and a lotus wreath at the top along the upper edge. The Emblem of India was placed in the centre of the reverse side with the text "Desh Seva '' in Devanagari Script on the lower edge. The medal was suspended by a pink riband 1 1⁄4 inches (32 mm) in width divided into two equal segments by a white vertical line.
A year later, the design was modified. The current decoration is a circular - shaped bronze toned medallion 1 3⁄4 inches (44 mm) in diameter and 1⁄8 inch (3.2 mm) thick. The centrally placed pattern made of outer lines of a square of 1 3⁄16 inches (30 mm) side is embossed with a knob carved within each of the outer angles of the pattern. A raised circular space of 1 1⁄16 inches (27 mm) in diameter is placed at the centre of the decoration. A centrally located lotus flower is embossed on the obverse side of the medal and the text "Padma '' written in Devanagari script is placed above and the text "Vibhushan '' is placed below the lotus. The Emblem of India is placed in the centre of the reverse side with the national motto of India, "Satyameva Jayate '' (Truth alone triumphs), in Devanagari Script, inscribed on the lower edge. The rim, the edges, and all embossing on either side is of white gold with the text "Padma Vibhushan '' of silver gilt. The medal is suspended by a pink riband 1 1⁄4 inches (32 mm) in width.
The medal is ranked fourth in the order of precedence of wearing of medals and decorations. The medals are produced at Alipore Mint, Kolkata along with the other civilian and military awards like Bharat Ratna, Padma Bhushan, Padma Shri, and Param Veer Chakra.
The first recipients of the Padma Vibhushan were Satyendra Nath Bose, Nandalal Bose, Zakir Husain, Balasaheb Gangadhar Kher, V.K. Krishna Menon, and Jigme Dorji Wangchuk, who were honoured in 1954. As of 2017, the award has been bestowed on 300 individuals, including 12 posthumous and 19 non-citizen recipients. Some of the conferments have been refused or returned by the recipients; Vilayat Khan, Swami Ranganathananda, and Manikonda Chalapathi Rau refused the award; the family members of Lakshmi Chand Jain (2011) and Sharad Anantrao Joshi (2016) declined their posthumous conferments, and Baba Amte returned his 1986 conferment in 1991. On 25 January 2017, the award was conferred upon seven recipients; Murli Manohar Joshi, Sharad Pawar, Udupi Ramachandra Rao, Jaggi Vasudev, K.J. Yesudas and posthumously on Sunder Lal Patwa and P.A. Sangma.
|
where does the colorado river begin and end | Colorado River - wikipedia
The Colorado River is one of the principal rivers of the Southwestern United States and northern Mexico (the other being the Rio Grande). The 1,450 - mile - long (2,330 km) river drains an expansive, arid watershed that encompasses parts of seven U.S. and two Mexican states. Starting in the central Rocky Mountains in the U.S., the river flows generally southwest across the Colorado Plateau and through the Grand Canyon before reaching Lake Mead on the Arizona -- Nevada border, where it turns south toward the international border. After entering Mexico, the Colorado approaches the mostly dry Colorado River Delta at the tip of the Gulf of California between Baja California and Sonora.
Known for its dramatic canyons, whitewater rapids, and eleven U.S. National Parks, the Colorado River system is a vital source of water for 40 million people in southwestern North America. The river and its tributaries are controlled by an extensive system of dams, reservoirs, and aqueducts, which in most years divert its entire flow for agricultural irrigation and domestic water supply. Its large flow and steep gradient are used for generating hydroelectric power, and its major dams regulate peaking power demands in much of the Intermountain West. Intensive water consumption has dried up the lower 100 miles (160 km) of the river, which has rarely reached the sea since the 1960s.
Beginning with small bands of nomadic hunter - gatherers, Native Americans have inhabited the Colorado River basin for at least 8,000 years. Between 2,000 and 1,000 years ago, the river and its tributaries fostered large agricultural civilizations -- some of the most sophisticated indigenous cultures in North America -- which eventually faded due to a combination of severe drought and poor land use practices. Most native peoples that inhabit the basin today are descended from other groups that settled in the region beginning about 1,000 years ago. Europeans first entered the Colorado Basin in the 16th century, when explorers from Spain began mapping and claiming the area, which later became part of Mexico upon its independence in 1821. Early contact between Europeans and Native Americans was generally limited to the fur trade in the headwaters and sporadic trade interactions along the lower river.
After most of the Colorado River basin became part of the U.S. in 1846, the bulk of the river 's course was still the subject of myths and speculation. Several expeditions charted the Colorado in the mid-19th century -- one of which, led by John Wesley Powell, was the first to run the rapids of the Grand Canyon. American explorers collected valuable information that was later used to develop the river for navigation and water supply. Large - scale settlement of the lower basin began in the mid - to late - 19th century, with steamboats providing transportation from the Gulf of California to landings along the river that linked to wagon roads to the interior. Lesser numbers settled in the upper basin, which was the scene of major gold strikes in the 1860s and 1870s.
Large engineering works began around the start of the 20th century, with major guidelines established in a series of international and U.S. interstate treaties known as the "Law of the River ''. The U.S. federal government was the main driving force behind the construction of dams and aqueducts, although many state and local water agencies were also involved. Most of the major dams were built between 1910 and 1970; the system keystone, Hoover Dam, was completed in 1935. The Colorado is now considered among the most controlled and litigated rivers in the world, with every drop of its water fully allocated.
The environmental movement in the American Southwest has opposed the damming and diversion of the Colorado River system because of detrimental effects on the ecology and natural beauty of the river and its tributaries. During the construction of Glen Canyon Dam, environmental organizations vowed to block any further development of the river, and a number of later dam and aqueduct proposals were defeated by citizen opposition. As demands for Colorado River water continue to rise, the level of human development and control of the river continues to generate controversy.
The Colorado begins at La Poudre Pass in the Southern Rocky Mountains of Colorado, at just under 2 miles (3 km) above sea level. After a short run south, the river turns west below Grand Lake, the largest natural lake in the state. For the first 250 miles (400 km) of its course, the Colorado carves its way through the mountainous Western Slope, a sparsely populated region defined by the portion of the state west of the Continental Divide. As it flows southwest, it gains strength from many small tributaries, as well as larger ones including the Blue, Eagle and Roaring Fork rivers. After passing through De Beque Canyon, the Colorado emerges from the Rockies into the Grand Valley, a major farming and ranching region where it meets one of its largest tributaries, the Gunnison River, at Grand Junction. Most of the upper river is a swift whitewater stream ranging from 200 to 500 feet (60 to 150 m) wide, the depth ranging from 6 to 30 feet (2 to 9 m), with a few notable exceptions, such as the Blackrocks reach where the river is nearly 100 feet (30 m) deep. In a few areas, such as the marshy Kawuneeche Valley near the headwaters and the Grand Valley, it exhibits braided characteristics.
Arcing northwest, the Colorado begins to cut across the eponymous Colorado Plateau, a vast area of high desert centered at the Four Corners of the southwestern United States. Here, the climate becomes significantly drier than that in the Rocky Mountains, and the river becomes entrenched in progressively deeper gorges of bare rock, beginning with Ruby Canyon and then Westwater Canyon as it enters Utah, now once again heading southwest. Farther downstream it receives the Dolores River and defines the southern border of Arches National Park, before passing Moab and flowing through "The Portal '', where it exits the Moab Valley between a pair of 1,000 - foot (300 m) sandstone cliffs.
In Utah, the Colorado flows primarily through the "slickrock '' country, which is characterized by its narrow canyons and unique "folds '' created by the tilting of sedimentary rock layers along faults. This is one of the most inaccessible regions of the continental United States. Below the confluence with the Green River, its largest tributary, in Canyonlands National Park, the Colorado enters Cataract Canyon, named for its dangerous rapids, and then Glen Canyon, known for its arches and erosion - sculpted Navajo sandstone formations. Here, the San Juan River, carrying runoff from the southern slope of Colorado 's San Juan Mountains, joins the Colorado from the east. The Colorado then enters northern Arizona, where since the 1960s Glen Canyon Dam near Page has flooded the Glen Canyon reach of the river, forming Lake Powell for water supply and hydroelectricity generation.
In Arizona, the river passes Lee 's Ferry, an important crossing for early explorers and settlers and since the early 20th century the principal point where Colorado River flows are measured for apportionment to the seven U.S. and two Mexican states in the basin. Downstream, the river enters Marble Canyon, the beginning of the Grand Canyon, passing under the Navajo Bridges on a now southward course. Below the confluence with the Little Colorado River, the river swings west into Granite Gorge, the most dramatic portion of the Grand Canyon, where the river cuts up to one mile (1.6 km) into the Colorado Plateau, exposing some of the oldest visible rocks on Earth, dating as long ago as 2 billion years. The 277 miles (446 km) of the river that flow through the Grand Canyon are largely encompassed by Grand Canyon National Park and are known for their difficult whitewater, separated by pools that reach up to 110 feet (34 m) in depth.
At the lower end of Grand Canyon, the Colorado widens into Lake Mead, the largest reservoir in the continental United States, formed by Hoover Dam on the border of Arizona and Nevada. Situated southeast of metropolitan Las Vegas, the dam is an integral component for management of the Colorado River, controlling floods and storing water for farms and cities in the lower Colorado River basin. Below the dam the river passes under the Mike O'Callaghan -- Pat Tillman Memorial Bridge -- which at nearly 900 feet (270 m) above the water is the highest concrete arch bridge in the Western Hemisphere -- and then turns due south towards Mexico, defining the Arizona -- Nevada and Arizona -- California borders.
After leaving the confines of the Black Canyon, the river emerges from the Colorado Plateau into the Lower Colorado River Valley (LCRV), a desert region dependent on irrigation agriculture and tourism and also home to several major Indian reservations. The river widens here to a broad, moderately deep waterway averaging 500 to 1,000 feet (150 to 300 m) wide and reaching up to 1⁄4 mile (400 m) across, with depths ranging from 8 to 60 feet (2 to 20 m). Before channelization of the Colorado in the 20th century, the lower river was subject to frequent course changes caused by seasonal flow variations. Joseph C. Ives, who surveyed the lower river in 1861, wrote that "the shifting of the channel, the banks, the islands, the bars is so continual and rapid that a detailed description, derived from the experiences of one trip, would be found incorrect, not only during the subsequent year, but perhaps in the course of a week, or even a day. ''
The LCRV is one of the most densely populated areas along the river, and there are numerous towns including Bullhead City, Arizona, Needles, California, and Lake Havasu City, Arizona. Here, many diversions draw from the river, providing water for both local uses and distant regions including the Salt River Valley of Arizona and metropolitan Southern California. The last major U.S. diversion is at Imperial Dam, where over 90 percent of the river 's remaining flow is moved into the All - American Canal to irrigate California 's Imperial Valley, the most productive winter agricultural region in the United States.
Below Imperial Dam, only a small portion of the Colorado River makes it beyond Yuma, Arizona, and the confluence with the intermittent Gila River -- which carries runoff from western New Mexico and most of Arizona -- before defining about 24 miles (39 km) of the Mexico -- United States border. At Morelos Dam, the entire remaining flow of the Colorado is diverted to irrigate the Mexicali Valley, among Mexico 's most fertile agricultural lands. Below San Luis Río Colorado, the Colorado passes entirely into Mexico, defining the Baja California -- Sonora border; in most years, the stretch of the Colorado between here and the Gulf of California is dry or a trickle formed by irrigation return flows. The Hardy River provides most of the flow into the Colorado River Delta, a vast alluvial floodplain covering about 3,000 square miles (7,800 km²) of northwestern Mexico. A large estuary is formed here before the Colorado empties into the Gulf about 75 miles (120 km) south of Yuma. Before 20th - century development dewatered the lower Colorado, a major tidal bore was present in the delta and estuary; the first historical record was made by the Croatian missionary in Spanish service Father Ferdinand Konščak on July 18, 1746. During spring tide conditions, the tidal bore -- locally called El Burro -- formed in the estuary about Montague Island in Baja California and propagated upstream.
The Colorado is joined by over 25 significant tributaries, of which the Green River is the largest by both length and discharge. The Green takes drainage from the Wind River Range of west - central Wyoming, from Utah 's Uinta Mountains, and from the Rockies of northwestern Colorado. The Gila River is the second longest and drains a greater area than the Green, but has a significantly lower flow because of a more arid climate and larger diversions for irrigation and cities. Both the Gunnison and San Juan rivers, which derive most of their water from Rocky Mountains snowmelt, contribute more water than the Gila did naturally.
In its natural state, the Colorado River poured about 16.3 million acre feet (20.1 km³) into the Gulf of California each year, amounting to an average flow rate of 22,500 cubic feet per second (640 m³/s). Its flow regime was not at all steady -- indeed, "prior to the construction of federal dams and reservoirs, the Colorado was a river of extremes like no other in the United States. '' Once, the river reached peaks of more than 100,000 cubic feet per second (2,800 m³/s) in the summer and low flows of less than 2,500 cubic feet per second (71 m³/s) in the winter annually. At Topock, Arizona, about 300 miles (480 km) upstream from the Gulf, a maximum historical discharge of 384,000 cubic feet per second (10,900 m³/s) was recorded in 1884 and a minimum of 422 cubic feet per second (11.9 m³/s) was recorded in 1935. In contrast, the regulated discharge rates on the lower Colorado below Hoover Dam rarely exceed 35,000 cubic feet per second (990 m³/s) or drop below 4,000 cubic feet per second (110 m³/s). Annual runoff volume has ranged from a high of 22.2 million acre feet (27.4 km³) in 1984 to a low of 3.8 million acre feet (4.7 km³) in 2002, although in most years only a small portion of this flow, if any, reaches the Gulf.
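The annual runoff volumes and average flow rates quoted above are related by a straightforward unit conversion (1 acre foot = 43,560 cubic feet). The short Python sketch below is not part of the original article text; it is only a minimal sanity check of that arithmetic, and the only value taken from the passage is the 16.3 million acre feet figure, with the conversion factors being standard assumptions.

```python
# Minimal sanity check: convert an annual runoff volume in acre-feet
# to an average flow rate in cubic feet per second (cfs).
# Assumed constants: 1 acre-foot = 43,560 cubic feet; 1 year ~= 365.25 days.

CUBIC_FEET_PER_ACRE_FOOT = 43_560
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # roughly 31.56 million seconds

def annual_acre_feet_to_cfs(acre_feet_per_year: float) -> float:
    """Average flow rate (cfs) equivalent to a given annual volume."""
    cubic_feet_per_year = acre_feet_per_year * CUBIC_FEET_PER_ACRE_FOOT
    return cubic_feet_per_year / SECONDS_PER_YEAR

# The article's pre-development figure: about 16.3 million acre-feet per year.
print(round(annual_acre_feet_to_cfs(16.3e6)))  # ~22,500 cfs, matching the text
```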
Between 85 and 90 percent of the Colorado River 's discharge originates in snowmelt, mostly from the Rocky Mountains of Colorado and Wyoming. The three major upper tributaries of the Colorado -- the Gunnison, Green, and San Juan -- alone deliver almost 9 million acre feet (11 km³) per year to the main stem, mostly from snowmelt. The remaining 10 to 15 percent comes from a variety of sources, principally groundwater base flow and summer monsoon storms. The latter often produces heavy, highly localized floods on lower tributaries of the river, but does not often contribute significant volumes of runoff. Most of the annual runoff in the basin occurs with the melting of Rocky Mountains snowpack, which begins in April and peaks during May and June before exhausting in late July or early August.
Flows at the mouth have steadily declined since the beginning of the 20th century, and in most years after 1960 the Colorado River has run dry before reaching the sea. Irrigation, industrial, and municipal diversions, evaporation from reservoirs, reductions in natural runoff, and likely climate change have all contributed to this substantial reduction in flow, threatening the future water supply. For example, the Gila River -- formerly one of the Colorado 's largest tributaries -- contributes little more than a trickle in most years due to use of its water by cities and farms in central Arizona. The average flow rate of the Colorado at the northernmost point of the Mexico -- United States border (NIB, or Northerly International Boundary) is about 2,060 cubic feet per second (58 m³/s), 1.49 million acre feet (1.84 km³) per year -- less than a 10th of the natural flow -- due to upstream water use. Below here, all of the remaining flow is diverted to irrigate the Mexicali Valley, leaving a dry riverbed from Morelos Dam to the sea that is supplemented by intermittent flows of irrigation drainage water. There have been exceptions, however, namely in the early to mid-1980s, when the Colorado once again reached the sea during several consecutive years of record - breaking precipitation and snowmelt. In 1984, so much excess runoff occurred that some 16.5 million acre feet (20.4 km³), or 22,860 cubic feet per second (647 m³/s), poured into the sea.
The United States Geological Survey (USGS) operates or has operated 46 stream gauges to measure the discharge of the Colorado River, ranging from the headwaters near Grand Lake to the Mexico -- U.S. border. The tables at right list data associated with eight of these gauges. River flows as gauged at Lee 's Ferry, Arizona, about halfway along the length of the Colorado and 16 miles (26 km) below Glen Canyon Dam, are used to determine water allocations in the Colorado River basin. The average discharge recorded there was approximately 14,800 cubic feet per second (420 m³/s), 10.72 million acre feet (13.22 km³) per year, from 1921 to 2010. This figure has been heavily affected by upstream diversions and reservoir evaporation, especially after the completion of the Colorado River Storage Project in the 1970s. Prior to the completion of Glen Canyon Dam in 1964, the average discharge recorded between 1912 and 1962 was 17,850 cubic feet per second (505 m³/s), 12.93 million acre feet (15.95 km³) per year.
The drainage basin or watershed of the Colorado River encompasses 246,000 square miles (640,000 km²) of southwestern North America, making it the seventh largest on the continent. About 238,600 square miles (618,000 km²), or 97 percent of the watershed, is in the United States. The river and its tributaries drain most of western Colorado and New Mexico, southwestern Wyoming, eastern and southern Utah, southeastern Nevada and California, and nearly all of Arizona. The areas drained within Baja California and Sonora are very small and do not contribute measurable runoff. Most of the basin is arid, defined by the Sonoran and Mojave deserts and the expanse of the Colorado Plateau, although significant expanses of forest are found in the Rocky Mountains; the Kaibab, Aquarius, and Markagunt plateaus in southern Utah and northern Arizona; the Mogollon Rim through central Arizona; and other smaller mountain ranges and sky islands. Elevations range from sea level at the Gulf of California to 14,321 feet (4,365 m) at the summit of Uncompahgre Peak in Colorado, with an average of 5,500 feet (1,700 m) across the entire basin.
Climate varies widely across the watershed. Mean monthly high temperatures are 25.3 ° C (77.5 ° F) in the upper basin and 33.4 ° C (92.1 ° F) in the lower basin, and lows average − 3.6 and 8.9 ° C (25.5 and 48.0 ° F), respectively. Annual precipitation averages 6.5 inches (164 mm), ranging from over 40 inches (1,000 mm) in some areas of the Rockies to just 0.6 inches (15 mm) along the Mexican reach of the river. The upper basin generally receives snow and rain during the winter and early spring, while precipitation in the lower basin falls mainly during intense but infrequent summer thunderstorms brought on by the North American Monsoon.
As of 2010, approximately 12.7 million people lived in the Colorado River basin. Phoenix in Arizona and Las Vegas in Nevada are the largest metropolitan areas in the watershed. Population densities are also high along the lower Colorado River below Davis Dam, which includes Bullhead City, Lake Havasu City, and Yuma. Other significant population centers in the basin include Tucson, Arizona; St. George, Utah; and Grand Junction, Colorado. Colorado River basin states are among the fastest - growing in the U.S.; the population of Nevada alone increased by about 66 percent between 1990 and 2000 as Arizona grew by some 40 percent.
The Colorado River basin shares drainage boundaries with many other major watersheds of North America. The Continental Divide of the Americas forms a large portion of the eastern boundary of the watershed, separating it from the basins of the Yellowstone River and the Platte River -- both tributaries of the Missouri River -- on the northeast, and from the headwaters of the Arkansas River on the east. Both the Missouri and Arkansas rivers are part of the Mississippi River system. Further south, the Colorado River basin borders on the Rio Grande drainage, which along with the Mississippi flows to the Gulf of Mexico, as well as a series of endorheic (closed) drainage basins in southwestern New Mexico and extreme southeastern Arizona.
For a short stretch, the Colorado watershed meets the drainage basin of the Snake River, a tributary of the Columbia River, in the Wind River Range of western Wyoming. Southwest of there, the northern divide of the Colorado watershed skirts the edge of the Great Basin, bordering on the closed drainage basins of the Great Salt Lake and the Sevier River in central Utah, and other closed basins in southern Utah and Nevada. To the west in California, the Colorado River watershed borders on those of small closed basins in the Mojave Desert, the largest of which is the Salton Sea drainage north of the Colorado River Delta. On the south, the watersheds of the Sonoyta, Concepción, and Yaqui rivers, all of which drain to the Gulf of California, border that of the Colorado.
As recently as the Cretaceous period 100 million years ago, much of western North America was still part of the Pacific Ocean. Tectonic forces from the collision of the Farallon Plate with the North American Plate pushed up the Rocky Mountains between 75 and 50 million years ago in a mountain - building episode known as the Laramide orogeny. The Colorado first formed as a west - flowing stream draining the southwestern portion of the range, and the uplift also diverted the Green River from its original course to the Mississippi River west towards the Colorado. About 30 to 20 million years ago, volcanic activity related to the orogeny led to the Mid-Tertiary ignimbrite flare - up, which created smaller formations such as the Chiricahua Mountains in Arizona and deposited massive amounts of volcanic ash and debris over the watershed. The Colorado Plateau first began to rise during the Eocene, between about 55 and 34 million years ago, but did not attain its present height until about 5 million years ago, about when the Colorado River established its present course into the Gulf of California.
The time scale and sequence over which the river 's present course and the Grand Canyon were formed is uncertain. Before the Gulf of California was formed around 12 to 5 million years ago by faulting processes along the boundary of the North American and Pacific plates, the Colorado flowed west to an outlet on the Pacific Ocean -- possibly Monterey Bay on the Central California coast, forming the Monterey submarine canyon. The uplift of the Sierra Nevada mountains began about 4.5 million years ago, diverting the Colorado southwards towards the Gulf. As the Colorado Plateau continued to rise between 5 and 2.5 million years ago, the river maintained its ancestral course (as an antecedent stream) and began to cut the Grand Canyon. Antecedence played a major part in shaping other peculiar geographic features in the watershed, including the Dolores River 's bisection of Paradox Valley in Colorado and the Green River 's cut through the Uinta Mountains in Utah.
Sediments carried from the plateau by the Colorado River created a vast delta made of more than 10,000 cubic miles (42,000 km³) of material that walled off the northernmost part of the gulf in approximately 1 million years. Cut off from the ocean, the portion of the gulf north of the delta eventually evaporated and formed the Salton Sink, which reached about 260 feet (79 m) below sea level. Since then the river has changed course into the Salton Sink at least three times, transforming it into Lake Cahuilla, which at maximum size flooded up the valley to present - day Indio, California. The lake took about 50 years to evaporate after the Colorado resumed flowing to the Gulf. The present - day Salton Sea can be considered the most recent incarnation of Lake Cahuilla, though on a much smaller scale.
Between 1.8 million and 10,000 years ago, massive flows of basalt from the Uinkaret volcanic field in northern Arizona dammed the Colorado River within the Grand Canyon. At least 13 lava dams were formed, the largest of which was more than 2,300 feet (700 m) high, backing the river up for nearly 500 miles (800 km) to present - day Moab, Utah. The lack of associated sediment deposits along this stretch of the Colorado River, which would have accumulated in the impounded lakes over time, suggests that most of these dams did not survive for more than a few decades before collapsing or being washed away. Failure of the lava dams caused by erosion, leaks and cavitation caused catastrophic floods, which may have been some of the largest ever to occur in North America, rivaling the late - Pleistocene Missoula Floods of the northwestern United States. Mapping of flood deposits indicates that crests as high as 700 feet (210 m) passed through the Grand Canyon, reaching peak discharges as great as 17 million cubic feet per second (480,000 m³/s).
The first humans of the Colorado River basin were likely Paleo - Indians of the Clovis and Folsom cultures, who first arrived on the Colorado Plateau about 12,000 years ago. Very little human activity occurred in the watershed until the rise of the Desert Archaic Culture, which from 8,000 to 2,000 years ago constituted most of the region 's human population. These prehistoric inhabitants led a generally nomadic lifestyle, gathering plants and hunting small animals (though some of the earliest peoples hunted larger mammals that became extinct in North America after the end of the Pleistocene epoch). Another notable early group was the Fremont culture, whose peoples inhabited the Colorado Plateau from 2,000 to 700 years ago. The Fremont were likely the first peoples of the Colorado River basin to domesticate crops and construct masonry dwellings; they also left behind a large amount of rock art and petroglyphs, many of which have survived to the present day.
Beginning in the early centuries A.D., Colorado River basin peoples began to form large agriculture - based societies, some of which lasted hundreds of years and grew into well - organized civilizations encompassing tens of thousands of inhabitants. The Ancient Puebloan (also known as Anasazi or Hisatsinom) people of the Four Corners region were descended from the Desert Archaic culture. The Puebloan people developed a complex distribution system to supply drinking and irrigation water in Chaco Canyon in northwestern New Mexico.
The Puebloans dominated the basin of the San Juan River, and the center of their civilization was in Chaco Canyon. In Chaco Canyon and the surrounding lands, they built more than 150 multi-story pueblos or "great houses '', the largest of which, Pueblo Bonito, is composed of more than 600 rooms. The Hohokam culture was present along the middle Gila River beginning around 1 A.D. Between 600 and 700 A.D. they began to employ irrigation on a large scale, and did so more prolifically than any other native group in the Colorado River basin. An extensive system of irrigation canals was constructed on the Gila and Salt rivers, with various estimates of a total length ranging from 180 to 300 miles (290 to 480 km) and capable of irrigating 25,000 to 250,000 acres (10,000 to 101,000 ha). Both civilizations supported large populations at their height; the Chaco Canyon Puebloans numbered between 6,000 and 15,000 and estimates for the Hohokam range between 30,000 and 200,000.
These sedentary peoples heavily exploited their surroundings, practicing logging and harvesting of other resources on a large scale. The construction of irrigation canals may have led to a significant change in the morphology of many waterways in the Colorado River basin. Prior to human contact, rivers such as the Gila, Salt and Chaco were shallow perennial streams with low, vegetated banks and large floodplains. In time, flash floods caused significant downcutting on irrigation canals, which in turn led to the entrenchment of the original streams into arroyos, making agriculture difficult. A variety of methods were employed to combat these problems, including the construction of large dams, but when a megadrought hit the region in the 14th century A.D. the ancient civilizations of the Colorado River basin abruptly collapsed. Some Puebloans migrated to the Rio Grande Valley of central New Mexico and south - central Colorado, becoming the predecessors of the Hopi, Zuni, Laguna and Acoma people in western New Mexico. Many of the tribes that inhabited the Colorado River basin at the time of European contact were descended from Puebloan and Hohokam survivors, while others already had a long history of living in the region or migrated in from bordering lands.
The Navajo were an Athabaskan people who migrated from the north into the Colorado River basin around 1025 A.D. They soon established themselves as the dominant Native American tribe in the Colorado River basin, and their territory stretched over parts of present - day Arizona, New Mexico, Utah and Colorado -- in the original homelands of the Puebloans. In fact, the Navajo acquired agricultural skills from the Puebloans before the collapse of the Pueblo civilization in the 14th century. A profusion of other tribes have made a continued, lasting presence along the Colorado River. The Mohave have lived along the rich bottomlands of the lower Colorado below Black Canyon since 1200 A.D. They were fishermen -- navigating the river on rafts made of reeds to catch Gila trout and Colorado pikeminnow -- and farmers, relying on the annual floods of the river rather than irrigation to water their crops. Ute peoples have inhabited the northern Colorado River basin, mainly in present - day Colorado, Wyoming and Utah, for at least 2,000 years, but did not become well established in the Four Corners area until 1500 A.D. The Apache, Cocopah, Halchidhoma, Havasupai, Hualapai, Maricopa, Pima, and Quechan are among many other groups that live along or had territories bordering on the Colorado River and its tributaries.
Beginning in the 17th century, contact with Europeans brought significant changes to the lifestyles of Native Americans in the Colorado River basin. Missionaries sought to convert indigenous peoples to Christianity -- an effort sometimes successful, such as in Father Eusebio Francisco Kino 's 1694 encounter with the "docile Pimas of the Gila Valley (who) readily accepted Father Kino and his Christian teachings ''. The Spanish introduced sheep and goats to the Navajo, who came to rely heavily on them for meat, milk and wool. By the mid-16th century, the Utes, having acquired horses from the Spanish, introduced them to the Colorado River basin. The use of horses spread through the basin via trade between the various tribes and greatly facilitated hunting, communications and travel for indigenous peoples. More warlike groups such as the Utes and Navajos often used horses to their advantage in raids against tribes that were slower to adopt them, such as the Goshutes and Southern Paiutes.
The gradual influx of European and American explorers, fortune seekers and settlers into the region eventually led to conflicts that forced many Native Americans off their traditional lands. After the acquisition of the Colorado River basin from Mexico in the Mexican -- American War in 1846, U.S. military forces commanded by Kit Carson forced more than 8,000 Navajo men, women and children from their homes after a series of unsuccessful attempts to confine their territory, many of which were met with violent resistance. In what is now known as the Long Walk of the Navajo, the captives were marched from Arizona to Fort Sumner in New Mexico, and many died along the route. Four years later, the Navajo signed a treaty that moved them onto a reservation in the Four Corners region that is now known as the Navajo Nation. It is the largest Native American reservation in the United States, encompassing 27,000 square miles (70,000 km²) with a population of over 180,000 as of 2000.
The Mohave were expelled from their territory after a series of minor skirmishes and raids on wagon trains passing through the area in the late 1850s, culminating in an 1859 battle with American forces that concluded the Mohave War. In 1870, the Mohave were relocated to a reservation at Fort Mojave, which spans the borders of Arizona, California and Nevada. Some Mohave were also moved to the 432 - square - mile (1,120 km²) Colorado River Indian Reservation on the Arizona -- California border, originally established for the Mohave and Chemehuevi people in 1865. In the 1940s, some Hopi and Navajo people were also relocated to this reservation. The four tribes now form a geopolitical body known as the Colorado River Indian Tribes.
Water rights of Native Americans in the Colorado River basin were largely ignored during the extensive water resources development carried out on the river and its tributaries in the 19th and 20th centuries. The construction of dams has often had negative impacts on tribal peoples, such as the Chemehuevi when their riverside lands were flooded after the completion of Parker Dam in 1938. Ten Native American tribes in the basin now hold or continue to claim water rights to the Colorado River. The U.S. government has taken some actions to help quantify and develop the water resources of Native American reservations. The first federally funded irrigation project in the U.S. was the construction of an irrigation canal on the Colorado River Indian Reservation in 1867. Other water projects include the Navajo Indian Irrigation Project, authorized in 1962 for the irrigation of lands in part of the Navajo Nation in north - central New Mexico. The Navajo continue to seek expansion of their water rights because of difficulties with the water supply on their reservation; about 40 percent of its inhabitants must haul water by truck many miles to their homes. In the 21st century, they have filed legal claims against the governments of Arizona, New Mexico and Utah for increased water rights. Some of these claims have been successful for the Navajo, such as a 2004 settlement in which they received a 326,000 - acre - foot (402,000 ML) allotment from New Mexico.
During the 16th century, the Spanish began to explore and colonize western North America. An early motive was the search for the Seven Cities of Gold, or "Cibola '', rumored to have been built by Native Americans somewhere in the desert Southwest. According to a United States Geological Survey publication, it is likely that Francisco de Ulloa was the first European to see the Colorado River when in 1536 he sailed to the head of the Gulf of California. Francisco Vásquez de Coronado 's 1540 -- 1542 expedition began as a search for the fabled Cities of Gold, but after learning from natives in New Mexico of a large river to the west, he sent García López de Cárdenas to lead a small contingent to find it. With the guidance of Hopi Indians, Cárdenas and his men became the first outsiders to see the Grand Canyon. Cárdenas was reportedly unimpressed with the canyon, estimating the width of the Colorado River at 6 feet (1.8 m) and judging rock formations 300 feet (91 m) tall to be about the size of a man. After failing at an attempt to descend to the river, they left the area, defeated by the difficult terrain and torrid weather.
In 1540, Hernando de Alarcón and his fleet reached the mouth of the river, intending to provide additional supplies to Coronado 's expedition. Alarcón may have sailed the Colorado as far upstream as the present - day California -- Arizona border. Coronado never reached the Gulf of California, and Alarcón eventually gave up and left. Melchior Díaz reached the delta in the same year, intending to establish contact with Alarcón, but the latter was already gone by the time of Díaz 's arrival. Díaz named the Colorado River Rio del Tizon ("Firebrand River '') after seeing a practice used by the local natives for warming themselves. The name Tizon lasted for the next 200 years, while the name Rio Colorado ("Red River '') was first applied to a tributary of the Gila River, possibly the Verde River, circa 1720. The first known map to label the main stem as the Colorado was drawn by French cartographer Jacques - Nicolas Bellin in 1743.
During the 18th and early 19th centuries, many Americans and Spanish believed in the existence of the Buenaventura River, purported to run from the Rocky Mountains in Utah or Colorado to the Pacific Ocean. The name Buenaventura was given to the Green River by Silvestre Vélez de Escalante as early as 1776, but Escalante did not know that the Green drained to the Colorado. Many later maps showed the headwaters of the Green and Colorado rivers connecting with the Sevier River (Rio San Ysabel) and Utah Lake (Lake Timpanogos) before flowing west through the Sierra Nevada into California. Mountain man Jedediah Smith reached the lower Colorado by way of the Virgin River canyon in 1826. Smith called the Colorado the "Seedskeedee '', as the Green River in Wyoming was known to fur trappers, correctly believing it to be a continuation of the Green and not a separate river as others believed under the Buenaventura myth. John C. Frémont 's 1843 Great Basin expedition proved that no river traversed the Great Basin and Sierra Nevada, officially debunking the Buenaventura myth.
Between 1850 and 1854 the U.S. Army explored the lower reach of the Colorado River from the Gulf of California, hoping the river could provide a less expensive route for supplying the remote post of Fort Yuma. The first reconnaissance, from November 1850 to January 1851, was made by its transport schooner Invincible under Captain Alfred H. Wilcox, and then by its longboat commanded by Lieutenant George Derby. In his expedition report, Derby recommended that a shallow draft sternwheel steamboat would be the way to send supplies up river to the fort. The next contractors, George Alonzo Johnson and his partner Benjamin M. Hartshorne, brought two barges and 250 tons of supplies to the river 's mouth in February 1852 aboard the United States transport schooner Sierra Nevada, again under Captain Wilcox. While the barges were being poled up the Colorado, the first sank, its cargo a total loss; the second, after a long struggle, was finally poled up to Fort Yuma, but what little it carried was soon consumed by the garrison. Wagons again had to be sent from the fort to haul the balance of the supplies overland from the estuary through the marshes and woodlands of the Delta. Derby 's recommendation was at last heeded: in November 1852 the Uncle Sam, a 65 - foot long side - wheel paddle steamer built by Domingo Marcucci, became the first steamboat on the Colorado River. It was brought from San Francisco to the delta aboard the schooner Capacity by the next contractor to supply the fort, Captain James Turnbull, and was assembled and launched in the estuary, 30 miles above the mouth of the Colorado River. Equipped with only a 20 - horsepower engine, the Uncle Sam could carry just 35 tons of supplies and took 15 days to make its first 120 - mile trip. It made many trips up and down the river, taking four months to finish carrying the supplies for the fort and eventually cutting its time up river to 12 days. Negligence caused it to sink at its dock below Fort Yuma, and it was washed away in the spring flood of 1853 before it could be raised. Turnbull, in financial difficulty, disappeared. Nevertheless, he had shown the worth of steamboats in solving Fort Yuma 's supply problem.
George Alonzo Johnson with his partner Hartshorne and a new partner Captain Alfred H. Wilcox (formerly of the Invincible and Sierra Nevada), formed George A. Johnson & Company and obtained the next contract to supply the fort. Johnson and his partners, all having learned lessons from their failed attempts ascending the Colorado and with the example of the Uncle Sam, brought the parts of a more powerful side - wheel steamboat, the General Jesup, with them to the mouth of the Colorado from San Francisco. There it was reassembled at a landing in the upper tidewater of the river and reached Fort Yuma, January 18, 1854. This new boat, capable of carrying 50 tons of cargo, was very successful making round trips from the estuary to the fort in only four or five days. Costs were cut from $200 to $75 per ton.
Lorenzo Sitgreaves led the first Corps of Topographical Engineers mission across northern Arizona to the Colorado River (near modern Bullhead City, Arizona), and down its east bank to the river crossings of the Southern Immigrant Trail at Fort Yuma in 1851.
The second Corps of Topographical Engineers expedition to pass along and cross the Colorado was the 1853 - 1854 Pacific Railroad Survey expedition along the 35th parallel north from Oklahoma to Los Angeles, led by Lt. Amiel Weeks Whipple.
George A. Johnson was instrumental in securing Congressional funding for a military expedition up the river. With those funds Johnson expected to provide the transportation for the expedition, but he was angry and disappointed when its commander, Lt. Joseph Christmas Ives, rejected his offer of one of his steamboats. Before Ives could finish reassembling his steamer in the delta, Johnson set off from Fort Yuma on December 31, 1857, conducting his own exploration of the river above the fort in his steamboat General Jesup. He ascended the river in twenty - one days as far as the first rapids in Pyramid Canyon, over 300 miles (480 km) above Fort Yuma and 8 miles (13 km) above the modern site of Davis Dam. Running low on food, he turned back. As he returned he encountered Lieutenant Ives, Whipple 's assistant, who was leading an expedition to explore the feasibility of using the Colorado River as a navigation route in the Southwest. Ives and his men used a specially built steamboat, the shallow - draft U.S.S. Explorer, and traveled up the river as far as Black Canyon. He then took a small boat up beyond the canyon to Fortification Rock and Las Vegas Wash. After experiencing numerous groundings and accidents and having been inhibited by low water in the river, Ives declared: "Ours has been the first, and will doubtless be the last, party of whites to visit this profitless locality. It seems intended by nature that the Colorado River, along the greater portion of its lonely and majestic way, shall be forever unvisited and undisturbed. ''
Until 1866, El Dorado Canyon was the actual head of navigation on the Colorado River. In that year Captain Robert T. Rogers, commanding the steamer Esmeralda with a barge and ninety tons of freight, reached Callville, Nevada, on October 8, 1866. Callville remained the head of navigation on the river until July 7, 1879, when Captain J.A. Mellon in the Gila left the El Dorado Canyon landing, steamed up through the rapids in Black Canyon in record time to Callville, and tied up overnight. The next morning he steamed up through the rapids in Boulder Canyon to reach the mouth of the Virgin River at Rioville on July 8, 1879. From 1879 to 1887, Rioville, Nevada, was the high - water head of navigation for the steamboats and for the mining company sloop Sou'Wester, which carried the salt needed for the reduction of silver ore from there to the mills at El Dorado Canyon.
Up until the mid-19th century, long stretches of the Colorado and Green rivers between Wyoming and Nevada remained largely unexplored due to their remote location and dangers of navigation. Because of the dramatic drop in elevation of the two rivers, there were rumors of huge waterfalls and violent rapids, and Native American tales strengthened their credibility. In 1869, one - armed Civil War veteran John Wesley Powell led an expedition from Green River Station in Wyoming, aiming to run the two rivers all the way down to St. Thomas, Nevada, near present - day Hoover Dam. Powell and nine men -- none of whom had prior whitewater experience -- set out in May. After braving the rapids of the Gates of Lodore, Cataract Canyon and other gorges along the Colorado, the party arrived at the mouth of the Little Colorado River, where Powell noted down arguably the most famous words ever written about the Grand Canyon of the Colorado:
We are now ready to start on our way down the Great Unknown. Our boats, tied to a common stake, are chafing each other, as they are tossed by the fretful river. They ride high and buoyant, for their loads are lighter than we could desire. We have but a month 's rations remaining. The flour has been re-sifted through the mosquito net sieve; the spoiled bacon has been dried, and the worst of it boiled; the few pounds of dried apples have been spread in the sun, and re-shrunken to their normal bulk; the sugar has all melted, and gone on its way down the river; but we have a large sack of coffee. The lightening of the boats has this advantage: they will ride the waves better, and we shall have little to carry when we make a portage.
We are three - quarters of a mile in the depths of the earth, and the great river shrinks into insignificance, as it dashes its angry waves against the walls and cliffs, that rise to the world above; they are but puny ripples, and we but pigmies, running up and down the sands, or lost among the boulders.
On August 28, 1869, three men deserted the expedition, convinced that they could not possibly survive the trip through the Grand Canyon. They were killed by Native Americans after making it to the rim of the canyon; two days later, the expedition ran the last of the Grand Canyon rapids and reached St. Thomas. Powell led a second expedition in 1871, this time with financial backing from the U.S. government. The explorers named many features along the Colorado and Green rivers, including Glen Canyon, the Dirty Devil River, Flaming Gorge, and the Gates of Lodore. In what is perhaps a twist of irony, modern - day Lake Powell, which floods Glen Canyon, is also named for their leader.
Starting in the latter half of the 19th century, the lower Colorado below Black Canyon became an important waterway for steamboat commerce. In 1852, the Uncle Sam was launched to provide supplies to the U.S. Army outpost at Fort Yuma. Although this vessel accidentally foundered and sank early in its career, commercial traffic quickly proliferated because river transport was much cheaper than hauling freight over land. Navigation on the Colorado River was dangerous because of the shallow channel and flow variations, so the first sternwheeler on the river, the Colorado of 1855, was designed to carry 60 short tons (54 t) while drawing less than 2 feet (0.6 m) of water. The tidal bore of the lower Colorado also presented a major hazard; in 1922, a 15 - foot (4.6 m) - high wave swamped a ship bound for Yuma, killing between 86 and 130 people. Steamboats quickly became the principal source of communication and trade along the river until competition from railroads began in the 1870s, and finally the construction of dams along the lower river in 1909, none of which had locks to allow the passage of ships.
During the Manifest Destiny era of the mid-19th century, American pioneers settled many western states but generally avoided the Colorado River basin until the 1850s. Under Brigham Young 's grand vision for a "vast empire in the desert '' (the State of Deseret), Mormon settlers were among the first whites to establish a permanent presence in the watershed, building Fort Clara, or Fort Santa Clara, in the winter of 1855 - 1856 along the Santa Clara River, a tributary of the Virgin River. On the lower Colorado, mining was the primary spur to economic development: copper mining in southwestern New Mexico Territory in the 1850s was followed by the Mohave War and a gold rush on the Gila River in 1859, the El Dorado Canyon Rush in 1860, and the Colorado River Gold Rush in 1862.
In 1860, anticipating the American Civil War, the Mormons established a number of settlements to grow cotton along the Virgin River in Washington County, Utah. From 1863 to 1865, Mormon colonists founded St. Thomas and other colonies on the Muddy and Virgin rivers in northwestern Arizona Territory (now Clark County, Nevada). Stone 's Ferry was established by these colonists on the Colorado at the mouth of the Virgin River to carry their produce on a wagon road to the mining districts of Mohave County, Arizona, to the south. In 1866, a steamboat landing was also established at Callville, intended as an outlet to the Pacific Ocean via the Colorado River for Mormon settlements in the Great Basin. These settlements reached a peak population of about 600 before being abandoned in 1871, and for nearly a decade these valleys became a haven for outlaws and cattle rustlers. One Mormon settler, Daniel Bonelli, remained, operating the ferry; he began mining salt in nearby mines and bringing it down river in barges to El Dorado Canyon, where it was used to process silver ore. From 1879 to 1887, Colorado Steam Navigation Company steamboats carried the salt, operating up river through Boulder Canyon to the landing at Rioville at the mouth of the Virgin River during the high spring flood waters. From 1879 to 1882 the Southwestern Mining Company, the largest in El Dorado Canyon, brought in a 56 - foot sloop, the Sou'Wester, which sailed up and down the river carrying the salt during the low - water time of year until it was wrecked in the Quick and Dirty Rapids of Black Canyon.
Mormons founded settlements along the Duchesne River Valley in the 1870s, and populated the Little Colorado River valley later in the century, settling in towns such as St. Johns, Arizona. They also established settlements along the Gila River in central Arizona beginning in 1871. These early settlers were impressed by the extensive ruins of the Hohokam civilization that previously occupied the Gila River valley, and are said to have "envisioned their new agricultural civilization rising as the mythical phoenix bird from the ashes of Hohokam society ''. The Mormons were the first whites to develop the water resources of the basin on a large scale, and built complex networks of dams and canals to irrigate wheat, oats and barley in addition to establishing extensive sheep and cattle ranches.
One of the main reasons the Mormons were able to colonize Arizona was the existence of Jacob Hamblin 's ferry across the Colorado at Lee 's Ferry (then known as Pahreah Crossing), which began running in March 1864. This location was the only section of river for hundreds of miles in both directions where the canyon walls dropped away, allowing for the development of a transport route. John Doyle Lee established a more permanent ferry system at the site in 1870. One reason Lee chose to run the ferry was to flee from Mormon leaders who held him responsible for the Mountain Meadows Massacre, in which 120 emigrants in a wagon train were killed by a local militia disguised as Native Americans. Even though it was located along a major travel route, Lee 's Ferry was very isolated, and there Lee and his family established the aptly named Lonely Dell Ranch. In 1928, the ferry sank, resulting in the deaths of three men. Later that year, the Navajo Bridge was completed at a point 5 miles (8 km) downstream, rendering the ferry obsolete.
Gold strikes from the mid-19th to early 20th centuries played a major role in attracting settlers to the upper Colorado River basin. In 1859, a group of adventurers from Georgia discovered gold along the Blue River in Colorado and established the mining boomtown of Breckenridge. During 1875, even bigger strikes were made along the Uncompahgre and San Miguel rivers, also in Colorado, and these led to the creation of Ouray and Telluride, respectively. Because most gold deposits along the upper Colorado River and its tributaries occur in lode deposits, extensive mining systems and heavy machinery were required to extract them. Mining remains a substantial contributor to the economy of the upper basin and has led to acid mine drainage problems in some regional streams and rivers.
Prior to 1921, the upper Colorado River above the confluence with the Green River in Utah had assumed various names. Fathers Dominguez and Escalante named it Rio San Rafael in 1776. Through the mid-1800s, the river between Green River and the Gunnison River was most commonly known as the Grand River. The river above the junction with the Gunnison River, however, was known variously as the Bunkara River, the North Fork of the Grand River, the Blue River, and the Grand River. The latter name did not become consistently applied until the 1870s.
In 1921, U.S. Representative Edward T. Taylor of Colorado petitioned the Congressional Committee on Interstate and Foreign Commerce to rename the Grand River as the Colorado River. Taylor saw the fact that the Colorado River started outside the border of his state as an "abomination ''. On July 25, the name change was made official in House Joint Resolution 460 of the 66th Congress, over the objections of representatives from Wyoming, Utah, and the USGS, which noted that the Green River was much longer and had a larger drainage basin above its confluence with the Grand River, although the Grand contributed a greater flow of water.
Today, between 36 and 40 million people depend on the Colorado River 's water for agricultural, industrial and domestic needs. Southern Nevada Water Authority called the Colorado River one of the "most controlled, controversial and litigated rivers in the world ''. Over 29 major dams and hundreds of miles of canals serve to supply thirsty cities, provide irrigation water to some 4 million acres (1.6 million hectares), and meet peaking power demands in the Southwest, generating more than 12 billion kWh of hydroelectricity each year. Often called "America 's Nile '', the Colorado is so carefully managed -- with basin reservoirs capable of holding four times the river 's annual flow -- that each drop of its water is used an average of 17 times in a single year.
One of the earliest water projects in the Colorado River basin was the Grand Ditch, a 16 - mile (26 km) diversion canal that sends water from the Never Summer Mountains, which would naturally have drained into the headwaters of the Colorado River, to bolster supplies in Colorado 's Front Range Urban Corridor. Constructed primarily by Japanese and Mexican laborers, the ditch was considered an engineering marvel when completed in 1890, delivering 17,700 acre feet (21,800 ML) across the Continental Divide each year. Because roughly 75 percent of Colorado 's precipitation falls west of the Rocky Mountains while 80 percent of the population lives east of the range, more of these interbasin water transfers, locally known as transmountain diversions, followed. While first envisioned in the late 19th century, construction on the Colorado - Big Thompson Project (C - BT) did not begin until the 1930s. The C - BT now delivers more than 11 times the Grand Ditch 's flow from the Colorado River watershed to cities along the Front Range.
Meanwhile, large - scale development was also beginning on the opposite end of the Colorado River. In 1900, entrepreneurs of the California Development Company (CDC) looked to the Imperial Valley of southern California as an excellent location to develop agriculture irrigated by the waters of the river. Engineer George Chaffey was hired to design the Alamo Canal, which split off from the Colorado River near Pilot Knob, curved south into Mexico, and dumped into the Alamo River, a dry arroyo which had historically carried flood flows of the Colorado into the Salton Sink. With a stable year - round flow in the Alamo River, irrigators in the Imperial Valley were able to begin large - scale farming, and small towns in the region started to expand with the influx of job - seeking migrants. By 1903, more than 100,000 acres (40,000 ha) in the valley were under cultivation, supporting a growing population of 4,000.
It was not long before the Colorado River began to wreak havoc with its erratic flows. In autumn, the river would drop below the level of the canal inlet, and temporary brush diversion dams had to be constructed. In early 1905, heavy floods destroyed the headworks of the canal, and water began to flow uncontrolled down the canal towards the Salton Sink. On August 9, the entire flow of the Colorado swerved into the canal and began to flood the bottom of the Imperial Valley. In a desperate gamble to close the breach, crews of the Southern Pacific Railroad, whose tracks ran through the valley, attempted to dam the Colorado above the canal, only to see their work demolished by a flash flood. It took seven attempts, more than $3 million, and two years for the railroad, the CDC, and the federal government to permanently block the breach and send the Colorado on its natural course to the gulf -- but not before part of the Imperial Valley was flooded under a 45 - mile - long (72 km) lake, today 's Salton Sea. After the immediate flooding threat passed, it was realized that a more permanent solution would be needed to rein in the Colorado.
In 1922, six U.S. states in the Colorado River basin signed the Colorado River Compact, which divided the river 's flow equally between the Upper Basin (the drainage area above Lee 's Ferry, comprising parts of Colorado, New Mexico, Utah, and Wyoming and a small portion of Arizona) and the Lower Basin (Arizona, California, Nevada, and parts of New Mexico and Utah). Each was given rights to 7.5 million acre feet (9.3 km³) of water per year, a figure believed to represent half of the river 's minimum flow at Lee 's Ferry. This was followed by a U.S. -- Mexico treaty in 1944, allocating 1.5 million acre feet (1.9 km³) of Colorado River water to the latter country per annum. Arizona refused to ratify the Colorado River Compact in 1922 because it feared that California would take too much of the lower basin allotment; in 1944 a compromise was reached in which Arizona would get a firm allocation of 2.8 million acre feet (3.5 km³), but only if California 's 4.4 - million - acre - foot (5.4 km³) allocation was prioritized during drought years. These and nine other decisions, compacts, federal acts and agreements made between 1922 and 1973 form what is now known as the Law of the River.
On September 30, 1935, the United States Bureau of Reclamation (USBR) completed Hoover Dam in the Black Canyon of the Colorado River. Behind the dam rose Lake Mead, the largest artificial lake in the U.S., capable of holding more than two years of the Colorado 's flow. The construction of Hoover was a major step towards stabilizing the lower channel of the Colorado River, storing water for irrigation in times of drought, and providing much - needed flood control as part of a program known as the Boulder Canyon Project. Hoover was the tallest dam in the world at the time of construction and also had the world 's largest hydroelectric power plant. Flow regulation from Hoover Dam opened the doors for rapid development on the lower Colorado River; Imperial and Parker dams followed in 1938, and Davis Dam was completed in 1950.
Completed in 1938 some 20 miles (32 km) above Yuma, Imperial Dam diverts nearly all of the Colorado 's flow into two irrigation canals. The All - American Canal, built as a permanent replacement for the Alamo Canal, is so named because it lies completely within the U.S., unlike its ill - fated predecessor. With a capacity of over 26,000 cubic feet per second (740 m³ / s), the All - American is the largest irrigation canal in the world, supplying water to 500,000 acres (2,000 km²) of California 's Imperial Valley. Because the valley 's warm and sunny climate allows a year - round growing season, in addition to the large water supply furnished by the Colorado, the Imperial Valley is now one of the most productive agricultural regions in North America. In 1957, the USBR completed a second canal, the Gila Gravity Main Canal, to irrigate about 110,000 acres (450 km²) in southwestern Arizona with Colorado River water as part of the Gila Project.
The Lower Basin states also sought to develop the Colorado for municipal supplies. Central Arizona initially relied on the Gila River and its tributaries through projects such as the Theodore Roosevelt and Coolidge Dams -- completed in 1911 and 1928, respectively. Roosevelt was the first large dam constructed by the USBR and provided the water needed to start large - scale agricultural and urban development in the region. The Colorado River Aqueduct, which delivers water nearly 250 miles (400 km) from near Parker Dam to 10 million people in the Los Angeles metropolitan area, was completed in 1941. The San Diego Aqueduct branch, whose initial phase was complete by 1947, furnishes water to nearly 3 million people in San Diego and its suburbs. The Las Vegas Valley of Nevada experienced rapid growth in part due to Hoover Dam construction, and Las Vegas had tapped a pipeline into Lake Mead by 1937. Nevada officials, believing that groundwater resources in the southern part of the state were sufficient for future growth, were more concerned with securing a large amount of the dam 's power supply than water from the Colorado; thus they settled for the smallest allocation of all the states in the Colorado River Compact.
Through the early decades of the 20th century, the Upper Basin states, with the exception of Colorado, remained relatively undeveloped and used little of the water allowed to them under the Colorado River Compact. Water use had increased significantly by the 1950s, and more water was being diverted out of the Colorado River basin to the Front Range corridor, the Salt Lake City area in Utah, and the Rio Grande basin in New Mexico. Such projects included the Roberts Tunnel, completed in 1956, which diverts 63,000 acre feet (78,000 ML) per year from the Blue River to the city of Denver, and the Fryingpan - Arkansas Project, which delivers 69,200 acre feet (85,400 ML) from the Fryingpan River to the Arkansas River basin each year. Without the addition of surface water storage in the upper basin, there was no guarantee that the upper basin states would be able to use the full amount of water given to them by the compact. There was also the concern that drought could impair the upper basin 's ability to deliver the required 7.5 million acre feet (9.3 km³) past Lee 's Ferry per year as stipulated by the compact. A 1956 act of Congress cleared the way for the USBR 's Colorado River Storage Project (CRSP), which entailed the construction of large dams on the Colorado, Green, Gunnison and San Juan Rivers.
The initial blueprints for the CRSP included two dams on the Green River within Dinosaur National Monument 's Echo Park Canyon, a move criticized by both the U.S. National Park Service and environmentalist groups such as the Sierra Club. Controversy reached a nationwide scale, and the USBR dropped its plans for the Dinosaur dams in exchange for a dam at Flaming Gorge and a raise to an already - proposed dam at Glen Canyon. The famed opposition to Glen Canyon Dam, the primary feature of the CRSP, did not build momentum until construction was well underway. This was primarily because of Glen Canyon 's remote location and the result that most of the American public did not even know of the existence of the impressive gorge; the few who did contended that it had much greater scenic value than Echo Park. Sierra Club leader David Brower fought the dam both during the construction and for many years afterwards until his death in 2000. Brower strongly believed that he was personally responsible for the failure to prevent Glen Canyon 's flooding, calling it his "greatest mistake, greatest sin ''.
Agricultural and urban growth in Arizona eventually outstripped the capacity of local rivers; these concerns were reflected in the creation of a Pacific Southwest Water Plan in the 1950s, which aimed to build a project that would permit Arizona to fully utilize its 2.8 - million - acre - foot (3.5 km³) allotment of the river. The Pacific Southwest Water Plan was the first major proposal to divert water to the Colorado Basin from other river basins -- namely, from the wetter northwestern United States. It was intended to boost supplies for the Lower Basin states of Arizona, California and Nevada as well as Mexico, thus allowing the Upper Basin states to retain native Colorado River flows for their own use. Although there was still a surplus of water in the Colorado Basin during the mid-20th century, the Bureau of Reclamation predicted, correctly, that eventually population growth would outstrip the available supply and require the transfer of water from other sources.
The original version of the plan proposed to divert water from the Trinity River in northern California to reduce Southern California 's dependence on the Colorado, allowing more water to be pumped, by exchange, to central Arizona. Because of the large amount of power that would be required to pump Colorado River water to Arizona, the Central Arizona Project (CAP) originally included provisions for hydroelectric dams at Bridge Canyon and Marble Canyon, which would have flooded large portions of the Colorado within the Grand Canyon and dewatered much of the remainder. When these plans were publicized, the environmental movement -- still reeling from the Glen Canyon controversy -- successfully lobbied against the project. As a result, the Grand Canyon dams were removed from the CAP agenda, the boundaries of Grand Canyon National Park were extended to preclude any further development in the area, and the pumping power was replaced by the building of the coal - fired Navajo Generating Station near Page, Arizona, in 1976. The resulting CAP irrigates more than 830,000 acres (3,400 km²) and provides municipal supplies to over 5 million people from Phoenix to Tucson using water from the Colorado River.
Historically, the Colorado transported from 85 to 100 million short tons (77,000,000 to 91,000,000 t) of sediment or silt to the Gulf of California each year -- second only to the Mississippi among North American rivers. This sediment nourished wetlands and riparian areas along the river 's lower course, particularly in its 3,000 - square - mile (7,800 km) delta, once the largest desert estuary on the continent. Currently, the majority of sediments carried by the Colorado River are deposited at the upper end of Lake Powell, and most of the remainder ends up in Lake Mead. Various estimates place the time it would take for Powell to completely fill with silt at 300 to 700 years. Dams trapping sediment not only pose damage to river habitat but also threaten future operations of the Colorado River reservoir system.
Reduction in flow caused by dams, diversions, water for thermoelectric power stations, and evaporation losses from reservoirs -- the latter of which consumes more than 15 percent of the river 's natural runoff -- has had severe ecological consequences in the Colorado River Delta and the Gulf of California. Historically, the delta with its large freshwater outflow and extensive salt marshes provided an important breeding ground for aquatic species in the Gulf. Today 's desiccated delta, at only a fraction of its former size, no longer provides suitable habitat, and populations of fish, shrimp and sea mammals in the gulf have seen a dramatic decline. Since 1963, the only times when the Colorado River has reached the ocean have been during El Niño events in the 1980s and 1990s.
Reduced flows have led to increases in the concentration of certain substances in the lower river that have impacted water quality. Salinity is one of the major issues and also leads to the corrosion of pipelines in agricultural and urban areas. The lower Colorado 's salt content was about 50 parts per million (ppm) in its natural state, but by the 1960s, it had increased to well over 2000 ppm. By the early 1970s, there was also serious concern about salinity caused by salts leached from local soils by irrigation drainage water, which were estimated to add 10 million short tons (9,100,000 t) of excess salt to the river per year. The Colorado River Basin Salinity Control Act was passed in 1974, mandating conservation practices including the reduction of saline drainage. The program reduced the annual load by about 1.2 million short tons (1,100,000 t), but salinity remains an ongoing issue. In 1997, the USBR estimated that saline irrigation water caused crop damages exceeding $500 million in the U.S. and $100 million in Mexico. Further efforts have been made to combat the salt issue in the lower Colorado, including the construction of a desalination plant at Yuma. In 2011, the seven U.S. states agreed upon a "Plan of Implementation '', which aims to reduce salinity by 644,000 short tons (584,000 t) per year by 2030. In 2013, the Bureau of Reclamation estimated that around $32 million was spent each year to prevent around 1.2 million tons of salt from entering and damaging the Colorado River.
Agricultural runoff containing pesticide residues has also been concentrated in the lower river in greater amounts. Toxins derived from pesticides have led to fish kills; six of these events were recorded between 1964 and 1968 alone. The pesticide issue is even greater in streams and water bodies near agricultural lands irrigated by the Imperial Irrigation District with Colorado River water. In the Imperial Valley, Colorado River water used for irrigation overflows into the New and Alamo rivers and into the Salton Sea. Both rivers and the sea are among the most polluted bodies of water in the United States, posing dangers not only to aquatic life but to contact by humans and migrating birds. Pollution from agricultural runoff is not limited to the lower river; the issue is also significant in upstream reaches such as Colorado 's Grand Valley, also a major center of irrigated agriculture.
Large dams such as Hoover and Glen Canyon typically release water from lower levels of their reservoirs, resulting in stable and relatively cold year - round temperatures in long reaches of the river. The Colorado 's average temperature once ranged from 85 ° F (29 ° C) at the height of summer to near freezing in winter, but modern flows through the Grand Canyon, for example, rarely deviate significantly from 46 ° F (8 ° C). Changes in temperature regime have caused declines of native fish populations, and stable flows have enabled increased vegetation growth, obstructing riverside habitat. These flow patterns have also made the Colorado more dangerous to recreational boaters; people are more likely to die of hypothermia in the colder water, and the general lack of flooding allows rockslides to build up, making the river more difficult to navigate.
In the 21st century, there has been renewed interest in restoring a limited water flow to the delta. In November 2012, the U.S. and Mexico reached an agreement, known as Minute 319, permitting Mexico storage of its water allotment in U.S. reservoirs during wet years, thus increasing the efficiency with which the water can be used. In addition to renovating irrigation canals in the Mexicali Valley to reduce leakage, this will make about 45,000 acre feet (56,000,000 m) per year available for release to the delta on average. The water will be used to provide both an annual base flow and a spring "pulse flow '' to mimic the river 's original snowmelt - driven regime. The first pulse flow, an eight - week release of 105,000 acre feet (130,000,000 m), was initiated on March 21, 2014, with the aim of revitalising 2,350 acres (950 hectares) of wetland. This pulse reached the sea on May 16, 2014, marking the first time in 16 years that any water from the Colorado flowed into the ocean, and was hailed as "an experiment of historic political and ecological significance '' and a landmark in U.S. -- Mexican cooperation in conservation. The pulse will be followed by the steady release of 52,000 acre feet (64,000,000 m) over the following three years, just a small fraction of its average flow before damming.
When the Colorado River Compact was drafted in the 1920s, it was based on barely 30 years of streamflow records that suggested an average annual flow of 17.5 million acre feet (21.6 km³) past Lee 's Ferry. Modern studies of tree rings revealed that those three decades were probably the wettest in the past 500 to 1,200 years and that the natural long - term annual flow past Lee 's Ferry is probably closer to 13.5 million acre feet (16.7 km³), as compared to the natural flow at the mouth of 16.3 million acre feet (20.1 km³). This has resulted in more water being allocated to river users than actually flows through the Colorado. Droughts have exacerbated the issue of water over-allocation, including the Texas drought of the 1950s, which saw several consecutive years of notably low water and has often been used in planning for "a worst - case scenario ''.
The most severe drought on record began in the early 21st century, in which the river basin produced normal or above - average runoff in only four years between 2000 and 2012. Major reservoirs in the basin dropped to historic lows, with Lake Powell falling to just one - third of capacity in early 2005, the lowest level on record since 1969, when the reservoir was still in the process of filling. The watershed is experiencing a warming trend, which is accompanied by earlier snowmelt and a general reduction in precipitation. A 2004 study showed that a 1 -- 6 percent decrease of precipitation would lead to runoff declining by as much as 18 percent by 2050. Average reservoir storage declined by at least 32 percent, further crippling the region 's water supply and hydropower generation. A study by the Scripps Institution of Oceanography in 2008 predicted that both Lake Mead and Lake Powell stand an even chance of dropping to useless levels or "dead pool '' by 2021 if current drying trends and water usage rates continue.
In late 2010, Lake Mead dropped to just 8 feet (2.4 m) above the first "drought trigger '' elevation, a level at which Arizona and Nevada would have to begin rationing water as delineated by the Colorado River Compact. Despite above - average runoff in 2011 that raised the immense reservoir more than 30 feet (9.1 m), record drought conditions returned in 2012 and 2013. Reservoir levels were low enough at the beginning of water year 2014 that the Bureau of Reclamation cut releases from Lake Powell by 750,000 acre feet (930,000,000 m) -- the first such reduction since the 1960s, when Lake Powell was being filled for the first time. This resulted in Lake Mead dropping to its lowest recorded level since 1937, when it was first being filled. Rapid development and economic growth further complicate the issue of a secure water supply, particularly in the case of California 's senior water rights over those of Nevada and Arizona: in case of a reduction in water supply, Nevada and Arizona would have to endure severe cuts before any reduction in the California allocation, which is also larger than the other two combined. Although stringent water conservation measures have been implemented, the threat of severe shortfalls in the Colorado River basin continues to increase each year.
The Colorado River and its tributaries often nourish extensive corridors of riparian growth as they traverse the arid desert regions of the watershed. Although riparian zones represent a relatively small proportion of the basin and have been affected by engineering projects and river diversion in many places, they have the greatest biodiversity of any habitat in the basin. The most prominent riparian zones along the river occur along the lower Colorado below Davis Dam, especially in the Colorado River Delta, where riparian areas support 358 species of birds despite the reduction in freshwater flow and invasive plants such as tamarisk (salt cedar). Reduction of the delta 's size has also threatened animals such as jaguars and the vaquita porpoise, which is endemic to the gulf. Human development of the Colorado River has also helped to create new riparian zones by smoothing the river 's seasonal flow, notably through the Grand Canyon.
More than 1,600 species of plants grow in the Colorado River watershed, ranging from the creosote bush, saguaro cactus, and Joshua trees of the Sonoran and Mojave Deserts to the forests of the Rocky Mountains and other uplands, composed mainly of ponderosa pine, subalpine fir, Douglas - fir and Engelmann spruce. Before logging in the 19th century, forests were abundant in high elevations as far south as the Mexico -- U.S. border, and runoff from these areas nourished abundant grassland communities in river valleys. Some arid regions of the watershed, such as the upper Green River valley in Wyoming, Canyonlands National Park in Utah and the San Pedro River valley in Arizona and Sonora, supported extensive reaches of grassland roamed by large mammals such as buffalo and antelope as late as the 1860s. Near Tucson, Arizona, "where now there is only powder - dry desert, the grass once reached as high as the head of a man on horse back ''.
Rivers and streams in the Colorado basin were once home to 49 species of native fish, of which 42 were endemic. Engineering projects and river regulation have led to the extinction of four species and severe declines in the populations of 40 species. Bonytail chub, razorback sucker, Colorado pikeminnow, and humpback chub are among those considered the most at risk; all are unique to the Colorado River system and well adapted to the river 's natural silty conditions and flow variations. Clear, cold water released by dams has significantly changed characteristics of habitat for these and other Colorado River basin fishes. A further 40 species that occur in the river today, notably the brown trout, were introduced during the 19th and 20th centuries, mainly for sport fishing.
Famed for its dramatic rapids and canyons, the Colorado is one of the most desirable whitewater rivers in the United States, and its Grand Canyon section -- run by more than 22,000 people annually -- has been called the "granddaddy of rafting trips ''. Grand Canyon trips typically begin at Lee 's Ferry and take out at Diamond Creek or Lake Mead; they range from one to eighteen days for commercial trips and from two to twenty - five days for private trips. Private (noncommercial) trips are extremely difficult to arrange because the National Park Service limits river traffic for environmental purposes; people who desire such a trip often have to wait more than 10 years for the opportunity.
Several other sections of the river and its tributaries are popular whitewater runs, and many of these are also served by commercial outfitters. The Colorado 's Cataract Canyon and many reaches in the Colorado headwaters are even more heavily used than the Grand Canyon, and about 60,000 boaters run a single 4.5 - mile (7.2 km) section above Radium, Colorado, each year. The upper Colorado also includes many of the river 's most challenging rapids, including those in Gore Canyon, which is considered so dangerous that "boating is not recommended ''. Another section of the river above Moab, known as the Colorado "Daily '' or "Fisher Towers Section '', is the most visited whitewater run in Utah, with more than 77,000 visitors in 2011 alone. The rapids of the Green River 's Gray and Desolation Canyons and the less difficult "Goosenecks '' section of the lower San Juan River are also frequently traversed by boaters.
Eleven U.S. national parks -- Arches, Black Canyon of the Gunnison, Bryce Canyon, Canyonlands, Capitol Reef, Grand Canyon, Mesa Verde, Petrified Forest, Rocky Mountain, Saguaro, and Zion -- are in the watershed, in addition to many national forests, state parks, and recreation areas. Hiking, backpacking, camping, skiing, and fishing are among the multiple recreation opportunities offered by these areas. Fisheries have declined in many streams in the watershed, especially in the Rocky Mountains, because of polluted runoff from mining and agricultural activities. The Colorado 's major reservoirs are also heavily traveled summer destinations. Houseboating and water - skiing are popular activities on Lakes Mead, Powell, Havasu, and Mojave, as well as Flaming Gorge Reservoir in Utah and Wyoming, and Navajo Reservoir in New Mexico and Colorado. Lake Powell and surrounding Glen Canyon National Recreation Area received more than two million visitors per year in 2007, while nearly 7.9 million people visited Lake Mead and the Lake Mead National Recreation Area in 2008. Colorado River recreation employs some 250,000 people and contributes $26 billion each year to the Southwest economy.
|
why do we use having clause in sql | HAVING (SQL) - wikipedia
A HAVING clause in SQL specifies that an SQL SELECT statement should only return rows where aggregate values meet the specified conditions. It was added to the SQL language because the WHERE keyword could not be used with aggregate functions.
To return a list of department IDs whose total sales exceeded $1000 on the date of January 1, 2000, along with the sum of their sales on that date:
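The query itself does not appear in this excerpt. The following is a minimal sketch of what it might look like, assuming a hypothetical Sales table with DeptID, SaleDate, and SaleAmount columns (names chosen for illustration, not taken from the original article):

SELECT DeptID, SUM(SaleAmount) AS TotalSales
FROM Sales
WHERE SaleDate = '2000-01-01'      -- WHERE filters individual rows before grouping
GROUP BY DeptID
HAVING SUM(SaleAmount) > 1000;     -- HAVING filters whole groups on the aggregate value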
Referring to the sample tables in the Join example, the following query will return the list of departments which have more than 1 employee:
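The referenced query is likewise missing from this excerpt. A hedged reconstruction, assuming hypothetical employee and department tables joined on a shared DepartmentID column, might read:

SELECT department.DepartmentName, COUNT(*) AS EmployeeCount
FROM employee
JOIN department ON employee.DepartmentID = department.DepartmentID
GROUP BY department.DepartmentName
HAVING COUNT(*) > 1;               -- keep only departments with more than one employee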
HAVING is convenient, but not necessary. Code equivalent to the example above, but without using HAVING, might look like:
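The HAVING-free equivalent is also absent here. One common rewrite, using the same assumed employee and department tables as above, moves the grouped query into a derived table so an ordinary WHERE clause can filter on the aggregate result:

SELECT DepartmentName, EmployeeCount
FROM (
    SELECT department.DepartmentName, COUNT(*) AS EmployeeCount
    FROM employee
    JOIN department ON employee.DepartmentID = department.DepartmentID
    GROUP BY department.DepartmentName
) AS dept_counts
WHERE EmployeeCount > 1;           -- WHERE works here because the aggregate is now a plain column

Either form returns the same departments; HAVING simply avoids the extra level of nesting.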
|
who were the allies in the battle of the bulge | Battle of the Bulge - Wikipedia
The battle ended in an Allied victory; the principal formations engaged were the Allied 12th Army Group and the German Army Group B.
The Battle of the Bulge (16 December 1944 -- 25 January 1945) was the last major German offensive campaign on the Western Front during World War II. It was launched through the densely forested Ardennes region of Wallonia in eastern Belgium, northeast France, and Luxembourg, towards the end of World War II. The surprise attack caught the Allied forces completely off guard. American forces bore the brunt of the attack and incurred their highest casualties of any operation during the war. The battle also severely depleted Germany 's armored forces, and they were largely unable to replace them. German personnel and, later, Luftwaffe aircraft (in the concluding stages of the engagement) also sustained heavy losses.
The Germans officially referred to the offensive as Unternehmen Wacht am Rhein ("Operation Watch on the Rhine ''), while the Allies designated it the Ardennes Counteroffensive. The phrase "Battle of the Bulge '' was coined by contemporary press to describe the bulge in German front lines on wartime news maps, and it became the most widely used name for the battle. The German offensive was intended to stop Allied use of the Belgian port of Antwerp and to split the Allied lines, allowing the Germans to encircle and destroy four Allied armies and force the Western Allies to negotiate a peace treaty in the Axis powers ' favor. Once that was accomplished, the German dictator Adolf Hitler believed he could fully concentrate on the Soviets on the Eastern Front. The offensive was planned by the German forces with utmost secrecy, with minimal radio traffic and movements of troops and equipment under cover of darkness. Intercepted German communications indicating a substantial German offensive preparation were not acted upon by the Allies.
The Germans achieved total surprise on the morning of 16 December 1944, due to a combination of Allied overconfidence, preoccupation with Allied offensive plans, and poor aerial reconnaissance. The Germans attacked a weakly defended section of the Allied line, taking advantage of heavily overcast weather conditions that grounded the Allies ' overwhelmingly superior air forces. Fierce resistance on the northern shoulder of the offensive, around Elsenborn Ridge, and in the south, around Bastogne, blocked German access to key roads to the northwest and west that they counted on for success. Columns of armor and infantry that were supposed to advance along parallel routes found themselves on the same roads. This, and terrain that favored the defenders, threw the German advance behind schedule and allowed the Allies to reinforce the thinly placed troops. Improved weather conditions permitted air attacks on German forces and supply lines, which sealed the failure of the offensive. In the wake of the defeat, many experienced German units were left severely depleted of men and equipment, as survivors retreated to the defenses of the Siegfried Line.
The Germans ' initial attack involved 410,000 men; just over 1,400 tanks, tank destroyers, and assault guns; 2,600 artillery pieces; 1,600 anti-tank guns; and over 1,000 combat aircraft, as well as large numbers of other AFVs. These were reinforced a couple of weeks later, bringing the offensive 's total strength to around 450,000 troops, and 1,500 tanks and assault guns. Between 63,222 and 125,000 of their men were killed, missing, or wounded in action. For the Americans, out of 610,000 troops involved in the battle, 89,000 were casualties. While some sources report that up to 19,000 were killed, Eisenhower 's personnel chief put the number at about 8,600. British historian Antony Beevor reports the number killed as 8,407. It was the largest and bloodiest battle fought by the United States in World War II.
After the breakout from Normandy at the end of July 1944 and the Allied landings in southern France on 15 August 1944, the Allies advanced toward Germany more quickly than anticipated. The Allies were faced with several military logistics issues, described in the paragraphs below.
General Dwight D. Eisenhower (the Supreme Allied Commander on the Western Front) and his staff chose to hold the Ardennes region which was occupied by the U.S. First Army. The Allies chose to defend the Ardennes with as few troops as possible due to the favorable terrain (a densely wooded highland with deep river valleys and a rather thin road network) and limited Allied operational objectives in the area. They also had intelligence that the Wehrmacht was using the area across the German border as a rest - and - refit area for its troops.
The speed of the Allied advance coupled with an initial lack of deep - water ports presented the Allies with enormous supply problems. Over-the - beach supply operations using the Normandy landing areas and direct landing LSTs on the beaches were unable to meet operational needs. The only deep - water port the Allies had captured was Cherbourg on the northern shore of the Cotentin peninsula and west of the original invasion beaches, but the Germans had thoroughly wrecked and mined the harbor before it could be taken. It took many months to rebuild its cargo - handling capability. The Allies captured the port of Antwerp intact in the first days of September, but it was not operational until 28 November. The estuary of the Schelde river (also called Scheldt) that controlled access to the port had to be cleared of both German troops and naval mines. The limitations led to differences between General Eisenhower and Field Marshal Bernard Montgomery, commander of the Anglo - Canadian 21st Army Group, over whether Montgomery or Lieutenant General Omar Bradley, commanding the U.S. 12th Army Group, in the south would get priority access to supplies.
German forces remained in control of several major ports on the English Channel coast until May 1945. The Allies ' efforts to destroy the French railway system prior to D - Day, successful in hampering German response to the invasion, proved equally restrictive to the Allies. It took time to repair the rail network 's tracks and bridges. A trucking system nicknamed the Red Ball Express brought supplies to front - line troops, but used up five times as much fuel to reach the front line near the Belgian border as was delivered. By early October, the Allies had suspended major offensives to improve their supply lines and availability.
Montgomery and Bradley both pressed for priority delivery of supplies to their respective armies so they could continue their individual lines of advance and maintain pressure on the Germans while Eisenhower preferred a broad - front strategy. He gave some priority to Montgomery 's northern forces. This had the short - term goal of opening the urgently needed port of Antwerp and the long - term goal of capturing the Ruhr area, the biggest industrial area of Germany. With the Allies stalled, German Generalfeldmarschall (Field Marshal) Gerd von Rundstedt was able to reorganize the disrupted German armies into a coherent defence.
Field Marshal Montgomery 's Operation Market Garden achieved only some of its objectives, while its territorial gains left the Allied supply situation stretched further than before. In October, the First Canadian Army fought the Battle of the Scheldt, opening the port of Antwerp to shipping. As a result, by the end of October the supply situation had eased somewhat.
Despite a lull along the front after the Scheldt battles, the German situation remained dire. While operations continued in the autumn, notably the Lorraine Campaign, the Battle of Aachen and fighting in the Hürtgen Forest, the strategic situation in the west had changed little. The Allies were slowly pushing towards Germany, but no decisive breakthrough was achieved. The Western Allies already had 96 divisions at or near the front, with an estimated ten more divisions en route from the United Kingdom. Additional Allied airborne units remained in England. The Germans could field a total of 55 understrength divisions.
Adolf Hitler first officially outlined his surprise counter-offensive to his astonished generals on September 16, 1944. The assault 's ambitious goal was to pierce the thinly held lines of the U.S. First Army between Monschau and Wasserbillig with Army Group B (Model) by the end of the first day, get the armor through the Ardennes by the end of the second day, reach the Meuse between Liège and Dinant by the third day, and seize Antwerp and the western bank of the Scheldt estuary by the fourth day.
Hitler initially promised his generals a total of 18 infantry and 12 armored or mechanized divisions "for planning purposes. '' The plan was to pull 13 infantry divisions, two parachute divisions and six panzer - type divisions from the Oberkommando der Wehrmacht combined German military strategic reserve. On the Eastern Front, the Soviets ' Operation Bagration during the summer had destroyed much of Germany 's Army Group Center (Heeresgruppe Mitte). The extremely swift operation ended only when the advancing Soviet Red Army forces outran their supplies. By November, it was clear that Soviet forces were preparing for a winter offensive.
Meanwhile, the Allied air offensive of early 1944 had effectively grounded the Luftwaffe, leaving the German Army with little battlefield intelligence and no way to interdict Allied supplies. The converse was equally damaging; daytime movement of German forces was rapidly noticed, and interdiction of supplies combined with the bombing of the Romanian oil fields starved Germany of oil and gasoline. This fuel shortage intensified after the Soviets overran those fields in the course of their August 1944 Jassy - Kishinev Offensive.
One of the few advantages held by the German forces in November 1944 was that they were no longer defending all of Western Europe. Their front lines in the west had been considerably shortened by the Allied offensive and were much closer to the German heartland. This drastically reduced their supply problems despite Allied control of the air. Additionally, their extensive telephone and telegraph network meant that radios were no longer necessary for communications, which lessened the effectiveness of Allied Ultra intercepts. Nevertheless, some 40 -- 50 messages per day were decrypted by Ultra. They recorded the quadrupling of German fighter forces and a term used in an intercepted Luftwaffe message -- Jägeraufmarsch (literally "Hunter Deployment '') -- implied preparation for an offensive operation. Ultra also picked up communiqués regarding extensive rail and road movements in the region, as well as orders that movements should be made on time.
Hitler felt that his mobile reserves allowed him to mount one major offensive. Although he realized nothing significant could be accomplished in the Eastern Front, he still believed an offensive against the Western Allies, whom he considered militarily inferior to the Red Army, would have some chances of success. Hitler believed he could split the Allied forces and compel the Americans and British to settle for a separate peace, independent of the Soviet Union. Success in the west would give the Germans time to design and produce more advanced weapons (such as jet aircraft, new U-boat designs and super-heavy tanks) and permit the concentration of forces in the east. After the war ended, this assessment was generally viewed as unrealistic, given Allied air superiority throughout Europe and their ability to continually disrupt German offensive operations.
Given the reduced manpower of their land forces at the time, the Germans believed the best way to seize the initiative would be to attack in the West against the smaller Allied forces rather than against the vast Soviet armies. Even the encirclement and destruction of multiple Soviet armies, as in 1941, would still have left the Soviets with a numerical superiority.
Hitler 's plan called for a classic Blitzkrieg attack through the weakly defended Ardennes, mirroring the successful German offensive there during the Battle of France in 1940 -- aimed at splitting the armies along the U.S. -- British lines and capturing Antwerp. The plan banked on unfavorable weather, including heavy fog and low - lying clouds, which would minimize the Allied air advantage. Hitler originally set the offensive for late November, before the anticipated start of the Russian winter offensive. The disputes between Montgomery and Bradley were well known, and Hitler hoped he could exploit this disunity. If the attack were to succeed in capturing Antwerp, four complete armies would be trapped without supplies behind German lines.
Several senior German military officers, including Generalfeldmarschall Walter Model and Gerd von Rundstedt, expressed concern as to whether the goals of the offensive could be realized. Model and von Rundstedt both believed aiming for Antwerp was too ambitious, given Germany 's scarce resources in late 1944. At the same time, they felt that maintaining a purely defensive posture (as had been the case since Normandy) would only delay defeat, not avert it. They thus developed alternative, less ambitious plans that did not aim to cross the Meuse River (in German and Dutch: Maas); Model 's being Unternehmen Herbstnebel (Operation Autumn Mist) and von Rundstedt 's Fall Martin ("Plan Martin ''). The two field marshals combined their plans to present a joint "small solution '' to Hitler. When they offered their alternative plans, Hitler would not listen. Rundstedt later testified that while he recognized the merit of Hitler 's operational plan, he saw from the very first that "all, absolutely all conditions for the possible success of such an offensive were lacking. ''
Model, commander of German Army Group B (Heeresgruppe B), and von Rundstedt, overall commander of the German Army Command in the West (OB West), were put in charge of carrying out the operation.
In the west, supply problems began to significantly impede Allied operations, even though the opening of the port of Antwerp in late November improved the situation somewhat. The positions of the Allied armies stretched from southern France all the way north to the Netherlands. German planning for the counteroffensive rested on the premise that a successful strike against thinly manned stretches of the line would halt Allied advances on the entire Western Front.
The Wehrmacht's code name for the offensive was Unternehmen Wacht am Rhein ("Operation Watch on the Rhine"), after the German patriotic hymn Die Wacht am Rhein, a name that deceptively implied the Germans would be adopting a defensive posture along the Western Front. The Germans also referred to it as the "Ardennenoffensive" (Ardennes Offensive) and the Rundstedt-Offensive, both names that remain in general use in modern Germany. The French (and Belgian) name for the operation is Bataille des Ardennes (Battle of the Ardennes). The battle was militarily defined by the Allies as the Ardennes Counteroffensive, which included the German drive and the American effort to contain and later defeat it. The phrase "Battle of the Bulge" was coined by the contemporary press to describe the way the Allied front line bulged inward on wartime news maps.
While the Ardennes Counteroffensive is the correct term in Allied military language, the official Ardennes-Alsace campaign reached beyond the Ardennes battle region, and the most popular description in English-speaking countries remains simply the Battle of the Bulge.
The OKW decided by mid-September, at Hitler 's insistence, that the offensive would be mounted in the Ardennes, as was done in 1940. In 1940 German forces had passed through the Ardennes in three days before engaging the enemy, but the 1944 plan called for battle in the forest itself. The main forces were to advance westward to the Meuse River, then turn northwest for Antwerp and Brussels. The close terrain of the Ardennes would make rapid movement difficult, though open ground beyond the Meuse offered the prospect of a successful dash to the coast.
Four armies were selected for the operation. For the counteroffensive on the northern shoulder of the Western Front, Adolf Hitler personally selected the best troops available and officers he trusted. The lead role in the attack was given to the 6th Panzer Army, commanded by SS-Oberstgruppenführer Sepp Dietrich. It included the most experienced formation of the Waffen-SS, the 1st SS Panzer Division Leibstandarte Adolf Hitler, as well as the 12th SS Panzer Division Hitlerjugend. These units were given priority for supply and equipment and assigned the shortest route to the primary objective of the offensive, Antwerp, starting from the northernmost point on the intended battlefront, nearest the important road network hub of Monschau.
The Fifth Panzer Army under General Hasso von Manteuffel was assigned to the middle sector with the objective of capturing Brussels.
The Seventh Army, under General Erich Brandenberger, was assigned to the southernmost sector, near the Luxembourgish city of Echternach, with the task of protecting the flank. This Army was made up of only four infantry divisions, with no large - scale armored formations to use as a spearhead unit. As a result, they made little progress throughout the battle.
Also participating in a secondary role was the Fifteenth Army, under General Gustav - Adolf von Zangen. Recently brought back up to strength and re-equipped after heavy fighting during Operation Market Garden, it was located on the far north of the Ardennes battlefield and tasked with holding U.S. forces in place, with the possibility of launching its own attack given favorable conditions.
For the offensive to be successful, four criteria were deemed critical: the attack had to be a complete surprise; the weather conditions had to be poor to neutralize Allied air superiority and the damage it could inflict on the German offensive and its supply lines; the progress had to be rapid -- the Meuse River, halfway to Antwerp, had to be reached by day 4; and Allied fuel supplies would have to be captured intact along the way because the combined Wehrmacht forces were short on fuel. The General Staff estimated they only had enough fuel to cover one - third to one - half of the ground to Antwerp in heavy combat conditions.
The plan originally called for just under 45 divisions, including a dozen panzer and Panzergrenadier divisions forming the armored spearhead and various infantry units to form a defensive line as the battle unfolded. By this time the German Army suffered from an acute manpower shortage, and the force had been reduced to around 30 divisions. Although it retained most of its armor, there were not enough infantry units because of the defensive needs in the East. These 30 newly rebuilt divisions used some of the last reserves of the German Army. Among them were Volksgrenadier ("People 's Grenadier '') units formed from a mix of battle - hardened veterans and recruits formerly regarded as too young, too old or too frail to fight. Training time, equipment and supplies were inadequate during the preparations. German fuel supplies were precarious -- those materials and supplies that could not be directly transported by rail had to be horse - drawn to conserve fuel, and the mechanized and panzer divisions would depend heavily on captured fuel. As a result, the start of the offensive was delayed from 27 November to 16 December.
Before the offensive the Allies were virtually blind to German troop movement. During the liberation of France, the extensive network of the French resistance had provided valuable intelligence about German dispositions. Once they reached the German border, this source dried up. In France, orders had been relayed within the German army using radio messages enciphered by the Enigma machine, and these could be picked up and decrypted by Allied code - breakers headquartered at Bletchley Park, to give the intelligence known as Ultra. In Germany such orders were typically transmitted using telephone and teleprinter, and a special radio silence order was imposed on all matters concerning the upcoming offensive. The major crackdown in the Wehrmacht after the 20 July plot to assassinate Hitler resulted in much tighter security and fewer leaks. The foggy autumn weather also prevented Allied reconnaissance aircraft from correctly assessing the ground situation. German units assembling in the area were even issued charcoal instead of wood for cooking fires to cut down on smoke and reduce chances of Allied observers deducing a troop buildup was underway.
For these reasons Allied High Command considered the Ardennes a quiet sector, relying on assessments from their intelligence services that the Germans were unable to launch any major offensive operations this late in the war. What little intelligence they had led the Allies to believe precisely what the Germans wanted them to believe: that preparations were being carried out only for defensive, not offensive, operations. The Allies relied too much on Ultra and not enough on human reconnaissance. In fact, because of the Germans' efforts, the Allies were led to believe that a new defensive army was being formed around Düsseldorf in the northern Rhineland, possibly to defend against a British attack. This impression was created by increasing the number of flak guns (Flugabwehrkanonen, anti-aircraft cannons) in the area and by artificially multiplying radio transmissions there. The Allies at this point thought the information was of no importance. All of this meant that the attack, when it came, completely surprised the Allied forces. Remarkably, the U.S. Third Army intelligence chief, Colonel Oscar Koch, the U.S. First Army intelligence chief and the SHAEF intelligence officer Brigadier General Kenneth Strong all correctly predicted the German offensive capability and intention to strike the U.S. VIII Corps area. These predictions were largely dismissed by the U.S. 12th Army Group. Strong had informed Bedell Smith in December of his suspicions, and Bedell Smith sent Strong to warn Lieutenant General Omar Bradley, the commander of the 12th Army Group, of the danger. Bradley's response was succinct: "Let them come." Historian Patrick K. O'Donnell writes that on 8 December 1944 U.S. Rangers took Hill 400 at great cost during the Battle of the Hürtgen Forest. The next day, GIs who relieved the Rangers reported considerable movement of German troops inside the Ardennes in the enemy's rear, but no one in the chain of command connected the dots.
Because the Ardennes was considered a quiet sector, considerations of economy of force led it to be used as a training ground for new units and a rest area for units that had seen hard fighting. The U.S. units deployed in the Ardennes were thus a mixture of inexperienced troops (such as the raw U.S. 99th Infantry Division and the 106th "Golden Lions" Infantry Division) and battle-hardened troops sent to that sector to recuperate (the 28th Infantry Division).
Two major special operations were planned for the offensive. By October it was decided that Otto Skorzeny, the German SS - commando who had rescued the former Italian dictator Benito Mussolini, was to lead a task force of English - speaking German soldiers in "Operation Greif ''. These soldiers were to be dressed in American and British uniforms and wear dog tags taken from corpses and prisoners of war. Their job was to go behind American lines and change signposts, misdirect traffic, generally cause disruption and seize bridges across the Meuse River. By late November another ambitious special operation was added: Col. Friedrich August von der Heydte was to lead a Fallschirmjäger - Kampfgruppe (paratrooper combat group) in Operation Stösser, a night - time paratroop drop behind the Allied lines aimed at capturing a vital road junction near Malmedy.
German intelligence had set 20 December as the expected date for the start of the upcoming Soviet offensive, aimed at crushing what was left of German resistance on the Eastern Front and thereby opening the way to Berlin. It was hoped that Soviet leader Stalin would delay the start of the operation once the German assault in the Ardennes had begun and wait for the outcome before continuing.
After the 20 July attempt on Hitler's life, and with the Red Army advancing so close that it would seize the site on 27 January 1945, Hitler and his staff had been forced to abandon the Wolfsschanze headquarters in East Prussia, from which they had coordinated much of the fighting on the Eastern Front. After a brief visit to Berlin, Hitler travelled on his Führersonderzug ("Special Train of the Führer") to Giessen on 11 December, taking up residence in the Adlerhorst (eyrie) command complex, co-located with OB West's base at Kransberg Castle. Believing in omens and in the successes of his early-war campaigns that had been planned at Kransberg, Hitler had chosen the site from which he had overseen the successful 1940 campaign against France and the Low Countries.
Von Rundstedt set up his operational headquarters near Limburg, close enough for the generals and Panzer Corps commanders who were to lead the attack to visit Adlerhorst on 11 December, travelling there in an SS - operated bus convoy. With the castle acting as overflow accommodation, the main party was settled into the Adlerhorst 's Haus 2 command bunker, including Gen. Alfred Jodl, Gen. Wilhelm Keitel, Gen. Blumentritt, von Manteuffel and SS Gen. Joseph ("Sepp '') Dietrich.
In a personal conversation on 13 December between Walter Model and Friedrich von der Heydte, who was put in charge of Operation Stösser, von der Heydte gave Operation Stösser less than a 10 % chance of succeeding. Model told him it was necessary to make the attempt: "It must be done because this offensive is the last chance to conclude the war favorably. ''
On 16 December 1944 at 05: 30, the Germans began the assault with a massive, 90 - minute artillery barrage using 1,600 artillery pieces across a 130 - kilometre (80 mi) front on the Allied troops facing the 6th Panzer Army. The Americans ' initial impression was that this was the anticipated, localized counterattack resulting from the Allies ' recent attack in the Wahlerscheid sector to the north, where the 2nd Division had knocked a sizable dent in the Siegfried Line. Heavy snowstorms engulfed parts of the Ardennes area. While having the effect of keeping the Allied aircraft grounded, the weather also proved troublesome for the Germans because poor road conditions hampered their advance. Poor traffic control led to massive traffic jams and fuel shortages in forward units.
In the center, von Manteuffel 's Fifth Panzer Army attacked towards Bastogne and St. Vith, both road junctions of great strategic importance. In the south, Brandenberger 's Seventh Army pushed towards Luxembourg in its efforts to secure the flank from Allied attacks. Only one month before, 250 members of the Waffen - SS had unsuccessfully tried to recapture the town of Vianden with its castle from the Luxembourgish resistance during the Battle of Vianden.
While the Siege of Bastogne is often credited as the central point where the German offensive was stopped, the battle for Elsenborn Ridge was actually the decisive component of the Battle of the Bulge, stopping the advance of the best equipped armored units of the German army and forcing them to reroute their troops to unfavorable alternative routes that considerably slowed their advance.
The attack on Monschau, Höfen, Krinkelt-Rocherath, and then Elsenborn Ridge was led by the units personally selected by Adolf Hitler. The 6th Panzer Army was given priority for supply and equipment and was assigned the shortest route to the ultimate objective of the offensive, Antwerp. It included the elite of the Waffen-SS: four Panzer divisions and five infantry divisions in three corps. SS-Obersturmbannführer Joachim Peiper led Kampfgruppe Peiper, consisting of 4,800 men and 600 vehicles, which was charged with leading the main effort. Its newest and most powerful tank, the Tiger II heavy tank, consumed 3.8 litres (1 gal) of fuel to go 800 m (0.5 mi), and the Germans had less than half the fuel they needed to reach Antwerp.
The attacks by the Sixth Panzer Army's infantry units in the north fared badly because of unexpectedly fierce resistance by the U.S. 2nd and 99th Infantry Divisions. Kampfgruppe Peiper, at the head of Sepp Dietrich's Sixth Panzer Army, had been designated to take the Losheim-Losheimergraben road, a key route through the Losheim Gap, but it was closed by two collapsed overpasses that German engineers failed to repair during the first day. Peiper's forces were rerouted through Lanzerath.
To preserve the quantity of armor available, the infantry of the 9th Fallschirmjaeger Regiment, 3rd Fallschirmjaeger Division, had been ordered to clear the village first. A single 18 - man Intelligence and Reconnaissance Platoon from the 99th Infantry Division along with four Forward Air Controllers held up the battalion of about 500 German paratroopers until sunset, about 16: 00, causing 92 casualties among the Germans.
This created a bottleneck in the German advance. Kampfgruppe Peiper did not begin its advance until nearly 16:00, more than 16 hours behind schedule, and did not reach Bucholz Station until the early morning of 17 December. The Germans' intention was to control the twin villages of Rocherath-Krinkelt, which would clear a path to the high ground of Elsenborn Ridge. Occupation of this dominating terrain would allow control of the roads to the south and west and ensure supply to Kampfgruppe Peiper's armored task force.
At 12: 30 on 17 December, Kampfgruppe Peiper was near the hamlet of Baugnez, on the height halfway between the town of Malmedy and Ligneuville, when they encountered elements of the 285th Field Artillery Observation Battalion, U.S. 7th Armored Division. After a brief battle the lightly armed Americans surrendered. They were disarmed and, with some other Americans captured earlier (approximately 150 men), sent to stand in a field near the crossroads under light guard. About fifteen minutes after Peiper 's advance guard passed through, the main body under the command of SS - Sturmbannführer Werner Pötschke arrived. Allegedly, the SS troopers suddenly opened fire on the prisoners. As soon as the firing began, the prisoners panicked. Most were shot where they stood, though some managed to flee. Accounts of the killing vary, but at least 84 of the POWs were murdered. A few survived, and news of the killings of prisoners of war spread through Allied lines. Following the end of the war, soldiers and officers of Kampfgruppe Peiper, including Joachim Peiper and SS general Sepp Dietrich, were tried for the incident at the Malmedy massacre trial.
Driving to the south - east of Elsenborn, Kampfgruppe Peiper entered Honsfeld, where they encountered one of the 99th Division 's rest centers, clogged with confused American troops. They quickly captured portions of the 3rd Battalion of the 394th Infantry Regiment. They destroyed a number of American armored units and vehicles, and took several dozen prisoners who were subsequently murdered. Peiper also captured 50,000 US gallons (190,000 l; 42,000 imp gal) of fuel for his vehicles.
Peiper advanced north-west towards Büllingen, keeping to the plan to move west, unaware that, had he turned north, he would have had an opportunity to flank and trap the entire 2nd and 99th Divisions. Instead, intent on driving west, Peiper turned south to detour around Hünningen, choosing a route designated Rollbahn D, as he had been given latitude to choose the best route west.
To the north, the 277th Volksgrenadier Division attempted to break through the defending line of the U.S. 99th and the 2nd Infantry Divisions. The 12th SS Panzer Division, reinforced by additional infantry (Panzergrenadier and Volksgrenadier) divisions, took the key road junction at Losheimergraben just north of Lanzerath and attacked the twin villages of Rocherath and Krinkelt.
Another, smaller massacre was committed in Wereth, Belgium, approximately 6.5 miles (10.5 km) northeast of Saint - Vith on 17 December 1944. Eleven black American soldiers were tortured after surrendering and then shot by men of the 1st SS Panzer Division belonging to Schnellgruppe Knittel. The perpetrators were never punished for this crime and recent research indicates that men from Third Company of the Reconnaissance Battalion were responsible.
By the evening the spearhead had pushed north to engage the U.S. 99th Infantry Division, and Kampfgruppe Peiper arrived in front of Stavelot. Peiper's forces were already behind schedule because of the stiff American resistance and because, when the Americans fell back, their engineers blew up bridges and emptied fuel dumps. Peiper's unit was delayed and his vehicles denied critically needed fuel. It took them 36 hours to advance from the Eifel region to Stavelot; the same advance had required nine hours in 1940.
Kampfgruppe Peiper attacked Stavelot on 18 December but was unable to capture the town before the Americans evacuated a large fuel depot. Three tanks attempted to take the bridge, but the lead vehicle was disabled by a mine. Following this, 60 grenadiers advanced forward but were stopped by concentrated American defensive fire. After a fierce tank battle the next day, the Germans finally entered the town when U.S. engineers failed to blow the bridge.
Capitalizing on his success and not wanting to lose more time, Peiper rushed an advance group toward the vital bridge at Trois-Ponts, leaving the bulk of his strength in Stavelot. When they reached it at 11:30 on 18 December, retreating U.S. engineers blew it up. Peiper detoured north towards the villages of La Gleize and Cheneux. At Cheneux, the advance guard was attacked by American fighter-bombers, which destroyed two tanks and five halftracks and blocked the narrow road. The group began moving again at dusk, around 16:00, and was able to return to its original route at around 18:00. Of the two bridges remaining between Kampfgruppe Peiper and the Meuse, the bridge over the Lienne was blown by the Americans as the Germans approached. Peiper turned north and halted his forces in the woods between La Gleize and Stoumont. He learned that Stoumont was strongly held and that the Americans were bringing up strong reinforcements from Spa.
To Peiper 's south, the advance of Kampfgruppe Hansen had stalled. SS - Oberführer Mohnke ordered Schnellgruppe Knittel, which had been designated to follow Hansen, to instead move forward to support Peiper. SS - Sturmbannführer Knittel crossed the bridge at Stavelot around 19: 00 against American forces trying to retake the town. Knittel pressed forward towards La Gleize, and shortly afterward the Americans recaptured Stavelot. Peiper and Knittel both faced the prospect of being cut off.
At dawn on 19 December, Peiper surprised the American defenders of Stoumont by sending infantry from the 2nd SS Panzergrenadier Regiment in an attack and a company of Fallschirmjäger to infiltrate their lines. He followed this with a Panzer attack, gaining the eastern edge of the town. An American tank battalion arrived but, after a two - hour tank battle, Peiper finally captured Stoumont at 10: 30. Knittel joined up with Peiper and reported the Americans had recaptured Stavelot to their east. Peiper ordered Knittel to retake Stavelot. Assessing his own situation, he determined that his Kampfgruppe did not have sufficient fuel to cross the bridge west of Stoumont and continue his advance. He maintained his lines west of Stoumont for a while, until the evening of 19 December when he withdrew them to the village edge. On the same evening the U.S. 82nd Airborne Division under Maj. Gen. James Gavin arrived and deployed at La Gleize and along Peiper 's planned route of advance.
German efforts to reinforce Peiper were unsuccessful. Kampfgruppe Hansen was still struggling against bad road conditions and stiff American resistance on the southern route. Schnellgruppe Knittel was forced to disengage from the heights around Stavelot. Kampfgruppe Sandig, which had been ordered to take Stavelot, launched another attack without success. Sixth Panzer Army commander Sepp Dietrich ordered Hermann Prieß, commanding officer of the I SS Panzer Corps, to increase its efforts to back Peiper 's battle group, but Prieß was unable to break through.
Small units of the U.S. 2nd Battalion, 119th Infantry Regiment, 30th Infantry Division, attacked the dispersed units of Kampfgruppe Peiper on the morning of 21 December. They failed and were forced to withdraw, and a number were captured, including battalion commander Maj. Hal McCown. Peiper learned that his reinforcements had been directed to gather in La Gleize to his east, and he withdrew, leaving wounded Americans and Germans in Froidcourt Castle. As he withdrew from Cheneux, American paratroopers from the 82nd Airborne Division engaged the Germans in fierce house-to-house fighting. The Americans shelled Kampfgruppe Peiper on 22 December, and although the Germans had run out of food and had virtually no fuel, they continued to fight. A Luftwaffe resupply mission went badly when SS-Brigadeführer Wilhelm Mohnke insisted that the grid coordinates supplied by Peiper were wrong; the supplies were parachuted into American hands in Stoumont.
In La Gleize, Peiper set up defenses waiting for German relief. When the relief force was unable to penetrate the Allied lines, he decided to break through the Allied lines and return to the German lines on 23 December. The men of the Kampfgruppe were forced to abandon their vehicles and heavy equipment, although most of the 800 remaining troops were able to escape.
The 99th Infantry Division as a whole, outnumbered five to one, inflicted casualties in the ratio of 18 to one. The division lost about 20 % of its effective strength, including 465 killed and 2,524 evacuated due to wounds, injuries, fatigue, or trench foot. German losses were much higher. In the northern sector opposite the 99th, this included more than 4,000 deaths and the destruction of 60 tanks and big guns. Historian John S.D. Eisenhower wrote, "... the action of the 2nd and 99th Divisions on the northern shoulder could be considered the most decisive of the Ardennes campaign. ''
The stiff American defense prevented the Germans from reaching the vast array of supplies near the Belgian cities of Liège and Spa and the road network west of the Elsenborn Ridge leading to the Meuse River. After more than 10 days of intense battle, they pushed the Americans out of the villages, but were unable to dislodge them from the ridge, where elements of the V Corps of the First U.S. Army prevented the German forces from reaching the road network to their west.
Operation Stösser was a paratroop drop into the American rear in the High Fens (French: Hautes Fagnes; German: Hohes Venn; Dutch: Hoge Venen) area. The objective was the "Baraque Michel '' crossroads. It was led by Oberst Friedrich August Freiherr von der Heydte, considered by Germans to be a hero of the Battle of Crete.
It was the German paratroopers ' only night time drop during World War II. Von der Heydte was given only eight days to prepare prior to the assault. He was not allowed to use his own regiment because their movement might alert the Allies to the impending counterattack. Instead, he was provided with a Kampfgruppe of 800 men. The II Parachute Corps was tasked with contributing 100 men from each of its regiments. In loyalty to their commander, 150 men from von der Heydte 's own unit, the 6th Parachute Regiment, went against orders and joined him. They had little time to establish any unit cohesion or train together.
The parachute drop was a complete failure. Von der Heydte ended up with a total of around 300 troops. Too small and too weak to counter the Allies, they abandoned plans to take the crossroads and instead converted the mission to reconnaissance. With only enough ammunition for a single fight, they withdrew towards Germany and attacked the rear of the American lines. Only about 100 of his weary men finally reached the German rear.
Following the Malmedy massacre, on New Year 's Day 1945, after having previously received orders to take no prisoners, American soldiers allegedly shot approximately sixty German prisoners of war near the Belgian village of Chenogne (8 km from Bastogne).
The Germans fared better in the center (the 32 km (20 mi) Schnee Eifel sector) as the Fifth Panzer Army attacked positions held by the U.S. 28th and 106th Infantry Divisions. The Germans lacked the overwhelming strength that had been deployed in the north, but still possessed a marked numerical and material superiority over the very thinly spread 28th and 106th divisions. They succeeded in surrounding two largely intact regiments (422nd and 423rd) of the 106th Division in a pincer movement and forced their surrender, a tribute to the way Manteuffel 's new tactics had been applied. One of those wounded and captured was Lieutenant Donald Prell of the Anti-Tank Company of the 422nd Infantry, 106th Division. The official U.S. Army history states: "At least seven thousand (men) were lost here and the figure probably is closer to eight or nine thousand. The amount lost in arms and equipment, of course, was very substantial. The Schnee Eifel battle, therefore, represents the most serious reverse suffered by American arms during the operations of 1944 -- 45 in the European theater. ''
In the center, the town of St. Vith, a vital road junction, presented the main challenge for both von Manteuffel 's and Dietrich 's forces. The defenders, led by the 7th Armored Division, included the remaining regiment of the 106th U.S. Infantry Division, with elements of the 9th Armored Division and 28th U.S. Infantry Division. These units, which operated under the command of Generals Robert W. Hasbrouck (7th Armored) and Alan W. Jones (106th Infantry), successfully resisted the German attacks, significantly slowing the German advance. At Montgomery 's orders, St. Vith was evacuated on 21 December; U.S. troops fell back to entrenched positions in the area, presenting an imposing obstacle to a successful German advance. By 23 December, as the Germans shattered their flanks, the defenders ' position became untenable and U.S. troops were ordered to retreat west of the Salm River. Since the German plan called for the capture of St. Vith by 18: 00 on 17 December, the prolonged action in and around it dealt a major setback to their timetable.
To protect the river crossings on the Meuse at Givet, Dinant and Namur, Montgomery ordered those few units available to hold the bridges on 19 December. This led to a hastily assembled force including rear - echelon troops, military police and Army Air Force personnel. The British 29th Armoured Brigade of British 11th Armoured Division, which had turned in its tanks for re-equipping, was told to take back their tanks and head to the area. British XXX Corps was significantly reinforced for this effort. Units of the corps which fought in the Ardennes were the 51st (Highland) and 53rd (Welsh) Infantry Divisions, the British 6th Airborne Division, the 29th and 33rd Armoured Brigades, and the 34th Tank Brigade.
Unlike the German forces on the northern and southern shoulders, who were experiencing great difficulties, the German advance in the center gained considerable ground. The Fifth Panzer Army was spearheaded by the 2nd Panzer Division, while the Panzer Lehr Division (Armored Training Division) came up from the south, leaving Bastogne to other units. The Ourthe River was crossed at Ourtheville on 21 December. Lack of fuel held up the advance for one day, but on 23 December the offensive was resumed towards the two small towns of Hargimont and Marche-en-Famenne. Hargimont was captured the same day, but Marche-en-Famenne was strongly defended by the American 84th Division. Gen. von Lüttwitz, commander of the XXXXVII Panzer-Korps, ordered the division to turn westwards towards Dinant and the Meuse, leaving only a blocking force at Marche-en-Famenne. Although advancing only in a narrow corridor, the 2nd Panzer Division was still making rapid headway, leading to jubilation in Berlin. Headquarters now freed up the 9th Panzer Division for the Fifth Panzer Army, and it was deployed at Marche.
On 22 / 23 December German forces reached the woods of Foy - Nôtre - Dame, only a few kilometers ahead of Dinant. The narrow corridor caused considerable difficulties, as constant flanking attacks threatened the division. On 24 December, German forces made their furthest penetration west. The Panzer Lehr Division took the town of Celles, while a bit farther north, parts of 2nd Panzer Division were in sight of the Meuse near Dinant at Foy - Nôtre - Dame. A hastily assembled Allied blocking force on the east side of the river prevented the German probing forces from approaching the Dinant bridge. By late Christmas Eve the advance in this sector was stopped, as Allied forces threatened the narrow corridor held by the 2nd Panzer Division.
For Operation Greif ("Griffin ''), Otto Skorzeny successfully infiltrated a small part of his battalion of English - speaking Germans disguised in American uniforms behind the Allied lines. Although they failed to take the vital bridges over the Meuse, their presence caused confusion out of all proportion to their military activities, and rumors spread quickly. Even General George Patton was alarmed and, on 17 December, described the situation to General Dwight Eisenhower as "Krauts... speaking perfect English... raising hell, cutting wires, turning road signs around, spooking whole divisions, and shoving a bulge into our defenses. ''
Checkpoints were set up all over the Allied rear, greatly slowing the movement of soldiers and equipment. American MPs at these checkpoints grilled troops on things that every American was expected to know, like the identity of Mickey Mouse 's girlfriend, baseball scores, or the capital of a particular U.S. state -- though many could not remember or did not know. General Omar Bradley was briefly detained when he correctly identified Springfield as the capital of Illinois because the American MP who questioned him mistakenly believed the capital was Chicago.
The tightened security nonetheless made things very hard for the German infiltrators, and a number of them were captured. Even during interrogation, they continued their goal of spreading disinformation; when asked about their mission, some of them claimed they had been told to go to Paris to either kill or capture General Dwight Eisenhower. Security around the general was greatly increased, and Eisenhower was confined to his headquarters. Because Skorzeny 's men were captured in American uniforms, they were executed as spies. This was the standard practice of every army at the time, as many belligerents considered it necessary to protect their territory against the grave dangers of enemy spying. Skorzeny said that he was told by German legal experts that as long he did not order his men to fight in combat while wearing American uniforms, such a tactic was a legitimate ruse of war. Skorzeny and his men were fully aware of their likely fate, and most wore their German uniforms underneath their American ones in case of capture. Skorzeny was tried by an American military tribunal in 1947 at the Dachau Trials for allegedly violating the laws of war stemming from his leadership of Operation Greif, but was acquitted. He later moved to Spain and South America.
Operation Währung was carried out by a small number of German agents who infiltrated Allied lines in American uniforms. These agents were tasked with using an existing Nazi intelligence network to bribe rail and port workers to disrupt Allied supply operations. The operation was a failure.
Further south on Manteuffel 's front, the main thrust was delivered by all attacking divisions crossing the River Our, then increasing the pressure on the key road centers of St. Vith and Bastogne. The more experienced 28th Infantry Division put up a much more dogged defense than the inexperienced soldiers of the 106th Infantry Division. The 112th Infantry Regiment (the most northerly of the 28th Division 's regiments), holding a continuous front east of the Our, kept German troops from seizing and using the Our River bridges around Ouren for two days, before withdrawing progressively westwards.
The 109th and 110th Regiments of the 28th Division fared worse, as they were spread so thinly that their positions were easily bypassed. Both offered stubborn resistance in the face of superior forces and threw the German schedule off by several days. The 110th 's situation was by far the worst, as it was responsible for an 18 - kilometre (11 mi) front while its 2nd Battalion was withheld as the divisional reserve. Panzer columns took the outlying villages and widely separated strong points in bitter fighting, and advanced to points near Bastogne within four days. The struggle for the villages and American strong points, plus transport confusion on the German side, slowed the attack sufficiently to allow the 101st Airborne Division (reinforced by elements from the 9th and 10th Armored Divisions) to reach Bastogne by truck on the morning of 19 December. The fierce defense of Bastogne, in which American paratroopers particularly distinguished themselves, made it impossible for the Germans to take the town with its important road junctions. The panzer columns swung past on either side, cutting off Bastogne on 20 December but failing to secure the vital crossroads.
In the extreme south, Brandenberger 's three infantry divisions were checked by divisions of the U.S. VIII Corps after an advance of 6.4 km (4 mi); that front was then firmly held. Only the 5th Parachute Division of Brandenberger 's command was able to thrust forward 19 km (12 mi) on the inner flank to partially fulfill its assigned role. Eisenhower and his principal commanders realized by 17 December that the fighting in the Ardennes was a major offensive and not a local counterattack, and they ordered vast reinforcements to the area. Within a week 250,000 troops had been sent. General Gavin of the 82nd Airborne Division arrived on the scene first and ordered the 101st to hold Bastogne while the 82nd would take the more difficult task of facing the SS Panzer Divisions; it was also thrown into the battle north of the bulge, near Elsenborn Ridge.
By the time the senior Allied commanders met in a bunker in Verdun on 19 December, the town of Bastogne and its network of eleven hard-topped roads through the heavily forested, mountainous terrain, deep river valleys and boggy mud of the Ardennes was supposed to have been in German hands for several days. By the time of that meeting, the two westbound German columns that were to have bypassed the town to the south and north, the 2nd Panzer Division and the Panzer-Lehr-Division of XLVII Panzer Corps, along with the corps' infantry (the 26th Volksgrenadier Division) advancing due west, had been engaged, slowed and frustrated in outlying battles at defensive positions up to sixteen kilometres (10 mi) from the town proper, and were gradually being forced back onto the hasty defenses built within the municipality. Moreover, the sole corridor still open (to the southeast) was threatened and had been sporadically closed as the front shifted, and it was expected to be closed completely sooner rather than later, given the strong likelihood that the town would soon be surrounded.
Gen. Eisenhower, realizing that the Allies could destroy German forces much more easily when they were out in the open and on the offensive than if they were on the defensive, told his generals, "The present situation is to be regarded as one of opportunity for us and not of disaster. There will be only cheerful faces at this table. '' Patton, realizing what Eisenhower implied, responded, "Hell, let 's have the guts to let the bastards go all the way to Paris. Then, we 'll really cut ' em off and chew ' em up. '' Eisenhower, after saying he was not that optimistic, asked Patton how long it would take to turn his Third Army, located in northeastern France, north to counterattack. To the disbelief of the other generals present, Patton replied that he could attack with two divisions within 48 hours. Unknown to the other officers present, before he left Patton had ordered his staff to prepare three contingency plans for a northward turn in at least corps strength. By the time Eisenhower asked him how long it would take, the movement was already underway. On 20 December, Eisenhower removed the First and Ninth U.S. Armies from Gen. Bradley 's 12th Army Group and placed them under Montgomery 's 21st Army Group.
By 21 December the Germans had surrounded Bastogne, which was defended by the 101st Airborne Division, the all African American 969th Artillery Battalion, and Combat Command B of the 10th Armored Division. Conditions inside the perimeter were tough -- most of the medical supplies and medical personnel had been captured. Food was scarce, and by 22 December artillery ammunition was restricted to 10 rounds per gun per day. The weather cleared the next day and supplies (primarily ammunition) were dropped over four of the next five days.
Despite determined German attacks the perimeter held. The German commander, Generalleutnant (Lt. Gen.) Heinrich Freiherr von Lüttwitz, requested Bastogne 's surrender. When Brig. Gen. Anthony McAuliffe, acting commander of the 101st, was told of the Nazi demand to surrender, in frustration he responded, "Nuts! '' After turning to other pressing issues, his staff reminded him that they should reply to the German demand. One officer, Lt. Col. Harry Kinnard, noted that McAuliffe 's initial reply would be "tough to beat. '' Thus McAuliffe wrote on the paper, which was typed up and delivered to the Germans, the line he made famous and a morale booster to his troops: "NUTS! '' That reply had to be explained, both to the Germans and to non-American Allies.
Both the 2nd Panzer and Panzer-Lehr divisions moved forward from Bastogne after 21 December, leaving only the Panzer-Lehr division's 901st Regiment to assist the 26th Volksgrenadier Division in attempting to capture the crossroads. The 26th VG received one Panzergrenadier regiment from the 15th Panzergrenadier Division on Christmas Eve for its main assault the next day. Because it lacked sufficient troops and those of the 26th VG Division were near exhaustion, the XLVII Panzerkorps concentrated its assault on several individual locations on the west side of the perimeter in sequence rather than launching one simultaneous attack on all sides. The assault, despite initial success by its tanks in penetrating the American line, was defeated and all the tanks destroyed. On the following day, 26 December, the spearhead of Gen. Patton's 4th Armored Division, supplemented by the 26th (Yankee) Infantry Division, broke through and opened a corridor to Bastogne.
On 23 December the weather conditions started improving, allowing the Allied air forces to attack. They launched devastating bombing raids on the German supply points in their rear, and P - 47 Thunderbolts started attacking the German troops on the roads. Allied air forces also helped the defenders of Bastogne, dropping much - needed supplies -- medicine, food, blankets, and ammunition. A team of volunteer surgeons flew in by military glider and began operating in a tool room.
By 24 December the German advance was effectively stalled short of the Meuse. Units of the British XXX Corps were holding the bridges at Dinant, Givet, and Namur and U.S. units were about to take over. The Germans had outrun their supply lines, and shortages of fuel and ammunition were becoming critical. Up to this point the German losses had been light, notably in armor, with the exception of Peiper 's losses. On the evening of 24 December, General Hasso von Manteuffel recommended to Hitler 's Military Adjutant a halt to all offensive operations and a withdrawal back to the Westwall (literally Western Rampart). Hitler rejected this.
Disagreement and confusion at the Allied command prevented a strong response, throwing away the opportunity for a decisive action. In the center, on Christmas Eve, the 2nd Armored Division attempted to attack and cut off the spearheads of the 2nd Panzer Division at the Meuse, while units from the 4th Cavalry Group kept the 9th Panzer Division at Marche busy. As a result, parts of the 2nd Panzer Division were cut off. The Panzer-Lehr division tried to relieve them, but was only partially successful, as the perimeter held. For the next two days the perimeter was strengthened. On 26 and 27 December the trapped units of the 2nd Panzer Division made two break-out attempts, again only with partial success, as major quantities of equipment fell into Allied hands. Further Allied pressure out of Marche finally led the German command to the conclusion that no further offensive action towards the Meuse was possible.
In the south, Patton 's Third Army was battling to relieve Bastogne. At 16: 50 on 26 December, the lead element, Company D, 37th Tank Battalion of the 4th Armored Division, reached Bastogne, ending the siege.
On 1 January, in an attempt to keep the offensive going, the Germans launched two new operations. At 09:15, the Luftwaffe launched Unternehmen Bodenplatte (Operation Baseplate), a major campaign against Allied airfields in the Low Countries (the present-day Benelux states). Hundreds of planes attacked Allied airfields, destroying or severely damaging some 465 aircraft. The Luftwaffe lost 277 planes: 62 to Allied fighters and 172 mostly to the unexpectedly large number of Allied flak guns, which had been set up to protect against German V-1 flying bomb attacks and were firing proximity-fused shells, but also to friendly fire from German flak guns that had not been informed of the pending large-scale German air operation. The Germans suffered heavy losses at an airfield named Y-29, losing 40 of their own planes while damaging only four American planes. While the Allies recovered from their losses within days, the operation left the Luftwaffe ineffective for the remainder of the war.
On the same day, German Army Group G (Heeresgruppe G) and Army Group Upper Rhine (Heeresgruppe Oberrhein) launched a major offensive against the thinly - stretched, 110 kilometres (70 mi) line of the Seventh U.S. Army. This offensive, known as Unternehmen Nordwind (Operation North Wind), was the last major German offensive of the war on the Western Front. The weakened Seventh Army had, at Eisenhower 's orders, sent troops, equipment, and supplies north to reinforce the American armies in the Ardennes, and the offensive left it in dire straits.
By 15 January Seventh Army 's VI Corps was fighting on three sides in Alsace. With casualties mounting, and running short on replacements, tanks, ammunition, and supplies, Seventh Army was forced to withdraw to defensive positions on the south bank of the Moder River on 21 January. The German offensive drew to a close on 25 January. In the bitter, desperate fighting of Operation Nordwind, VI Corps, which had borne the brunt of the fighting, suffered a total of 14,716 casualties. The total for Seventh Army for January was 11,609. Total casualties included at least 9,000 wounded. First, Third, and Seventh Armies suffered a total of 17,000 hospitalized from the cold.
While the German offensive had ground to a halt, the Germans still controlled a dangerous salient in the Allied line. Patton's Third Army in the south, centered around Bastogne, would attack north, Montgomery's forces in the north would strike south, and the two forces planned to meet at Houffalize.
The temperature during January 1945 was extremely low. Weapons had to be maintained and truck engines run every half - hour to prevent their oil from congealing. The offensive went forward regardless.
Eisenhower wanted Montgomery to go on the counteroffensive on 1 January, with the aim of meeting up with Patton's advancing Third Army and cutting off most of the attacking Germans, trapping them in a pocket. Refusing to risk underprepared infantry in a snowstorm for a strategically unimportant area, Montgomery did not launch the attack until 3 January, by which time substantial numbers of German troops had already managed to fall back successfully, though at the cost of losing most of their heavy equipment.
At the start of the offensive, the First and Third U.S. Armies were separated by about 40 km (25 mi). American progress in the south was also restricted to about a kilometer a day. On 2 January, the Tiger IIs of German Heavy Tank Battalion 506 supported an attack by the 12th SS Hitlerjugend Division against U.S. positions near Wardin and knocked out 15 Sherman tanks. The majority of the German force executed a successful fighting withdrawal and escaped the battle area, although the fuel situation had become so dire that most of the German armor had to be abandoned. On 7 January 1945 Hitler agreed to withdraw all forces from the Ardennes, including the SS panzer divisions, thus ending all offensive operations. Considerable fighting went on for another three weeks; St. Vith was recaptured by the Americans on 23 January, and the last German units participating in the offensive did not return to their start line until 25 January.
Winston Churchill, addressing the House of Commons following the Battle of the Bulge, said, "This is undoubtedly the greatest American battle of the war and will, I believe, be regarded as an ever-famous American victory."
The plan and timing for the Ardennes attack sprang from the mind of Adolf Hitler. He believed a critical fault line existed between the British and American military commands, and that a heavy blow on the Western Front would shatter this alliance. Planning for the "Watch on the Rhine" offensive emphasized secrecy and the commitment of overwhelming force. Because of the use of landline communications within Germany, motorized runners carrying orders, and draconian threats from Hitler, the timing and mass of the attack were not detected by ULTRA codebreakers, and the offensive achieved complete surprise.
When selecting the leadership for the attack, Hitler felt that the implementation of this decisive blow should be entrusted to his own Nazi Party army, the Waffen-SS. Ever since German regular Army officers had attempted to assassinate him, he had increasingly trusted only the SS and its armed branch, the Waffen-SS. After the invasion of Normandy, the SS armored units had suffered significant leadership casualties, including SS-Gruppenführer (Major General) Kurt Meyer, commander of the 12th SS Panzer (Armor) Division, captured by Belgian partisans on 6 September 1944. The tactical efficiency of these units was somewhat reduced. The strong right flank of the assault was therefore composed mostly of SS divisions under the command of "Sepp" (Joseph) Dietrich, a fanatical political disciple of Hitler and a loyal follower from the early days of the rise of National Socialism in Germany. The leadership composition of the Sixth Panzer Army thus had a distinctly political nature.
None of the German field commanders entrusted with planning and executing the offensive believed it was possible to capture Antwerp. Even Sepp Dietrich, commanding the strongest arm of the attack, felt that the Ardennes was a poor area for armored warfare and that the inexperienced and badly equipped Volksgrenadier units would clog the roads that the tanks would need for their rapid advance. In this Dietrich was proved correct: the horse-drawn artillery and rocket units were a significant obstacle to the tanks. Other than making futile objections to Hitler in private, he generally stayed out of the planning for the offensive. Model and Manteuffel, the technical experts from the Eastern Front, took the view that a limited offensive with the goal of surrounding and crushing the American 1st Army would be the best the offensive could hope for. These revisions shared the same fate as Dietrich's objections. In the end, the headlong drive on Elsenborn Ridge did not benefit from support from German units that had already bypassed the ridge. The decision to stop the attacks on the twin villages and shift the axis of the attacks southward to the hamlet of Domäne Bütgenbach was also made by Dietrich. This decision played into American hands, as Maj. Gen. Walter Robertson, commanding the U.S. 2nd Infantry Division, had already decided to abandon the villages. The staff planning and organization of the attack was well done; most of the units committed to the offensive reached their jump-off points undetected and were well organized and supplied for the attack.
One of the fault lines between the British and American high commands was General Dwight D. Eisenhower 's commitment to a broad front advance. This view was opposed by the British Chief of the Imperial General Staff, Field Marshal Alan Brooke (as well as Field Marshal Montgomery) who promoted a rapid advance on a narrow front, with the other allied armies in reserve.
British Field Marshal Bernard Montgomery's views on how to respond to the German attack differed from those of the U.S. command. His ensuing public pronouncements caused tension in the American high command. Major General Freddie de Guingand, Chief of Staff of Montgomery's 21st Army Group, rose to the occasion and personally smoothed over the disagreements on 30 December.
As the Ardennes crisis developed, at 10:30 a.m. on 20 December, Eisenhower telephoned Montgomery and ordered him to assume command of the American First (Hodges) and Ninth (Simpson) Armies, which until then had been under Bradley's overall command. This change in command was ordered because the northern armies had lost all communications not only with Bradley, who was based in Luxembourg City, and with the U.S. command structure, but also with adjacent units.
Describing the situation as he found it on 20 December, Montgomery wrote:
The First Army was fighting desperately. Having given orders to Dempsey and Crerar, who arrived for a conference at 11 am, I left at noon for the H.Q. of the First Army, where I had instructed Simpson to meet me. I found the northern flank of the bulge was very disorganized. Ninth Army had two corps and three divisions; First Army had three corps and fifteen divisions. Neither Army Commander had seen Bradley or any senior member of his staff since the battle began, and they had no directive on which to work. The first thing to do was to see the battle on the northern flank as one whole, to ensure the vital areas were held securely, and to create reserves for counter-attack. I embarked on these measures: I put British troops under command of the Ninth Army to fight alongside American soldiers, and made that Army take over some of the First Army Front. I positioned British troops as reserves behind the First and Ninth Armies until such time as American reserves could be created. Slowly but surely the situation was held, and then finally restored. Similar action was taken on the southern flank of the bulge by Bradley, with the Third Army.
Due to the news blackout imposed on the 16th, the change of leadership to Montgomery did not become known to the outside world until SHAEF eventually made a public announcement making clear that the change in command was "absolutely nothing to do with failure on the part of the three American generals". This resulted in headlines in British newspapers. The story was also covered in Stars and Stripes, and for the first time the British contribution to the fighting was mentioned.
Montgomery asked Churchill if he could hold a press conference to explain the situation. Though some of his staff were concerned about the image it would give, the conference had been cleared by Alan Brooke, the CIGS, who was possibly the only person to whom Monty would listen.
On the same day as Hitler 's withdrawal order of 7 January, Montgomery held his press conference at Zonhoven. Montgomery started with giving credit to the "courage and good fighting quality '' of the American troops, characterizing a typical American as a "very brave fighting man who has that tenacity in battle which makes a great soldier '', and went on to talk about the necessity of Allied teamwork, and praised Eisenhower, stating, "Teamwork wins battles and battle victories win wars. On our team, the captain is General Ike. ''
Then Montgomery described the course of the battle for half an hour. Coming to the end of his speech he said he had "employed the whole available power of the British Group of Armies; this power was brought into play very gradually... Finally it was put into battle with a bang... you thus have the picture of British troops fighting on both sides of the Americans who have suffered a hard blow." He stated that the German attacker was "headed off... seen off... and... written off". "The battle has been the most interesting, I think possibly one of the most interesting and tricky battles I have ever handled."
Despite his positive remarks about American soldiers, the overall impression given by Montgomery, at least in the ears of the American military leadership, was that he had taken the lion 's share of credit for the success of the campaign, and had been responsible for rescuing the besieged Americans.
His comments were interpreted as self - promoting, particularly his claiming that when the situation "began to deteriorate, '' Eisenhower had placed him in command in the north. Patton and Eisenhower both felt this was a misrepresentation of the relative share of the fighting played by the British and Americans in the Ardennes (for every British soldier there were thirty to forty Americans in the fight), and that it belittled the part played by Bradley, Patton and other American commanders. In the context of Patton 's and Montgomery 's well - known antipathy, Montgomery 's failure to mention the contribution of any American general beside Eisenhower was seen as insulting. Indeed, General Bradley and his American commanders were already starting their counterattack by the time Montgomery was given command of 1st and 9th U.S. Armies. Focusing exclusively on his own generalship, Montgomery continued to say he thought the counteroffensive had gone very well but did not explain the reason for his delayed attack on 3 January. He later attributed this to needing more time for preparation on the northern front. According to Winston Churchill, the attack from the south under Patton was steady but slow and involved heavy losses, and Montgomery was trying to avoid this situation.
Many American officers had already grown to dislike Montgomery, who was seen by them as an overly cautious commander, arrogant, and all too willing to say uncharitable things about the Americans. The British Prime Minister Winston Churchill found it necessary in a speech to Parliament to explicitly state that the Battle of the Bulge was purely an American victory.
Montgomery subsequently recognized his error and later wrote: "Not only was it probably a mistake to have held this conference at all in the sensitive state of feeling at the time, but what I said was skilfully distorted by the enemy. Chester Wilmot explained that his dispatch to the BBC about it was intercepted by the German wireless, re-written to give it an anti-American bias, and then broadcast by Arnhem Radio, which was then in Goebbels ' hands. Monitored at Bradley 's HQ, this broadcast was mistaken for a BBC transmission and it was this twisted text that started the uproar. ''
Montgomery later said, "Distorted or not, I think now that I should never have held that press conference. So great were the feelings against me on the part of the American generals that whatever I said was bound to be wrong. I should therefore have said nothing. '' Eisenhower commented in his own memoirs: "I doubt if Montgomery ever came to realize how resentful some American commanders were. They believed he had belittled them -- and they were not slow to voice reciprocal scorn and contempt. ''
Bradley and Patton both threatened to resign unless Montgomery 's command was changed. Eisenhower, encouraged by his British deputy Arthur Tedder, had decided to sack Montgomery. Intervention by Montgomery 's and Eisenhower 's Chiefs of Staff, Maj. Gen. Freddie de Guingand, and Lt. Gen. Walter Bedell Smith, moved Eisenhower to reconsider and allowed Montgomery to apologize.
The German commander of the 5th Panzer Army, Hasso von Manteuffel said of Montgomery 's leadership:
The operations of the American 1st Army had developed into a series of individual holding actions. Montgomery 's contribution to restoring the situation was that he turned a series of isolated actions into a coherent battle fought according to a clear and definite plan. It was his refusal to engage in premature and piecemeal counter-attacks which enabled the Americans to gather their reserves and frustrate the German attempts to extend their breakthrough.
Casualty estimates for the battle vary widely. According to the U.S. Department of Defense, American forces suffered 89,500 casualties including 19,000 killed, 47,500 wounded and 23,000 missing. An official report by the United States Department of the Army lists 105,102 casualties, including 19,246 killed, 62,489 wounded, and 26,612 captured or missing. A preliminary Army report restricted to the First and Third U.S. Armies listed 75,000 casualties (8,400 killed, 46,000 wounded and 21,000 missing). The Battle of the Bulge was the bloodiest battle for U.S. forces in World War II. British casualties totaled 1,400 with 200 deaths. The German Armed Forces High Command 's official figure for all German losses on the Western Front during the period 16 December 1944 -- 25 January 1945 was 81,834 German casualties, and other estimates range between 60,000 and 125,000. German historian Hermann Jung lists 67,675 casualties from 16 December 1944 to late January 1945 for the three German armies that participated in the offensive. The German casualty reports for the involved armies count 63,222 losses from 10 December 1944 to 31 January 1945. The United States Army Center of Military History 's official numbers are 75,000 American casualties and 100,000 German casualties.
The U.S. Army lost about 2,000 armored vehicles in the Ardennes in 30 days -- 900 Sherman tanks, 300 light tanks, 150 tank destroyers, 450 armored cars and 150 self - propelled guns. U.S. divisions suffered severe losses of armor. The 3rd Armored Division lost 163 tanks in 30 days, of which 125 were Shermans and 38 Stuarts. The 7th Armored Division lost 103 tanks, including 73 Shermans and 31 Stuarts, in 13 days from 17 -- 30 December 1944. The 11th Armored Division lost 86 tanks (54 Shermans and 32 Stuarts) in several weeks of high - intensity combat.
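As a rough arithmetic check (the sum below is simply computed from the figures quoted above, not an independent estimate), the listed categories add up to

$$900 + 300 + 150 + 450 + 150 = 1950,$$

which is consistent with the rounded total of about 2,000 armored vehicles.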
German armored losses to all causes were between 527 and 554, with 324 tanks lost in combat. Of the German write - offs, 16 -- 20 were Tigers, 191 -- 194 Panthers, 141 -- 158 Panzer IV, 179 -- 182 tank destroyers and assault guns. The Germans lost an additional 5,000 soft - skinned and armored vehicles. German AFV losses were far fewer than those of the U.S., and German armored units outperformed their U.S. opponents in combat.
Although the Germans managed to begin their offensive with complete surprise and enjoyed some initial successes, they were not able to seize the initiative on the Western Front. While the German command did not reach its goals, the Ardennes operation inflicted heavy losses and set back the Allied invasion of Germany by several weeks. The High Command of the Allied forces had planned to resume the offensive by early January 1945, after the wet season rains and severe frosts, but those plans had to be postponed until 29 January 1945 because of the unexpected changes at the front.
The Allies pressed their advantage following the battle. By the beginning of February 1945, the lines were roughly where they had been in December 1944. In early February, the Allies launched an attack all along the Western front: in the north under Montgomery toward Aachen; in the center, under Courtney Hodges; and in the south, under Patton. Montgomery 's behavior during the months of December and January, including the press conference on 7 January where he appeared to downplay the contribution of the American generals, further soured his relationship with his American counterparts through to the end of the war.
The German losses in the battle were especially critical: their last reserves were now gone, the Luftwaffe had been shattered, and remaining forces throughout the West were being pushed back to defend the Siegfried Line.
In response to the early success of the offensive, on 6 January Churchill contacted Stalin to request that the Soviets put pressure on the Germans on the Eastern Front. On 12 January, the Soviets began the massive Vistula -- Oder Offensive, originally planned for 20 January; it was brought forward because meteorological reports warned of a thaw later in the month and the tanks needed hard ground for the offensive. The advance of the Red Army was also assisted by the fact that two Panzer Armies (5th and 6th) had been redeployed for the Ardennes attack.
During World War II, most U.S. black soldiers still served only in maintenance or service positions, or in segregated units. Because of troop shortages during the Battle of the Bulge, Eisenhower decided to integrate the service for the first time. This was an important step toward a desegregated United States military. More than 2,000 black soldiers had volunteered to go to the front. A total of 708 black Americans were killed in combat during World War II.
The battle around Bastogne received a great deal of media attention because in early December 1944 it was a rest and recreation area for many war correspondents. The rapid advance by the German forces who surrounded the town, the spectacular resupply operations via parachute and glider, along with the fast action of General Patton 's Third U.S. Army, all were featured in newspaper articles and on radio and captured the public 's imagination; but there were no correspondents in the area of Saint - Vith, Elsenborn, or Monschau - Höfen. The static, stubborn resistance of troops in the north, who refused to yield their ground in the cold snow and freezing rain despite the heavy German attacks, did not excite the casual observer. The images of supply troops trying to bring ammunition and cold food, crawling through mud and snow, to front - line troops dug into frozen foxholes around Montjoie, Elsenborn and Butgenbach were not exciting news.
After the war ended, the U.S. Army issued battle credit in the form of the Ardennes - Alsace campaign citation to units and individuals that took part in operations in northwest Europe. The citation covered troops in the Ardennes sector, where the main battle took place, as well as units further south in the Alsace sector, including those in northern Alsace that filled the vacuum created by the U.S. Third Army racing north, fought against the concurrent Operation Nordwind diversion in central and southern Alsace (launched to weaken the Allied response in the Ardennes), and provided reinforcements to units fighting in the Ardennes.
The battle has been depicted in numerous works of art, entertainment, and media.
|
what is bar council of india in hindi | Bar Council of India - Wikipedia
The Bar Council of India is a statutory body established under section 4 of the Advocates Act, 1961 that regulates legal practice and legal education in India. Its members are elected from amongst the lawyers in India and as such it represents the Indian bar. It prescribes standards of professional conduct and etiquette and exercises disciplinary jurisdiction over the bar. It also sets standards for legal education and grants recognition to universities whose degree in law will serve as a qualification for students to enroll themselves as advocates upon graduation.
In March 1953, the ' All India Bar Committee ', headed by S.R. Das, submitted a report which proposed the creation of a bar council for each state and an all - India bar council as an apex body. It was suggested that the all - India bar council should regulate the legal profession and set the standard of legal education. The Law Commission of India was assigned the job of preparing a report on judicial administration reforms. In 1961, the Advocates Act was introduced to implement the recommendations made by the ' All India Bar Committee ' and the ' Law Commission '. M.C. Setalvad and C.K. Daphtary were the first chairman and vice chairman respectively. In 1963, C.K. Daphtary became the Chairman and S.K. Ghose became the Vice Chairman.
Section 7 of the Advocates Act, 1961 lays down the Bar Council 's regulatory and representative mandate. The functions of the Bar Council are to:
As per the Advocates Act, the Bar Council of India consists of members elected from each state bar council, and the Attorney General of India and the Solicitor General of India who are ex officio members. The members from the state bar councils are elected for a period of five years.
The council elects its own Chairman and Vice-Chairman for a period of two years from amongst its members. Assisted by the various committees of the Council, the chairman acts as the chief executive and director of the Council.
Manan Kumar Mishra is the present Chairman. He was preceded by Biri Singh Sinsinewar, who had in turn succeeded Mishra after an earlier term in the post.
Eligible persons having a recognised law degree are admitted as advocates on the rolls of the state bar councils. The Advocates Act, 1961 empowers state bar councils to frame their own rules regarding enrollment of advocates. The Council 's enrollment committee may scrutinise a candidate 's application. Those admitted as advocates by any state bar council are eligible to take the All India Bar Examination, which is conducted by the Bar Council of India. Passing the All India Bar Examination awards the state - enrolled advocate a ' Certificate of Enrolment ', which enables the advocate to practise law in any High Court and lower court within the territory of India. However, to practise law before the Supreme Court of India, advocates must first appear for and qualify in the Supreme Court Advocate on Record Examination conducted by the Supreme Court.
The Bar Council of India has various committees which make recommendations to the council. The members of these committees are elected from amongst the members of the Council.
Other than these, there are the Finance Committee, the Special or Oversee Committee, and the All India Bar Examination Committee.
The Bar Council of India has established a Directorate of Legal Education for the purpose of organising, running, conducting, holding, and administering its legal education programmes.
On April 10, 2010, the Bar Council of India resolved to conduct an All India Bar Examination that tests an advocate 's ability to practise law; an advocate is required to pass this examination in order to practise. This examination is held biannually and tests advocates on substantive and procedural law. The syllabus for this examination has to be published at least three months before the examination. An advocate may appear for the examination any number of times. Once the advocate passes the examination, he or she will be entitled to a Certificate of Practice, allowing him or her to practise law throughout India. The ninth All India Bar Examination (AIBE IX) was scheduled to be held on 13 December 2015. The Bar Examination is mandatory for all law students graduating from the academic year 2009 - 2010 onwards and enrolled as advocates under Section 24 of the Advocates Act, 1961.
|
the mismatched towers and spires of chartres cathedral help illustrate | Chartres cathedral - wikipedia
Chartres Cathedral, also known as the Cathedral of Our Lady of Chartres (French: Cathédrale Notre - Dame de Chartres), is a Roman Catholic church of the Latin Church located in Chartres, France, about 80 km (50 miles) southwest of Paris. The current cathedral, mostly constructed between 1194 and 1220, is the last of at least five which have occupied the site since the town became a bishopric in the 4th century. It is in the Gothic and Romanesque styles.
It is designated a World Heritage Site by UNESCO, which calls it "the high point of French Gothic art '' and a "masterpiece ''.
The cathedral has been well preserved. The majority of the original stained glass windows survived intact, while the architecture has seen only minor changes since the early 13th century. The building 's exterior is dominated by heavy flying buttresses which allowed the architects to increase the window size significantly, while the west end is dominated by two contrasting spires -- a 105 - metre (349 ft) plain pyramid completed around 1160 and a 113 - metre (377 ft) early 16th - century Flamboyant spire on top of an older tower. Equally notable are the three great façades, each adorned with hundreds of sculpted figures illustrating key theological themes and narratives.
Since at least the 12th century the cathedral has been an important destination for travelers. It remains so to the present, attracting large numbers of Christian pilgrims, many of whom come to venerate its famous relic, the Sancta Camisa, said to be the tunic worn by the Virgin Mary at Christ 's birth, as well as large numbers of secular tourists who come to admire the cathedral 's architecture and historical merit.
As with any medieval bishopric, Chartres Cathedral was the most important building in the town -- the centre of its economy, its most famous landmark and the focal point of many activities that in modern towns are provided for by specialised civic buildings. In the Middle Ages, the cathedral functioned as a kind of marketplace, with different commercial activities centred on the different portals, particularly during the regular fairs. Textiles were sold around the north transept, while meat, vegetable and fuel sellers congregated around the south porch. Money - changers (an essential service at a time when each town or region had its own currency) had their benches, or banques, near the west portals and also in the nave itself. Wine sellers plied their trade in the nave, although occasional 13th - century ordinances survive which record their being temporarily banished to the crypt to minimise disturbances. Workers of various professions gathered in particular locations around the cathedral awaiting offers of work.
Although the town of Chartres was under the judicial and tax authority of the Counts of Blois, the area immediately surrounding the cathedral, known as the cloître, was in effect a free - trade zone governed by the church authorities, who were entitled to the taxes from all commercial activity taking place there. As well as greatly increasing the cathedral 's income, throughout the 12th and 13th centuries this led to regular disputes, often violent, between the bishops, the chapter and the civic authorities -- particularly when serfs belonging to the counts transferred their trade (and taxes) to the cathedral. In 1258, after a series of bloody riots instigated by the count 's officials, the chapter finally gained permission from the King to seal off the area of the cloître and lock the gates each night.
Even before the Gothic cathedral was built, Chartres was a place of pilgrimage, albeit on a much smaller scale. During the Merovingian and early Carolingian eras, the main focus of devotion for pilgrims was a well (now located in the north side of Fulbert 's crypt), known as the Puits des Saints - Forts, or the ' Well of the Strong Saints ', into which it was believed the bodies of various local Early - Christian martyrs (including saints Piat, Cheron, Modesta and Potentianus) had been tossed.
Chartres became a site for the veneration of the Blessed Virgin Mary. In 876 the cathedral acquired the Sancta Camisa, believed to be the tunic worn by Mary at the time of Christ 's birth. According to legend, the relic was given to the cathedral by Charlemagne, who received it as a gift from Emperor Constantine VI during a crusade to Jerusalem. However, as Charlemagne 's crusade is fiction, the legend lacks historical merit and was probably invented in the 11th century to authenticate relics at the Abbey of St Denis. In fact, the Sancta Camisa was a gift to the cathedral from Charles the Bald, and there is no evidence for its being an important object of pilgrimage prior to the 12th century. In 1194, when the Cathedral was struck by lightning and the east spire was lost, the Sancta Camisa was thought lost, too. However, it was found three days later, protected by priests, who had fled behind iron trapdoors when the fire broke out.
Some research suggests that depictions in the cathedral, e.g. Mary 's infertile parents Joachim and Anne, hark back to a pre-Christian cult of a fertility goddess, at whose well on this site women would pray for children, and that some of the cathedral 's imagery refers to that past. Chartres historian and expert Malcolm Miller rejected the claims of pre-cathedral Celtic ceremonies and buildings on the site in a documentary. However, the widespread belief that the cathedral was also the site of a pre-Christian druidical sect who worshipped a "Virgin who will give birth '' is purely a late - medieval invention.
By the end of the 12th century the church had become one of the most important popular pilgrimage destinations in Europe. There were four great fairs which coincided with the main feast days of the Virgin Mary: the Presentation, the Annunciation, the Assumption and the Nativity. The fairs were held in the area administered by the cathedral and were attended by many of the pilgrims in town to see the cloak of the Virgin. Specific pilgrimages were also held in response to outbreaks of disease. When ergotism (more popularly known in the Middle Ages as "St. Anthony 's fire '') afflicted many victims, the crypt of the original church became a hospital to care for the sick.
Today Chartres continues to attract large numbers of pilgrims, many of whom come to walk slowly around the labyrinth, their heads bowed in prayer -- a devotional practice that the cathedral authorities accommodate by removing the chairs from the nave on Fridays from Lent to All Saints ' Day (except for Good Friday).
There have been at least five cathedrals on this site, each replacing an earlier building damaged by war or fire. Nothing survives of the earliest church, which was destroyed during an attack on the city by the Danes in 858. Of the Carolingian church that replaced it, all that remains is a semicircular chamber located directly below the centre of the present apse. This chamber, known as the Lubinus Crypt (named after the mid-6th - century Bishop of Chartres), is lower than the rest of the crypt and may have been the shrine of a local saint, prior to the church 's rededication to the Virgin. Another fire in 962 is mentioned in the annals, though nothing is known about the subsequent rebuilding. A more serious conflagration occurred in 1020, after which Bishop Fulbert (bishop from 1006 to 1028) began the construction of an entirely new building. Most of the present crypt, which is the largest in France, dates from that period. The rebuilding proceeded in phases over the next hundred years or so, culminating in 1145 in a display of public enthusiasm dubbed the "Cult of the Carts '' -- one of several such incidents recorded during the period. It was claimed that during this religious outburst, a crowd of more than a thousand penitents dragged carts filled with building supplies and provisions including stones, wood, grain, etc. to the site.
In 1134, another fire damaged the town, and perhaps part of the cathedral. The north tower was started immediately afterwards -- the south tower some time later. From the beginning, it was intended that these towers flank a central porch of some sort and a narthex. When the north tower rose to the level of the second storey, the south was begun -- the evidence lies in the profiles and in the masons ' marks on the two levels of the two towers. Between them on the first level, a chapel was constructed to Saint Michael. Traces of the vaults and the shafts which supported them are still visible in the western two bays. This chapel was probably vaulted, and those vaults saved the western glass. The stained glass in the three lancets over the portals dates from some time between 1145 and 1155, while the south spire, some 103 metres high, was completed around 1155 or somewhat later.
Work was begun on the Royal Portal with the south lintel around 1136, and all its sculpture was installed by 1141. Opinions differ, as the sizes and styles of the figures vary and some elements, such as the lintel over the right - hand portal, have clearly been cut down to fit the available spaces. The sculpture was originally designed for these portals, but the layouts were changed by successive masters; see the careful lithic analysis by John James. Either way, most of the carving follows the exceptionally high standard typical of this period and exercised a strong influence on the subsequent development of gothic portal design.
Some of the masters have been identified by John James, and drafts of these studies have been published on the web site of the International Centre of Medieval Art, New York.
On 10 June 1194, another fire caused extensive damage to Fulbert 's cathedral. The true extent of the damage is unknown, though the fact that the lead cames holding the west windows together survived the conflagration intact suggests contemporary accounts of the terrible devastation may have been exaggerated. Either way, the opportunity was taken to begin a complete rebuilding of the choir and nave in the latest style. The undamaged western towers and façade were incorporated into the new works, as was the earlier crypt, effectively limiting the designers of the new building to the same general plan as its predecessor. In fact the present building is only marginally longer than Fulbert 's cathedral.
One of the unusual features of Chartres cathedral is the speed with which it was built -- a factor which helped contribute to the consistency of its design. Even though there were innumerable changes to the details, the plan remains remarkably consistent. The major change occurred six years after work began when the seven deep chapels around the choir opening off a single ambulatory were turned into shallow recesses opening off a double - aisled ambulatory.
Australian architectural historian John James, who made a detailed study of the cathedral, has estimated that there were about 300 men working on the site at any one time, although it has to be acknowledged that current knowledge of working practices at this time is somewhat limited. Normally medieval churches were built from east to west so that the choir could be completed first and put into use (with a temporary wall sealing off the west end) while the crossing and nave were completed. Canon Delaporte argued that building work started at the crossing and proceeded outwards from there, but the evidence in the stonework itself is unequivocal, especially within the level of the triforium: the nave was at all times more advanced than ambulatory bays of the choir, and this has been confirmed by dendrochronology.
The history of the cathedral has been plagued by more theories than that of any other, a singular problem for those attempting to discover the truth. For example, Louis Grodecki argued that the lateral doors of the transept portals were cut through the walls at a later date, and van der Meulen argued that they had wanted to rebuild the western portals (then only 50 years old). None of these theories refers back to the actual stonework, and it is only when one has done so, as John James did exhaustively in 1969, that one realizes that the construction process was in fact simple and logical.
It is important to remember that the builders were not working on a clean site but would have had to clear back the rubble and surviving parts of the old church as they built the new. Nevertheless, work progressed rapidly. The south porch with most of its sculpture was installed by 1210, and by 1215 the north porch had been completed and the western rose installed. The nave high vaults were erected in the 1220s, the canons moved into their new stalls in 1221 under a temporary roof at the level of the clerestory, and the transept roses were erected over the subsequent two decades. The high vaults over the choir were not built until the last years of the 1250s, as was rediscovered in the first decade of the 21st century.
Each arm of the transept was originally meant to support two towers, two more were to flank the choir, and there was to have been a central lantern over the crossing -- nine towers in all. Plans for a crossing tower were abandoned in 1221 and the crossing was vaulted over. Work on the remaining six towers continued at a slower pace for some decades, until it was decided to leave them without spires (as at Laon Cathedral and elsewhere). The cathedral was consecrated on 24 October 1260 in the presence of King Louis IX of France, whose coat of arms was painted over the apsidal boss.
Compared with other medieval churches, relatively few changes have been made to the cathedral since its consecration. In 1323 a substantial two - story construction was added at the eastern end of the choir, with a chapel dedicated to Saint Piat on the upper floor, accessed by a staircase opening onto the ambulatory (the chapel of St Piat is normally closed to visitors, although it occasionally houses temporary exhibitions). The chamber below the chapel served the canons as their chapter house.
Shortly after 1417, a small chapel was placed between the buttresses of the south nave for the Count of Vendôme. At the same time the small organ that had been built in the nave aisle was moved up into the triforium where it remains, though some time in the sixteenth century it was replaced with a larger one on a raised platform at the western end of the building. To this end, some of the interior shafts in the western bay were removed and plans made to rebuild the organ there. In the event, this plan was abandoned, the glass in the western lancets was retained and the old organ was replaced with the present one.
In 1506, lightning destroyed the north spire, which was rebuilt in the ' Flamboyant ' style by local mason Jehan de Beauce (who also worked on the abbey church in Vendôme). It is 113 metres high and took seven years to construct. After its completion Jehan continued working on the cathedral, and began the monumental screen around the choir stalls, which was not completed until the beginning of the eighteenth century.
In 1757, a number of changes were made to the interior to increase the visibility of the Mass, in accordance with changing religious customs. The jubé (choir screen) that separated the liturgical choir from the nave was torn down and the present stalls built (some of the magnificent sculpture from this screen was later found buried underneath the paving and preserved, though it is not on public display). At the same time, some of the stained glass in the clerestory was removed and replaced with grisaille windows, greatly increasing the illumination of the High Altar.
In 1836, the old lead - covered roof, with its complex structure of timber supports (known as ' the forest ') was destroyed by fire. It was replaced with a copper - clad roof supported by a network of cast iron ribs, known as the Charpente de fer. At the time, the framework over the crossing had the largest span of any iron framed construction in Europe.
The cathedral was damaged in the French Revolution when a mob began to destroy the sculpture on the north porch. This was one of the few occasions on which the anti-religious fervour was stopped by the townsfolk. The Revolutionary Committee decided to destroy the cathedral with explosives, and asked a local architect to organise it. He saved the building by pointing out that the vast amount of rubble from the demolished building would so clog the streets that it would take years to clear away. However, when metal was needed for the army, the brass plaque in the centre of the labyrinth was removed and melted down; the only record of what was on the plaque is Félibien 's description.
The Cathedral of Chartres was therefore neither destroyed nor looted during the French Revolution, and the numerous restorations have not diminished its reputation as a triumph of Gothic art. The cathedral was fortunate in being spared the damage suffered by so many other churches during the Wars of Religion and the Revolution, though the lead roof was removed to make bullets and the Directory threatened to destroy the building because its upkeep, without a roof, had become too onerous.
All the glass from the cathedral was removed in 1939, just before the Germans invaded France; it was cleaned after the War and releaded before being reinstalled. While the city suffered heavy damage from bombing in the course of World War II, the cathedral was spared by an American Army officer who challenged the order to destroy it.
Colonel Welborn Barton Griffith, Jr. questioned the strategy of destroying the cathedral and volunteered to go behind enemy lines to find out whether the German Army was occupying the cathedral and using it as an observation post. With a single enlisted soldier to assist, Griffith proceeded to the cathedral and confirmed that the Germans were not using it. After he returned from his reconnaissance, he reported that the cathedral was clear of enemy troops. The order to destroy the cathedral was withdrawn, and the Allies later liberated the area. Griffith was killed in action later that day on 16 August 1944, in the town of Leves, near Chartres.
In 2009 the Monuments Historiques division of the French Ministry of Culture began an $18.5 million program of works at the cathedral, described as a restoration project. Part of the project involved painting the interior masonry creamy - white, with trompe l'oeil marbling and gilded detailing. The restoration architect in charge of this painting is Frédéric Didier. The goal of the project, which is due for completion in 2017, is to make the cathedral look as it would have done when finished in the 13th century.
The goal of the project and its results have been widely condemned. Architectural critic Alexander Gorlin described the goal as a "great lie '', writing that the "idea that the 13th century interior of Chartres can be recreated is so totally absurd as to be laughable '' and that it is "against every single cultural trend today that values the patina of age and the mark of time rather than the shiny bling of cheap jewelry and faux finishes ''. Alasdair Palmer called the project an "ill - conceived makeover ''. Architectural historian Martin Filler described the work as a "scandalous desecration of a cultural holy place '', an "unfolding cultural disaster '', and stated that it violates international conservation protocols, in particular the 1964 Charter of Venice, of which France is a signatory.
The restoration has, however, received almost universal backing from French experts and from the general public. Malcolm Miller, longtime Chartres tour guide and author of books about the cathedral, dismissed the objections: "They talk about the patina of the centuries. Nonsense. Rubbish. This is not the patina of the centuries. It is the rotting remains of a whitewash from the 18th century. The people who built this cathedral intended that its interior should be light. There was nothing natural about its darkness. It was nothing to do with ageing of the stone. It was caused, first of all, by centuries of candle smoke and then by a stupid decision to install oil - fired central heating in the 1950s. More recently, there was smoke damage from a couple of fires. ''
The cathedral is still the seat of the Bishop of Chartres of the Diocese of Chartres, though in the ecclesiastical province of Tours.
Every evening since the events of 11 September 2001, vespers are sung by the Chemin Neuf Community.
The plan is cruciform. A two bay narthex at the western end opens into a seven bay nave leading to the crossing, from which wide transepts extend three bays each to north and south. East of the crossing are four rectangular bays terminating in a semicircular apse. The nave and transepts are flanked by single aisles, broadening to a double - aisled ambulatory around the choir and apse. From the ambulatory radiate three deep semi-circular chapels (overlying the deep chapels of Fulbert 's 11th - century crypt) and three much shallower ones. Of the latter, one was effectively lost in the 1320s when the Chapel of St Piat was built.
The elevation of the nave is three - storied, with arcade, triforium and clerestory levels. By eschewing the gallery level that featured in many early Gothic cathedrals (normally between arcade and triforium), the designers were able to make the richly glazed arcade and clerestory levels larger and almost equal in height, with just a narrow dark triforium in between. Although not the first example of this three - part elevation, Chartres was perhaps the first of the great churches to make a success of it and to use the same design consistently throughout. The result was a far greater area of window openings. These windows were entirely glazed with densely colored glass, which resulted in a relatively dark interior -- but one which accentuated the richness of the glass and the colored light that filtered through them.
Increasing the size of the windows meant reducing the wall area considerably, something which was made possible only by the extensive use of flying buttresses on the outside. These buttresses supported the considerable lateral thrusts resulting from the 34m high stone vaults, higher and wider than any attempted before in France. These vaults were quadripartite, each bay split into four webs by two diagonally crossing ribs, unlike the sexpartite vaults adopted in many earlier Gothic cathedrals such as at Laon.
Another architectural breakthrough at Chartres was a resolution to the problem of how to arrange attached columns or shafts around a pier in a way that worked aesthetically -- but which also satisfied the desire for structural logic that characterised French high gothic. The nave at Chartres features alternating round and octagonal solid cored piers, each of which has four attached half - columns at the cardinal points, two of which (on the east - west axis) support the arches of the arcade, one acts as the springing for the aisle vault and one supports the cluster of shafts that rise through the triforium and clerestory to support the high - vault ribs. This pier design, known as pilier cantonné was to prove highly influential and subsequently featured in a number of other high gothic churches.
Although the sculpture on the portals at Chartres is generally of a high standard, the various carved elements inside, such as the capitals and string courses, are relatively poorly finished (when compared for example with those at Reims or Soissons) -- the reason is simply that the portals were carved from the finest Parisian limestone, or ' calcaire ', while the internal capitals were carved from the local "Berchères stone '', that is hard to work and can be brittle.
Perhaps the most distinctive feature of Chartres Cathedral is the extent to which architectural structure has been adapted to meet the needs of stained glass. The use of a three - part elevation with external buttressing allowed for far larger windows than earlier designs, particularly at the clerestory level. Most cathedrals of the period had a mixture of windows containing plain or grisaille glass and windows containing dense stained glass panels, with the result that the brightness of the former tended to diminish the impact and legibility of the latter. At Chartres, nearly all of the 176 windows were filled with equally dense stained glass, creating a relatively dark but richly coloured interior in which the light filtering through the myriad narrative and symbolic windows was the main source of illumination.
The majority of the windows now visible at Chartres were made and installed between 1205 and 1240, but four lancets preserve panels of Romanesque glass from the 12th century which survived the fire of 1195. Three of these are located beneath the rose in the west façade: the Passion window to the south, the Infancy of Christ in the centre and a Tree of Jesse to the north. All three of these windows were originally made around 1145 but were restored in the early 13th century and again in the 19th.
The other 12th - century window, perhaps the most famous at Chartres, is the so - called "Notre - Dame de la Belle - Verrière '', found in the first bay of the choir after the south transept. This window is actually a composite; the upper part, showing the Virgin and child surrounded by adoring angels, dates from around 1180 and was probably positioned at the centre of the apse in the earlier building. The Virgin is depicted wearing a blue robe and sitting in a frontal pose on a throne, with the Christ Child seated on her lap raising his hand in blessing. This composition, known as the Sedes sapientia (' Throne of Wisdom '), which also appears on the Portail royal, is based on the famous cult figure kept in the crypt. The lower part of the window, showing scenes from the Infancy of Christ, dates from the main glazing campaign around 1225.
Each bay of the aisles and the choir ambulatory contains one large lancet window, most of them roughly 8.1 m high by 2.2 m wide. The subjects depicted in these windows, made between 1205 and 1235, include stories from the Old and New Testament and the Lives of the Saints as well as typological cycles and symbolic images such as the signs of the zodiac and labours of the months, or the Good Samaritan parable. Most windows are made up of around 25 -- 30 individual panels showing distinct episodes within the narrative; only "Notre - Dame de la Belle - Verrière '' includes a larger image made up of multiple panels.
Several of the windows at Chartres include images of local tradesmen or labourers in the lowest two or three panels, often with details of their equipment and working methods. Traditionally it was claimed that these images represented the guilds of the donors who paid for the windows. In recent years however this view has largely been discounted, not least because each window would have cost around as much as a large mansion house to make -- while most of the labourers depicted would have been subsistence workers with little or no disposable income. Furthermore, although they became powerful and wealthy organisations in the later medieval period, none of these trade guilds had actually been founded when the glass was being made in the early 13th century. A more likely explanation is that the Cathedral clergy wanted to emphasise the universal reach of the Church, particularly at a time when their relationship with the local community was often a troubled one.
Because of their greater distance from the viewer, the windows in the clerestory generally adopt simpler, bolder designs. Most feature the standing figure of a saint or Apostle in the upper two - thirds, often with one or two simplified narrative scenes in the lower part, either to help identify the figure or else to remind the viewer of some key event in their life. Whereas the lower windows in the nave arcades and the ambulatory consist of one simple lancet per bay, the clerestory windows are each made up of a pair of lancets with a plate - traceried rose window above. The nave and transept clerestory windows mainly depict saints and Old Testament prophets. Those in the choir depict the kings of France and Castille and members of the local nobility in the straight bays, while the windows in the apse hemicycle show those Old Testament prophets who foresaw the virgin birth, flanking scenes of the Annunciation, Visitation and Nativity in the axial window.
The cathedral has three large rose windows.
The western rose, made c. 1215 and 12 m in diameter shows the Last Judgement -- a traditional theme for west façades. A central oculus showing Christ as the Judge is surrounded by an inner ring of 12 paired roundels containing angels and the Elders of the Apocalypse and an outer ring of 12 roundels showing the dead emerging from their tombs and the angels blowing trumpets to summon them to judgement.
The north transept rose (10.5 m diameter, made c. 1235), like much of the sculpture in the north porch beneath it, is dedicated to the Virgin. The central oculus shows the Virgin and Child and is surrounded by 12 small petal - shaped windows, 4 with doves (the ' Four Gifts of the Spirit '), the rest with adoring angels carrying candlesticks. Beyond this is a ring of 12 diamond - shaped openings containing the Old Testament Kings of Judah, another ring of smaller lozenges containing the arms of France and Castille, and finally a ring of semicircles containing Old Testament Prophets holding scrolls. The presence of the arms of the French king (yellow fleurs - de-lis on a blue background) and of his mother, Blanche of Castile (yellow castles on a red background) are taken as a sign of royal patronage for this window. Beneath the rose itself are five tall lancet windows (7.5 m high) showing, in the centre, the Virgin as an infant held by her mother, St Anne -- the same subject as the trumeau in the portal beneath it. Flanking this lancet are four more containing Old Testament figures. Each of these standing figures is shown symbolically triumphing over an enemy depicted in the base of the lancet beneath them -- David over Saul, Aaron over Pharaoh, St Anne over Synagoga, etc.
The south transept rose (10.5 m diameter, made c. 1225 -- 30) is dedicated to Christ, who is shown in the central oculus, right hand raised in benediction, surrounded by adoring angels. Two outer rings of twelve circles each contain the 24 Elders of the Apocalypse, crowned and carrying phials and musical instruments. The central lancet beneath the rose shows the Virgin carrying the infant Christ. Either side of this are four lancets showing the four evangelists sitting on the shoulders of four Prophets -- a rare literal illustration of the theological principle that the New Testament builds upon the Old Testament. This window was a donation of the Mauclerc family, the Counts of Dreux - Bretagne, who are depicted with their arms in the bases of the lancets.
On the whole, Chartres ' windows have been remarkably fortunate. The medieval glass largely escaped harm during the Huguenot iconoclasm and the religious wars of the 16th century although the west rose sustained damage from artillery fire in 1591. The relative darkness of the interior seems to have been a problem for some. A few windows were replaced with much lighter grisaille glass in the 14th century to improve illumination, particularly on the north side and several more were replaced with clear glass in 1753 as part of the reforms to liturgical practice that also led to the removal of the jubé. The installation of the Vendôme Chapel between two buttresses of the nave in the early 15th century resulted in the loss of one more lancet window, though it did allow for the insertion of a fine late - gothic window with donor portraits of Louis de Bourbon and his family witnessing the Coronation of the Virgin with assorted saints.
Although estimates vary (depending on how one counts compound or grouped windows) approximately 152 of the original 176 stained glass windows survive -- far more than any other medieval cathedral anywhere in the world.
Like most medieval buildings, the windows at Chartres suffered badly from the corrosive effects of atmospheric acids during the Industrial Revolution and subsequently. The majority of windows were cleaned and restored by the famous local workshop Atelier Lorin at the end of the 19th century but they continued to deteriorate. During World War II most of the stained glass was removed from the cathedral and stored in the surrounding countryside to protect it from damage. At the close of the war the windows were taken out of storage and reinstalled. Since then an ongoing programme of conservation has been underway and isothermal secondary glazing was gradually installed on the exterior to protect the windows from further damage.
The cathedral has three great façades, each equipped with three portals, opening into the nave from the west and into the transepts from north and south. In each façade the central portal is particularly large and was only used for special ceremonies, while the smaller side portals allowed everyday access for the different communities that used the cathedral.
One of the few elements to survive from the mid-12th - century church, the Portail royal was integrated into the new cathedral built after the 1194 fire. Opening on to the parvis (the large square in front of the cathedral where markets were held), the two lateral doors would have been the first entry point for most visitors to Chartres, as it remains today. The central door is only opened for the entry of processions on major festivals, of which the most important is the Adventus or installation of a new bishop. The harmonious appearance of the façade results in part from the relative proportions of the central and lateral portals, whose widths are in the ratio 10: 7 -- one of the common medieval approximations of the square root of 2.
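As a quick numerical check of that claim (the figures below are simple arithmetic, not drawn from the source):

$$\frac{10}{7} \approx 1.4286, \qquad \sqrt{2} \approx 1.4142,$$

a difference of a little under one per cent, which explains why 10: 7 served as a convenient medieval approximation.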
As well as their basic functions of providing access to the interior, portals are the main locations for sculpted images on the gothic cathedral and it is on the west façade at Chartres that this practice began to develop into a visual summa or encyclopedia of theological knowledge. The three portals each focus on a different aspect of Christ 's role; his earthly incarnation on the right, his second coming on the left and his eternal aspect in the centre.
Above the right portal, the lintel is carved in two registers with (lower) the Annunciation, Visitation, Nativity, Annunciation to the Shepherds and (upper) the Presentation in the Temple. Above this the tympanum shows the Virgin and Child enthroned in the Sedes sapientiae pose. Surrounding the tympanum, as a reminder of the glory days of the School of Chartres, the archivolts are carved with some very distinctive personifications of the Seven Liberal Arts as well as the classical authors and philosophers most associated with them.
The left portal is more enigmatic and art historians still argue over the correct identification. The tympanum shows Christ standing on a cloud, apparently supported by two angels. Some see this as a depiction of the Ascension of Christ (in which case the figures on the lower lintel would represent the disciples witnessing the event) while others see it as representing the Parousia, or Second Coming of Christ (in which case the lintel figures could be either the prophets who foresaw that event or else the ' Men of Galilee ' mentioned in Acts 1: 9 - 11). The presence of angels in the upper lintel, descending from a cloud and apparently shouting to those below, would seem to support the latter interpretation. The archivolts contain the signs of the zodiac and the labours of the months -- standard references to the cyclical nature of time which appear in many gothic portals.
The central portal is a more conventional representation of the End of Time as described in the Book of Revelation. In the centre of the tympanum is Christ within a mandorla, surrounded by the four symbols of the evangelists (the Tetramorph). The lintel shows the Twelve Apostles while the archivolts show the 24 Elders of the Apocalypse.
Although the upper parts of the three portals are treated separately, two sculptural elements run horizontally across the façade, uniting its different parts. Most obvious are the jamb statues affixed to the columns flanking the doorways -- tall, slender standing figures of kings and queens from whom the Portail royal derived its name. Although in the 18th and 19th century these figures were mistakenly identified as the Merovingian monarchs of France (thus attracting the opprobrium of Revolutionary iconoclasts) they almost certainly represent the kings and queens of the Old Testament -- another standard iconographical feature of gothic portals.
Less obvious than the jamb statues but far more intricately carved is the frieze that stretches all across the façade in the sculpted capitals on top of the jamb columns. Carved into these capitals is a very lengthy narrative depicting the life of the Virgin and the life and Passion of Christ.
In northern Europe it is common for the iconography on the north side of a church to focus on Old Testament themes, with stories from the lives of the saints and the Gospels being more prominent on the physically (and hence, spiritually) brighter southern side. Chartres is no exception to this general principle and the north transept portals, with their deep sheltering porches, concentrate on the precursors of Christ, leading up to the moment of His incarnation, with a particular emphasis on the Virgin Mary. The overall iconographical themes are clearly laid - out; the glorification of Mary in the centre, the incarnation of her son on the left and Old Testament prefigurations and prophecies on the right. One major exception to this scheme is the presence of large statues of St Modesta (a local martyr) and St Potentian on the north west corner of the porch, close to a small doorway where pilgrims visiting the crypt (where their relics were stored) would once have emerged blinking into the light.
As well as the main sculptural areas around the portals themselves, the deep porches are filled with myriad other carvings depicting a range of subjects including local saints, Old Testament narratives, naturalistic foliage, fantastical beasts, Labours of the Months and personifications of the ' active and contemplative lives ' (the vita activa and vita contemplativa). The personifications of the vita activa (directly overhead, just inside the inside of the left hand porch) are of particular interest for their meticulous depictions of the various stages in the preparation of flax -- an important cash - crop in the area during the Middle Ages.
If the north transept portals are all about the time leading up to Christ 's incarnation and the west façade is about the events of his life and Passion, then the iconography of the south transept portals addresses the time from Christ 's death until his Second Coming. The central portal concentrates on the Last Judgement and the Apostles, the left portal on the lives of martyrs and the right on confessor saints (an arrangement also reflected in the windows of the apse).
Just like their northern counterparts, the south transept portals open into deep porches which greatly extend the space available for sculptural embellishment. A large number of subsidiary scenes depict conventional themes like the labours of the months and the signs of the zodiac, personifications of the virtues and vices and also further scenes from the lives of the martyrs (left porch) and confessors (right porch).
In the Middle Ages the cathedral also functioned as an important cathedral school. In the early 11th century Bishop Fulbert established Chartres as one of the leading schools in Europe. Although the role of Fulbert himself as a scholar and teacher has been questioned, perhaps his greatest talent was as an administrator, who established the conditions in which the school could flourish, as well as laying the foundations for the rebuilding of the cathedral after the fire of 1020. Great scholars were attracted to the cathedral school, including Thierry of Chartres, William of Conches and the Englishman John of Salisbury. These men were at the forefront of the intense intellectual rethinking that culminated in what is now known as the twelfth - century renaissance, pioneering the Scholastic philosophy that came to dominate medieval thinking throughout Europe.
By the early 12th century the status of the School of Chartres was on the wane. It was gradually eclipsed by the newly emerging University of Paris, particularly at the School of the Abbey of St Victoire (the "Victorines ''). By the middle of the century the importance of Chartres Cathedral had begun to shift away from education and towards pilgrimage, a changing emphasis reflected in the subsequent architectural developments.
Orson Welles famously used Chartres as a visual backdrop and inspiration for a montage sequence in his film F For Fake. Welles ' semi-autobiographical narration spoke to the power of art in culture and how the work itself may be more important than the identity of its creators. Feeling that the beauty of Chartres and its unknown artisans and architects epitomized this sentiment, Welles, standing outside the cathedral and looking at it, eulogizes:
Now this has been standing here for centuries. The premier work of man perhaps in the whole western world and it 's without a signature: Chartres.
A celebration to God 's glory and to the dignity of man. All that 's left most artists seem to feel these days, is man. Naked, poor, forked radish. There are n't any celebrations. Ours, the scientists keep telling us, is a universe, which is disposable. You know it might be just this one anonymous glory of all things, this rich stone forest, this epic chant, this gaiety, this grand choiring shout of affirmation, which we choose when all our cities are dust, to stand intact, to mark where we have been, to testify to what we had it in us, to accomplish.
Our works in stone, in paint, in print are spared, some of them for a few decades, or a millennium or two, but everything must finally fall in war or wear away into the ultimate and universal ash. The triumphs and the frauds, the treasures and the fakes. A fact of life. We 're going to die. "Be of good heart, '' cry the dead artists out of the living past. Our songs will all be silenced -- but what of it? Go on singing. Maybe a man 's name does n't matter all that much.
(Church bells peal...)
Joseph Campbell references his spiritual experience in The Power of Myth:
I 'm back in the Middle Ages. I 'm back in the world that I was brought up in as a child, the Roman Catholic spiritual - image world, and it is magnificent... That cathedral talks to me about the spiritual information of the world. It 's a place for meditation, just walking around, just sitting, just looking at those beautiful things.
Joris - Karl Huysmans includes detailed interpretation of the symbolism underlying the art of Chartres Cathedral in his 1898 semi-autobiographical novel La cathédrale.
Chartres was the primary basis for the fictional Cathedral in David Macaulay 's Cathedral: The Story of Its Construction and the animated special based on this book.
Chartres was a major character in the religious thriller Gospel Truths by J.G. Sandom. The book used the Cathedral 's architecture and history as clues in the search for a lost Gospel.
The cathedral is featured in the television travel series The Naked Pilgrim; presenter Brian Sewell explores the cathedral and discusses its famous relic -- the nativity cloak said to have been worn by the Virgin Mary.
Popular action - adventure video game Assassin 's Creed features a climbable cathedral modeled heavily on the Chartres Cathedral.
One of the attractions at the Chartres Cathedral is the Chartres Light Celebration, when not only is the cathedral lit, but so are many buildings throughout the town, as a celebration of electrification.
|
flat panel mount interface 400 x 200 mm | Flat display mounting Interface - wikipedia
The Flat Display Mounting Interface (FDMI), also known as VESA Mounting Interface Standard (MIS) or colloquially as VESA mount, is a family of standards defined by the Video Electronics Standards Association for mounting flat panel monitors, TVs, and other displays to stands or wall mounts. It is implemented on most modern flat - panel monitors and TVs.
As well as being used for mounting monitors, the standards can be used to attach a small PC to the mount or function as a monitor mount.
The first standard in this family was introduced in 1997 and was originally called Flat Panel Monitor Physical Mounting Interface (FPMPMI); it corresponds to part D of the current standard.
Most sizes of VESA mount have four screw - holes arranged in a square on the mount, with matching tapped holes on the device. The horizontal and vertical distance between the screw centers was originally 100 mm. A 75 mm × 75 mm layout was defined for smaller displays. Later, variants were added for screens with diagonals as small as 4 inches.
The FDMI was extended in 2006 with additional screw patterns that are more appropriate for larger TV screens. Thus the standard now specifies 7 sizes, each with more than one variant. These are referenced as parts B to F of the standard or with official abbreviations, usually prefixed by the word "VESA ''.
Unofficially, the variants are sometimes referenced as just "VESA '' followed by the pattern size in mm, which is slightly ambiguous for the names "VESA 50 '' (4 possibilities), "VESA 75 '' (2 possibilities) and "VESA 200 '' (3 possibilities). However, if "VESA 100 '' is accepted as meaning the original variant ("VESA MIS - D, 100 ''), then all but "VESA MIS - E '' and "VESA MIS - F, 200 '' have at least one unique dimension that can be used in this way, as can be seen from the tables below.
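As a rough illustration of how this shorthand maps onto the official variant names, the sketch below looks up a screen 's hole spacing in a small table. It is not part of the standard: only the 75 mm and 100 mm square MIS - D patterns quoted above are taken from this article, and the function and table names are hypothetical.

```python
# Minimal sketch: map a display's mounting-hole spacing to an FDMI variant name.
# Only the two MIS-D patterns mentioned above are included; any further entries
# (parts B, C, E and F) would have to be copied from the standard itself.

KNOWN_PATTERNS = {
    (75, 75): "VESA MIS-D, 75",     # 75 mm x 75 mm square pattern
    (100, 100): "VESA MIS-D, 100",  # original 100 mm x 100 mm square pattern
}

def identify_pattern(width_mm: int, height_mm: int) -> str:
    """Return the official variant name for a given hole spacing, if known."""
    name = KNOWN_PATTERNS.get((width_mm, height_mm))
    if name is None:
        return f"unknown pattern ({width_mm} x {height_mm} mm) - consult the FDMI standard"
    return name

if __name__ == "__main__":
    print(identify_pattern(100, 100))  # -> VESA MIS-D, 100
    print(identify_pattern(400, 200))  # -> not in this sketch's table
```

In practice such a table would also need the larger part F spacings, which is exactly the ambiguity the unofficial "VESA nn '' shorthand runs into.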
More details can be found by purchasing a copy of the standard itself, including rules to ensure cables do n't prevent using the mounts.
In practice, many screens that almost comply with part F of the standard deviate in various minor ways, and most brands of compliant brackets are designed to handle these deviations with little or no trouble for the end user.
Manufacturers of FDMI compliant devices can license the use of a hexagonal "VESA mounting compliant '' logo.
Many compliant or almost compliant devices do not display the logo, as is reflected by the absence of most key vendors from VESA 's own public list of licensed manufacturers. Of the members of the standard committee (Ergotron, Peerless Industries, HP, Samsung, Sanus, ViewSonic and Vogel 's), only Ergotron is on the list.
As mentioned above under Variant F, there are many almost compliant screens on the market, and some of those use the "VESA '' name loosely to refer to their similar mounting patterns. Fortunately many brackets (mounts) from reputable vendors such as Ergotron, Invision and Vogel are designed to accommodate most of the deviating models, thus limiting the effect on end users.
|
who was the last president to go to south korea | United States presidential visits to East Asia - wikipedia
Ten United States presidents have made presidential visits to East Asia. The first presidential trip to a country in East Asia was made by Dwight D. Eisenhower (as president - elect) in 1952. Since then, all presidents, except John F. Kennedy, have travelled to one or more nations in the region while in office. To date, twenty - one visits have been made to Japan, nineteen to South Korea, twelve to China, and one each to Mongolia and Taiwan. No incumbent president has yet visited North Korea (which does not have diplomatic relations with the U.S.).
|
where does the last name bland come from | Bland (surname) - wikipedia
Bland is a surname thought to derive from Old English (ge) bland ' storm ', ' commotion '. It is thought to have originated in an area of Yorkshire where there is a place called Bland Hill.
It predates the adjective ' bland ', meaning "characterless or uninteresting '', which entered the English language around 1660.
Notable persons with that surname include:
|
who made up the middle class in europe | Middle class - wikipedia
The middle class is a class of people in the middle of a social hierarchy. The very definition of the term "middle class '' is highly political and vigorously contested by various schools of political and economic philosophy. Modern social theorists - and especially economists, with widely divergent open and hidden political motivations behind their arguments - have defined and re-defined the term "middle class '' in order to serve their particular political ends. Definitions of the term therefore depend on the more or less scientific methods used to delineate what is and is n't "middle class ''.
In Weberian socioeconomic terms, the middle class is the broad group of people in contemporary society who fall socio - economically between the working class and upper class. The common measures of what constitutes middle class vary significantly among cultures. One of the narrowest definitions limits it to those in the middle fifth of the nation 's income ladder. A wider characterization includes everyone but the poorest 20 % and the wealthiest 20 %.
In modern American vernacular usage, the term "middle class '' is most often used as a self - description by those persons whom academics and Marxists would otherwise identify as the working class, which sits below both the upper class and the true middle class but above those in poverty. This leads to considerable ambiguity over the meaning of the term "middle class '' in American usage. Sociologists such as Dennis Gilbert and Joseph Kahl see this American self - described "middle class '' (i.e. working class) as the most populous class in the United States.
The term "middle class '' is first attested in James Bradshaw 's 1745 pamphlet Scheme to prevent running Irish Wools to France. Another phrase used in Early modern Europe was "the middling sort ''.
The term "middle class '' has had several, sometimes contradictory, meanings. Friedrich Engels saw the category in Marxist terms as an intermediate social class between the nobility and the peasantry of Europe in late - feudalist society. While the nobility owned the countryside, and the peasantry worked the countryside, a new bourgeoisie (literally "town - dwellers '') arose around mercantile functions in the city. In France, the middle classes helped drive the French Revolution. This "middle class '' eventually overthrew the ruling monarchists of feudal society, thus becoming the new ruling class or bourgeoisie in the new capitalist - dominated societies.
The modern usage of the term "middle - class '', however, dates to the 1913 UK Registrar - General 's report, in which the statistician T.H.C. Stevenson identified the middle class as that falling between the upper - class and the working - class. Included as belonging to the middle - class are: professionals, managers, and senior civil servants. The chief defining characteristic of membership in the middle - class is possession of significant human capital.
Within capitalism, "middle - class '' initially referred to the bourgeoisie; later, with the further differentiation of classes in the course of development of capitalist societies, the term came to be synonymous with the term petite bourgeoisie. The boom - and - bust cycles of capitalist economies result in the periodical and more or less temporary impoverisation and proletarianisation of much of the petit bourgeois world resulting in their moving back and forth between working - class and petit - bourgeois status. The typical modern definitions of "middle class '' tend to ignore the fact that the classical petit - bourgeoisie is and has always been the owner of a small - to medium - sized business whose income is derived almost exclusively from the employment of workers; "middle class '' came to refer to the combination of the labour aristocracy, the professionals, and the salaried white collar workers.
The size of the middle class depends on how it is defined, whether by education, wealth, environment of upbringing, social network, manners or values, etc. These are all related, but are far from deterministically dependent. The following factors are often ascribed in modern usage to a "middle class '':
In the United States by the end of the twentieth century, more people identified themselves as middle - class than as lower or "working '' class (with insignificant numbers identifying themselves as upper - class). The Labour Party in the UK, which grew out of the organised labour movement and originally drew almost all of its support from the working - class, reinvented itself under Tony Blair in the 1990s as "New Labour '', a party competing with the Conservative Party for the votes of the middle - class as well as those of the Labour Party 's traditional group of voters - the working - class. By 2011 almost three - quarters of British people were found to identify themselves as middle - class.
In Marxism, which defines social classes according to their relationship with the means of production, the "middle class '' is said to be the class below the ruling class and above the proletariat in the Marxist social schema and is synonymous with the term "petit - '' or "petty - bourgeoisie ''. Marxist writers have used the term in two distinct but related ways. In the first sense it is used for the bourgeoisie, the urban merchant and professional class that arose between the aristocracy and the proletariat in the waning years of feudalism in the Marxist model. V.I. Lenin stated that the "peasantry... in Russia constitute eight - or nine - tenths of the petty bourgeoisie ''. However, in modern developed countries, Marxist writers define the petite bourgeoisie as primarily comprising, as the name implies, owners of small to medium - sized businesses who derive their income from the exploitation of wage - laborers (and who are in turn exploited by the "big '' bourgeoisie, i.e. bankers, owners of large corporate trusts, etc.), together with the highly educated professional class of doctors, engineers, architects, lawyers, university professors and salaried middle - management of capitalist enterprises of all sizes. This "middle class '' stands between the ruling capitalist "owners of the means of production '' and the working class, whose income is derived solely from hourly wages.
Pioneering 20th - century American Marxist theoretician Louis C. Fraina (Lewis Corey) defined the middle class as "the class of independent small enterprisers, owners of productive property from which a livelihood is derived ''. Included in this social category, from Fraina 's perspective, were "propertied farmers '' but not propertyless tenant farmers. The middle class also included salaried managerial and supervisory employees but not "the masses of propertyless, dependent salaried employees ''. Fraina speculated that the entire category of salaried employees might be adequately described as a "new middle class '' in economic terms, although this remained a social grouping in which "most of whose members are a new proletariat ''.
According to Christopher B. Doob, a sociology writer, the middle - class grooms each future generation to take over from the previous one. He states that, to do this, the middle class has almost developed a system for turning children of the middle - class into successful citizens. Allegedly those who are categorized under the American middle - class give education great importance, and value success in education as one of the chief factors in establishing the middle - class life. Supposedly the parents place a strong emphasis on the significance of quality education and its effects on success later in life. He believes that the best way to understand education through the eyes of middle - class citizens would be through social reproduction, as middle - class parents breed their own offspring to become successful members of the middle - class. Members of the middle - class consciously use their available sources of capital to prepare their children for the adult world.
The middle - class childhood is often characterized by an authoritative parenting approach with a combination of parental warmth, support and control. Parents set some rules establishing limits, but overall this approach creates a greater sense of trust, security, and self - confidence.
In addition to an often authoritative parenting style, middle - class parents provide their children with valuable sources of capital.
Parents of middle - class children make use of their social capital when it comes to their children 's education as they seek out other parents and teachers for advice. Some parents even develop regular communication with their child 's teachers, asking for regular reports on behavior and grades. When problems do occur, middle - class parents are quick to "enlist the help of professionals when they feel their children need such services ''. The middle - class parents ' involvement in their children 's schooling underlines their recognition of its importance.
In 1977 Barbara Ehrenreich and her then husband John defined a new class in United States as "salaried menial workers who do not own the means of production and whose major function in the social division of labor... (is)... the reproduction of capitalist culture and capitalist class relations ''; the Ehrenreichs named this group the "professional - managerial class ''. This group of middle - class professionals are distinguished from other social classes by their training and education (typically business qualifications and university degrees), with example occupations including academics and teachers, social workers, engineers, managers, nurses, and middle - level administrators. The Ehrenreichs developed their definition from studies by André Gorz, Serge Mallet, and others, of a "new working class '', which, despite education and a perception of themselves as being middle class, were part of the working class because they did not own the means of production, and were wage earners paid to produce a piece of capital. The professional - managerial class seeks higher rank status and salary, and tend to have incomes above the average for their country.
Compare the term "managerial caste ''.
Modern definitions of the term "middle class '' are often politically motivated and vary according to the exigencies of the political purposes they were conceived to serve, as well as with the multiplicity of more or less scientific methods used to measure and compare "wealth '' between modern advanced industrial states (where poverty is relatively low and the distribution of wealth more egalitarian in a relative sense) and developing countries (where poverty and a profoundly unequal distribution of wealth crush the vast majority of the population). Many of these methods of comparison have been harshly criticised; for example, economist Thomas Piketty, in his book "Capital in the Twenty - First Century '', describes one of the most commonly used comparative measures of wealth across the globe -- the Gini coefficient -- as being an example of "synthetic indices... which mix very different things, such as inequality with respect to labor and capital, so that it is impossible to distinguish clearly among the multiple dimensions of inequality and the various mechanisms at work. ''
In February 2009, The Economist asserted that over half the world 's population now belongs to the middle class, as a result of rapid growth in emerging countries. It characterized the middle class as having a reasonable amount of discretionary income, so that they do not live from hand to mouth as the poor do, and defined it as beginning at the point where people have roughly a third of their income left for discretionary spending after paying for basic food and shelter. This allows people to buy consumer goods, improve their health care, and provide for their children 's education. Most of the emerging middle class consists of people who are middle - class by the standards of the developing world but not the rich one, since their money incomes do not match developed country levels, but the percentage of it which is discretionary does. By this definition, the number of middle - class people in Asia exceeded that in the West sometime around 2007 or 2008.
The Economist 's article pointed out that in many emerging countries the middle class has not grown incrementally, but explosively. The point at which the poor start entering the middle class by the millions is alleged to be the time when poor countries get the maximum benefit from cheap labour through international trade, before they price themselves out of world markets for cheap goods. It is also a period of rapid urbanization, when subsistence farmers abandon marginal farms to work in factories, resulting in a several-fold increase in their economic productivity before their wages catch up to international levels. That stage was reached in China some time between 1990 and 2005, when the Chinese "middle class '' grew from 15 % to 62 % of the population, and is just being reached in India now.
The Economist predicted that the surge across the poverty line should continue for a couple of decades and that the global middle class will grow enormously between now and 2030. Based on this rapid growth, scholars expect the global middle class to be the driving force for sustainable development. This assumption, however, is contested.
As the American middle class is estimated by some researchers to comprise approximately 45 % of the population, The Economist 's article would put the size of the American middle class below the world average. This difference is due to the extreme difference in definitions between The Economist 's and many other models.
In 2010, a working paper by the OECD asserted that 1.8 billion people were now members of the global "middle class ''. Credit Suisse 's Global Wealth Report 2014, released in October 2014, estimated that one billion adults belonged to the "middle class '', with wealth anywhere between the range of $10,000 -- $100,000.
According to a study carried out by the Pew Research Center, a combined 16 % of the world 's population in 2011 were "upper - middle income '' and "upper income ''.
In 2012, the "middle class '' in Russia was estimated as 15 % of the whole population. Due to sustainable growth, the pre-crisis level was exceeded. In 2015, research from the Russian Academy of Sciences estimated that around 15 % of the Russian population are "firmly middle class '', while around another 25 % are "on the periphery ''.
A study by the Chinese Academy of Social Sciences (CASS) estimated that 19 % of Chinese were middle class in 2003, including any household with assets worth between $18,000 and $36,000.
According to a 2011 report by National Council for Applied Economic Research of India, India 's middle class population is expected to increase from 160 million to 267 million in 2016, 20.3 % of the country 's total population. Further ahead, by 2025 - 26 the number of middle class households in India is likely to more than double to 547 million individuals. Another estimate put the Indian middle class as numbering 475 million people by 2030.
According to a 2014 study by Standard Bank economist Simon Freemantle, a total of 15.3 million households in 11 surveyed African nations are middle - class. These include Angola, Ethiopia, Ghana, Kenya, Mozambique, Nigeria, South Sudan, Sudan, Tanzania, Uganda and Zambia. In South Africa, a report conducted by the Institute for Race Relations in 2015 estimated that between 10 % - 20 % of South Africans are middle class, based on various criteria. An earlier study estimated that in 2008 21.3 % of South Africans were members of the middle class.
A study by EIU Canback indicates 90 % of Africans fall below an income of $10 a day. The proportion of Africans in the $10 -- $20 middle class (excluding South Africa), rose from 4.4 % to only 6.2 % between 2004 and 2014. Over the same period, the proportion of "upper middle '' income ($20 -- $50 a day) went from 1.4 % to 2.3 %.
According to a 2014 study by the German Development Institute, the middle class of Sub-Saharan Africa rose from 14 million to 31 million people between 1990 and 2010.
According to a study by the World Bank, the number of Latin Americans who are middle class rose from 103m to 152m between 2003 and 2009.
The American middle class is smaller than middle classes across Western Europe, but its income is higher, according to a recent Pew Research Center analysis of the U.S. and 11 European nations.
The median disposable (after - tax) income of middle - class households in the U.S. was $60,884 in 2010. With the exception of Luxembourg -- a virtual city - state where the median income was $71,799 -- the disposable incomes of middle - class households in the other 10 Western European countries in the study trailed well behind the American middle class.
The numbers below reflect the middle, upper, and lower share of all adults by country by net wealth (not income). Unlike that of the upper class, wealth of the middle and lowest quintile consists substantially of non-financial assets, specifically home equity. Factors which explain differences in home equity include housing prices and home ownership rates. According to the OECD, the vast majority of financial assets in every country analysed is found in the top of the wealth distribution.
|
what is the meaning of st elmos fire | St. Elmo 's fire - wikipedia
St. Elmo 's fire (also St. Elmo 's light) is a weather phenomenon in which luminous plasma is created by a corona discharge from a sharp or pointed object in a strong electric field in the atmosphere (such as those generated by thunderstorms or created by a volcanic eruption).
St. Elmo 's fire is named after St. Erasmus of Formia (also called St. Elmo, one of the two Italian names for St. Erasmus, the other being St. Erasmo), the patron saint of sailors. The phenomenon sometimes appeared on ships at sea during thunderstorms and was regarded by sailors with religious awe for its glowing ball of light, accounting for the name. Sailors may have considered St. Elmo 's fire as a good omen (as a sign of the presence of their patron saint).
St. Elmo 's fire is a bright blue or violet glow, appearing like fire in some circumstances, from tall, sharply pointed structures such as lightning rods, masts, spires and chimneys, and on aircraft wings or nose cones. St. Elmo 's fire can also appear on leaves and grass, and even at the tips of cattle horns. Often accompanying the glow is a distinct hissing or buzzing sound. It is sometimes confused with ball lightning.
In 1751, Benjamin Franklin hypothesized that a pointed iron rod would light up at the tip during a lightning storm, similar in appearance to St. Elmo 's fire.
St. Elmo 's fire is a form of plasma. The electric field around the object in question causes ionization of the air molecules, producing a faint glow easily visible in low - light conditions. Conditions that can generate St. Elmo 's fire are present during thunderstorms, when high voltage differentials are present between clouds and the ground underneath. A local electric field of approximately 100 kV / m is required to induce a discharge in air. The magnitude of the electric field depends greatly on the geometry (shape and size) of the object. Sharp points lower the necessary voltage because electric fields are more concentrated in areas of high curvature, so discharges preferably occur and are more intense at the ends of pointed objects.
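A minimal idealization, not taken from the article, shows why the necessary field is reached first at sharp points: if the tip of a pointed conductor is treated as an isolated conducting sphere of radius r raised to a potential V above its surroundings, the field at its surface is

```latex
% Idealized tip model: isolated conducting sphere of radius r at potential V.
\[
E_{\mathrm{surface}} = \frac{V}{r},
\qquad \text{for example} \qquad
\frac{100\ \mathrm{V}}{1\ \mathrm{mm}} = 100\ \mathrm{kV/m}.
\]
```

Under this idealization a tip with a 1 mm radius of curvature needs to be raised only about 100 V above its surroundings to reach the roughly 100 kV / m figure quoted above, while the same 100 V across a metre - scale gap gives a field a thousand times weaker; this is why the glow appears first at lightning rods, masts and other pointed objects.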
The nitrogen and oxygen in the Earth 's atmosphere cause St. Elmo 's fire to fluoresce with blue or violet light; this is similar to the mechanism that causes neon lights to glow.
References to St. Elmo 's fire can be found in the works of Julius Caesar (De Bello Africo, 47), Pliny the Elder (Naturalis Historia, book 2, par. 101), Alcaeus frag. 34, and Antonio Pigafetta 's journal of his voyage with Ferdinand Magellan. St. Elmo 's fire, also known as "corposants '' or "corpusants '' from the Portuguese corpo santo ("holy body ''), was a phenomenon described in The Lusiads.
In 15th - century Ming China, Admiral Zheng He and his associates composed the Liujiagang and Changle inscriptions, the two epitaphs of the treasure voyages where they made a reference to St. Elmo 's fire as a divine omen of Tianfei (天 妃), the goddess of sailors and seafarers.
The power of the goddess, having indeed been manifested in previous times, has been abundantly revealed in the present generation. In the midst of the rushing waters it happened that, when there was a hurricane, suddenly a divine lantern was seen shining at the masthead, and as soon as that miraculous light appeared the danger was appeased, so that even in the peril of capsizing one felt reassured and that there was no cause for fear.
Robert Burton wrote of St. Elmo 's fire in his Anatomy of Melancholy: "Radzivilius, the Lithunian duke, calls this apparition Sancti Germani sidus; and saith moreover that he saw the same after in a storm, as he was sailing, 1582, from Alexandria to Rhodes ''. This refers to the voyage made by Mikołaj Krzysztof "the Orphan '' Radziwiłł in 1582 -- 1584.
On 9 May 1605, while on the second voyage of John Davis commanded by Sir Edward Michelborne to the East Indies, an unknown writer aboard the Tiger describes the phenomenon; "In the extremity of our storm appeared to us in the night, upon our maine Top - mast head, a flame about the bigness of a great Candle, which the Portugals call Corpo Sancto, holding it a most divine token that when it appeareth the worst is past. As, thanked be God, we had better weather after it ''.
William Noah, a silversmith convicted in London of stealing 2,000 pounds of lead, while en route to Sydney, New South Wales on the convict transport ship Hillsborough, recorded two such observations in his detailed daily journal. The first was in the Southern Ocean midway between Cape Town and Sydney and the second was in the Tasman Sea, a day out of Port Jackson:
26 June 1799: At 4 Began to Blow very Hard with Heavy Shower of Rain & Hail and Extraordinary Heavy Clap of Thunder & Lightning when fell a Cormesant (corposant) a Body of Fire which collect from the Lightning & Lodge itself in the Foretopmast Head where it was first seen by our Captain when followed a Heavy Clap of Thunder & Lightning which occasioned it to fall & Burst on the Main Deck the Electrific of the Bursting of this Ball of Fire had such power as to shake several of their Leg not only On the Main Deck as the fire Hung much round the smith Forge being Iron but had the same Effect on the Gun Deck & Orlop (deck) on several of the Convicts. 25 July 1799: We were now sourounded with Heavy Thunder & Lightning and the Dismal Element foaming all round us Shocking to see with a Cormesant Hanging at the Maintop mast Head the Seamen was here Shock 'd when a flash of Lightning came Burst the Cormesant & Struck two of the Seamen for several Hours Stone Blind & several much hurt in their Eyes.
While the exact nature of these weather phenomena can not be known with certainty, the entries appear to describe two observations of St. Elmo 's fire, with perhaps some ball lightning and even a direct lightning strike to the ship thrown into the mix.
On 20 February 1817, during a severe electrical storm James Braid, surgeon at Lord Hopetoun 's mines at Leadhills, Lanarkshire, had an extraordinary experience whilst on horseback:
On Thursday 20th, I was gratified for a few minutes with the luminous appearance described above (viz., "such flashes of lightning from the west, repeated every two or three minutes, sometimes at shorter intervals, as appeared to illumine the whole heavens ''). It was about nine o'clock, P.M. I had no sooner got on horseback than I observed the tips of both the horse 's ears to be quite luminous: the edges of my hat had the same appearance. I was soon deprived of these luminaries by a shower of moist snow which immediately began to fall. The horse 's ears soon became wet and lost their luminous appearance; but the edges of my hat, being longer of getting wet, continued to give the luminous appearance somewhat longer. I could observe an immense number of minute sparks darting towards the horse 's ears and the margin of my hat, which produced a very beautiful appearance, and I was sorry to be so soon deprived of it. The atmosphere in this neighbourhood appeared to be very highly electrified for eight or ten days about this time. Thunder was heard occasionally from 15th to 23d, during which time the weather was very unsteady: frequent showers of hail, snow, rain, &c. I can find no person in this quarter who remembers to have ever seen the luminous appearance mentioned above, before this season, -- or such a quantity of lightning darting across the heavens, -- nor who have heard so much thunder at that season of the year. This country being all stocked with sheep, and the herds having frequent occasion to pay attention to the state of the weather, it is not to be thought that such an appearance can have been at all frequent, and none of them to have observed it.
Weeks earlier, reportedly on 17 January 1817, a luminous snowstorm occurred in Vermont and New Hampshire. Saint Elmo 's fire appeared as static discharges on roof peaks, fence posts, and the hats and fingers of people. Thunderstorms prevailed over central New England.
Charles Darwin noted the effect while aboard the Beagle. He wrote of the episode in a letter to J.S. Henslow that one night when the Beagle was anchored in the estuary of the Río de la Plata:
Everything is in flames, -- the sky with lightning, -- the water with luminous particles, and even the very masts are pointed with a blue flame.
In Two Years Before the Mast, Richard Henry Dana, Jr. describes seeing a corposant in the Horse latitudes of the northern Atlantic Ocean. However, he may have been talking about ball lightning; as mentioned earlier it is often erroneously identified as St. Elmo 's fire: "There, directly over where we had been standing, upon the main top - gallant mast - head, was a ball of light, which the sailors name a corposant (corpus sancti), and which the mate had called out to us to look at. They were all watching it carefully, for sailors have a notion, that if the corposant rises in the rigging, it is a sign of fair weather, but if it comes lower down, there will be a storm ''.
Nikola Tesla created St. Elmo 's Fire in 1899 while testing out a Tesla coil at his laboratory in Colorado Springs, USA. St. Elmo 's fire was seen around the coil and was said to have lit up the wings of butterflies with blue halos as they flew around.
A minute before the crash of the Luftschiffbau Zeppelin 's LZ 129 Hindenburg on 6 May 1937, Professor Mark Heald (1892 - 1971) of Princeton saw St. Elmo 's Fire flickering along the airship 's back. Standing outside the main gate to the Naval Air Station, he watched, together with his wife and son, as the airship approached the mast and dropped her bow lines. A minute thereafter, by Heald 's estimation, he first noticed a dim "blue flame '' flickering along the backbone girder about one - quarter the length abaft the bow to the tail. There was time for him to remark to his wife, "Oh, heavens, the thing is afire, '' for her to reply, "Where? '' and for him to answer, "Up along the top ridge '' -- before there was a big burst of flaming hydrogen from a point he estimated to be about one - third the ship 's length from the stern.
St. Elmo 's fire was reported by New York Times reporter William L. Laurence on August 9, 1945 as he was aboard Bockscar on the way to Nagasaki.
I noticed a strange eerie light coming through the window high above in the Navigator 's cabin and as I peered through the dark all around us I saw a startling phenomenon. The whirling giant propellers had somehow become great luminous discs of blue flame. The same luminous blue flame appeared on the plexiglass windows in the nose of the ship, and on the tips of the giant wings it looked as though we were riding the whirlwind through space on a chariot of blue fire. It was, I surmised, a surcharge of static electricity that had accumulated on the tips of the propellers and on the dielectric material in the plastic windows. One 's thoughts dwelt anxiously on the precious cargo in the invisible ship ahead of us. Was there any likelihood of danger that this heavy electric tension in the atmosphere all about us may set it off? I express my fears to Captain Bock, who seems nonchalant and imperturbed at the controls. He quickly reassures me: "It is a familiar phenomenon seen often on ships. I have seen it many times on bombing missions. It is known as St. Elmo 's Fire. ''
One of the earliest references to the phenomenon appears in Alcaeus 's Fragment 34a about the Dioscuri, or Castor and Pollux. It is also referenced in Homeric Hymn 33 to the Dioscuri who were from Homeric times associated with it. Whether the Homeric Hymn antedates the Alcaeus fragment is unknown.
St. Elmo 's Fire is also mentioned in the novel, Castaways of the Flying Dutchman by Brian Jacques.
The phenomenon appears to be described first in the Gesta Herwardi, written around 1100 and concerning an event of the 1070s. However, one of the earliest direct references to St. Elmo 's fire made in fiction can be found in Ludovico Ariosto 's epic poem Orlando Furioso (1516). It is located in the 17th canto (19th in the revised edition of 1532) after a storm has punished the ship of Marfisa, Astolfo, Aquilant, Grifon, and others, for three straight days, and is positively associated with hope:
But now St. Elmo 's fire appeared, which they had so longed for, it settled at the bows of a fore stay, the masts and yards all being gone, and gave them hope of calmer airs.
In Shakespeare 's The Tempest (c. 1623), Act I, Scene II, St. Elmo 's fire acquires a more negative association, appearing as evidence of the tempest inflicted by Ariel at the command of Prospero.
The fires are also mentioned as "death fires '' in Samuel Taylor Coleridge 's The Rime of the Ancient Mariner.
About, about, in reel and rout, The death fires danced at night; The water, like a witch 's oils, Burnt green and blue and white.
Later 18th - and 19th - century literature associated St. Elmo 's fire with bad omens or divine judgment, coinciding with the growing conventions of Romanticism and the Gothic novel. For example, in Ann Radcliffe 's The Mysteries of Udolpho (1794), during a thunderstorm above the ramparts of the castle:
"And what is that tapering of light you bear? '' said Emily, "see how it darts upwards, -- and now it vanishes! ''
"This light, lady '', said the soldier, "has appeared to - night as you see it, on the point of my lance, ever since I have been on watch; but what it means I can not tell ''.
"This is very strange! '' said Emily.
"My fellow - guard '', continued the man, "has the same flame on his arms; he says he has sometimes seen it before... he says it is an omen, lady, and bodes no good ''.
"And what harm can it bode? '' rejoined Emily.
"He knows not so much as that, lady ''.
In Herman Melville 's novel Moby - Dick, Starbuck points out "corpusants '' during a thunder storm in the Japanese sea in chapter 119 "The Candles ''.
St. Elmo 's fire makes an appearance in The Adventures of Tintin comic, Tintin in Tibet, by Hergé. Tintin recognizes the phenomenon on Captain Haddock 's ice - axe.
In Kurt Vonnegut 's Slaughterhouse - Five, Billy Pilgrim sees the phenomenon on soldiers ' helmets and on rooftops. Vonnegut 's The Sirens of Titan also notes the phenomenon affecting Winston Niles Rumfoord 's dog, Kazak, the Hound of Space, in conjunction with solar disturbances of the chrono - synclastic infundibulum.
In "On The Banks of Plum Creek '' by Laura Ingalls Wilder St. Elmo 's fire is seen by the girls and Ma during one of the blizzards. It was described as coming down the stove pipe and rolling across the floor following Ma 's knitting needles; it did n't burn the floor (pages 309 - 310). The phenomenon as described, however, is more similar to ball lightning.
On the children 's television series The Mysterious Cities of Gold (1982), Episode 4 shows St. Elmo 's Fire affecting the ship as it sailed past the Strait of Magellan. The real - life footage at the end of the episode has snippets of an interview with Japanese sailor Fukunari Imada, whose comments were translated as "Although I 've never seen St. Elmo 's Fire, I 'd certainly like to. It was often considered a bad omen as it played havoc with compasses and equipment ''. The cartoon itself also referred to St. Elmo 's Fire as a bad omen. The footage was captured during Imada 's winning solo yacht race in 1981.
On the American television series Rawhide, in a 1959 episode titled "Incident of the Blue Fire '', cattle drovers on a stormy night see St. Elmo 's Fire glowing on the horns of their steers, which the men regard as a deadly omen. St. Elmo 's Fire is also referenced in a 1965 episode of Bonanza in which religious pilgrims staying on the Cartwright property believe an experience with St. Elmo 's Fire is the work of Satan.
St. Elmo 's fire is featured in the movie Moby Dick (1956), when Captain Ahab is stopped by it from killing Starbuck.
In the movie The Last Sunset (1961), Outlaw / Cowhand Brendan ' Bren ' O'Malley (Kirk Douglas) rides in from the herd and leads the recently widowed, and his former flame, Belle Breckenridge (Dorothy Malone) to an overview of the cattle. As he takes the rifle from her he proclaims, "Something out there, you could live five lifetimes, and never see again, '' the audience is then shown a shot of the cattle with a blue or violet glow coming from their horns. "Look. St. Elmo 's Fire. Never seen it except on ships, '' O'Malley says as Belle says, "I 've never seen it anywhere. What is it? '' Trying to win her back he says, "Well a star fell and smashed and scattered its glow all over the place. ''
In Lars von Trier 's 2011 film Melancholia, the phenomenon is clearly observed in the opening sequence and later in the film as the rogue planet Melancholia approaches Earth for an impact event.
Brian Eno 's third studio album Another Green World (1975) contains a song titled "St. Elmo 's Fire '' in which guesting King Crimson guitarist Robert Fripp (credited with playing "Wimshurst guitar '' in the liner notes) improvises a lightning - fast solo that would imitate an electrical charge between two poles on a Wimshurst high voltage generator.
Michael Franks ' song "St Elmo 's Fire '' was released on his album "The Art of Tea '' in 1976. It has since been sampled several times by artists such as Absolutely Fabolous.
The first song in Susumu Hirasawa 's album Aurora, "Stone Garden '' (石 の 庭 Ishi no Niwa), contains a line which translates to
That day when I melted away into the sky Burning like St. Elmo 's sacred fire
"St. Elmo 's Fire (Man in Motion) '' is a song recorded by John Parr. It hit number one on the Billboard Hot 100 on September 7, 1985, remaining there for two weeks. It was the main theme for Joel Schumacher 's 1985 film St. Elmo 's Fire.
|
where is the castle of dracula located at | Bran castle - wikipedia
Bran Castle (Romanian: Castelul Bran; German: Törzburg; Hungarian: Törcsvár), situated near Bran and in the immediate vicinity of Brașov, is a national monument and landmark in Romania. The fortress is situated on the border between Transylvania and Wallachia, on DN73. Commonly known as "Dracula 's Castle '' (although it is one among several locations linked to the Dracula legend, including Poenari Castle and Hunyadi Castle), it is often erroneously referred to as the home of the title character in Bram Stoker 's Dracula. There is, however, no evidence that Stoker knew anything about this castle, which has only tangential associations with Vlad the Impaler, voivode of Wallachia, the putative inspiration for Dracula. Dutch author Hans Corneel de Roos proposes as the location for Castle Dracula an empty mountain top, Mount Izvorul Călimanului, 2,033 metres (6,670 ft) high, located in the Călimani Alps near the former border with Moldavia. Stoker 's description of Dracula 's crumbling fictional castle also bears no resemblance to Bran Castle.
The castle is now a museum dedicated to displaying art and furniture collected by Queen Marie. Tourists can see the interior on their own or by a guided tour. At the bottom of the hill is a small open - air museum park exhibiting traditional Romanian peasant structures (cottages, barns, etc.) from across the country.
In 1212, Teutonic Knights built the wooden castle of Dietrichstein as a fortified position in the Burzenland at the entrance to a mountain pass through which traders had travelled for more than a millennium, but in 1242 it was destroyed by the Mongols. The first documented mention of Bran Castle is the act issued by Louis I of Hungary on 19 November 1377, giving the Saxons of Kronstadt (Brașov) the privilege of building the stone castle at their own expense and with their own labor force; the settlement of Bran began to develop nearby. In 1438 -- 1442, the castle was used in defense against the Ottoman Empire, and later became a customs post on the mountain pass between Transylvania and Wallachia. It is believed the castle was briefly held by Mircea the Elder of Wallachia (r. 1386 -- 1395, 1397 -- 1418), during whose reign the customs point was established. The Wallachian ruler Vlad Țepeș (Vlad the Impaler; 1448 -- 1476) does not seem to have had a significant role in the history of the fortress, although he passed several times through the Bran Gorge. Bran Castle belonged to the Hungarian kings, but due to the failure of King Vladislas II (r. 1471 -- 1516) to repay loans, the city of Brașov regained possession of the fortress in 1533. Bran played a militarily strategic role up to the mid-18th century.
In 1920, through the Treaty of Trianon, Hungary ceded Transylvania to Romania, and the castle became a royal residence within the Kingdom of Romania. It became the favorite home and retreat of Queen Marie, who ordered its extensive renovation conducted by the Czech architect Karel Zdeněk Líman. The castle was inherited by her daughter Princess Ileana, who ran a hospital there in World War II; it was later seized by the communist regime with the expulsion of the royal family in 1948.
In 2005, the Romanian government passed a special law allowing restitution claims on properties illegally expropriated, such as Bran, and thus a year later ownership of the castle was awarded to American Dominic von Habsburg, the son and heir of Princess Ileana.
In September 2007, an investigation committee of the Romanian Parliament stated that the retrocession of the castle to Archduke Dominic was illegal, as it broke the Romanian law on property and succession. However, in October 2007 the Constitutional Court of Romania rejected the parliament 's petition on the matter. In addition, an investigation commission of the Romanian government issued a decision in December 2007 reaffirming the validity and legality of the restitution procedures used and confirming that the restitution was made in full compliance with the law.
On 18 May 2009, the Bran Castle administration was transferred from the government to the administration of Archduke Dominic and his sisters, Baroness Maria Magdalena of Holzhausen and Elisabeth Sandhofer. On 1 June 2009, the Habsburgs opened the refurbished castle to the public as the first private museum of the country and disclosed with Bran Village a joint strategic concept to maintain their domination in the Romanian tourist circuit and to safeguard the economic base in the region.
Gallery: the castle in 2012; off the beaten path; view from the main walkway; view from the upstairs balcony; view of the courtyard; view from inside; the secret passage inside the castle connecting the first and third floors; the cross in the castle gardens.
Coordinates: 45° 30′ 54″ N, 25° 22′ 02″ E (45.51500° N, 25.36722° E)
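As a small illustrative aside (not from the article), the two coordinate forms on the line above are related by the usual degrees - minutes - seconds conversion, sketched below.

```python
# Convert a degrees-minutes-seconds angle to decimal degrees.
def dms_to_decimal(degrees: int, minutes: int, seconds: float) -> float:
    return degrees + minutes / 60 + seconds / 3600

print(round(dms_to_decimal(45, 30, 54), 5))  # 45.515   (latitude 45° 30' 54" N)
print(round(dms_to_decimal(25, 22, 2), 5))   # 25.36722 (longitude 25° 22' 02" E)
```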
|
which structure is not part of the scapula | Scapula - wikipedia
In anatomy, the scapula (plural scapulae or scapulas), also known as the shoulder blade or wing bone, is the bone that connects the humerus (upper arm bone) with the clavicle (collar bone). Like their connected bones, the scapulae are paired, with the scapula on either side of the body being roughly a mirror image of the other. In early Roman times, people thought the bone resembled a trowel, a small shovel. The shoulder blade is also called omo in Latin medical terminology.
The scapula forms the back of the shoulder girdle. In humans, it is a flat bone, roughly triangular in shape, placed on a posterolateral aspect of the thoracic cage.
The scapula is a wide, flat bone lying on the thoracic wall that provides an attachment for three different groups of muscles. The intrinsic muscles of the scapula include the muscles of the rotator cuff -- the subscapularis, teres minor, supraspinatus, and infraspinatus. These muscles attach to the surface of the scapula and are responsible for the internal and external rotation of the shoulder joint, along with humeral abduction. The extrinsic muscles include the biceps, triceps, and deltoid muscles and attach to the coracoid process and supraglenoid tubercle of the scapula, infraglenoid tubercle of the scapula, and spine of the scapula. These muscles are responsible for several actions of the glenohumeral joint. The third group, which is mainly responsible for stabilization and rotation of the scapula, consists of the trapezius, serratus anterior, levator scapulae, and rhomboid muscles and attach to the medial, superior, and inferior borders of the scapula.
The head, processes, and the thickened parts of the bone contain cancellous tissue; the rest consists of a thin layer of compact tissue.
The central part of the supraspinous fossa and the upper part of the infraspinous fossa, but especially the former, are usually so thin in humans as to be semitransparent; occasionally the bone is found wanting in this situation, and the adjacent muscles are separated only by fibrous tissue.
The front of the scapula (also known as the costal or ventral surface) has a broad concavity called the subscapular fossa, to which the subscapularis muscle attaches. The medial two - thirds of the fossa have 3 longitudinal oblique ridges, and another thick ridge adjoins the lateral border; they run outward and upward. The ridges give attachment to the tendinous insertions, and the surfaces between them to the fleshy fibers, of the subscapularis muscle. The lateral third of the fossa is smooth and covered by the fibers of this muscle.
At the upper part of the fossa is a transverse depression, where the bone appears to be bent on itself along a line at right angles to and passing through the center of the glenoid cavity, forming a considerable angle, called the subscapular angle; this gives greater strength to the body of the bone by its arched form, while the summit of the arch serves to support the spine and acromion.
The superior part of the costal surface of the scapula is the origin of the first digitation of the serratus anterior.
The back of the scapula (also called the dorsal or posterior surface) is arched from above downward, and is subdivided into two unequal parts by the spine of the scapula. The portion above the spine is called the supraspinous fossa, and that below it the infraspinous fossa. The two fossae are connected by the spinoglenoid notch, situated lateral to the root of the spine.
There is a ridge on the outer part of the back of the scapula. This runs from the lower part of the glenoid cavity, downward and backward to the vertebral border, about 2.5 cm above the inferior angle. Attached to the ridge is a fibrous septum, which separates the infraspinatus muscle from the Teres major and Teres minor muscles. The upper two - thirds of the surface between the ridge and the axillary border is narrow, and is crossed near its center by a groove for the scapular circumflex vessels; the Teres minor attaches here.
The broad and narrow portions above alluded to are separated by an oblique line, which runs from the axillary border, downward and backward, to meet the elevated ridge: to it is attached a fibrous septum which separates the Teres muscles from each other.
Its lower third presents a broader, somewhat triangular surface, the Inferior angle of the scapula, which gives origin to the Teres major, and over which the Latissimus dorsi glides; frequently the latter muscle takes origin by a few fibers from this part.
The acromion forms the summit of the shoulder, and is a large, somewhat triangular or oblong process, flattened from behind forward, projecting at first laterally, and then curving forward and upward, so as to overhang the glenoid cavity.
There are 3 angles:
The superior angle of the scapula or medial angle, is covered by the trapezius muscle. This angle is formed by the junction of the superior and medial borders of the scapula. The superior angle is located at the approximate level of the second thoracic vertebra. The superior angle of the scapula is thin, smooth, rounded, and inclined somewhat lateralward, and gives attachment to a few fibers of the levator scapulae muscle.
The inferior angle of the scapula is the lowest part of the scapula and is covered by the latissimus dorsi muscle. It moves forwards round the chest when the arm is abducted. The inferior angle is formed by the union of the medial and lateral borders of the scapula. It is thick and rough and its posterior or back surface affords attachment to the teres major and often to a few fibers of the latissimus dorsi. The anatomical plane that passes vertically through the inferior angle is named the scapular line.
The lateral angle of the scapula, or glenoid angle, also known as the head of the scapula, is the thickest part of the scapula. It is broad and bears the glenoid cavity on its articular surface, which is directed forward, laterally and slightly upwards, and articulates with the head of the humerus. The cavity is broader below than above, and its vertical diameter is the longest. The surface is covered with cartilage in the fresh state; and its margins, slightly raised, give attachment to a fibrocartilaginous structure, the glenoidal labrum, which deepens the cavity. At its apex is a slight elevation, the supraglenoid tuberosity, to which the long head of the biceps brachii is attached.
The neck of the scapula is the slightly constricted portion which surrounds the head and is more distinct below and behind than above and in front.
Illustrations: the superior angle, lateral angle, neck and inferior angle of the scapula, each shown in red.
There are three borders of the scapula: the superior border, the lateral (axillary) border and the medial (vertebral) border.
Illustrations: the superior, lateral and medial borders of the left scapula, each shown in red.
Illustrations: the levator scapulae, rhomboid minor and rhomboid major muscles, each shown in red.
The scapula is ossified from 7 or more centers: one for the body, two for the coracoid process, two for the acromion, one for the vertebral border, and one for the inferior angle. Ossification of the body begins about the second month of fetal life, with an irregular quadrilateral plate of bone forming immediately behind the glenoid cavity. This plate extends to form the chief part of the bone, the scapular spine growing up from its dorsal surface about the third month. Ossification starts as membranous ossification before birth. After birth, the cartilaginous components undergo endochondral ossification. The larger part of the scapula undergoes membranous ossification. Some of the outer parts of the scapula are cartilaginous at birth, and therefore undergo endochondral ossification.
At birth, a large part of the scapula is osseous, but the glenoid cavity, the coracoid process, the acromion, the vertebral border and the inferior angle are cartilaginous. From the 15th to the 18th month after birth, ossification takes place in the middle of the coracoid process, which as a rule becomes joined with the rest of the bone about the 15th year.
Between the 14th and 20th years, the remaining parts ossify in quick succession, and usually in this order; first, in the root of the coracoid process, in the form of a broad scale; secondly, near the base of the acromion; thirdly, in the inferior angle and contiguous part of the vertebral border; fourthly, near the outer end of the acromion; fifthly, in the vertebral border. The base of the acromion is formed by an extension from the spine; the two nuclei of the acromion unite, and then join with the extension from the spine. The upper third of the glenoid cavity is ossified from a separate center (sub coracoid), which appears between the 10th and 11th years and joins between the 16th and the 18th years. Further, an epiphysial plate appears for the lower part of the glenoid cavity, and the tip of the coracoid process frequently has a separate nucleus. These various epiphyses are joined to the bone by the 25th year.
Failure of bony union between the acromion and spine sometimes occurs (see os acromiale), the junction being effected by fibrous tissue, or by an imperfect articulation; in some cases of supposed fracture of the acromion with ligamentous union, it is probable that the detached segment was never united to the rest of the bone.
"In terms of comparative anatomy the human scapula represents two bones that have become fused together; the (dorsal) scapula proper and the (ventral) coracoid. The epiphyseal line across the glenoid cavity is the line of fusion. They are the counterparts of the ilium and ischium of the pelvic girdle. ''
Numerous muscles attach to the scapula, including the rotator cuff muscles, deltoid, biceps, triceps, trapezius, serratus anterior, levator scapulae and rhomboids described above.
Movements of the scapula are brought about by the scapular muscles. The scapula can perform six actions: elevation, depression, protraction, retraction, and upward and downward rotation.
Because of its sturdy structure and protected location, fractures of the scapula are uncommon. When they do occur, they are an indication that severe chest trauma has occurred. Scapular fractures involving the neck of the scapula have two patterns. One (rare) type of fracture is through the anatomical neck of the scapula. The other more common type of fracture is through the surgical neck of the scapula. The surgical neck exits medial to the coracoid process.
An abnormally protruding inferior angle of the scapula is known as a winged scapula and can be caused by paralysis of the serratus anterior muscle. In this condition the sides of the scapula nearest the spine are positioned outward and backward. The appearance of the upper back is said to be wing - like. In addition, any condition causing weakness of the serratus anterior muscle may cause scapular "winging ''.
The scapula plays an important role in shoulder impingement syndrome.
Abnormal scapular function is called scapular dyskinesis. One action the scapula performs during a throwing or serving motion is elevation of the acromion process in order to avoid impingement of the rotator cuff tendons. If the scapula fails to properly elevate the acromion, impingement may occur during the cocking and acceleration phase of an overhead activity. The two muscles most commonly inhibited during this first part of an overhead motion are the serratus anterior and the lower trapezius. These two muscles act as a force couple within the glenohumeral joint to properly elevate the acromion process, and if a muscle imbalance exists, shoulder impingement may develop.
The name scapula as a synonym of shoulder blade is of Latin origin. It is commonly used in medical English and is part of the current official Latin nomenclature, Terminologia Anatomica. In classical Latin scapula is only used in its plural scapulae. Although some sources mention that scapulae was used during Roman antiquity to refer to the shoulders or to the shoulder blades, others maintain that the Romans used scapulae only to refer to the back, in contrast to the pectus, the Latin name for breast or chest.
The Roman encyclopedist Aulus Cornelius Celsus, who lived around the beginning of the common era, also used scapulae to refer to the back. He used os latum scapularum to refer to the shoulder blade. This expression can be translated as broad (Latin: latum) bone (Latin: os) of the back (Latin: scapularum). A similar expression in ancient Greek can be seen in the writings of the Greek philosopher Aristoteles and in the writings of the Greek physician Galen. They both use the name ὠμοπλάτη to refer to the shoulder blade. This compound consists of ancient Greek ὦμος, shoulder, and πλάτη, blade or flat or broad object. Πλάτη in its plural πλάται without ὦμο - was also used in ancient Greek to refer to the shoulder blades. In anatomic Latin, ὠμοπλάτη is Latinized as omoplata.
The Latin word umerus is related to ὦμος. With umerus the Romans referred to what is now commonly known in English as the humerus (the upper bone of the arm), the clavicle (the collarbone) and the scapula (the shoulder blade). The spelling humerus is actually incorrect in classical Latin. Those three bones were referred to as the ossa (Latin: bones) umeri (Latin: of the umerus). Umerus was also used to refer specifically to the shoulder. This mirrors the use of ὦμος in ancient Greek, as that could refer to the shoulder with the upper arm or to the shoulder alone. Since Celsus, the os umeri could refer specifically to the upper bone of the arm. The 16th century anatomist Andreas Vesalius used humerus to refer to the clavicle. Besides the aforementioned os latum scapularum, Celsus used os latum umeri to refer to the shoulder blade. Similarly, Laurentius used the expression latitudo umeri (latitudo = breadth, width) to refer to the shoulder blade.
The Roman physician Caelius Aurelianus (5th century) used pala to refer to the shoulder blade. The name pala is normally used to refer to a spade in Latin and was therefore probably used by Caelius Aurelianus to describe the shoulder blade, as both exhibit a flat curvature.
During the Middle Ages spathula was used to refer to the shoulder blade. Spathula is a diminutive of spatha, which originally meant a broad, two - edged sword without a point; a broad, flat, wooden instrument for stirring any liquid; a spattle, spatula or spathe of the palm tree. Its diminutive was used in classical and late Latin to refer to a leg of pork or a little palm branch. The English word spatula is actually derived from Latin spatula, an orthographic variant of spathula. Oddly enough, the classical Latin non-diminutive spatha can be translated as English spatula, while its Latin diminutive spatula is not translated as English spatula. Latin spatha is derived from ancient Greek σπάθη. Therefore, the form spathula is more akin to its origin than spatula. Ancient Greek σπάθη has a similar meaning to Latin spatha, as any broad blade, and can also refer to a spatula, to the broad blade of a sword, or to the blade of an oar. The aforementioned πλάται for shoulder blades was also used for blades of an oar. Concordantly, σπάθη was also used to refer to the shoulder blade. Interestingly, the English word spade, as well as the Dutch equivalent spade, is cognate with σπάθη. Note that the aforementioned term pala, as applied by the Roman physician Caelius Aurelianus, also means spade. Pala is probably related to the Latin verb pandere, to spread out and to extend. This verb is thought to be derived from an earlier form spandere, with the root spa -. Σπάθη is actually derived from the similar root spē (i), which means to extend. It seems that os latum scapularum, ὠμοπλάτη, πλάται, pala, spathula and σπάθη all refer to the same aspect of the shoulder blade, i.e. being a flat, broad blade, with the latter three words etymologically related to each other.
After the Middle Ages, the name scapula for shoulder blade became dominant. The word scapula can etymologically be explained by its relatedness to the ancient Greek verb σκάπτειν, to dig. This relatedness gives rise to several possible explanations. First, the noun σκάπετος, trench, derived from this verb, and the related noun σκάφη, similarly derived from the aforementioned verb, might connect scapula to the notion of concavity. The name scapula might then refer to the concavity formed on the scapula by the presence of its spine. Alternatively, the designation scapulae is also seen as a synonym of ancient Greek συνωμία, the space between the shoulder blades, which is obviously concave. Συνωμία consists of σύν, together with, and ὦμος, shoulder. Second, scapula, due to its relatedness to σκάπτειν, might originally have meant shovel. Similar to the resemblance between the Latin pala (spade) and the shoulder blade, a resemblance might be felt between the shape of a shovel and the shoulder blade. Alternatively, the shoulder blade might originally have been used for digging and shoveling.
Shoulder blade is the colloquial name for this bone. Shoulder is cognate with the German and Dutch equivalents Schulter and schouder. There are a few etymological explanations for shoulder. The first supposes that shoulder can be literally translated as that which shields or protects, as it is possibly related to Icelandic skioldr, shield, and skyla, to cover, to defend. The second explanation relates shoulder to ancient Greek σκέλος, leg, and points to the possible root skel -, meaning to bend, to curve. The third explanation links the root skel - to the meaning to cleave. This meaning could refer to the shape of the shoulder blade.
In fish, the scapular blade is a structure attached to the upper surface of the articulation of the pectoral fin, and is accompanied by a similar coracoid plate on the lower surface. Although sturdy in cartilaginous fish, both plates are generally small in most other fish, and may be partially cartilaginous, or consist of multiple bony elements.
In the early tetrapods, these two structures respectively became the scapula and a bone referred to as the procoracoid (commonly called simply the "coracoid '', but not homologous with the mammalian structure of that name). In amphibians and reptiles (birds included), these two bones are distinct, but together form a single structure bearing many of the muscle attachments for the forelimb. In such animals, the scapula is usually a relatively simple plate, lacking the projections and spine that it possesses in mammals. However, the detailed structure of these bones varies considerably in living groups. For example, in frogs, the procoracoid bones may be braced together at the animal 's underside to absorb the shock of landing, while in turtles, the combined structure forms a Y - shape in order to allow the scapula to retain a connection to the clavicle (which is part of the shell). In birds, the procoracoids help to brace the wing against the top of the sternum.
In the fossil therapsids, a third bone, the true coracoid, formed just behind the procoracoid. The resulting three - boned structure is still seen in modern monotremes, but in all other living mammals, the procoracoid has disappeared, and the coracoid bone has fused with the scapula, to become the coracoid process. These changes are associated with the upright gait of mammals, compared with the more sprawling limb arrangement of reptiles and amphibians; the muscles formerly attached to the procoracoid are no longer required. The altered musculature is also responsible for the alteration in the shape of the rest of the scapula; the forward margin of the original bone became the spine and acromion, from which the main shelf of the shoulder blade arises as a new structure.
In dinosaurs the main bones of the pectoral girdle were the scapula (shoulder blade) and the coracoid, both of which directly articulated with the clavicle. The clavicle was present in saurischian dinosaurs but largely absent in ornithischian dinosaurs. The place on the scapula where it articulated with the humerus (upper bone of the forelimb) is called the glenoid. The scapula serves as the attachment site for a dinosaur 's back and forelimb muscles.
|
when was i still haven't found what i'm looking for written | I Still Have n't Found What I 'm Looking For - wikipedia
"I Still Have n't Found What I 'm Looking For '' is a song by Irish rock band U2. It is the second track from their 1987 album The Joshua Tree and was released as the album 's second single in May 1987. The song was a hit, becoming the band 's second consecutive number - one single on the US Billboard Hot 100 while peaking at number six on the UK Singles Chart.
The song originated from a demo the band recorded on which drummer Larry Mullen Jr. played a unique rhythm pattern. Like much of The Joshua Tree, the song was inspired by the group 's interest in American music. "I Still Have n't Found What I 'm Looking For '' exhibits influences from gospel music and its lyrics describe spiritual yearning. Lead singer Bono 's vocals are in high register and lead guitarist the Edge plays a chiming arpeggio. Adding to the gospel qualities of the song are choir - like backing vocals provided by the Edge and producers Brian Eno and Daniel Lanois.
"I Still Have n't Found What I 'm Looking For '' was critically acclaimed and received two nominations at the 30th Annual Grammy Awards in 1988, for Record of the Year and Song of the Year. It has subsequently become one of the group 's most well - known songs and has been performed on many of their concert tours. The track has appeared on several of their compilations and concert films. Many critics and publications have ranked "I Still Have n't Found What I 'm Looking For '' among the greatest tracks in music history including Rolling Stone which ranked the song at # 93 of its list of "The 500 Greatest Songs of All Time ''.
"I Still Have n't Found What I 'm Looking For '' originated from a demo variously titled "The Weather Girls '' and "Under the Weather '' that the band recorded during a jam session. Bassist Adam Clayton called the demo 's melody "a bit of a one - note groove '', while an unconvinced The Edge, the band 's guitarist, compared it to "' Eye of the Tiger ' played by a reggae band ''. However, the band liked the drum part played by drummer Larry Mullen Jr. Co-producer Daniel Lanois said, "It was a very original beat from Larry. We always look for those beats that would qualify as a signature for the song. And that certainly was one of those. It had this tom - tom thing that he does and nobody ever understands. And we just did n't want to let go of that beat, it was so unique. '' Lanois encouraged Mullen to continue developing the weird drum pattern beyond the demo. Mullen said the beat became even more unusual, and although Lanois eventually mixed most of the pattern out to just keep the basics, the rhythm became the root of "I Still Have n't Found What I 'm Looking For ''.
The group worked on the track at the studio they had set up at Danesmoate House in Dublin. Lanois compared the creation of the song to constructing a building, first laying down the drums as the foundation, then adding additional layers piece by piece, before finally "putting in furniture ''. Lead singer Bono was interested in the theme of spiritual doubt, which was fostered by Eno 's love for gospel music, and by Bono 's listening to songs by The Swan Silvertones, The Staple Singers, and Blind Willie Johnson. After the Edge wrote a chord sequence and played it on acoustic guitar "with a lot of power in the strumming '', the group attempted to compose a suitable vocal melody, trying out a variety of ideas. During a jam session, Bono began singing a "classic soul '' melody, and it was this addition that made the Edge hear the song 's potential. At that point, he remembered a phrase he had written in a notebook that morning as a possible song title, "I still have n't found what I 'm looking for ''. He suggests it was influenced by a line from the Bob Dylan song "Idiot Wind '': "You 'll find out when you reach the top you 're on the bottom ''. He wrote the phrase on a piece of paper and handed it to Bono while he was singing. The Edge called the phrase 's fit with the song "like hand in glove ''. From that point on, the song was the first piece played to visitors during the recording sessions.
As recording continued, a number of guitar overdubs were added, including an auto - pan effect and a chiming arpeggio to modernise the old - style "gospel song ''. While the Edge was improvising guitar parts one day, Bono heard a "chrome bells '' guitar hook that he liked. It was added as a counter-melody to the song 's "muddy shoes '' guitar part, and it is this hook that the Edge plays during live performances of the song. Bono sang in the upper register of his range to add to the feeling of spiritual yearning; in the verses he hits a B - flat note, and an A-flat in the chorus. Background vocals were provided by the Edge, Lanois, and co-producer Brian Eno, their voices being multi-tracked. Lanois suggests that his and Eno 's involvement in the track 's creation helped their vocals. He stated, "You 're not going to get that sound of, ' Oh they brought in some soul singers ' if you know what I mean. Our hearts and souls are already there. If we sing it 'll sound more real. '' Lanois also played a percussive guitar part, which is heard in the introduction. The song 's writing was completed relatively early during the band 's time at Danesmoate House. The mix took longer to complete, though, with most of the production team contributing. The final mix was completed by Lanois and the Edge in a home studio set up at Melbeach, a house purchased by the Edge. They mixed it on top of a previous Steve Lillywhite mix, which gave the song a phasing sound.
Lanois says he is very attached to "I Still Have n't Found What I 'm Looking For '' and has, on occasion, joined U2 on stage to perform it. The original "Weather Girls '' demo, re-titled "Desert of Our Love '', was included with the 2007 remastered version of The Joshua Tree on a bonus disc of outtakes and B - sides.
Initially, "Red Hill Mining Town '' was planned for release as the second single. However, Bono was unable to sing the song during pre-tour rehearsals and the band were reportedly unhappy with the video shot by Neil Jordan, so "I Still Have n't Found What I 'm Looking For '' became a late choice for the second single. The single was released in May 1987. On the US Billboard Hot 100, the song debuted at number 51 on 13 June 1987. After nearly 2 months on the chart, the song reached number one on 8 August 1987, becoming the band 's second consecutive number - one hit in the United States. The song spent two weeks in the top spot, and remained on the chart for 17 weeks. On other Billboard charts, the song peaked at number 16 on the Adult Contemporary chart, and number two on the Album Rock Tracks chart. The song also topped the Irish Singles Chart, while peaking at number six on the Canadian RPM Top 100 and the UK Singles Chart. In New Zealand, the song peaked at number two on the RIANZ Top 40 Singles Chart, while reaching number six on the Dutch Top 40 and number 11 on the Swedish Singles Chart.
The music video for the song was filmed on Fremont Street in Las Vegas on 12 April 1987 following their Joshua Tree Tour concert in that city. It features the band members wandering around while the Edge plays an acoustic guitar. The music video was later re-released on The U218 Videos compilation DVD. Pat Christenson, president of Las Vegas 's official event organization, credits the group 's video with improving the city 's image among musicians. "The whole perception of Vegas changed with that video, '' Christenson said, adding, "Now all the big names come here, some of them five, six times a year. ''
"Spanish Eyes '' was created early during The Joshua Tree sessions. It began as a recording made in Adam Clayton 's house of Clayton, the Edge, and Larry Mullen Jr. playing around with several different elements. The piece evolved substantially over the course of an afternoon, but the cassette and its recording was subsequently lost and forgotten. The Edge found the cassette towards the end of the album sessions and played it to the rest of the group. The band realised that it was a good track, but did not have enough time to complete it prior to The Joshua Tree 's release.
"Deep in the Heart '' stemmed from a three - chord piano piece Bono composed on the piano about the last time he had been in the family home on Cedarwood Road in Dublin, which his father had just sold. The memories of his time living there gave rise to many of the lyrical ideas on the song. The Edge and Adam Clayton reworked the piece extensively, with Bono later describing the finished result as "an almost jazz - like improvisation on three chords '', also noting that "the rhythm section turned it into a very special piece of music. '' The song was recorded in a similar manner to the song "4th of July '' from U2 's 1984 album, The Unforgettable Fire; the Edge and Clayton were playing together in a room and unaware that they were being recorded on a 4 - track cassette machine by the band 's assistant, Marc Coleman.
The song is U2 's 9th most played live song, and has been played on every tour. It was played at every date of The Joshua Tree and Lovetown Tours, typically early in the main set. It was played at most of the 1992 legs of the Zoo TV Tour, typically rounding out the main set or being played acoustically on the B - Stage mid set. For most of the 1993 Zooropa shows however, the song was dropped. It returned to be played at each of the PopMart Tour 's 93 shows, usually being played midway through the set. On the Elevation Tour it initially was very rare, only appearing once over the first and second legs. However, it became a regular again on the 3rd leg, being played late in the main set replacing the song "Mysterious Ways '', which was used in that spot on the previous two legs. It was played at the majority of both the Vertigo and U2 360 ° Tours, typically early - to - mid main set. It was used as the closing song at just under half of the shows on the Innocence + Experience Tour, rotating with "One '' and "40 ''.
Island Records commissioned New York choir director Dennis Bell to record a gospel version of the song, and Island intended to release it after U2 's single. However, Island boss Chris Blackwell vetoed the plan. Bell subsequently formed his own label and Rohit Jagessar picked it up for distribution in the US. While the band was in Glasgow in late July 1987 during the Joshua Tree Tour, Rob Partridge of Island Records played them the demo that Bell and his choir, the New Voices of Freedom, had made. In late September, U2 rehearsed with Bell 's choir in Greater Calvary Baptist Church in Harlem for a performance together a few days later at U2 's Madison Square Garden concert. The Edge 's guitar was the only instrument that U2 brought to the church, although Mullen borrowed a conga drum. The rehearsal was done with the church 's audio system, and footage was used in the Rattle and Hum motion picture. Several performances were made with a piano player; however, the version used in the film includes only Bono, the Edge, Mullen, and the choir. Audio from the Madison Square Garden performance appears on the accompanying album.
A live performance of the song appears in the concert films PopMart: Live from Mexico City, Vertigo 05: Live from Milan, Live from Paris and the most recent U2 360 ° at the Rose Bowl. The versions on the Mexico City and Milan concert films consist of just Bono 's voice and the Edge 's guitar until after the first chorus where the drum and bass parts kick in. Digital live versions were released through iTunes on the Love: Live from the Point Depot and U2. COMmunication albums.
"I Still Have n't Found What I 'm Looking For '' received widespread critical acclaim. Hot Press journalist Bill Graham described the song as on the one - hand as a "smart job of pop handwork, pretty standard American radio rock - ballad fare '' but that "the band 's rhythms are far more supple and cultivated than your average bouffant HM band of that period ''. The Sunday Independent suggested that the song was proof the band could be commercially accessible without resorting to rock clichés. NME remarked that the song showed that the band cared about something, which made them "special ''. The Rocket noted that Bono 's lyrics about needing personal spirituality resulted in a "unique marriage of American gospel and Gaelic soul '' and that the "human perspective he brings to this sentiment rings far truer than the rantings of, say, the born - again Bob Dylan ''. Several publications, including The Bergen Record and The Boston Globe, called the track "hypnotic '' and interpreted it as depicting the band on a spiritual quest. The song finished in 18th place on the "Best Singles '' list from The Village Voice 's 1987 Pazz & Jop critics ' poll.
"I Still Have n't Found What I 'm Looking For '' has been acclaimed by many critics and publications as one of the greatest songs of all time. In 2001, the song was ranked at number 120 on the RIAA 's list of 365 "Songs of the Century '' -- a project intended to "promote a better understanding of America 's musical and cultural heritage '' -- despite the group 's Irish origins. In 2003, a special edition issue of Q, titled "1001 Best Songs Ever '', placed "I Still Have n't Found What I 'm Looking For '' at number 148 on its list of the greatest songs. In 2005, Blender ranked the song at number 443 on its list of "The 500 Greatest Songs Since You Were Born ''. In 2010, Rolling Stone placed the song at number 93 of its list of "The 500 Greatest Songs of All Time ''. Los Angeles Times critic Robert Hilburn called it U2 's "Let It Be '', in reference to the Beatles song. The staff of the Rock and Roll Hall of Fame selected "I Still Have n't Found What I 'm Looking For '' as one of 500 Songs that Shaped Rock and Roll.
The song was covered by Scottish band the Chimes in 1990 and was featured on their self - titled debut album. The rendition peaked at number six on both the United Kingdom and New Zealand charts. It also peaked at number twelve on the Netherlands chart.
All tracks written by U2.
U2
Additional performers
"I Still Have n't Found What I 'm Looking For '' by Scottish band The Chimes is a 1990 dance remake of U2 's "I Still Have n't Found What I 'm Looking For, '' which became a UK Top Ten hit.
The song was very successful, reaching No. 2 in Norway and No. 6 in the UK, Ireland, and New Zealand.
Bono from U2 commented that the Chimes ' cover of their hit "I Still Have n't Found What I 'm Looking For '' was the "only cover version he had heard that he enjoyed and did the original justice '', adding "at last someone 's come along to sing it properly ''.
|
where does the mississippi river water come from | Mississippi River - wikipedia
The Mississippi River is the chief river of the second - largest drainage system on the North American continent, second only to the Hudson Bay drainage system. The stream lies entirely within the United States (although its drainage basin reaches into Canada); its source is Lake Itasca in northern Minnesota, and it flows generally south for 2,320 miles (3,730 km) to the Mississippi River Delta in the Gulf of Mexico. With its many tributaries, the Mississippi 's watershed drains all or parts of 31 U.S. states and two Canadian provinces between the Rocky and Appalachian Mountains. The Mississippi ranks as the fourth - longest river in the world and the fifteenth - largest by discharge. The river either borders or passes through the states of Minnesota, Wisconsin, Iowa, Illinois, Missouri, Kentucky, Tennessee, Arkansas, Mississippi, and Louisiana.
Native Americans long lived along the Mississippi River and its tributaries. Most were hunter - gatherers, but some, such as the Mound Builders, formed prolific agricultural societies. The arrival of Europeans in the 16th century changed the native way of life as first explorers, then settlers, ventured into the basin in increasing numbers. The river served first as a barrier, forming borders for New Spain, New France, and the early United States, and then as a vital transportation artery and communications link. In the 19th century, during the height of the ideology of manifest destiny, the Mississippi and several western tributaries, most notably the Missouri, formed pathways for the western expansion of the United States.
Formed from thick layers of the river 's silt deposits, the Mississippi embayment is one of the most fertile agricultural regions of the country, which resulted in the river 's storied steamboat era. During the American Civil War, the Mississippi 's capture by Union forces marked a turning point towards victory due to the river 's importance as a route of trade and travel, not least to the Confederacy. Because of substantial growth of cities and the larger ships and barges that supplanted riverboats, the first decades of the 20th century saw the construction of massive engineering works such as levees, locks and dams, often built in combination.
Since modern development of the basin began, the Mississippi has also seen its share of pollution and environmental problems -- most notably large volumes of agricultural runoff, which has led to the Gulf of Mexico dead zone off the Delta. In recent years, the river has shown a steady shift towards the Atchafalaya River channel in the Delta; a course change would be an economic disaster for the port city of New Orleans.
The word Mississippi itself comes from Messipi, the French rendering of the Anishinaabe (Ojibwe or Algonquin) name for the river, Misi - ziibi (Great River).
In the 18th century, the river was the primary western boundary of the young United States, and since the country 's expansion westward, the Mississippi River has been widely considered a convenient if approximate dividing line between the Eastern, Southern, and Midwestern United States, and the Western United States. This is exemplified by the Gateway Arch in St. Louis and the phrase "Trans - Mississippi '' as used in the name of the Trans - Mississippi Exposition.
It is common to qualify a regionally superlative landmark in relation to it, such as "the highest peak east of the Mississippi '' or "the oldest city west of the Mississippi ''. The FCC also uses it as the dividing line for broadcast callsigns, which begin with W to the east and K to the west, mixing together in media markets along the river.
The geographical setting of the Mississippi River includes considerations of the course of the river itself, its watershed, its outflow, its prehistoric and historic course changes, and possibilities of future course changes. The New Madrid Seismic Zone along the river is also noteworthy. These various basic geographical aspects of the river in turn underlie its human history and present uses of the waterway and its adjacent lands.
The Mississippi River can be divided into three sections: the Upper Mississippi, the river from its headwaters to the confluence with the Missouri River; the Middle Mississippi, which is downriver from the Missouri to the Ohio River; and the Lower Mississippi, which flows from the Ohio to the Gulf of Mexico.
The Upper Mississippi runs from its headwaters to its confluence with the Missouri River at St. Louis, Missouri. It is divided into two sections:
The source of the Upper Mississippi branch is traditionally accepted as Lake Itasca, 1,475 feet (450 m) above sea level in Itasca State Park in Clearwater County, Minnesota. The name "Itasca '' was chosen to designate the "true head '' of the Mississippi River as a combination of the last four letters of the Latin word for truth (veritas) and the first two letters of the Latin word for head (caput). However, the lake is in turn fed by a number of smaller streams.
From its origin at Lake Itasca to St. Louis, Missouri, the waterway 's flow is moderated by 43 dams. Fourteen of these dams are located above Minneapolis in the headwaters region and serve multiple purposes, including power generation and recreation. The remaining 29 dams, beginning in downtown Minneapolis, all contain locks and were constructed to improve commercial navigation of the upper river. Taken as a whole, these 43 dams significantly shape the geography and influence the ecology of the upper river. Beginning just below Saint Paul, Minnesota, and continuing throughout the upper and lower river, the Mississippi is further controlled by thousands of wing dikes that moderate the river 's flow in order to maintain an open navigation channel and prevent the river from eroding its banks.
The head of navigation on the Mississippi is the Coon Rapids Dam in Coon Rapids, Minnesota. Before it was built in 1913, steamboats could occasionally go upstream as far as Saint Cloud, Minnesota, depending on river conditions.
The uppermost lock and dam on the Upper Mississippi River is the Upper St. Anthony Falls Lock and Dam in Minneapolis. Above the dam, the river 's elevation is 799 feet (244 m). Below the dam, the river 's elevation is 750 feet (230 m). This 49 - foot (15 m) drop is the largest of all the Mississippi River locks and dams. The origin of the dramatic drop is a waterfall preserved adjacent to the lock under an apron of concrete. Saint Anthony Falls is the only true waterfall on the entire Mississippi River. The water elevation continues to drop steeply as it passes through the gorge carved by the waterfall.
After the completion of the St. Anthony Falls Lock and Dam in 1963, the river 's head of navigation moved upstream, to the Coon Rapids Dam. However, the Locks were closed in 2015 to control the spread of invasive Asian carp, making Minneapolis once again the site of the head of navigation of the river.
The Upper Mississippi has a number of natural and artificial lakes, with its widest point being Lake Winnibigoshish, near Grand Rapids, Minnesota, over 11 miles (18 km) across. Lake Onalaska, created by Lock and Dam No. 7, near La Crosse, Wisconsin, is more than 4 miles (6.4 km) wide. Lake Pepin, a natural lake formed behind the delta of the Chippewa River of Wisconsin as it enters the Upper Mississippi, is more than 2 miles (3.2 km) wide.
By the time the Upper Mississippi reaches Saint Paul, Minnesota, below Lock and Dam No. 1, it has dropped more than half its original elevation and is 687 feet (209 m) above sea level. From St. Paul to St. Louis, Missouri, the river elevation falls much more slowly, and is controlled and managed as a series of pools created by 26 locks and dams.
The Upper Mississippi River is joined by the Minnesota River at Fort Snelling in the Twin Cities; the St. Croix River near Prescott, Wisconsin; the Cannon River near Red Wing, Minnesota; the Zumbro River at Wabasha, Minnesota; the Black, La Crosse, and Root rivers in La Crosse, Wisconsin; the Wisconsin River at Prairie du Chien, Wisconsin; the Rock River at the Quad Cities; the Iowa River near Wapello, Iowa; the Skunk River south of Burlington, Iowa; and the Des Moines River at Keokuk, Iowa. Other major tributaries of the Upper Mississippi include the Crow River in Minnesota, the Chippewa River in Wisconsin, the Maquoketa River and the Wapsipinicon River in Iowa, and the Illinois River in Illinois.
The Upper Mississippi is largely a multi-thread stream with many bars and islands. From its confluence with the St. Croix River downstream to Dubuque, Iowa, the river is entrenched, with high bedrock bluffs lying on either side. The height of these bluffs decreases to the south of Dubuque, though they are still significant through Savanna, Illinois. This topography contrasts strongly with the Lower Mississippi, which is a meandering river in a broad, flat area, only rarely flowing alongside a bluff (as at Vicksburg, Mississippi).
The Mississippi River is known as the Middle Mississippi from the Upper Mississippi River 's confluence with the Missouri River at St. Louis, Missouri, for 190 miles (310 km) to its confluence with the Ohio River at Cairo, Illinois.
The Middle Mississippi is relatively free - flowing. From St. Louis to the Ohio River confluence, the Middle Mississippi falls 220 feet (67 m) over 180 miles (290 km) for an average rate of 1.2 feet per mile (23 cm / km). At its confluence with the Ohio River, the Middle Mississippi is 315 feet (96 m) above sea level. Apart from the Missouri and Meramec rivers of Missouri and the Kaskaskia River of Illinois, no major tributaries enter the Middle Mississippi River.
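The average gradient quoted above follows directly from the stated drop and distance; the short calculation below is only an illustrative sanity check of those figures, not part of the source article.

# Recompute the Middle Mississippi's average gradient from the figures
# quoted in the text: a 220-foot fall over 180 river miles.
drop_ft = 220
distance_mi = 180

ft_per_mile = drop_ft / distance_mi                 # ~1.2 ft/mi
cm_per_km = ft_per_mile * 30.48 / 1.609344          # 1 ft = 30.48 cm, 1 mi = 1.609344 km

print(f"{ft_per_mile:.1f} ft/mi ~ {cm_per_km:.0f} cm/km")  # prints: 1.2 ft/mi ~ 23 cm/km

The result matches the 1.2 feet per mile (23 cm / km) figure given in the text.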
The Mississippi River is called the Lower Mississippi River from its confluence with the Ohio River to its mouth at the Gulf of Mexico, a distance of about 1,000 miles (1,600 km). At the confluence of the Ohio and the Middle Mississippi, the long - term mean discharge of the Ohio at Cairo, Illinois is 281,500 cubic feet per second (7,970 cubic meters per second), while the long - term mean discharge of the Mississippi at Thebes, Illinois (just upriver from Cairo) is 208,200 cu ft / s (5,900 m³ / s). Thus, by volume, the main branch of the Mississippi River system at Cairo can be considered to be the Ohio River (and the Allegheny River further upstream), rather than the Middle Mississippi.
In addition to the Ohio River, the major tributaries of the Lower Mississippi River are the White River, flowing in at the White River National Wildlife Refuge in east central Arkansas; the Arkansas River, joining the Mississippi at Arkansas Post; the Big Black River in Mississippi; and the Yazoo River, meeting the Mississippi at Vicksburg, Mississippi. The widest point of the Mississippi River is in the Lower Mississippi portion where it exceeds 1 mile (1.6 km) in width in several places.
Deliberate water diversion at the Old River Control Structure in Louisiana allows the Atchafalaya River in Louisiana to be a major distributary of the Mississippi River, with 30 % of the Mississippi flowing to the Gulf of Mexico by this route, rather than continuing down the Mississippi 's current channel past Baton Rouge and New Orleans on a longer route to the Gulf. Although the Red River is commonly thought to be a tributary, it is actually not, because its water flows separately into the Gulf of Mexico through the Atchafalaya River.
The Mississippi River has the world 's fourth - largest drainage basin ("watershed '' or "catchment ''). The basin covers more than 1,245,000 square miles (3,220,000 km²), including all or parts of 31 U.S. states and two Canadian provinces. The drainage basin empties into the Gulf of Mexico, part of the Atlantic Ocean. The total catchment of the Mississippi River covers nearly 40 % of the landmass of the continental United States. The highest point within the watershed is also the highest point of the Rocky Mountains, Mount Elbert at 14,440 feet (4,400 m).
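As a rough check of the "nearly 40 %" figure above, the quoted basin area can be compared with the land area of the conterminous United States; the area value used below (about 3.12 million square miles) is an outside assumption, not a figure from the article, so this is only an illustrative sketch.

# Illustrative check of the basin-to-landmass comparison quoted above.
# basin_sq_mi comes from the text; conterminous_us_sq_mi is an assumed
# figure for the lower 48 states, not taken from the article.
basin_sq_mi = 1_245_000
conterminous_us_sq_mi = 3_120_000  # assumption

share = basin_sq_mi / conterminous_us_sq_mi
print(f"Basin covers about {share:.0%} of the conterminous U.S.")  # ~40%

Under that assumed land area, the basin works out to roughly 40 % of the conterminous United States, consistent with the article's statement.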
In the United States, the Mississippi River drains the majority of the area between the crest of the Rocky Mountains and the crest of the Appalachian Mountains, except for various regions drained to Hudson Bay by the Red River of the North; to the Atlantic Ocean by the Great Lakes and the Saint Lawrence River; and to the Gulf of Mexico by the Rio Grande, the Alabama and Tombigbee rivers, the Chattahoochee and Apalachicola rivers, and various smaller coastal waterways along the Gulf.
The Mississippi River empties into the Gulf of Mexico about 100 miles (160 km) downstream from New Orleans. Measurements of the length of the Mississippi from Lake Itasca to the Gulf of Mexico vary somewhat, but the United States Geological Survey 's number is 2,320 miles (3,730 km). The retention time from Lake Itasca to the Gulf is typically about 90 days.
The Mississippi River discharges at an annual average rate of between 200 and 700 thousand cubic feet per second (7,000 -- 20,000 m³ / s). Although it is the fifth - largest river in the world by volume, this flow is a small fraction of the output of the Amazon, which moves nearly 7 million cubic feet per second (200,000 m³ / s) during wet seasons. On average, the Mississippi has only 8 % the flow of the Amazon River.
Fresh river water flowing from the Mississippi into the Gulf of Mexico does not mix into the salt water immediately. The images from NASA 's MODIS show a large plume of fresh water, which appears as a dark ribbon against the lighter - blue surrounding waters. These images demonstrate that the plume did not mix with the surrounding sea water immediately. Instead, it stayed intact as it flowed through the Gulf of Mexico, into the Straits of Florida, and entered the Gulf Stream. The Mississippi River water rounded the tip of Florida and traveled up the southeast coast to the latitude of Georgia before finally mixing in so thoroughly with the ocean that it could no longer be detected by MODIS.
Before 1900, the Mississippi River transported an estimated 400 million metric tons of sediment per year from the interior of the United States to coastal Louisiana and the Gulf of Mexico. During the last two decades, this number was only 145 million metric tons per year. The reduction in sediment transported down the Mississippi River is the result of engineering modification of the Mississippi, Missouri, and Ohio rivers and their tributaries by dams, meander cutoffs, river - training structures, and bank revetments and soil erosion control programs in the areas drained by them.
Over geologic time, the Mississippi River has experienced numerous large and small changes to its main course, as well as additions, deletions, and other changes among its numerous tributaries, and the lower Mississippi River has used different pathways as its main channel to the Gulf of Mexico across the delta region.
Through a natural process known as avulsion or delta switching, the lower Mississippi River has shifted its final course to the mouth of the Gulf of Mexico every thousand years or so. This occurs because the deposits of silt and sediment begin to clog its channel, raising the river 's level and causing it to eventually find a steeper, more direct route to the Gulf of Mexico. The abandoned distributaries diminish in volume and form what are known as bayous. This process has, over the past 5,000 years, caused the coastline of south Louisiana to advance toward the Gulf from 15 to 50 miles (24 to 80 km). The currently active delta lobe is called the Birdfoot Delta, after its shape, or the Balize Delta, after La Balize, Louisiana, the first French settlement at the mouth of the Mississippi.
The current form of the Mississippi River basin was largely shaped by the Laurentide Ice Sheet of the most recent Ice Age. The southernmost extent of this enormous glaciation extended well into the present - day United States and Mississippi basin. When the ice sheet began to recede, hundreds of feet of rich sediment were deposited, creating the flat and fertile landscape of the Mississippi Valley. During the melt, giant glacial rivers found drainage paths into the Mississippi watershed, creating such features as the Minnesota River, James River, and Milk River valleys. When the ice sheet completely retreated, many of these "temporary '' rivers found paths to Hudson Bay or the Arctic Ocean, leaving the Mississippi Basin with many features "oversized '' for the existing rivers to have carved in the same time period.
Ice sheets during the Illinoian Stage about 300,000 to 132,000 years before present, blocked the Mississippi near Rock Island, Illinois, diverting it to its present channel farther to the west, the current western border of Illinois. The Hennepin Canal roughly follows the ancient channel of the Mississippi downstream from Rock Island to Hennepin, Illinois. South of Hennepin, to Alton, Illinois, the current Illinois River follows the ancient channel used by the Mississippi River before the Illinoian Stage.
Timeline of outflow course changes
In March 1876, the Mississippi suddenly changed course near the settlement of Reverie, Tennessee, leaving a small part of Tipton County, Tennessee, attached to Arkansas and separated from the rest of Tennessee by the new river channel. Since this event was an avulsion, rather than the effect of incremental erosion and deposition, the state line still follows the old channel.
The town of Kaskaskia, Illinois once stood on a peninsula at the confluence of the Mississippi and Kaskaskia (Okaw) Rivers. Founded as a French colonial community, it later became the capital of the Illinois Territory and was the first state capital of Illinois until 1819. Beginning in 1844, successive flooding caused the Mississippi River to slowly encroach east. A major flood in 1881 caused it to overtake the lower 10 miles of the Kaskaskia River, forming a new Mississippi channel and cutting off the town from the rest of the state. Later flooding destroyed most of the remaining town, including the original State House. Today, the remaining 2,300 acre island and community of 14 residents is known as an enclave of Illinois and is accessible only from the Missouri side.
The New Madrid Seismic Zone, along the Mississippi River near New Madrid, Missouri, between Memphis and St. Louis, is related to an aulacogen (failed rift) that formed at the same time as the Gulf of Mexico. This area is still quite active seismically. Four great earthquakes in 1811 and 1812, estimated at approximately 8 on the Richter magnitude scale, had tremendous local effects in the then sparsely settled area, and were felt in many other places in the midwestern and eastern U.S. These earthquakes created Reelfoot Lake in Tennessee from the altered landscape near the river.
When measured from its traditional source at Lake Itasca, the Mississippi has a length of 2,320 miles (3,730 km). When measured from its longest stream source (most distant source from the sea), Brower 's Spring in Montana, the source of the Missouri River, it has a length of 3,710 miles, making it the fourth longest river in the world after the Nile, Amazon, and Yangtze. When measured by the largest stream source (by water volume), the Ohio River, by extension the Allegheny River, would be the source, and the Mississippi would begin in Pennsylvania.
The Mississippi River runs through or along 10 states, from Minnesota to Louisiana, and is used to define portions of these states ' borders, with Wisconsin, Illinois, Kentucky, Tennessee, and Mississippi along the east side of the river, and Iowa, Missouri, and Arkansas along its west side. Substantial parts of both Minnesota and Louisiana are on either side of the river, although the Mississippi defines part of the boundary of each of these states.
In all of these cases, the middle of the riverbed at the time the borders were established was used as the line to define the borders between adjacent states. In various areas, the river has since shifted, but the state borders have not changed, still following the former bed of the Mississippi River as of their establishment, leaving several small isolated areas of one state across the new river channel, contiguous with the adjacent state. Also, due to a meander in the river, a small part of western Kentucky is contiguous with Tennessee, but isolated from the rest of its state.
Many of the communities along the Mississippi River are listed below; most have either historic significance or cultural lore connecting them to the river. They are sequenced from the source of the river to its end.
The road crossing highest on the Upper Mississippi is a simple steel culvert, through which the river (locally named "Nicolet Creek '') flows north from Lake Nicolet under "Wilderness Road '' to the West Arm of Lake Itasca, within Itasca State Park.
The earliest bridge across the Mississippi River was built in 1855. It spanned the river in Minneapolis where the current Hennepin Avenue Bridge is located. No highway or railroad tunnels cross under the Mississippi River.
The first railroad bridge across the Mississippi was built in 1856. It spanned the river between the Rock Island Arsenal in Illinois and Davenport, Iowa. Steamboat captains of the day, fearful of competition from the railroads, considered the new bridge a hazard to navigation. Two weeks after the bridge opened, the steamboat Effie Afton rammed part of the bridge, setting it on fire. Legal proceedings ensued, with Abraham Lincoln defending the railroad. The lawsuit went to the Supreme Court of the United States, which ruled in favor of the railroad.
Below is a general overview of selected Mississippi bridges which have notable engineering or landmark significance, with their cities or locations. They are sequenced from the Upper Mississippi 's source to the Lower Mississippi 's mouth.
A clear channel is needed for the barges and other vessels that make the main stem Mississippi one of the great commercial waterways of the world. The task of maintaining a navigation channel is the responsibility of the United States Army Corps of Engineers, which was established in 1802. Earlier projects began as early as 1829 to remove snags, close off secondary channels and excavate rocks and sandbars.
Steamboats entered trade in the 1820s, so the period 1830 -- 1850 became the golden age of steamboats. As there were few roads or rails in the lands of the Louisiana Purchase, river traffic was an ideal solution. Cotton, timber and food came down the river, as did Appalachian coal. The port of New Orleans boomed as it was the trans - shipment point to deep sea ocean vessels. As a result, the image of the twin stacked, wedding cake Mississippi steamer entered into American mythology. Steamers worked the entire route from the trickles of Montana, to the Ohio River; down the Missouri and Tennessee, to the main channel of the Mississippi. Only with the arrival of the railroads in the 1880s did steamboat traffic diminish. Steamboats remained a feature until the 1920s. Most have been superseded by pusher tugs. A few survive as icons -- the Delta Queen and the River Queen for instance.
A series of 29 locks and dams on the upper Mississippi, most of which were built in the 1930s, is designed primarily to maintain a 9 - foot - deep (2.7 m) channel for commercial barge traffic. The lakes formed are also used for recreational boating and fishing. The dams make the river deeper and wider but do not stop it. No flood control is intended. During periods of high flow, the gates, some of which are submersible, are completely opened and the dams simply cease to function. Below St. Louis, the Mississippi is relatively free - flowing, although it is constrained by numerous levees and directed by numerous wing dams.
On the lower Mississippi, from Baton Rouge to the mouth of the Mississippi, the navigation depth is 45 feet (14 m), allowing container ships and cruise ships to dock at the Port of New Orleans and bulk cargo ships shorter than 150 - foot (46 m) air draft that fit under the Huey P. Long Bridge to traverse the Mississippi to Baton Rouge. There is a feasibility study to dredge this portion of the river to 50 feet (15 m) to allow New Panamax ship depths.
In 1829, there were surveys of the two major obstacles on the upper Mississippi, the Des Moines Rapids and the Rock Island Rapids, where the river was shallow and the riverbed was rock. The Des Moines Rapids were about 11 miles (18 km) long and just above the mouth of the Des Moines River at Keokuk, Iowa. The Rock Island Rapids were between Rock Island and Moline, Illinois. Both rapids were considered virtually impassable.
In 1848, the Illinois and Michigan Canal was built to connect the Mississippi River to Lake Michigan via the Illinois River near Peru, Illinois. The canal allowed shipping between these important waterways. In 1900, the canal was replaced by the Chicago Sanitary and Ship Canal. The second canal, in addition to shipping, also allowed Chicago to address specific health issues (typhoid fever, cholera and other waterborne diseases) by sending its waste down the Illinois and Mississippi river systems rather than polluting its water source of Lake Michigan.
The Corps of Engineers recommended the excavation of a 5 - foot - deep (1.5 m) channel at the Des Moines Rapids, but work did not begin until after Lieutenant Robert E. Lee endorsed the project in 1837. The Corps later also began excavating the Rock Island Rapids. By 1866, it had become evident that excavation was impractical, and it was decided to build a canal around the Des Moines Rapids. The canal opened in 1877, but the Rock Island Rapids remained an obstacle. In 1878, Congress authorized the Corps to establish a 4.5 - foot - deep (1.4 m) channel to be obtained by building wing dams which direct the river to a narrow channel causing it to cut a deeper channel, by closing secondary channels and by dredging. The channel project was complete when the Moline Lock, which bypassed the Rock Island Rapids, opened in 1907.
To improve navigation between St. Paul, Minnesota, and Prairie du Chien, Wisconsin, the Corps constructed several dams on lakes in the headwaters area, including Lake Winnibigoshish and Lake Pokegama. The dams, which were built beginning in the 1880s, stored spring run - off which was released during low water to help maintain channel depth.
In 1907, Congress authorized a 6 - foot - deep (1.8 m) channel project on the Mississippi, which was not complete when it was abandoned in the late 1920s in favor of the 9 - foot - deep (2.7 m) channel project.
In 1913, construction was complete on Lock and Dam No. 19 at Keokuk, Iowa, the first dam below St. Anthony Falls. Built by a private power company (Union Electric Company of St. Louis) to generate electricity (originally for streetcars in St. Louis), the Keokuk dam was one of the largest hydro - electric plants in the world at the time. The dam also eliminated the Des Moines Rapids. Lock and Dam No. 1 was completed in Minneapolis, Minnesota in 1917. Lock and Dam No. 2, near Hastings, Minnesota, was completed in 1930.
Before the Great Mississippi Flood of 1927, the Corps 's primary strategy was to close off as many side channels as possible to increase the flow in the main river. It was thought that the river 's velocity would scour off bottom sediments, deepening the river and decreasing the possibility of flooding. The 1927 flood proved this to be so wrong that communities threatened by the flood began to create their own levee breaks to relieve the force of the rising river.
The Rivers and Harbors Act of 1930 authorized the 9 - foot (2.7 m) channel project, which called for a navigation channel 9 feet (2.7 m) deep and 400 feet (120 m) wide to accommodate multiple - barge tows. This was achieved by a series of locks and dams, and by dredging. Twenty - three new locks and dams were built on the upper Mississippi in the 1930s in addition to the three already in existence.
Until the 1950s, there was no dam below Lock and Dam 26 at Alton, Illinois. Chain of Rocks Lock (Lock and Dam No. 27), which consists of a low - water dam and an 8.4 - mile - long (13.5 km) canal, was added in 1953, just below the confluence with the Missouri River, primarily to bypass a series of rock ledges at St. Louis. It also serves to protect the St. Louis city water intakes during times of low water.
U.S. government scientists determined in the 1950s that the Mississippi River was starting to switch to the Atchafalaya River channel because of its much steeper path to the Gulf of Mexico. Eventually the Atchafalaya River would capture the Mississippi River and become its main channel to the Gulf of Mexico, leaving New Orleans on a side channel. As a result, the U.S. Congress authorized a project called the Old River Control Structure, which has prevented the Mississippi River from leaving its current channel that drains into the Gulf via New Orleans.
Because the large scale of high - energy water flow threatened to damage the structure, an auxiliary flow control station was built adjacent to the standing control station. This $300 million project was completed in 1986 by the Corps of Engineers. Beginning in the 1970s, the Corps applied hydrological transport models to analyze flood flow and water quality of the Mississippi. Dam 26 at Alton, Illinois, which had structural problems, was replaced by the Mel Price Lock and Dam in 1990. The original Lock and Dam 26 was demolished.
The Corps now actively creates and maintains spillways and floodways to divert periodic water surges into backwater channels and lakes, as well as to route part of the Mississippi 's flow into the Atchafalaya Basin and from there to the Gulf of Mexico, bypassing Baton Rouge and New Orleans. The main structures are the Birds Point - New Madrid Floodway in Missouri; the Old River Control Structure and the Morganza Spillway in Louisiana, which direct excess water down the west and east sides (respectively) of the Atchafalaya River; and the Bonnet Carré Spillway, also in Louisiana, which directs floodwaters to Lake Pontchartrain. Some experts blame urban sprawl for increases in both the risk and frequency of flooding on the Mississippi River.
Some of the pre-1927 strategy is still in use today, with the Corps actively cutting the necks of horseshoe bends, allowing the water to move faster and reducing flood heights.
The area of the Mississippi River basin was first settled by hunting and gathering Native American peoples and is considered one of the few independent centers of plant domestication in human history. Evidence of early cultivation of sunflower, a goosefoot, a marsh elder and an indigenous squash dates to the 4th millennium BCE. The lifestyle gradually became more settled after around 1000 BCE during what is now called the Woodland period, with increasing evidence of shelter construction, pottery, weaving and other practices. A network of trade routes referred to as the Hopewell interaction sphere was active along the waterways between about 200 and 500 CE, spreading common cultural practices over the entire area between the Gulf of Mexico and the Great Lakes. A period of more isolated communities followed, and agriculture introduced from Mesoamerica based on the Three Sisters (maize, beans and squash) gradually came to dominate. After around 800 CE there arose an advanced agricultural society today referred to as the Mississippian culture, with evidence of highly stratified complex chiefdoms and large population centers. The most prominent of these, now called Cahokia, was occupied between about 600 and 1400 CE and at its peak numbered between 8,000 and 40,000 inhabitants, larger than London, England of that time. At the time of first contact with Europeans, Cahokia and many other Mississippian cities had dispersed, and archaeological finds attest to increased social stress.
Modern American Indian nations inhabiting the Mississippi basin include Cheyenne, Sioux, Ojibwe, Potawatomi, Ho - Chunk, Fox, Kickapoo, Tamaroa, Moingwena, Quapaw and Chickasaw.
The word Mississippi itself comes from Messipi, the French rendering of the Anishinaabe (Ojibwe or Algonquin) name for the river, Misi - ziibi (Great River). The Ojibwe called Lake Itasca Omashkoozo - zaaga'igan (Elk Lake) and the river flowing out of it Omashkoozo - ziibi (Elk River). After flowing into Lake Bemidji, the Ojibwe called the river Bemijigamaag - ziibi (River from the Traversing Lake). After flowing into Cass Lake, the name of the river changes to Gaa - miskwaawaakokaag - ziibi (Red Cedar River) and then out of Lake Winnibigoshish as Wiinibiigoonzhish - ziibi (Miserable Wretched Dirty Water River), Gichi - ziibi (Big River) after the confluence with the Leech Lake River, then finally as Misi - ziibi (Great River) after the confluence with the Crow Wing River. After the expeditions by Giacomo Beltrami and Henry Schoolcraft, the longest stream above the juncture of the Crow Wing River and Gichi - ziibi was named "Mississippi River ''. The Mississippi River Band of Chippewa Indians, known as the Gichi - ziibiwininiwag, are named after the stretch of the Mississippi River known as the Gichi - ziibi. The Cheyenne, one of the earliest inhabitants of the upper Mississippi River, called it the Máʼxe - éʼometaaʼe (Big Greasy River) in the Cheyenne language. The Arapaho name for the river is Beesniicíe. The Pawnee name is Kickaátit.
The Mississippi was spelled Mississipi or Missisipi during French Louisiana and was also known as the Rivière Saint - Louis.
On May 8, 1541, Spanish explorer Hernando de Soto became the first recorded European to reach the Mississippi River, which he called Río del Espíritu Santo ("River of the Holy Spirit ''), in the area of what is now Mississippi. In Spanish, the river is called Río Mississippi.
French explorers Louis Jolliet and Jacques Marquette began exploring the Mississippi in the 17th century. Marquette traveled with a Sioux Indian who named it Ne Tongo ("Big river '' in Sioux language) in 1673. Marquette proposed calling it the River of the Immaculate Conception.
When Louis Jolliet explored the Mississippi Valley in the 17th century, natives guided him to a quicker way to return to French Canada via the Illinois River. When he found the Chicago Portage, he remarked that a canal of "only half a league '' (less than 2 miles (3.2 km)) would join the Mississippi and the Great Lakes. In 1848, the continental divide separating the waters of the Great Lakes and the Mississippi Valley was breached by the Illinois and Michigan Canal via the Chicago River. This both accelerated the development and forever changed the ecology of the Mississippi Valley and the Great Lakes.
In 1682, René - Robert Cavelier, Sieur de La Salle and Henri de Tonti claimed the entire Mississippi River Valley for France, calling the river Colbert River after Jean - Baptiste Colbert and the region La Louisiane, for King Louis XIV. On March 2, 1699, Pierre Le Moyne d'Iberville rediscovered the mouth of the Mississippi, following the death of La Salle. The French built the small fort of La Balise there to control passage.
In 1718, about 100 miles (160 km) upriver, New Orleans was established along the river crescent by Jean - Baptiste Le Moyne, Sieur de Bienville, with construction patterned after the 1711 resettlement on Mobile Bay of Mobile, the capital of French Louisiana at the time.
Following Britain 's victory in the Seven Years War the Mississippi became the border between the British and Spanish Empires. The Treaty of Paris (1763) gave Great Britain rights to all land east of the Mississippi and Spain rights to land west of the Mississippi. Spain also ceded Florida to Britain to regain Cuba, which the British occupied during the war. Britain then divided the territory into East and West Florida.
Article 8 of the Treaty of Paris (1783) states, "The navigation of the river Mississippi, from its source to the ocean, shall forever remain free and open to the subjects of Great Britain and the citizens of the United States ''. With this treaty, which ended the American Revolutionary War, Britain also ceded West Florida back to Spain to regain the Bahamas, which Spain had occupied during the war. In 1800, under duress from Napoleon of France, Spain ceded an undefined portion of West Florida to France. When France then sold the Louisiana Territory to the U.S. in 1803, a dispute arose again between Spain and the U.S. over which parts of West Florida exactly Spain had ceded to France, which would in turn decide which parts of West Florida were now U.S. property versus Spanish property. These aspirations ended when Spain was pressured into signing Pinckney 's Treaty in 1795.
France reacquired ' Louisiana ' from Spain in the secret Treaty of San Ildefonso in 1800. The United States then secured effective control of the river when it bought the Louisiana Territory from France in the Louisiana Purchase of 1803. The last serious European challenge to U.S. control of the river came at the conclusion of the War of 1812, when British forces mounted an attack on New Orleans -- the attack was repulsed by an American army under the command of General Andrew Jackson.
In the Treaty of 1818, the U.S. and Great Britain agreed to fix the border running from the Lake of the Woods to the Rocky Mountains along the 49th parallel north. In effect, the U.S. ceded the northwestern extremity of the Mississippi basin to the British in exchange for the southern portion of the Red River basin.
So many settlers traveled westward through the Mississippi river basin, as well as settled in it, that Zadok Cramer wrote a guide book called The Navigator, detailing the features and dangers and navigable waterways of the area. It was so popular that he updated and expanded it through 12 editions over a period of 25 years.
The colonization of the area was barely slowed by the three earthquakes in 1811 and 1812, estimated at approximately 8 on the Richter magnitude scale, that were centered near New Madrid, Missouri.
Mark Twain 's book, Life on the Mississippi, covered the steamboat commerce which took place from 1830 to 1870 on the river before more modern ships replaced the steamer. The book was published first in serial form in Harper 's Weekly in seven parts in 1875. The full version, including a passage from the then unfinished Adventures of Huckleberry Finn and works from other authors, was published by James R. Osgood & Company in 1885.
The first steamboat to travel the full length of the Lower Mississippi from the Ohio River to New Orleans was the New Orleans in December 1811. Its maiden voyage occurred during the series of New Madrid earthquakes in 1811 -- 12. The Upper Mississippi was treacherous and unpredictable, and, to make traveling worse, the area was not properly mapped out or surveyed. Until the 1840s, steamboats made only two trips a year to the Twin Cities landings, which suggests the route was not very profitable.
Steamboat transport remained a viable industry, both in terms of passengers and freight until the end of the first decade of the 20th century. Among the several Mississippi River system steamboat companies was the noted Anchor Line, which, from 1859 to 1898, operated a luxurious fleet of steamers between St. Louis and New Orleans.
Italian explorer Giacomo Beltrami wrote about his journey on the Virginia, which was the first steamboat to make it to Fort St. Anthony in Minnesota. He referred to his voyage as a promenade that was once a journey on the Mississippi. The steamboat era changed the economic and political life of the Mississippi, as well as the nature of travel itself. The Mississippi was completely changed by the steamboat era as it developed a flourishing tourist trade.
Control of the river was a strategic objective of both sides in the American Civil War. In 1862, Union forces coming down the river successfully cleared Confederate defenses at Island Number 10 and Memphis, Tennessee, while naval forces coming upriver from the Gulf of Mexico captured New Orleans, Louisiana. The remaining major Confederate stronghold was on the heights overlooking the river at Vicksburg, Mississippi; the Union 's Vicksburg Campaign (December 1862 to July 1863) and the fall of Port Hudson completed Union control of the lower Mississippi River. The Union victory ending the Siege of Vicksburg on July 4, 1863, was pivotal to the Union 's final victory of the Civil War.
The "Big Freeze '' of 1918 -- 19 blocked river traffic north of Memphis, Tennessee, preventing transportation of coal from southern Illinois. This resulted in widespread shortages, high prices, and rationing of coal in January and February.
In the spring of 1927, the river broke out of its banks in 145 places during the Great Mississippi Flood of 1927 and inundated 27,000 sq mi (70,000 km²) to a depth of up to 30 feet (9.1 m).
In 1962 and 1963, industrial accidents spilled 3.5 million US gallons (13,000,000 L) of soybean oil into the Mississippi and Minnesota rivers. The oil covered the Mississippi River from St. Paul to Lake Pepin, creating an ecological disaster and a demand to control water pollution.
On October 20, 1976, the automobile ferry, MV George Prince, was struck by a ship traveling upstream as the ferry attempted to cross from Destrehan, Louisiana, to Luling, Louisiana. Seventy - eight passengers and crew died; only eighteen survived the accident.
In 1988, the water level of the Mississippi fell to 10 feet (3.0 m) below zero on the Memphis gauge. The remains of wooden - hulled water craft were exposed in an area of 4.5 acres (18,000 m²) on the bottom of the Mississippi River at West Memphis, Arkansas. They dated to the late 19th to early 20th centuries. The State of Arkansas, the Arkansas Archeological Survey, and the Arkansas Archeological Society responded with a two - month data recovery effort. The fieldwork received national media attention as good news in the middle of a drought.
The Great Flood of 1993 was another significant flood, primarily affecting the Mississippi above its confluence with the Ohio River at Cairo, Illinois.
Two portions of the Mississippi were designated as American Heritage Rivers in 1997: the lower portion around Louisiana and Tennessee, and the upper portion around Iowa, Illinois, Minnesota and Missouri. The Nature Conservancy 's project called "America 's Rivershed Initiative '' announced a ' report card ' assessment of the entire basin in October 2015 and gave the grade of D+. The assessment noted the aging navigation and flood control infrastructure along with multiple environmental problems.
In 2002, Slovenian long - distance swimmer Martin Strel swam the entire length of the river, from Minnesota to Louisiana, over the course of 68 days. In 2005, the Source to Sea Expedition paddled the Mississippi and Atchafalaya Rivers to benefit the Audubon Society 's Upper Mississippi River Campaign.
Geologists believe that the lower Mississippi could take a new course to the Gulf. Either of two new routes -- through the Atchafalaya Basin or through Lake Pontchartrain -- might become the Mississippi 's main channel if flood - control structures are overtopped or heavily damaged during a severe flood.
Failure of the Old River Control Structure, the Morganza Spillway, or nearby levees would likely re-route the main channel of the Mississippi through Louisiana 's Atchafalaya Basin and down the Atchafalaya River to reach the Gulf of Mexico south of Morgan City in southern Louisiana. This route provides a more direct path to the Gulf of Mexico than the present Mississippi River channel through Baton Rouge and New Orleans. While the risk of such a diversion is present during any major flood event, such a change has so far been prevented by active human intervention involving the construction, maintenance, and operation of various levees, spillways, and other control structures by the U.S. Army Corps of Engineers.
The Old River Control Structure, between the present Mississippi River channel and the Atchafalaya Basin, sits at the normal water elevation and is ordinarily used to divert 30 % of the Mississippi 's flow to the Atchafalaya River. There is a steep drop here away from the Mississippi 's main channel into the Atchafalaya Basin. If this facility were to fail during a major flood, there is a strong concern the water would scour and erode the river bottom enough to capture the Mississippi 's main channel. The structure was nearly lost during the 1973 flood, but repairs and improvements were made after engineers studied the forces at play. In particular, the Corps of Engineers made many improvements and constructed additional facilities for routing water through the vicinity. These additional facilities give the Corps much more flexibility and potential flow capacity than they had in 1973, which further reduces the risk of a catastrophic failure in this area during other major floods, such as that of 2011.
Because the Morganza Spillway is slightly higher and well back from the river, it is normally dry on both sides. Even if it failed at the crest during a severe flood, the flood waters would have to erode to normal water levels before the Mississippi could permanently jump channel at this location. During the 2011 floods, the Corps of Engineers opened the Morganza Spillway to 1 / 4 of its capacity to allow 150,000 cubic feet per second of water to flood the Morganza and Atchafalaya floodways and continue directly to the Gulf of Mexico, bypassing Baton Rouge and New Orleans. In addition to reducing the Mississippi River crest downstream, this diversion reduced the chances of a channel change by reducing stress on the other elements of the control system.
Some geologists have noted that the possibility for course change into the Atchafalaya also exists in the area immediately north of the Old River Control Structure. Army Corps of Engineers geologist Fred Smith once stated, "The Mississippi wants to go west. 1973 was a forty - year flood. The big one lies out there somewhere -- when the structures ca n't release all the floodwaters and the levee is going to have to give way. That is when the river 's going to jump its banks and try to break through. ''
Another possible course change for the Mississippi River is a diversion into Lake Pontchartrain near New Orleans. This route is controlled by the Bonnet Carré Spillway, built to reduce flooding in New Orleans. This spillway and an imperfect natural levee about 4 -- 6 meters (12 to 20 feet) high are all that prevents the Mississippi from taking a new, shorter course through Lake Pontchartrain to the Gulf of Mexico. Diversion of the Mississippi 's main channel through Lake Pontchartrain would have consequences similar to an Atchafalaya diversion, but to a lesser extent, since the present river channel would remain in use past Baton Rouge and into the New Orleans area.
The sport of water skiing was invented on a wide stretch of the river between Minnesota and Wisconsin known as Lake Pepin. Ralph Samuelson of Lake City, Minnesota, created and refined his skiing technique in late June and early July 1922. He later performed the first water ski jump in 1925 and was pulled along at 80 mph (130 km / h) by a Curtiss flying boat later that year.
There are seven National Park Service sites along the Mississippi River. The Mississippi National River and Recreation Area is the National Park Service site dedicated to protecting and interpreting the Mississippi River itself. The other six National Park Service sites along the river are (listed from north to south):
The Mississippi basin is home to a highly diverse aquatic fauna and has been called the "mother fauna '' of North American fresh water.
About 375 fish species are known from the Mississippi basin, far exceeding any other Northern Hemisphere river basin lying exclusively within temperate / subtropical regions, except the Yangtze. Within the Mississippi basin, streams that have their source in the Appalachian and Ozark highlands contain especially many species. Among the fish species in the basin are numerous endemics, as well as relicts such as paddlefish, sturgeon, gar and bowfin.
Because of its size and high species diversity, the Mississippi basin is often divided into subregions. The Upper Mississippi River alone is home to about 120 fish species, including walleye, sauger, largemouth bass, smallmouth bass, white bass, northern pike, bluegill, crappie, channel catfish, flathead catfish, common shiner, freshwater drum and shovelnose sturgeon.
In addition to fish, several species of turtles (such as snapping, musk, mud, map, cooter, painted and softshell turtles), American alligator, aquatic amphibians (such as hellbender, mudpuppy, three - toed amphiuma and lesser siren), and cambarid crayfish (such as the red swamp crayfish) are native to the Mississippi basin.
Numerous introduced species are found in the Mississippi and some of these are invasive. Among the introductions are fish such as Asian carp, including the silver carp that have become infamous for outcompeting native fish and their potentially dangerous jumping behavior. They have spread throughout much of the basin, even approaching (but not yet invading) the Great Lakes. The Minnesota Department of Natural Resources has designated much of the Mississippi River in the state as infested waters by the exotic species zebra mussels and Eurasian watermilfoil.
|
what was the hottest temperature ever recorded in australia | List of Weather records - wikipedia
This is a list of weather records, a list of the most extreme occurrences of weather phenomena for various categories. Many weather records are measured under specific conditions -- such as surface temperature and wind speed -- to keep consistency among measurements around the Earth. Each of these records is understood to be the record value officially observed, as these records may have been exceeded before modern weather instrumentation was invented, or in remote areas without an official weather station. This list does not include remotely sensed observations such as satellite measurements, since those values are not considered official records.
The standard measuring conditions for temperature are in the air, 1.5 metres (4.9 ft) above the ground, and shielded from direct sunlight intensity (hence the term, x degrees "in the shade ''). The following lists include all officially confirmed claims measured by those methods.
Temperatures measured directly on the ground may exceed air temperatures by 30 to 50 ° C (54 to 90 ° F). The highest natural ground surface temperature ever recorded was 93.9 ° C (201.0 ° F) at Furnace Creek, Death Valley, California, United States on 15 July 1972. In recent years a ground temperature of 84 ° C (183.2 ° F) has been recorded in Port Sudan, Sudan. The theoretical maximum possible ground surface temperature has been estimated to be between 90 and 100 ° C (194 and 212 ° F) for dry, darkish soils of low thermal conductivity.
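As an illustrative aside (a worked check added here, not part of the original article), the Celsius and Fahrenheit figures quoted above follow from the standard conversions; note that a temperature difference scales by 9/5 alone, while an absolute temperature also carries the 32-degree offset:

\[ T_F = \tfrac{9}{5}\,T_C + 32 \quad\text{(absolute temperatures)}, \qquad \Delta T_F = \tfrac{9}{5}\,\Delta T_C \quad\text{(temperature differences)} \]
\[ 93.9\ ^{\circ}\mathrm{C} \;\rightarrow\; \tfrac{9}{5}(93.9) + 32 \approx 201.0\ ^{\circ}\mathrm{F}, \qquad 30\text{--}50\ ^{\circ}\mathrm{C}\ \text{difference} \;\rightarrow\; 54\text{--}90\ ^{\circ}\mathrm{F} \]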
Satellite measurements of ground temperature taken between 2003 and 2009 with the MODIS infrared spectroradiometer on the Aqua satellite found a maximum temperature of 70.7 ° C (159.3 ° F), which was recorded in 2005 in the Lut Desert, Iran. The Lut Desert was also found to have the highest maximum temperature in 5 of the 7 years measured (2004, 2005, 2006, 2007 and 2009). These measurements reflect averages over a large region and so are lower than the maximum point surface temperature.
Satellite measurements of the surface temperature of Antarctica, taken between 1982 and 2013, found a coldest temperature of − 93.2 ° C (− 135.8 ° F) on 10 August 2010, at 81°48′S 59°18′E. Although this is not comparable to an air temperature, it is believed that the air temperature at this location would have been lower than the official record lowest air temperature of − 89.2 ° C (− 128.6 ° F).
According to the World Meteorological Organization (WMO), the highest temperature ever recorded was 56.7 ° C (134.1 ° F) on 10 July 1913 in Furnace Creek (Greenland Ranch), California, USA. According to the WMO, this temperature may have been the result of "a sandstorm that occurred at the time. Such a storm may have caused superheated surface materials to hit upon the temperature in the shelter. ''
The former highest official temperature on Earth (held for 90 years by ' Aziziya, Libya) was reassessed in July 2012 by the WMO which published a report that invalidated the record.
There have been other unconfirmed reports of high temperatures, with readings as high as 66.8 ° C (152.2 ° F) in the Flaming Mountains of China in 2008. However, these temperatures have never been confirmed, and are currently considered to have been recorder 's errors, thus not being recognised as world records.
Christopher C. Burt, the weather historian writing for Weather Underground who shepherded the Libya reading 's 2012 disqualification, believes that the 1913 Death Valley reading is "a myth '', and is at least 2.2 or 2.8 ° C (4 or 5 ° F) too high. Burt proposes that the highest reliably recorded temperature on Earth could be at Death Valley, but is instead 54.0 ° C (129.2 ° F) recorded on 30 June 2013. 53.9 ° C (129.0 ° F) was recorded another four times: 20 July 1960, 18 July 1998, 20 July 2005, and 7 July 2007. On 21 July 2016, Mitribah in Kuwait also recorded a maximum temperature of 54.0 ° C (129.2 ° F), tying Death Valley 's highest reliably recorded temperature on Earth, while Basra in Iraq reached 53.9 ° C (129.0 ° F) that day. On 29 June 2017, the air at the airport of Ahvaz in Iran reached 54.0 ° C (129.2 ° F) as well. In a second part to his analysis, he gave a list of 11 other occasions in which temperatures of 52.8 ° C (127.0 ° F) or more were reliably measured as well as the highest reliably measured temperatures on each continent.
|
what does the word sin mean in hebrew and greek | Sin - wikipedia
In a religious context, sin is the act of transgression against divine law. Sin can also be viewed as any thought or action that endangers the ideal relationship between an individual and God; or as any diversion from the perceived ideal order for human living. "To sin '' has been defined from a Greek concordance as "to miss the mark ''.
The word derives from "Old English syn (n), for original * sunjō. The stem may be related to that of Latin ' sons, sont - is ' guilty. In Old English there are examples of the original general sense, ' offence, wrong - doing, misdeed ' ''. The English Biblical terms translated as "sin '' or "syn '' from the Biblical Greek and Jewish terms sometimes originate from words in the latter languages denoting the act or state of missing the mark; the original sense of New Testament Greek ἁμαρτία hamartia "sin '', is failure, being in error, missing the mark, especially in spear throwing; Hebrew hata "sin '' originates in archery and literally refers to missing the "gold '' at the centre of a target, but hitting the target, i.e. error.
In the Bahá'í Faith, humans are considered naturally good (perfect), fundamentally spiritual beings. Human beings were created because of God 's immeasurable love. However, the Bahá'í teachings compare the human heart to a mirror, which, if turned away from the light of the sun (i.e. God), is incapable of receiving God 's love.
Buddhism believes in the principle of karma, whereby suffering is the inevitable consequence of greed, anger, and delusion (known as the Three Poisons). While there is no direct Buddhist equivalent of the Abrahamic concept of sin, wrongdoing is recognized in Buddhism. The concept of Buddhist ethics is consequentialist in nature and is not based upon duty towards any deity. Karma means action, and in the Buddhist context, motivation is the most important aspect of an action. Whether karma done with mind, body and / or speech is called ' good ' or ' bad ' depends on whether it would bring pleasant or unpleasant results to the person who does the action. One needs to purify negative karma, such as through the Four Satipatthanas, to free oneself from obstacles to liberation from the vicious circle of rebirth. The purification reduces suffering and in the end one reaches Nirvana, the ultimate purification, by realizing selflessness or emptiness. An enlightened being is free of all suffering and karma, and will not be automatically born again.
In the Old Testament, some sins were punishable by death in different forms, while most sins are forgiven by burnt offerings. Christians consider the Old Covenant to be fulfilled by the Gospel.
In the New Testament, the forgiveness of sin is effected through faith and repentance (Mark 1: 15). Sin is forgiven when the sinner acknowledges, confesses, and repents for their sin as a part of believing in Jesus Christ. The sinner is expected to confess his sins to God as a part of an ongoing relationship, which also includes giving thanks to God. The sinful man has never before been in a favorable relationship with God. When, as a part of his salvation, he is forgiven, he enters into a union with God which abides forever. In the Epistle to the Romans 6: 23, it is mentioned that "the wages of sin is death '', which is commonly interpreted to mean that if one repents of his sins, that person will inherit salvation.
In Jewish Christianity, sin is believed to alienate the sinner from God even though He has extreme love for mankind. It has damaged and completely severed the relationship of humanity to God. That relationship can only be restored through acceptance of Jesus Christ and his death on the cross as a satisfactory sacrifice for the sins of humanity. Humanity was destined for life with God when Adam disobeyed God. The Bible in John 3: 16 says "For God so loved the world, as to give his only begotten Son; that whosoever believeth in him, may not perish, but may have life everlasting. ''
In Eastern Christianity, sin is viewed in terms of its effects on relationships, both among people and likewise between people and God. Also as in Jewish Christianity, Sin is likewise seen as the refusal to follow God 's plan and the desire to be "like God '' (as stated in Genesis 3: 5) and thus in direct opposition to God 's will (see the account of Adam and Eve in Genesis).
Original sin is a Western concept that states that sin entered the human world through Adam and Eve 's sin in the Garden of Eden and that human beings have since lived with the consequences of this first sin.
The serpent who beguiled Eve to eat of the fruit was punished by having it and its kind being made to crawl on the ground and God set an enmity between them and Eve 's descendants (Genesis 3: 14 - 15). Eve was punished by the pains of childbirth and the sorrow of bringing about life that would eventually age, sicken and die (Genesis 3: 16). The second part of the curse about being subordinate to Adam originates from her creation from one of Adam 's ribs to be his helper (Genesis 2: 18 - 25); the curse now clarifies that she must now obey her husband and desire only him. Adam was punished by having to work endlessly to feed himself and his family. The land would bring forth both thistles and thorns to be cleared and herbs and grain to be planted, nurtured, and harvested. The second part of the curse about his mortality is from his origin as red clay - he is from the land and he and his descendants would return to it when buried after death. When Adam 's son Cain slew his brother Abel, he introduced murder into the world (Genesis 4: 8 - 10). For his punishment, God banished him as a fugitive, but first marked him with a sign that would protect him and his descendants from harm (Genesis 4: 11 - 16).
One concept of sin deals with things that exist on Earth, but not in Heaven. Food, for example, while a necessary good for the (health of the temporal) body, is not of (eternal) transcendental living and therefore its excessive savoring is considered a sin. The unforgivable sin (or eternal sin) is a sin that can never be forgiven; Matthew 12: 30 - 32: "He that is not with me, is against me: and he that gathereth not with me, scattereth. And Therefore I say to you: Every sin and blasphemy shall be forgiven men, but the blasphemy of the Spirit shall not be forgiven. And whosoever shall speak a word against the Son of man, it shall be forgiven him: but he that shall speak against the Holy Ghost, it shall not be forgiven him, neither in this world, nor in the world to come. ''
In Catholic Christianity sins are classified into grave sins called mortal sins and less serious sins called venial sin. Mortal sins cause one to lose salvation unless the sinner repents and venial sins require some sort of penance either on Earth or in Purgatory.
Jesus was said to have paid for the complete mass of sins past, present, and to come in future. Even inevitable sin is said to have already been cleansed.
The Lamb of God was and is God himself and is therefore sinless. In the Old Testament, Leviticus 16: 21 states that ' the laying on of hands ' was the action that the High Priest Aaron was ordered to do yearly by God to take sins of Israel 's nation onto a spotless young lamb.
In Hinduism, the term sin (pāpa in Sanskrit) is often used to describe actions that create negative karma by violating moral and ethical codes, which automatically brings negative consequences. This is somewhat similar to Abrahamic sin in the sense that pāpa is considered a crime against (1) the laws of God, known as Dharma or moral order, and (2) one 's own self, although another term, aparadha, is used for grave offences. The term pāpa can not, however, be taken in the literal sense of sin, because there is no consensus regarding the nature of ultimate reality or God in Hinduism. Only the Vedanta school is unambiguously theistic, whereas no anthropomorphic God exists in the other five schools, namely Samkhya, Nyaya, Yoga, Vaisheshika, and Purva - Mimansa. In the strictest sense, pāpa refers to actions which bring about wrong or unfavourable consequences, without relating to a specific divine will in the absolute sense. To conclude, given the lack of consensus regarding the nature of ultimate reality in Hinduism, pāpa places less insistence on God than the word sin does, and there is no exact equivalent of sin in Hinduism.
In Islamic ethics, Muslims see sin as anything that goes against the commands of Allah (God), a breach of the laws and norms laid down by religion. Islamic terms for sin include dhanb and khaṭīʾa, which are synonymous and refer to intentional sins; khiṭʾ, which means simply a sin; and ithm, which is used for grave sins.
Mainstream Judaism regards the violation of any of the 613 commandments of the Mosaic law for Jews, or the seven Noahide laws for Gentiles as a sin. Judaism teaches that all humans are inclined to sin from birth. Sin has many classifications and degrees. Some sins are punishable with death by the court, others with death by heaven, others with lashes, and others without such punishment, but no sins with willful intent go without consequence. Unwillful violations of the mitzvot (without negligence) do not count as sins. "Sins by error '' are considered as less severe sins. When the Temple yet stood in Jerusalem, people would offer sacrifices for their misdeeds. The atoning aspect of korbanot is carefully circumscribed. For the most part, korbanot only expiate such "sins by error '', that is, sins committed because a person forgot or did not know that this thing was a sin. In some circumstances, lack of knowledge is considered close to willful intent. No atonement is needed for violations committed under duress, and for the most part, korbanot can not atone for a deliberate sin. In addition, korbanot have no expiating effect unless the person making the offering sincerely repents his or her actions before making the offering, and makes restitution to any person who suffered harm through the violation.
Judaism teaches that all willful sin has consequences. The completely righteous suffer for their sins (by humiliation, poverty, and suffering that God sends them) in this world and receive their reward in the world to come. The in - between (not completely righteous or completely wicked), suffer for and repent their sins after death and thereafter join the righteous. The very evil do not repent even at the gates of hell. Such people prosper in this world to receive their reward for any good deed, but can not be cleansed by and hence can not leave gehinnom, because they do not or can not repent. This world can therefore seem unjust where the righteous suffer, while the wicked prosper. Many great thinkers have contemplated this.
In Mesopotamian mythology, Adamu (or Addamu / Admu, or Adapa) goes on trial for the "sin of casting down a divinity ''. His crime is breaking the wings of the south wind.
Evil deeds fall into two categories in Shinto: amatsu tsumi, "the most pernicious crimes of all '', and kunitsu tsumi, "more commonly called misdemeanors ''.
|
who was the starting 5 for the 96 bulls | 1995 -- 96 Chicago Bulls season - wikipedia
The 1995 -- 96 NBA season was the Bulls ' 30th season in the National Basketball Association. During the offseason, the Bulls acquired rebound - specialist Dennis Rodman from the San Antonio Spurs, and signed free agent Randy Brown. Midway through the season, the team signed John Salley, who was released by the expansion Toronto Raptors. Salley won championships with the Detroit Pistons along with Rodman in 1989 and 1990. This season saw the Bulls set the record for most wins in an NBA regular season in which they won the championship, finishing with 72 wins and 10 losses. The regular season record was broken by the 2015 -- 16 Golden State Warriors, who won 73 games and lost 9, but did not win a championship. Those Warriors have a connection to this Bulls team, as Steve Kerr, the current Golden State coach, was a reserve point guard with the Bulls.
The team started 37 -- 0 at home, part of a then - NBA - record 44 - game winning streak that included games from the 1994 -- 95 regular - season. Their 33 road wins were the most in NBA history until the 2015 -- 16 Warriors won 34 road games. The season was the best 3 - loss start in NBA history at 41 -- 3 (. 932), which included an 18 - game winning streak for the team. The Bulls became the first NBA team to ever win 70 regular season games, finishing first overall in their division, conference, and the entire NBA. They are also the only team in NBA history to win more than 70 games and an NBA title in the same season. Michael Jordan and Scottie Pippen were both selected for the 1996 NBA All - Star Game, as Jordan led the league in scoring with 30.4 points per game, while Phil Jackson was named Coach of The Year, and was selected to coach the Eastern Conference in the All - Star Game.
The Bulls swept the Miami Heat 3 -- 0 in the first round of the playoffs, defeated the New York Knicks 4 -- 1 in five games of the semifinals, then swept the Orlando Magic 4 -- 0 in the Conference Finals. They then defeated the Seattle SuperSonics 4 -- 2 in the 1996 NBA Finals, winning their fourth NBA title in six seasons. The Bulls have the best combined regular and postseason record in NBA history 87 -- 13 (. 870). For the season, the Bulls added black pinstripe alternate road uniforms. They would remove the pinstripes from the jerseys in 1997.
Before the 1995 -- 96 NBA season, the Bulls acquired Dennis Rodman and Jack Haley from the Spurs in exchange for Will Perdue and cash considerations to fill a void at power forward left by Horace Grant, who left the Bulls before the 1994 -- 95 NBA season.
In his book Bad as I Wanna Be, Rodman wrote that Michael Jordan and Scottie Pippen had to approve the trade. Rodman chose the number 91 for his jersey (because 9 + 1 = 10, according to Rodman), since # 10 had been retired by the Bulls in 1995 in honor of Bob Love.
Haley played in one game, the final game of the regular season, and did n't participate in the playoffs. He was best known for his friendship with the enigmatic Rodman.
The Bulls began the 1995 -- 96 season on November 3 against the Charlotte Hornets and defeated them, 105 -- 91, with Michael Jordan scoring 42 points. The next day, Chicago defeated the Boston Celtics in a 22 - point blowout. On November 7, the Bulls defeated the Toronto Raptors behind Jordan 's 38 points. In Gund Arena, Chicago defeated the Cleveland Cavaliers on November 9, where Scottie Pippen accumulated a triple - double with 18 points, 13 rebounds, and 12 assists. After defeating the Portland Trail Blazers on November 11, the Bulls reached a 5 -- 0 start for the season. On November 14, Chicago 's undefeated streak ended with a loss to the Orlando Magic, despite a double - double performance by Pippen. Following their first loss of the season, Chicago bounced back to defeat Cleveland on November 15. The Bulls would continue their winning ways by defeating the New Jersey Nets on November 17.
From November 21 -- December 2, the Bulls went on a road trip to play against Western Conference teams. On November 21, Chicago played in its first overtime game of the season in a win against the Dallas Mavericks, backed by a double - double performance by Pippen and 36 points from Jordan. On the next day, the Bulls defeated the San Antonio Spurs behind another triple - double by Pippen and Jordan 's 38 points. Chicago then went to Delta Center to play against the Utah Jazz on November 24. In the game, the Bulls defeated the Jazz, 90 -- 85. On November 26, the Bulls headed to Seattle and were dealt their second loss of the season after losing by five points. Heading south into Portland, Chicago would defeat the Trail Blazers by three points on November 27. In their last game of the month, the Bulls went to Canada to play against the Vancouver Grizzlies and defeated them.
Chicago 's road trip ended in Los Angeles on December 2 after defeating the Clippers. On December 6, the Bulls headed back to the United Center to play the New York Knicks and had defeated them despite Jordan 's struggle that night. The Bulls won their fifth straight game on December 8 against the Spurs. On the next day, Chicago defeated the Milwaukee Bucks behind Jordan 's 45 - point performance.
The Bulls went undefeated in January.
The Bulls continued their success in February winning 11 of 14.
The Bulls added 2 of their final 10 losses in March:
On Sunday, March 10, they were blown out 104 - 72 in Madison Square Garden by Ewing 's Knicks -- their largest margin of defeat on the season.
Two weeks later, they dropped a game at the hands of the expansion Raptors, 109 - 108. Damon Stoudamire posted an efficient 30 - point, 11 - assist effort to lead Toronto.
The Bulls lost two home games in the final month losing to the Charlotte Hornets, then their final home game of the season to the Indiana Pacers. Those were their only home losses of the entire season, including the playoffs, as Chicago finished with an overall 39 - 2 record at the United Center.
The Bulls ' playoff run began on April 26. Their First Round opponent was the Miami Heat, whom they defeated 3 -- 1 in the regular season. In Game 1, the Bulls defeated Miami in a blowout victory behind Jordan 's 35 points. Winning in a 31 - point blowout, Chicago once again defeated the Heat. To reach the Conference Semifinals, the Bulls defeated the Heat in Miami in a game where Pippen accumulated a triple - double with 22 points, 18 rebounds, and 10 assists.
The Bulls met rival New York Knicks in the Conference Semifinals. In the regular season, Chicago won the season series, 4 -- 1. In Game 1 on May 5, the Bulls defeated the Knicks behind Jordan 's 44 points. Chicago would defeat New York again on May 7 to take a 2 -- 0 series lead. Playing at Madison Square Garden, the Bulls lost Game 3 in overtime, despite a 46 - point offensive performance by Jordan. In Game 4, Chicago defeated the Knicks by three points to take a 3 -- 1 series lead. To close out the series, the Bulls defeated New York at home behind double - double performances by Pippen and Rodman.
In the Conference Finals, the Bulls met the Atlantic Division champions, Orlando Magic, a team led by Shaquille O'Neal and Anfernee Hardaway, who had reached the finals the previous year and were swept by Hakeem Olajuwon and the Houston Rockets. The Bulls won the regular - season series against the Magic, 3 -- 1. To start the series, the Bulls took Game 1 in a 38 - point blowout on May 19. Behind Jordan 's 35 points, Chicago defeated Orlando on May 21. In Game 3, the Bulls continued their winning ways by taking a 3 -- 0 series lead against the Magic. Completing the series sweep, the Bulls won Game 4 by five points behind a 45 - point performance by Jordan.
Chicago took on the Seattle SuperSonics, whose 64 -- 18 franchise - best regular season record was overshadowed by the Bulls ' 72 -- 10 record. In the regular season, the two teams split the season series, 1 -- 1. In Game 1 of the NBA Finals, Chicago defeated Seattle by 17 points. The Bulls took a 2 -- 0 series lead against the Sonics in the second game where Rodman accumulated 20 rebounds. In KeyArena, Chicago won Game 3 behind Jordan 's 36 points. The Bulls lost Game 4 in a 21 - point blowout on June 12. On June 14, the Bulls lost against Seattle in Game 5. Back in the United Center, Chicago defeated Seattle in Game 6 to win the NBA championship four games to two.
|
who was the leader of the 1857 revolt in assam | Maniram Dewan - Wikipedia
Maniram Dutta Baruah, popularly known as Maniram Dewan (17 April 1806 -- 26 February 1858), was an Assamese nobleman in British India. He was one of the first people to establish tea gardens in Assam. A loyal ally of the British East India Company in his early years, he was hanged by the British for conspiring against them during the 1857 uprising. He was popular among the people of Upper Assam as "Kalita Raja '' (king of the Kalita caste).
Maniram was born into a Kalita family that had migrated from Kannauj to Assam in the early 16th century. His paternal ancestors held high offices in the Ahom court. The Ahom rule had weakened considerably following the Moamoria rebellion (1769 -- 1806). During the Burmese invasions of Assam (1817 - 1826), Maniram 's family sought asylum in Bengal, which was under the control of the British East India Company. The family returned to Assam under the British protection, during the early days of the First Anglo - Burmese War (1824 - 1826). The East India Company defeated the Burmese and gained the control of Assam through the Treaty of Yandabo (1826).
Early in his career, Maniram became a loyal associate of the British East India Company administration under David Scott, the Agent of the Governor General in North East India. In 1828, the 22 - year - old Maniram was appointed as a tehsildar and a sheristadar of Rangpur under Scott 's deputy Captain John Bryan Neufville.
Later, Maniram was made a borbhandar (Prime Minister) by Purandar Singha, the titular ruler of Assam during 1833 -- 1838. He continued to be an associate of Purandar 's son Kamaleswar Singha and grandson Kandarpeswar Singha. Maniram became a loyal confidant of Purandar Singha, and resigned from the posts of sheristadar and tehsildar when the king was deposed by the British.
It was Maniram who informed the British about the Assam tea grown by the Singpho people, which was hitherto unknown to the rest of the world. In the early 1820s, he directed the cultivators Robert Bruce and his brother Charles Alexander Bruce to the local Singpho chief Bessa Gam. Charles Bruce collected the tea plants from the Singphos and took them to the Company administration. However, Dr. Nathaniel Wallich, the superintendent of the Calcutta Botanical Garden, declared that these samples were not the same species as the tea plants of China.
In 1833, after its monopoly on the Chinese tea trade ended, the East India Company decided to establish major tea plantations in India. Lord William Bentinck established the Tea Committee on 1 February 1834 towards achieving this goal. The committee sent out circulars asking about the suitable places for tea cultivation, to which Captain F. Jenkins responded, suggesting Assam. The tea plant samples collected by his assistant Lieutenant Charlton were acknowledged by Dr. Wallich as genuine tea. When the Tea Committee visited Assam to study the feasibility of tea cultivation, Maniram met Dr. Wallich as a representative of Purandar Singha, and highlighted the region 's prospects for tea cultivation.
In 1839, Maniram became the Dewan of the Assam Tea Company at Nazira, drawing a salary of 200 rupees per month. In the mid-1840s, he quit his job due to differences of opinion with the company officers. By this time, Maniram had acquired tea cultivation expertise. He established his own tea garden at Cinnamara in Jorhat, thus becoming the first Indian to grow tea commercially in Assam. Jorhat later became home to the tea research laboratory Tocklai Experimental Station. He established another plantation at Selung (or Singlo) in Sibsagar.
Apart from the tea industry, Maniram also ventured into iron smelting, gold procuring and salt production. He was also involved in the manufacturing of goods like matchlocks, hoes and cutlery. His other business activities included handloom, boat making, brick making, bellmetal, dyeing, ivory work, ceramic, coal supply, elephant trade, construction of buildings for military headquarters and agricultural products. Some of the markets established by him include the Garohat in Kamrup, Nagahat near Sivasagar, Borhat in Dibrugarh, Sissihat in Dhemaji and Darangia Haat in Darrang.
By the 1850s, Maniram had become hostile to the British. He had faced numerous administrative obstacles in establishing private tea plantations, due to opposition from the competing European tea planters. In 1851, an officer seized all the facilities provided to him due to a tea garden dispute. Maniram, whose family consisted of 185 people, had to face economic hardship.
In 1852, Maniram presented a petition to A.G. Moffat Mills, the judge of the Sadar Court, Calcutta. He wrote that the people of Assam had been "reduced to the most abject and hopeless state of misery from the loss of their fame, honour, rank, caste, employment etc. '' He pointed out that the British policies were aimed at recovering the expenses incurred in conquering the Assam province from the Burmese, resulting in exploitation of the local economy. He protested against the waste of money on frivolous court cases, the unjust taxation system, the unfair pension system and the introduction of opium cultivation. He also criticized the discontinuation of the puja (Hindu worship) at the Kamakhya Temple, which according to him resulted in calamities. Maniram further wrote that the "objectionable treatment '' of the Hill Tribes (such as the Nagas) was resulting in constant warfare leading to mutual loss of life and money. He complained against the desecration of the Ahom royal tombs and looting of wealth from these relics. He also expressed his disapproval of the appointment of the Marwaris and the Bengalis as Mouzadars (a civil service post), when a number of Assamese people remained unemployed.
As a solution to all these issues, Maniram proposed that the former native administration of the Ahom kings be reintroduced. The judge Mills dismissed the petition as a "curious document '' from "a discontended (sic) subject ''. He also remarked that Maniram was "a clever but an untrustworthy and intriguing person ''. To gather support for the reintroduction of the Ahom rule, Maniram arrived in Calcutta, the then capital of British India, in April 1857, and networked with several influential people. On behalf of the Ahom royal Kandarpeswar Singha, he petitioned the British administrators for restoration of the Ahom rule on 6 May 1857.
When the Indian sepoys started an uprising against the British on 10 May, Maniram saw it as an opportunity to restore the Ahom rule. With help from messengers disguised as fakirs, he sent coded letters to Piyali Baruah, who had been acting as the chief advisor of Kandarpeswar in his absence. In these letters, he urged Kandarpeswar Singha to launch a rebellion against the British, with help from the sepoys at Dibrugarh and Golaghat. Kandarpeswar and his loyal men hatched an anti-British plot and gathered arms. The plot was supported by several influential local leaders including Urbidhar Barua, Mayaram Barbora, Chitrasen Barbora, Kamala Charingia Barua, Mahidhar Sarma Muktear, Luki Senchowa Barua, Ugrasen Marangikhowa Gohain, Deoram Dihingia Barua, Dutiram Barua, Bahadur Gaonburha, Sheikh Formud Ali and Madhuram Koch.
The conspirators were joined by the Subedars Sheikh Bhikun and Nur Mahammad, after Kandarpeswar promised to double the salary of the sepoys if they succeeded in defeating the British. On 29 August 1857, the rebels met at Sheikh Bhikun 's residence at Nogora. They planned a march to Jorhat, where Kandarpeswar would be installed as the King on the day of the Durga Puja; later Sivsagar and Dibrugarh would be captured. However, the plot was uncovered before it could be executed. Kandarpeswar, Maniram, and other leaders were arrested.
Maniram was arrested in Calcutta, detained in Alipur for a few weeks, and then brought to Jorhat. His letters to Kandarpeswar had been intercepted by the Special Commissioner Captain Charles Holroyd, who judged the trial. Based on the statement of Haranath Parbatia Baruah, the daroga (inspector) of Sivsagar, Maniram was identified as the kingpin of the plot. He and Piyali Barua were publicly hanged on 26 February 1858 at the Jorhat jail. Maniram 's death was widely mourned in Assam, and several tea garden workers struck work to express their support for the rebellion. The executions led to resentment among the public, resulting in an open rebellion which was suppressed forcefully.
After his death, Maniram 's tea estates were sold to George Williamson in an auction. Several folk songs, known as the "Maniram Dewanar Geet '', were composed in his memory. The Maniram Dewan Trade Centre of Guwahati and the Maniram Dewan Boys ' Hostel of Dibrugarh University are named after him. In 2012, the Planning Commission Deputy Chairman Montek Singh Ahluwalia announced that he planned to declare tea as the national drink of India to coincide with the 212th birth anniversary of Maniram Dewan.
|
who scored the best goal in premier league last season | BBC goal of the season - wikipedia
In English football, the Goal of the Season is an annual competition and award given on BBC 's Match of the Day, in honour of the most spectacular goal scored that season. It is typically contested between the winners of the preceding ten Goals of the Month, although the goal can come, and has come, from any game in the regular season, including international qualifiers and friendlies -- potentially from the opening league games of the season to the UEFA Champions League final at the end of the European season. In several instances, the goal has come in the final game of the domestic season, the FA Cup Final, the most recent example of which is Steven Gerrard 's last - minute goal in 2006. However, in 1980 - 81, for example, the superb goal scored by Ricky Villa in the FA Cup Final replay for Tottenham Hotspur against Manchester City could not be considered, as voting had already taken place.
In general, the winning goal has occurred for an English side within the domestic English league or cups, although these are not particular rules -- Kenny Dalglish 's goal in 1982 -- 83 for Scotland being an example. The goal usually comes from competitions to which the BBC holds television rights and which are shown under the Match of the Day banner -- at present Premier League highlights and FA Cup live matches and highlights -- although some have come from the equivalent Sportscene broadcast by BBC Scotland. Due to the lack of BBC European club football coverage, held predominantly by ITV and Sky, no goal of the season has ever been scored in European club competition despite many contenders.
Due to a transfer of broadcast rights, the entries for the 2001 -- 02, 2002 -- 03 and 2003 -- 04 seasons were decided on ITV 's The Premiership, which have been subsequently recognised by the BBC. When the BBC previously could not show league footage from 1988 -- 89 to 1991 -- 92, the winning goal in each season was scored in the FA Cup which they held the rights to. League rights holder ITV had its own competition during these seasons for Goal of the Season, broadcast on the Saint and Greavsie show. Previously the channels had shared league and cup rights (showing different matches to each other) and for many years ITV broadcast its own Golden Goals competition as an equivalent of Goal of the Season. From 2013 -- 14 season onwards, the Goal of the Season has been chosen by a Twitter poll and the BBC Sport website. Jack Wilshere is the first player to win Goal of the Season in consecutive seasons (2013 -- 14 and 2014 -- 15) since the start of the Premier League. The 1987 - 88 competition was unique in that all 10 goals shortlisted were scored by Liverpool players. As of 2015 - 16, this is the only occasion where the contenders were made up entirely of goals scored by players for one club.
For several years in the late 2000s, the winner was not subject to public vote due to the 2007 Phone - in scandals. The winning goal was instead decided by pundits in the studio.
On 24 May 2015, the final day of the 2014 -- 15 season, Match of the Day held an online vote at around 11 pm GMT for the Goal of the Season award. Users were able to vote via the BBC website or via Twitter. The poll was quickly rigged by Arsenal supporters, many from the Far East, resulting in Jack Wilshere winning the award for his final day strike against West Bromwich Albion, despite not being the favourite. Host Gary Lineker expressed surprise as he read out the winner, and pundit Alan Shearer suggested that Charlie Adam should have won the award for his 66 - yard effort against Chelsea; fellow pundit Danny Murphy felt former Fulham teammate Bobby Zamora should have won.
The incident was labelled a "shambles '' by Pete Smith of The Stoke Sentinel who thought that Stoke City 's Adam should have won, and a "concerted campaign by Arsenal fans '' by Alan Pattullo of The Scotsman, who also believed that the Scottish midfielder was deserving of the award. Mark Brus of Caught Offside also criticized the choice stating that a goal in a meaningless game should not have won Goal of the Season and that Juan Mata 's acrobatic effort against Liverpool was worthy of the award.
The following season, before the final episode of that season 's Match of the Day, the programme 's producers changed the rules to prevent a similar situation. The Goal of the Season award has since been decided by the pundits on the show, who will choose the winner based on the top three goals voted for by the public.
|
what is the cruising altitude of a boeing 747 | Boeing 747 - wikipedia
The Boeing 747 is an American wide - body commercial jet airliner and cargo aircraft, often referred to by its original nickname, "Jumbo Jet ''. Its distinctive "hump '' upper deck along the forward part of the aircraft has made it one of the most recognizable aircraft, and it was the first wide - body airplane produced. Manufactured by Boeing 's Commercial Airplane unit in the United States, the 747 was originally envisioned to have 150 percent greater capacity than the Boeing 707, a common large commercial aircraft of the 1960s. First flown commercially in 1970, the 747 held the passenger capacity record for 37 years.
The four - engine 747 uses a double - deck configuration for part of its length and is available in passenger, freighter and other versions. Boeing designed the 747 's hump - like upper deck to serve as a first -- class lounge or extra seating, and to allow the aircraft to be easily converted to a cargo carrier by removing seats and installing a front cargo door. Boeing expected supersonic airliners -- the development of which was announced in the early 1960s -- to render the 747 and other subsonic airliners obsolete, while the demand for subsonic cargo aircraft would remain robust well into the future. Though the 747 was expected to become obsolete after 400 were sold, it exceeded critics ' expectations with production surpassing 1,000 in 1993. By January 2018, 1,543 aircraft had been built, with 11 of the 747 - 8 variants remaining on order. As of January 2017, the 747 has been involved in 60 hull losses, resulting in 3,722 fatalities.
The 747 - 400, the most common variant in service, has a high - subsonic cruise speed of Mach 0.85 -- 0.855 (up to 570 mph or 920 km / h) with an intercontinental range of 7,260 nautical miles (8,350 statute miles or 13,450 km). The 747 - 400 can accommodate 416 passengers in a typical three - class layout, 524 passengers in a typical two - class layout, or 660 passengers in a high -- density one - class configuration. The newest version of the aircraft, the 747 - 8, is in production and received certification in 2011. Deliveries of the 747 - 8F freighter version began in October 2011; deliveries of the 747 - 8I passenger version began in May 2012.
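The speed and range figures above are rounded unit conversions; the short sketch below is a minimal sanity check, assuming a speed of sound of roughly 295 m/s at cruise altitude and the standard definition of the nautical mile (these values are assumptions, not Boeing data), and it reproduces the quoted numbers to within their rounding.

```python
# A minimal sanity check (assumed conversion factors, not Boeing data) of the
# rounded speed and range figures quoted above. The speed of sound near the
# tropopause is roughly 295 m/s, and one nautical mile is defined as 1.852 km
# (about 1.15078 statute miles).
A_TROPOPAUSE_MS = 295.0   # approximate speed of sound at typical cruise altitude
NMI_TO_KM = 1.852
NMI_TO_MI = 1.15078
KMH_TO_MPH = 1.0 / 1.609344

mach = 0.855
speed_kmh = mach * A_TROPOPAUSE_MS * 3.6
print(f"Mach {mach} is roughly {speed_kmh:.0f} km/h ({speed_kmh * KMH_TO_MPH:.0f} mph)")
# -> about 908 km/h and 564 mph, consistent with "up to 570 mph or 920 km / h"

range_nmi = 7260
print(f"{range_nmi} nmi is roughly {range_nmi * NMI_TO_MI:.0f} mi ({range_nmi * NMI_TO_KM:.0f} km)")
# -> about 8,355 statute miles and 13,446 km, matching the rounded figures above
```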
In 1963, the United States Air Force started a series of study projects on a very large strategic transport aircraft. Although the C - 141 Starlifter was being introduced, they believed that a much larger and more capable aircraft was needed, especially the capability to carry outsized cargo that would not fit in any existing aircraft. These studies led to initial requirements for the CX - Heavy Logistics System (CX - HLS) in March 1964 for an aircraft with a load capacity of 180,000 pounds (81,600 kg) and a speed of Mach 0.75 (500 mph or 800 km / h), and an unrefueled range of 5,000 nautical miles (9,300 km) with a payload of 115,000 pounds (52,200 kg). The payload bay had to be 17 feet (5.18 m) wide by 13.5 feet (4.11 m) high and 100 feet (30 m) long with access through doors at the front and rear.
Featuring only four engines, the design also required new engine designs with greatly increased power and better fuel economy. In May 1964, airframe proposals arrived from Boeing, Douglas, General Dynamics, Lockheed, and Martin Marietta; engine proposals were submitted by General Electric, Curtiss - Wright, and Pratt & Whitney. After a downselect, Boeing, Douglas, and Lockheed were given additional study contracts for the airframe, along with General Electric and Pratt & Whitney for the engines.
All three of the airframe proposals shared a number of features. As the CX - HLS needed to be able to be loaded from the front, a door had to be included where the cockpit usually was. All of the companies solved this problem by moving the cockpit above the cargo area; Douglas had a small "pod '' just forward and above the wing, Lockheed used a long "spine '' running the length of the aircraft with the wing spar passing through it, while Boeing blended the two, with a longer pod that ran from just behind the nose to just behind the wing. In 1965 Lockheed 's aircraft design and General Electric 's engine design were selected for the new C - 5 Galaxy transport, which was the largest military aircraft in the world at the time. The nose door and raised cockpit concepts would be carried over to the design of the 747.
The 747 was conceived while air travel was increasing in the 1960s. The era of commercial jet transportation, led by the enormous popularity of the Boeing 707 and Douglas DC - 8, had revolutionized long - distance travel. Even before it lost the CX - HLS contract, Boeing was asked by Juan Trippe, president of Pan American World Airways (Pan Am), one of their most important airline customers, to build a passenger aircraft more than twice the size of the 707. During this time, airport congestion, worsened by increasing numbers of passengers carried on relatively small aircraft, became a problem that Trippe thought could be addressed by a larger new aircraft.
In 1965, Joe Sutter was transferred from Boeing 's 737 development team to manage the design studies for the new airliner, already assigned the model number 747. Sutter initiated a design study with Pan Am and other airlines, to better understand their requirements. At the time, it was widely thought that the 747 would eventually be superseded by supersonic transport aircraft. Boeing responded by designing the 747 so that it could be adapted easily to carry freight and remain in production even if sales of the passenger version declined. In the freighter role, the clear need was to support the containerized shipping methodologies that were being widely introduced at about the same time. Standard containers are 8 ft (2.4 m) square at the front (slightly higher due to attachment points) and available in 20 and 40 ft (6.1 and 12 m) lengths. This meant that it would be possible to support a 2 - wide 2 - high stack of containers two or three ranks deep with a fuselage size similar to the earlier CX - HLS project.
In April 1966, Pan Am ordered 25 747 - 100 aircraft for US $525 million. During the ceremonial 747 contract - signing banquet in Seattle on Boeing 's 50th Anniversary, Juan Trippe predicted that the 747 would be "... a great weapon for peace, competing with intercontinental missiles for mankind 's destiny ''. As launch customer, and because of its early involvement before placing a formal order, Pan Am was able to influence the design and development of the 747 to an extent unmatched by a single airline before or since.
Ultimately, the high - winged CX - HLS Boeing design was not used for the 747, although technologies developed for their bid had an influence. The original design included a full - length double - deck fuselage with eight - across seating and two aisles on the lower deck and seven - across seating and two aisles on the upper deck. However, concern over evacuation routes and limited cargo - carrying capability caused this idea to be scrapped in early 1966 in favor of a wider single deck design. The cockpit was, therefore, placed on a shortened upper deck so that a freight - loading door could be included in the nose cone; this design feature produced the 747 's distinctive "bulge ''. In early models it was not clear what to do with the small space in the pod behind the cockpit, and this was initially specified as a "lounge '' area with no permanent seating. (A different configuration that had been considered in order to keep the flight deck out of the way for freight loading had the pilots below the passengers, and was dubbed the "anteater ''.)
One of the principal technologies that enabled an aircraft as large as the 747 to be drawn up was the high - bypass turbofan engine. The engine technology was thought to be capable of delivering double the power of the earlier turbojets while consuming a third less fuel. General Electric had pioneered the concept but was committed to developing the engine for the C - 5 Galaxy and did not enter the commercial market until later. Pratt & Whitney was also working on the same principle and, by late 1966, Boeing, Pan Am and Pratt & Whitney agreed to develop a new engine, designated the JT9D to power the 747.
The project was designed with a new methodology called fault tree analysis, which allowed the effects of a failure of a single part to be studied to determine its impact on other systems. To address concerns about safety and flyability, the 747 's design included structural redundancy, redundant hydraulic systems, quadruple main landing gear and dual control surfaces. Additionally, some of the most advanced high - lift devices used in the industry were included in the new design, to allow it to operate from existing airports. These included slats running almost the entire length of the wing, as well as complex three - part slotted flaps along the trailing edge of the wing. The wing 's complex three - part flaps increase wing area by 21 percent and lift by 90 percent when fully deployed compared to their non-deployed configuration.
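Fault tree analysis estimates the probability of a system - level failure by combining the failure probabilities of individual parts through logic gates. The following is a purely illustrative sketch of that idea; the component names and probabilities are hypothetical and are not drawn from the 747 programme.

```python
# A toy fault-tree calculation. Fault tree analysis combines basic-event
# probabilities through AND gates (all inputs must fail) and OR gates
# (any single failure suffices). All figures below are hypothetical.
def and_gate(*probs: float) -> float:
    """All inputs must fail for the gate output to fail (independent events)."""
    result = 1.0
    for p in probs:
        result *= p
    return result

def or_gate(*probs: float) -> float:
    """The gate output fails if any input fails (independent events)."""
    none_fail = 1.0
    for p in probs:
        none_fail *= (1.0 - p)
    return 1.0 - none_fail

# Hypothetical per-flight-hour failure probabilities.
pump_a, pump_b = 1e-4, 1e-4   # two redundant hydraulic pumps
controller = 1e-5             # a shared controller

# The top event occurs if both pumps fail, or if the shared controller fails.
top_event = or_gate(and_gate(pump_a, pump_b), controller)
print(f"Top event probability: {top_event:.2e}")   # roughly 1.0e-05
```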
Boeing agreed to deliver the first 747 to Pan Am by the end of 1969. The delivery date left 28 months to design the aircraft, which was two - thirds of the normal time. The schedule was so fast - paced that the people who worked on it were given the nickname "The Incredibles ''. Developing the aircraft was such a technical and financial challenge that management was said to have "bet the company '' when it started the project.
As Boeing did not have a plant large enough to assemble the giant airliner, they chose to build a new plant. The company considered locations in about 50 cities, and eventually decided to build the new plant some 30 miles (50 km) north of Seattle on a site adjoining a military base at Paine Field near Everett, Washington. It bought the 780 - acre (320 ha) site in June 1966.
Developing the 747 had been a major challenge, and building its assembly plant was also a huge undertaking. Boeing president William M. Allen asked Malcolm T. Stamper, then head of the company 's turbine division, to oversee construction of the Everett factory and to start production of the 747. To level the site, more than four million cubic yards (three million cubic meters) of earth had to be moved. Time was so short that the 747 's full - scale mock - up was built before the factory roof above it was finished. The plant is the largest building by volume ever built, and has been substantially expanded several times to permit construction of other models of Boeing wide - body commercial jets.
Before the first 747 was fully assembled, testing began on many components and systems. One important test involved the evacuation of 560 volunteers from a cabin mock - up via the aircraft 's emergency chutes. The first full - scale evacuation took two and a half minutes instead of the maximum of 90 seconds mandated by the Federal Aviation Administration (FAA), and several volunteers were injured. Subsequent test evacuations achieved the 90 - second goal but caused more injuries. Most problematic was evacuation from the aircraft 's upper deck; instead of using a conventional slide, volunteer passengers escaped by using a harness attached to a reel. Tests also involved taxiing such a large aircraft. Boeing built an unusual training device known as "Waddell 's Wagon '' (named for a 747 test pilot, Jack Waddell) that consisted of a mock - up cockpit mounted on the roof of a truck. While the first 747s were still being built, the device allowed pilots to practice taxi maneuvers from a high upper - deck position.
On September 30, 1968, the first 747 was rolled out of the Everett assembly building before the world 's press and representatives of the 26 airlines that had ordered the airliner. Over the following months, preparations were made for the first flight, which took place on February 9, 1969, with test pilots Jack Waddell and Brien Wygle at the controls and Jess Wallick at the flight engineer 's station. Despite a minor problem with one of the flaps, the flight confirmed that the 747 handled extremely well. The 747 was found to be largely immune to "Dutch roll '', a phenomenon that had been a major hazard to the early swept - wing jets.
During later stages of the flight test program, flutter testing showed that the wings suffered oscillation under certain conditions. This difficulty was partly solved by reducing the stiffness of some wing components. However, a particularly severe high - speed flutter problem was solved only by inserting depleted uranium counterweights as ballast in the outboard engine nacelles of the early 747s. This measure caused anxiety when these aircraft crashed, for example El Al Flight 1862 at Amsterdam in 1992 with 282 kilograms (622 lb) of uranium in the tailplane (horizontal stabilizer).
The flight test program was hampered by problems with the 747 's JT9D engines. Difficulties included engine stalls caused by rapid throttle movements and distortion of the turbine casings after a short period of service. The problems delayed 747 deliveries for several months; up to 20 aircraft at the Everett plant were stranded while awaiting engine installation. The program was further delayed when one of the five test aircraft suffered serious damage during a landing attempt at Renton Municipal Airport, site of the company 's Renton factory. On December 13, 1969 a test aircraft was being taken to have test equipment removed and a cabin installed when pilot Ralph C. Cokely undershot the airport 's short runway. The 747 's right, outer landing gear was torn off and two engine nacelles were damaged. However, these difficulties did not prevent Boeing from taking a test aircraft to the 28th Paris Air Show in mid-1969, where it was displayed to the public for the first time. The 747 received its FAA airworthiness certificate in December 1969, clearing it for introduction into service.
The huge cost of developing the 747 and building the Everett factory meant that Boeing had to borrow heavily from a banking syndicate. During the final months before delivery of the first aircraft, the company had to repeatedly request additional funding to complete the project. Had this been refused, Boeing 's survival would have been threatened. The firm 's debt exceeded $2 billion, with the $1.2 billion owed to the banks setting a record for all companies. Allen later said, "It was really too large a project for us. '' Ultimately, the gamble succeeded, and Boeing held a monopoly in very large passenger aircraft production for many years.
On January 15, 1970, First Lady of the United States Pat Nixon christened Pan Am 's first 747, at Dulles International Airport (later Washington Dulles International Airport) in the presence of Pan Am chairman Najeeb Halaby. Instead of champagne, red, white, and blue water was sprayed on the aircraft. The 747 entered service on January 22, 1970, on Pan Am 's New York -- London route; the flight had been planned for the evening of January 21, but engine overheating made the original aircraft unusable. Finding a substitute delayed the flight by more than six hours to the following day when Clipper Victor was used.
The 747 enjoyed a fairly smooth introduction into service, overcoming concerns that some airports would not be able to accommodate an aircraft that large. Although technical problems occurred, they were relatively minor and quickly solved. After the aircraft 's introduction with Pan Am, other airlines that had bought the 747 to stay competitive began to put their own 747s into service. Boeing estimated that half of the early 747 sales were to airlines desiring the aircraft 's long range rather than its payload capacity. While the 747 had the lowest potential operating cost per seat, this could only be achieved when the aircraft was fully loaded; costs per seat increased rapidly as occupancy declined. A moderately loaded 747, one with only 70 percent of its seats occupied, used more than 95 percent of the fuel needed by a fully occupied 747. Nonetheless, many flag - carriers purchased the 747 due to its iconic status "even if it made no sense economically '' to operate. During the 1970s and 1980s, there were often more than 30 regularly scheduled 747s at John F. Kennedy International Airport.
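To make the per - seat economics concrete, the 70 percent occupancy and 95 percent fuel figures quoted above imply a sharp rise in fuel burned per occupied seat; the short sketch below works through that arithmetic with normalised, hypothetical units.

```python
# A small illustration of the per-seat economics described above, using the
# figures quoted in the text (70 percent occupancy consuming more than
# 95 percent of the fuel of a full aircraft); units are normalised and the
# exercise is purely illustrative.
def fuel_per_occupied_seat(fuel_fraction: float, load_factor: float) -> float:
    """Relative fuel burned per occupied seat for a given load factor."""
    return fuel_fraction / load_factor

full = fuel_per_occupied_seat(1.00, 1.00)      # fully occupied baseline
partial = fuel_per_occupied_seat(0.95, 0.70)   # 70% of seats, 95% of the fuel

increase = 100.0 * (partial / full - 1.0)
print(f"Fuel per occupied seat rises by about {increase:.0f}%")   # roughly 36%
```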
The recession of 1969 -- 1970 greatly affected Boeing. For the year and a half after September 1970, it sold only two 747s worldwide, and it did not sell any to an American carrier for almost three years. When economic problems in the United States and other countries after the 1973 oil crisis led to reduced passenger traffic, several airlines found they did not have enough passengers to fly the 747 economically, and they replaced them with the smaller and recently introduced McDonnell Douglas DC - 10 and Lockheed L - 1011 TriStar trijet wide bodies (and later the 767 and A300 / A310 twinjets). Having tried replacing coach seats on its 747s with piano bars in an attempt to attract more customers, American Airlines eventually relegated its 747s to cargo service and in 1983 exchanged them with Pan Am for smaller aircraft; Delta Air Lines also removed its 747s from service after several years. Later, Delta acquired 747s again in 2008 as part of its merger with Northwest Airlines, although it intends to retire them from service by 2018.
International flights that bypassed traditional hub airports and landed at smaller cities became more common throughout the 1980s, and this eroded the 747 's original market. However, many international carriers continued to use the 747 on Pacific routes. In Japan, 747s on domestic routes were configured to carry close to the maximum passenger capacity.
After the initial 747 - 100, Boeing developed the - 100B, a higher maximum takeoff weight (MTOW) variant, and the - 100SR (Short Range), with higher passenger capacity. Increased maximum takeoff weight allows aircraft to carry more fuel and have longer range. The - 200 model followed in 1971, featuring more powerful engines and a higher MTOW. Passenger, freighter and combination passenger - freighter versions of the - 200 were produced. The shortened 747SP (special performance) with a longer range was also developed, and entered service in 1976.
The 747 line was further developed with the launch of the 747 - 300 in 1980. The 300 series resulted from Boeing studies to increase the seating capacity of the 747, during which modifications such as fuselage plugs and extending the upper deck over the entire length of the fuselage were rejected. The first 747 - 300, completed in 1983, included a stretched upper deck, increased cruise speed, and increased seating capacity. The - 300 variant was previously designated 747SUD for stretched upper deck, then 747 - 200 SUD, followed by 747EUD, before the 747 - 300 designation was used. Passenger, short range and combination freighter - passenger versions of the 300 series were produced.
In 1985, development of the longer range 747 - 400 began. The variant had a new glass cockpit, which allowed for a cockpit crew of two instead of three, new engines, lighter construction materials, and a redesigned interior. Development cost soared, and production delays occurred as new technologies were incorporated at the request of airlines. Insufficient workforce experience and reliance on overtime contributed to early production problems on the 747 - 400. The - 400 entered service in 1989.
In 1991, a record - breaking 1,087 passengers were airlifted aboard a 747 to Israel as part of Operation Solomon. The 747 remained the heaviest commercial aircraft in regular service until the debut of the Antonov An - 124 Ruslan in 1982; variants of the 747 - 400 would surpass the An - 124 's weight in 2000. The Antonov An - 225 Mriya cargo transport, which debuted in 1988, remains the world 's largest aircraft by several measures (including the most accepted measures of maximum takeoff weight and length); one aircraft has been completed and is in service as of 2017. The Hughes H - 4 Hercules is the largest aircraft by wingspan, but it only completed a single flight.
Since the arrival of the 747 - 400, several stretching schemes for the 747 have been proposed. Boeing announced the larger 747 - 500X and - 600X preliminary designs in 1996. The new variants would have cost more than US $5 billion to develop, and interest was not sufficient to launch the program. In 2000, Boeing offered the more modest 747X and 747X stretch derivatives as alternatives to the Airbus A3XX. However, the 747X family was unable to attract enough interest to enter production. A year later, Boeing switched from the 747X studies to pursue the Sonic Cruiser, and after the Sonic Cruiser program was put on hold, the 787 Dreamliner. Some of the ideas developed for the 747X were used on the 747 - 400ER, a longer range variant of the 747 - 400.
After several variants were proposed but later abandoned, some industry observers became skeptical of new aircraft proposals from Boeing. However, in early 2004, Boeing announced tentative plans for the 747 Advanced that were eventually adopted. Similar in nature to the 747 - X, the stretched 747 Advanced used technology from the 787 to modernize the design and its systems. The 747 remained the largest passenger airliner in service until the Airbus A380 began airline service in 2007.
On November 14, 2005, Boeing announced it was launching the 747 Advanced as the Boeing 747 - 8. The last 747 - 400s were completed in 2009. As of 2011, most orders of the 747 - 8 have been for the freighter variant. On February 8, 2010, the 747 - 8 Freighter made its maiden flight. The first delivery of the 747 - 8 went to Cargolux in 2011. The 1,500 th produced Boeing 747 was delivered in June 2014.
In January 2016, Boeing stated it was reducing 747 - 8 production to six a year beginning in September 2016, incurring a $569 million post-tax charge against its fourth - quarter 2015 profits. At the end of 2015, the company had 20 orders outstanding. On January 29, 2016, Boeing announced that it had begun the preliminary work on the modifications to a commercial 747 - 8 for the next Air Force One Presidential aircraft, expected to be operational by 2020.
On July 12, 2016, Boeing announced that it had finalized terms of acquisition with Volga - Dnepr Group for 20 747 - 8 freighters, valued at $7.58 billion at list prices. Four aircraft were delivered beginning in 2012. Volga - Dnepr Group is the parent of three major Russian air - freight carriers -- Volga - Dnepr Airlines, AirBridgeCargo Airlines and Atran Airlines. The new 747 - 8 freighters will replace AirBridgeCargo 's current 747 - 400 aircraft and expand the airline 's fleet and will be acquired through a mix of direct purchases and leasing over the next six years, Boeing said.
On July 27, 2016, in its quarterly report to the Securities and Exchange Commission, Boeing discussed the potential termination of 747 production due to insufficient demand and market for the aircraft. With a firm order backlog of 21 aircraft and a production rate of six per year, program accounting has been reduced to 1,555 aircraft, and the 747 line could be closed in the third quarter of 2019. In October 2016, UPS Airlines ordered 14 - 8Fs to add capacity, along with 14 options, which it took up in February 2018, bringing the total ordered to 28 and increasing the backlog to 25 -- including some for undisclosed customers -- with deliveries scheduled through 2022.
The Boeing 747 is a large, wide - body (two - aisle) airliner with four wing - mounted engines. The wings have a high sweep angle of 37.5 degrees for a fast, efficient cruise of Mach 0.84 to 0.88, depending on the variant. The sweep also reduces the wingspan, allowing the 747 to use existing hangars. Seating capacity is more than 366 with a 3 -- 4 -- 3 seat arrangement (a cross section of 3 seats, an aisle, 4 seats, another aisle, and 3 seats) in economy class and a 2 -- 3 -- 2 arrangement in first class on the main deck. The upper deck has a 3 -- 3 seat arrangement in economy class and a 2 -- 2 arrangement in first class.
Raised above the main deck, the cockpit creates a hump. The raised cockpit allows front loading of cargo on freight variants. The upper deck behind the cockpit provides space for a lounge or extra seating. The "stretched upper deck '' became available as an option on the 747 - 100B variant and later as standard beginning on the 747 - 300. The upper deck was stretched more on the 747 - 8. The 747 cockpit roof section also has an escape hatch from which crew can exit in the event of an emergency if they can not exit through the cabin.
The 747 's maximum takeoff weight ranges from 735,000 pounds (333,400 kg) for the - 100 to 970,000 lb (439,985 kg) for the - 8. Its range has increased from 5,300 nautical miles (6,100 mi, 9,800 km) on the - 100 to 8,000 nmi (9,200 mi, 14,815 km) on the - 8I.
The 747 has redundant structures along with four redundant hydraulic systems and four main landing gears with four wheels each, which provide a good spread of support on the ground and safety in case of tire blow - outs. The main gear are redundant so that landing can be performed on two opposing landing gears if the others do not function properly. In addition, the 747 has split control surfaces and was designed with sophisticated triple - slotted flaps that minimize landing speeds and allow the 747 to use standard - length runways. For transportation of spare engines, 747s can accommodate a non-functioning fifth - pod engine under the port wing of the aircraft between the inner functioning engine and the fuselage.
The 747 - 100 was the original variant launched in 1966. The 747 - 200 soon followed, with its launch in 1968. The 747 - 300 was launched in 1980 and was followed by the 747 - 400 in 1985. Ultimately, the 747 - 8 was announced in 2005. Several versions of each variant have been produced, and many of the early variants were in production simultaneously. The International Civil Aviation Organization (ICAO) classifies variants using a shortened code formed by combining the model number and the variant designator (e.g. "B741 '' for all - 100 models).
The first 747 - 100s were built with six upper deck windows (three per side) to accommodate upstairs lounge areas. Later, as airlines began to use the upper deck for premium passenger seating instead of lounge space, Boeing offered a ten - window upper deck as an option. Some early - 100s were retrofitted with the new configuration. The - 100 was equipped with Pratt & Whitney JT9D - 3A engines. No freighter version of this model was developed, but many 747 - 100s were converted into freighters. A total of 167 747 - 100s were built.
Responding to requests from Japanese airlines for a high - capacity aircraft to serve domestic routes between major cities, Boeing developed the 747SR as a short - range version of the 747 - 100 with lower fuel capacity and greater payload capability. With increased economy class seating, up to 498 passengers could be carried in early versions and up to 550 in later models. The 747SR had an economic design life objective of 52,000 flights during 20 years of operation, compared to 24,600 flights in 20 years for the standard 747. The initial 747SR model, the - 100SR, had a strengthened body structure and landing gear to accommodate the added stress accumulated from a greater number of takeoffs and landings. Extra structural support was built into the wings, fuselage, and the landing gear along with a 20 percent reduction in fuel capacity.
The initial order for the - 100SR -- four aircraft for Japan Air Lines (JAL, later Japan Airlines) -- was announced on October 30, 1972; rollout occurred on August 3, 1973, and the first flight took place on August 31, 1973. The type was certified by the FAA on September 26, 1973, with the first delivery on the same day. The - 100SR entered service with JAL, the type 's sole customer, on October 7, 1973, and typically operated flights within Japan. Seven - 100SRs were built between 1973 and 1975, each with a 520,000 - pound (240,000 kg) MTOW and Pratt & Whitney JT9D - 7A engines derated to 43,000 pounds - force (190,000 N) of thrust.
Following the - 100SR, Boeing produced the - 100BSR, a 747SR variant with increased takeoff weight capability. Debuting in 1978, the - 100BSR also incorporated structural modifications for a high cycle - to - flying hour ratio; a related standard - 100B model debuted in 1979. The - 100BSR first flew on November 3, 1978, with first delivery to All Nippon Airways (ANA) on December 21, 1978. A total of twenty - 100BSRs were produced for ANA and JAL. The - 100BSR had a 600,000 lb MTOW and was powered by the same JT9D - 7A or General Electric CF6 - 45 engines used on the - 100SR. ANA operated this variant on domestic Japanese routes with 455 or 456 seats until retiring its last aircraft in March 2006.
In 1986, two - 100BSR SUD models, featuring the stretched upper deck (SUD) of the - 300, were produced for JAL. The type 's maiden flight occurred on February 26, 1986, with FAA certification and first delivery on March 24, 1986. JAL operated the - 100BSR SUD with 563 seats on domestic routes until their retirement in the third quarter of 2006. While only two - 100BSR SUDs were produced, in theory, standard - 100Bs can be modified to the SUD certification. Overall, twenty - nine 747SRs were built, consisting of seven - 100SRs, twenty - 100BSRs, and two - 100BSR SUDs.
The 747 - 100B model was developed from the - 100SR, using its stronger airframe and landing gear design. The type had an increased fuel capacity of 48,070 US gal (182,000 l; 40,030 imp gal), allowing for a 5,000 - nautical - mile (9,300 km; 5,800 mi) range with a typical 452 - passenger payload, and an increased MTOW of 750,000 lb (340,000 kg) was offered. The first - 100B order, one aircraft for Iran Air, was announced on June 1, 1978. This aircraft first flew on June 20, 1979, received FAA certification on August 1, 1979, and was delivered the next day. Nine - 100Bs were built, one for Iran Air and eight for Saudi Arabian Airlines. Unlike the original - 100, the - 100B was offered with Pratt & Whitney JT9D - 7A, General Electric CF6 - 50, or Rolls - Royce RB211 - 524 engines. However, only RB211 - 524 (Saudia) and JT9D - 7A (Iran Air) engines were ordered. The last 747 - 100B, EP - IAM, was retired in 2014 by Iran Air, the last commercial operator of the 747 - 100 and - 100B.
The development of the 747SP stemmed from a joint request between Pan American World Airways and Iran Air, who were looking for a high - capacity airliner with enough range to cover Pan Am 's New York -- Middle Eastern routes and Iran Air 's planned Tehran -- New York route. The Tehran -- New York route, when launched, was the longest non-stop commercial flight in the world. The 747SP is 48 feet 4 inches (14.73 m) shorter than the 747 - 100. Fuselage sections were eliminated fore and aft of the wing, and the center section of the fuselage was redesigned to fit mating fuselage sections. The SP 's flaps used a simplified single - slotted configuration. The 747SP, compared to earlier variants, had a tapering of the aft upper fuselage into the empennage, a double - hinged rudder, and longer vertical and horizontal stabilizers. Power was provided by Pratt & Whitney JT9D - 7 (A / F / J / FW) or Rolls - Royce RB211 - 524 engines.
The 747SP was granted a supplemental certificate on February 4, 1976 and entered service with launch customers Pan Am and Iran Air that same year. The aircraft was chosen by airlines wishing to serve major airports with short runways. A total of 45 747SPs were built, with the 44th 747SP delivered on August 30, 1982. In 1987, Boeing re-opened the 747SP production line after five years to build one last 747SP for an order by the United Arab Emirates government. In addition to airline use, one 747SP was modified for the NASA / German Aerospace Center SOFIA experiment. Iran Air is the last civil operator of the type; its final 747 - SP (EP - IAC) was to be retired in June 2016.
While the 747 - 100 powered by Pratt & Whitney JT9D - 3A engines offered enough payload and range for US domestic operations, it was marginal for long international route sectors. The demand for longer range aircraft with increased payload quickly led to the improved - 200, which featured more powerful engines, increased MTOW, and greater range than the - 100. A few early - 200s retained the three - window configuration of the - 100 on the upper deck, but most were built with a ten - window configuration on each side. The 747 - 200 was produced in passenger (- 200B), freighter (- 200F), convertible (- 200C), and combi (- 200M) versions.
The 747 - 200B was the basic passenger version, with increased fuel capacity and more powerful engines; it entered service in February 1971. In its first three years of production, the - 200 was equipped with Pratt & Whitney JT9D - 7 engines (initially the only engine available). Range with a full passenger load started at over 5,000 nmi (9,300 km) and increased to 6,000 nmi (11,000 km) with later engines. Most - 200Bs had an internally stretched upper deck, allowing for up to 16 passenger seats. The freighter model, the 747 - 200F, could be fitted with or without a side cargo door, and had a capacity of 105 tons (95.3 tonnes) and an MTOW of up to 833,000 lb (378,000 kg). It entered service in 1972 with Lufthansa. The convertible version, the 747 - 200C, could be converted between a passenger and a freighter or used in mixed configurations, and featured removable seats and a nose cargo door. The - 200C could also be fitted with an optional side cargo door on the main deck.
The combi model, the 747 - 200M, could carry freight in the rear section of the main deck via a side cargo door. A removable partition on the main deck separated the cargo area at the rear from the passengers at the front. The - 200M could carry up to 238 passengers in a three - class configuration with cargo carried on the main deck. The model was also known as the 747 - 200 Combi. As on the - 100, a stretched upper deck (SUD) modification was later offered. A total of 10 converted 747 - 200s were operated by KLM. Union des Transports Aériens (UTA) also had two aircraft converted.
After launching the - 200 with Pratt & Whitney JT9D - 7 engines, on August 1, 1972 Boeing announced that it had reached an agreement with General Electric to certify the 747 with CF6 - 50 series engines to increase the aircraft 's market potential. Rolls - Royce followed 747 engine production with a launch order from British Airways for four aircraft. The option of RB211 - 524B engines was announced on June 17, 1975. The - 200 was the first 747 to provide a choice of powerplant from the three major engine manufacturers.
A total of 393 of the 747 - 200 versions had been built when production ended in 1991. Of these, 225 were - 200B, 73 were - 200F, 13 were - 200C, 78 were - 200M, and 4 were military. Many 747 - 200s remained in operation as of the early 2000s, although most large carriers had retired them from their fleets and sold them to smaller operators. Large carriers sped up fleet retirement following the September 11 attacks and the subsequent drop in demand for air travel, scrapping some aircraft and turning others into freighters. Iran Air retired the last passenger 747 - 200 in May 2016, 36 years after it was delivered.
The 747 - 300 features a 23 - foot - 4 - inch - longer (7.11 m) upper deck than the - 200. The stretched upper deck has two emergency exit doors and is the most visible difference between the - 300 and previous models. Before being made standard on the 747 - 300, the stretched upper deck was previously offered as a retrofit, and appeared on two Japanese 747 - 100SR aircraft. The 747 - 300 introduced a new straight stairway to the upper deck, instead of a spiral staircase on earlier variants, which creates room above and below for more seats. Minor aerodynamic changes allowed the - 300 's cruise speed to reach Mach 0.85 compared with Mach 0.84 on the - 200 and - 100 models, while retaining the same takeoff weight. The - 300 could be equipped with the same Pratt & Whitney and Rolls - Royce powerplants as on the - 200, as well as updated General Electric CF6 - 80C2B1 engines.
Swissair placed the first order for the 747 - 300 on June 11, 1980. The variant revived the 747 - 300 designation, which had been previously used on a design study that did not reach production. The 747 - 300 first flew on October 5, 1982, and the type 's first delivery went to Swissair on March 23, 1983. Besides the passenger model, two other versions (- 300M, - 300SR) were produced. The 747 - 300M features cargo capacity on the rear portion of the main deck, similar to the - 200M, but with the stretched upper deck it can carry more passengers. The 747 - 300SR, a short range, high - capacity domestic model, was produced for Japanese markets with a maximum seating for 584. No production freighter version of the 747 - 300 was built, but Boeing began modifications of used passenger - 300 models into freighters in 2000.
A total of 81 747 - 300 series aircraft were delivered, 56 for passenger use, 21 - 300M and 4 - 300SR versions. In 1985, just two years after the - 300 entered service, the type was superseded by the announcement of the more advanced 747 - 400. The last 747 - 300 was delivered in September 1990 to Sabena. While some - 300 customers continued operating the type, several large carriers replaced their 747 - 300s with 747 - 400s. Air France, Air India, Pakistan International Airlines, and Qantas were some of the last major carriers to operate the 747 - 300. On December 29, 2008, Qantas flew its last scheduled 747 - 300 service, operating from Melbourne to Los Angeles via Auckland. In July 2015, Pakistan International Airlines retired their final 747 - 300 after 30 years of service. As of July 2017, only five 747 - 300s remain in commercial service, with Max Air (3), Mahan Air (1) and TransAVIAexport Airlines (1).
The 747 - 400 is an improved model with increased range. It has wingtip extensions of 6 ft (1.8 m) and winglets of 6 ft (1.8 m), which improve the type 's fuel efficiency by four percent compared to previous 747 versions. The 747 - 400 introduced a new glass cockpit designed for a flight crew of two instead of three, with a reduction in the number of dials, gauges and knobs from 971 to 365 through the use of electronics. The type also features tail fuel tanks, revised engines, and a new interior. The longer range has been used by some airlines to bypass traditional fuel stops, such as Anchorage. Powerplants include the Pratt & Whitney PW4062, General Electric CF6 - 80C2, and Rolls - Royce RB211 - 524.
The - 400 was offered in passenger (- 400), freighter (- 400F), combi (- 400M), domestic (- 400D), extended range passenger (- 400ER), and extended range freighter (- 400ERF) versions. Passenger versions retain the same upper deck as the - 300, while the freighter version does not have an extended upper deck. The 747 - 400D was built for short - range operations with maximum seating for 624. Winglets were not included, but they can be retrofitted. Cruising speed is up to Mach 0.855 on different versions of the 747 - 400.
The passenger version first entered service in February 1989 with launch customer Northwest Airlines on the Minneapolis to Phoenix route. The combi version entered service in September 1989 with KLM, while the freighter version entered service in November 1993 with Cargolux. The 747 - 400ERF entered service with Air France in October 2002, while the 747 - 400ER entered service with Qantas, its sole customer, in November 2002. In January 2004, Boeing and Cathay Pacific launched the Boeing 747 - 400 Special Freighter program, later referred to as the Boeing Converted Freighter (BCF), to modify passenger 747 - 400s for cargo use. The first 747 - 400BCF was redelivered in December 2005.
In March 2007, Boeing announced that it had no plans to produce further passenger versions of the - 400. However, orders for 36 - 400F and - 400ERF freighters were already in place at the time of the announcement. The last passenger version of the 747 - 400 was delivered in April 2005 to China Airlines. Some of the last built 747 - 400s were delivered with Dreamliner livery along with the modern Signature interior from the Boeing 777. A total of 694 of the 747 - 400 series aircraft were delivered. At various times, the largest 747 - 400 operator has included Singapore Airlines, Japan Airlines, and British Airways with 31 as of January 2017. As of July 2017, 370 747 - 400s remain in service.
The 747 - 400 Dreamlifter (originally called the 747 Large Cargo Freighter or LCF) is a Boeing - designed modification of existing 747 - 400s to a larger configuration to ferry 787 Dreamliner sub-assemblies. Evergreen Aviation Technologies Corporation of Taiwan was contracted to complete modifications of 747 - 400s into Dreamlifters in Taoyuan. The aircraft flew for the first time on September 9, 2006 in a test flight. Modification of four aircraft was completed by February 2010. The Dreamlifters have been placed into service transporting sub-assemblies for the 787 program to the Boeing plant in Everett, Washington, for final assembly. The aircraft is certified to carry only essential crew and not passengers.
Boeing announced a new 747 variant, the 747 - 8, on November 14, 2005. Referred to as the 747 Advanced prior to its launch, the 747 - 8 uses the same engine and cockpit technology as the 787, hence the use of the "8 ''. The variant is designed to be quieter, more economical, and more environmentally friendly. The 747 - 8 's fuselage is lengthened from 232 to 251 feet (70.8 to 76.4 m), marking the first stretch variant of the aircraft. Power is supplied by General Electric GEnx - 2B67 engines.
The 747 - 8 Freighter, or 747 - 8F, is derived from the 747 - 400ERF. The variant has 16 percent more payload capacity than its predecessor, allowing it to carry seven more standard air cargo containers, with a maximum payload capacity of 154 tons (140 tonnes) of cargo. As on previous 747 freighters, the 747 - 8F features an overhead nose - door and a side - door on the main deck plus a side - door on the lower deck ("belly '') to aid loading and unloading. The 747 - 8F made its maiden flight on February 8, 2010. The variant received its amended type certificate jointly from the FAA and the European Aviation Safety Agency (EASA) on August 19, 2011. The - 8F was first delivered to Cargolux on October 12, 2011.
The passenger version, named 747 - 8 Intercontinental or 747 - 8I, is designed to carry up to 467 passengers in a 3 - class configuration and fly more than 8,000 nmi (15,000 km) at Mach 0.855. As a derivative of the already common 747 - 400, the 747 - 8 has the economic benefit of similar training and interchangeable parts. The type 's first test flight occurred on March 20, 2011. The 747 - 8 has surpassed the Airbus A340 - 600 as the world 's longest airliner. The first - 8I was delivered in May 2012 to Lufthansa. The 747 - 8 has received 150 total orders, including 103 for the - 8F and 47 for the - 8I as of February 2018.
Boeing has studied a number of 747 variants that have not gone beyond the concept stage.
During the late 1960s and early 1970s, Boeing studied the development of a shorter 747 with three engines, to compete with the smaller L - 1011 TriStar and DC - 10. The 747 trijet would have had more payload, range, and passenger capacity than the L - 1011 and DC - 10. The center engine would have been fitted in the tail with an S - duct intake similar to the L - 1011 's. However, engineering studies showed that a total redesign of the 747 wing would be necessary. Maintaining the same 747 handling characteristics would be important to minimize pilot retraining. Boeing decided instead to pursue a shortened four - engine 747, resulting in the 747SP.
Boeing announced the 747 ASB (Advanced Short Body) in 1986 as a response to the Airbus A340 and the McDonnell Douglas MD - 11. This aircraft design would have combined the advanced technology used on the 747 - 400 with the foreshortened 747SP fuselage. The aircraft was to carry 295 passengers a range of 8,000 nmi (9,200 mi; 15,000 km). However, airlines were not interested in the project and it was canceled in 1988 in favor of the 777.
Boeing announced the 747 - 500X and - 600X at the 1996 Farnborough Airshow. The proposed models would have combined the 747 's fuselage with a new 251 ft (77 m) span wing derived from the 777. Other changes included adding more powerful engines and increasing the number of tires from two to four on the nose landing gear and from 16 to 20 on the main landing gear.
The 747 - 500X concept featured an increased fuselage length of 18 ft (5.5 m) to 250 ft (76.2 m) long, and the aircraft was to carry 462 passengers over a range up to 8,700 nautical miles (10,000 mi, 16,100 km), with a gross weight of over 1.0 Mlb (450 tonnes). The 747 - 600X concept featured a greater stretch to 279 ft (85 m) with seating for 548 passengers, a range of up to 7,700 nmi (8,900 mi, 14,300 km), and a gross weight of 1.2 Mlb (540 tonnes). A third study concept, the 747 - 700X, would have combined the wing of the 747 - 600X with a widened fuselage, allowing it to carry 650 passengers over the same range as a 747 - 400. The cost of the changes from previous 747 models, in particular the new wing for the 747 - 500X and - 600X, was estimated to be more than US $5 billion. Boeing was not able to attract enough interest to launch the aircraft.
As Airbus progressed with its A3XX study, Boeing offered a 747 derivative as an alternative in 2000: a more modest proposal than the previous - 500X and - 600X, which retained the 747 's overall wing design and added a segment at the root, increasing the span to 229 ft (69.8 m). Power would have been supplied by either the Engine Alliance GP7172 or the Rolls - Royce Trent 600, which were also proposed for the 767 - 400ERX. A new flight deck based on the 777 's would be used. The 747X aircraft was to carry 430 passengers over ranges of up to 8,700 nmi (10,000 mi, 16,100 km). The 747X Stretch would be extended to 263 ft (80.2 m) long, allowing it to carry 500 passengers over ranges of up to 7,800 nmi (9,000 mi, 14,500 km). Both would feature an interior based on the 777. Freighter versions of the 747X and 747X Stretch were also studied.
Like its predecessor, the 747X family was unable to garner enough interest to justify production, and it was shelved along with the 767 - 400ERX in March 2001, when Boeing announced the Sonic Cruiser concept. Though the 747X design was less costly than the 747 - 500X and - 600X, it was criticized for not offering a sufficient advance from the existing 747 - 400. The 747X did not make it beyond the drawing board, but the 747 - 400X being developed concurrently moved into production to become the 747 - 400ER.
After the end of the 747X program, Boeing continued to study improvements that could be made to the 747. The 747 - 400XQLR (Quiet Long Range) was meant to have an increased range of 7,980 nmi (9,200 mi, 14,800 km), with improvements to boost efficiency and reduce noise. Improvements studied included raked wingtips similar to those used on the 767 - 400ER and a sawtooth engine nacelle for noise reduction. Although the 747 - 400XQLR did not move to production, many of its features were used for the 747 Advanced, which has now been launched as the 747 - 8.
As of July 2017, 488 Boeing 747s are in airline service, with British Airways being the largest operator with 36 747 - 400s.
The last US passenger Boeing 747 was retired by Delta Air Lines in December 2017; the type had flown for every major American carrier since its 1970 introduction. Delta flew three of its last four aircraft on a farewell tour, from Seattle to Atlanta on December 19, then to Los Angeles and Minneapolis / St Paul on December 20.
Boeing 747 orders and deliveries (cumulative, by year): the accompanying chart of annual orders and deliveries is not reproduced here.
The 747 has been involved in 146 aviation accidents and incidents, including 61 accidents and hull losses which resulted in 3,722 fatalities. The last crash was Turkish Airlines Flight 6491 in January 2017. There were also 24 deaths in 32 aircraft hijackings, such as Pan Am Flight 73, in which a Boeing 747 - 121 was hijacked by four terrorists, resulting in 20 deaths.
Few crashes have been attributed to design flaws of the 747. The Tenerife airport disaster resulted from pilot error and communications failure, while the Japan Airlines Flight 123 and China Airlines Flight 611 crashes stemmed from improper aircraft repair. United Airlines Flight 811, which suffered an explosive decompression mid-flight on February 24, 1989, led the National Transportation Safety Board (NTSB) to issue a recommendation that 747 - 200 cargo doors similar to those on the Flight 811 aircraft be modified. Korean Air Lines Flight 007 was shot down by a Soviet fighter aircraft in 1983 after it had strayed into Soviet territory, causing U.S. President Ronald Reagan to authorize the then - strictly military global positioning system (GPS) for civilian use.
Accidents due to design deficiencies included TWA Flight 800, where a 747 - 100 exploded in mid-air on July 17, 1996, probably due to sparking electrical wiring inside the fuel tank; this finding led the FAA to propose a rule requiring installation of an inerting system in the center fuel tank of most large aircraft that was adopted in July 2008, after years of research into solutions. At the time, the new safety system was expected to cost US $100,000 to $450,000 per aircraft and weigh approximately 200 pounds (91 kg). El Al Flight 1862 crashed after the fuse pins for an engine broke off shortly after take - off due to metal fatigue. Instead of dropping away from the wing, the engine knocked off the adjacent engine and damaged the wing.
As increasing numbers of "classic '' 747 - 100 and 747 - 200 series aircraft have been retired, some have found their way into museums or other uses. The City of Everett, the first 747 and prototype, is at the Museum of Flight, Seattle, Washington, USA where it is sometimes leased to Boeing for test purposes.
Other 747s in museums include those at the Aviodrome, Lelystad, The Netherlands; the Qantas Founders Outback Museum, Longreach, Queensland, Australia; Rand Airport, Johannesburg, South Africa; Technikmuseum Speyer, Speyer, Germany; Musée de l'Air et de l'Espace, Paris, France; Tehran Aerospace Exhibition, Tehran, Iran; Jeongseok Aviation Center, Jeju, South Korea, Evergreen Aviation & Space Museum, McMinnville, Oregon, and the National Air and Space Museum, Washington, D.C.
In recent years, some older 747 - 300s and 747 - 400s have also found their way into museums. In April 2016, the first production 747 - 400 was moved to the Delta Flight Museum in Atlanta, Georgia.
Upon its retirement from service, the 747 number two in the production line was dismantled and shipped to Hopyeong, Namyangju, Gyeonggi - do, South Korea where it was re-assembled, repainted in a livery similar to that of Air Force One and converted into a restaurant. Originally flown commercially by Pan Am as N747PA, Clipper Juan T. Trippe, and repaired for service following a tailstrike, it stayed with the airline until its bankruptcy. The restaurant closed by 2009, and the aircraft was scrapped in 2010.
A former British Airways 747 - 200B, G - BDXJ, is parked at the Dunsfold Aerodrome in Surrey, England and has been used as a movie set for productions such as the 2006 James Bond film, Casino Royale. The plane also appears frequently in the BBC television series Top Gear, which is filmed at Dunsfold.
The Jumbohostel, using a converted 747 - 200, opened at Arlanda Airport, Stockholm on January 15, 2009.
The wings of a 747 have been recycled as roofs of a house in Malibu, California.
Following its debut, the 747 rapidly achieved iconic status, appearing in numerous film productions such as the Airport 1975 and Airport ' 77 disaster films, Air Force One, Die Hard 2, and Executive Decision. Appearing in over 300 film productions, the 747 is one of the most widely depicted civilian aircraft and is considered by many to be one of the most iconic aircraft in film history. The aircraft entered the cultural lexicon as the original Jumbo Jet, a term coined by the aviation media to describe its size, and was also nicknamed Queen of the Skies.
|
where were most of the japanese internment camps located | Internment of Japanese Americans - wikipedia
The internment of Japanese Americans in the United States during World War II was the forced relocation and incarceration in camps in the western interior of the country of between 110,000 and 120,000 people of Japanese ancestry, most of whom lived on the Pacific coast. 62 percent of the internees were United States citizens. These actions were ordered by President Franklin D. Roosevelt shortly after Imperial Japan 's attack on Pearl Harbor.
Japanese Americans were incarcerated based on local population concentrations and regional politics. More than 110,000 Japanese Americans in the mainland U.S., who mostly lived on the West Coast, were forced into interior camps. However, in Hawaii, where more than 150,000 Japanese Americans composed over one - third of the population, only 1,200 to 1,800 were interned. The internment is considered to have resulted more from racism than from any security risk posed by Japanese Americans. Those who were as little as 1 / 16 Japanese, and orphaned infants with "one drop of Japanese blood '', were placed in internment camps.
Roosevelt authorized the deportation and incarceration with Executive Order 9066, issued on February 19, 1942, which allowed regional military commanders to designate "military areas '' from which "any or all persons may be excluded ''. This authority was used to declare that all people of Japanese ancestry were excluded from the West Coast, including all of California and parts of Oregon, Washington, and Arizona, except for those in government camps. Approximately 5,000 Japanese Americans voluntarily relocated outside the exclusion zone before March 1942, while some 5,500 community leaders, arrested immediately after the Pearl Harbor attack, were already in custody. The majority of nearly 130,000 Japanese Americans living in the U.S. mainland were forcibly relocated from their West Coast homes during the spring of 1942.
The United States Census Bureau assisted the internment efforts by providing confidential neighborhood information on Japanese Americans. The Bureau denied its role for decades, but it became public in 2007. In 1944, the U.S. Supreme Court upheld the constitutionality of the removal by ruling against Fred Korematsu 's appeal for violating an exclusion order. The Court limited its decision to the validity of the exclusion orders, avoiding the issue of the incarceration of U.S. citizens without due process.
In 1980, under mounting pressure from the Japanese American Citizens League and redress organizations, President Jimmy Carter opened an investigation to determine whether the decision to put Japanese Americans into internment camps had been justified by the government. He appointed the Commission on Wartime Relocation and Internment of Civilians (CWRIC) to investigate the camps. The Commission 's report, titled Personal Justice Denied, found little evidence of Japanese disloyalty at the time and concluded that the incarceration had been the product of racism. It recommended that the government pay reparations to the internees. In 1988, President Ronald Reagan signed into law the Civil Liberties Act of 1988, which apologized for the internment on behalf of the U.S. government and authorized a payment of $20,000 (equivalent to $41,000 in 2017) to each camp internee. The legislation admitted that government actions were based on "race prejudice, war hysteria, and a failure of political leadership ''. The U.S. government eventually disbursed more than $1.6 billion (equivalent to $3,310,000,000 in 2017) in reparations to 82,219 Japanese Americans who had been interned and their heirs.
Of 127,000 Japanese Americans living in the continental United States at the time of the Pearl Harbor attack, 112,000 resided on the West Coast. About 80,000 were nisei (literal translation: "second generation ''; American - born Japanese with U.S. citizenship) and sansei ("third generation ''; the children of Nisei). The rest were issei ("first generation '') immigrants born in Japan who were ineligible for U.S. citizenship under U.S. law.
Due in large part to socio - political changes stemming from the Meiji Restoration -- and a recession caused by the abrupt opening of Japan 's economy to the world market -- people began emigrating from the Empire of Japan in 1868 to find work to survive. From 1869 to 1924 approximately 200,000 immigrated to the islands of Hawaii, mostly laborers expecting to work on the islands ' sugar plantations. Some 180,000 went to the U.S. mainland, with the majority settling on the West Coast and establishing farms or small businesses. Most arrived before 1908, when the Gentlemen 's Agreement between Japan and the United States banned the immigration of unskilled laborers. A loophole allowed the wives of men already in the US to join their husbands. The practice of women marrying by proxy and immigrating to the U.S. resulted in a large increase in the number of "picture brides ''.
As the Japanese - American population continued to grow, European Americans on the West Coast resisted the new group, fearing competition and exaggerating the idea of hordes of Asians keen to take over white - owned farmland and businesses. Groups such as the Asiatic Exclusion League, the California Joint Immigration Committee, and the Native Sons of the Golden West organized in response to this "Yellow Peril. '' They lobbied successfully to restrict the property and citizenship rights of Japanese immigrants, as similar groups had previously organized against Chinese immigrants. Several laws and treaties attempting to slow immigration from Japan were introduced beginning in the late 19th century. The Immigration Act of 1924, following the example of the 1882 Chinese Exclusion Act, effectively banned all immigration from Japan and other "undesirable '' Asian countries.
The 1924 ban on immigration produced unusually well - defined generational groups within the Japanese - American community. The issei were exclusively those who had immigrated before 1924; some desired to return to their homeland. Because no new immigration was permitted, all Japanese Americans born after 1924 were, by definition, born in the U.S. and automatically U.S. citizens. These nisei were a distinct cohort from their parents. In addition to the usual generational differences, issei men were typically ten to fifteen years older than their wives, making them significantly older than the younger children of their often large families. U.S. law prohibited Japanese immigrants from becoming naturalized citizens, making them dependent on their children to rent or purchase property. Communication between English - speaking children and parents who spoke mostly or entirely in Japanese was often difficult. A significant number of older nisei, many of whom were born prior to the immigration ban, had married and already started families of their own by the time the US joined World War II.
Despite racist legislation that prevented issei from becoming naturalized citizens (and therefore from owning property, voting, or running for political office), these Japanese immigrants established communities in their new hometowns. Japanese Americans contributed to the agriculture of California and other Western states, by introducing irrigation methods that enabled the cultivation of fruits, vegetables, and flowers on previously inhospitable land. In both rural and urban areas, kenjinkai, community groups for immigrants from the same Japanese prefecture, and fujinkai, Buddhist women 's associations, organized community events and charitable work, provided loans and financial assistance, and built Japanese language schools for their children. Excluded from setting up shop in white neighborhoods, nikkei - owned small businesses thrived in the Nihonmachi, or Japantowns of urban centers such as Los Angeles, San Francisco, and Seattle.
In the 1930s the Office of Naval Intelligence (ONI), concerned by Imperial Japan 's rising military power in Asia, began conducting surveillance on Japanese - American communities in Hawaii. From 1936, at the behest of President Roosevelt, the ONI began compiling a "special list of those who would be the first to be placed in a concentration camp in the event of trouble '' between Japan and the United States. In 1939, again by order of the President, the ONI, Military Intelligence Division, and FBI began working together to compile a larger Custodial Detention Index. Early in 1941, Roosevelt commissioned Curtis Munson to conduct an investigation on Japanese Americans living on the West Coast and in Hawaii. After working with FBI and ONI officials and interviewing Japanese Americans and those familiar with them, Munson determined that the "Japanese problem '' was nonexistent. His final report to the President, submitted November 7, 1941, "certified a remarkable, even extraordinary degree of loyalty among this generally suspect ethnic group. '' A subsequent report by Kenneth Ringle, delivered to the President in January 1942, also found little evidence to support claims of Japanese - American disloyalty and argued against mass incarceration.
The attack on Pearl Harbor on December 7, 1941, led military and political leaders to suspect that Imperial Japan was preparing a full - scale attack on the United States. Due to Japan 's rapid military conquest of a large portion of Asia and the Pacific between 1936 and 1942, some Americans feared that its military forces were unstoppable.
American public opinion initially stood by the large population of Japanese Americans living in the United States, with the Los Angeles Times characterizing them as "good Americans, born and educated as such. '' Many Americans believed that their loyalty to the United States was unquestionable.
But six weeks after the attack, public opinion along the Pacific coast began to turn against Japanese Americans living on the West Coast, as the press and other Americans became nervous about the potential for fifth column activity. Though the administration (including President Franklin D. Roosevelt and FBI Director J. Edgar Hoover) dismissed all rumors of Japanese - American espionage on behalf of the Japanese war effort, pressure mounted upon the administration as public opinion turned against Japanese Americans. Civilian and military officials had serious concerns about the loyalty of the ethnic Japanese after the Niihau Incident, which immediately followed the attack on Pearl Harbor, when a civilian Japanese national and two Hawaiian - born ethnic Japanese on the island of Ni'ihau violently freed a downed and captured Japanese naval airman, attacking their fellow Ni'ihau islanders in the process.
Much of the concern over the loyalty of ethnic Japanese seemed to stem from racial prejudice rather than from any evidence of malfeasance. Major Karl Bendetsen and Lieutenant General John L. DeWitt, head of the Western Command, each questioned Japanese - American loyalty. DeWitt, who administered the internment program, repeatedly told newspapers that "A Jap 's a Jap '' and testified to Congress,
I do n't want any of them (persons of Japanese ancestry) here. They are a dangerous element. There is no way to determine their loyalty... It makes no difference whether he is an American citizen, he is still a Japanese. American citizenship does not necessarily determine loyalty... But we must worry about the Japanese all the time until he is wiped off the map.
DeWitt also sought approval to conduct search and seizure operations aimed at preventing alien Japanese from making radio transmissions to Japanese ships. The Justice Department declined, stating that there was no probable cause to support DeWitt 's assertion, as the FBI concluded that there was no security threat. On January 2, 1942, the Joint Immigration Committee of the California Legislature sent a manifesto to California newspapers attacking "the ethnic Japanese, '' who, it alleged, were "totally unassimilable. '' The manifesto further argued that all people of Japanese heritage were loyal subjects of the Emperor of Japan, and contended that Japanese language schools were bastions of racism which advanced doctrines of Japanese racial superiority.
The manifesto was backed by the Native Sons and Daughters of the Golden West and the California Department of the American Legion, which in January demanded that all Japanese with dual citizenship be placed in concentration camps. Internment was not limited to those who had been to Japan, but included a very small number of German and Italian enemy aliens. By February, Earl Warren, the Attorney General of California, had begun his efforts to persuade the federal government to remove all people of Japanese ethnicity from the West Coast.
Those who were as little as 1 / 16 Japanese could be placed in internment camps. There is evidence supporting the argument that the measures were racially motivated, rather than a military necessity. Bendetsen, promoted to colonel, said in 1942 "I am determined that if they have one drop of Japanese blood in them, they must go to camp. ''
Upon the bombing of Pearl Harbor and pursuant to the Alien Enemies Act, Presidential Proclamations 2525, 2526 and 2527 were issued designating Japanese, German and Italian nationals as enemy aliens. Information from the CDI was used to locate and incarcerate foreign nationals from Japan, Germany and Italy (although Germany and Italy did not declare war on the U.S. until December 11).
Presidential Proclamation 2537 was issued on January 14, 1942, requiring aliens to report any change of address, employment or name to the FBI. Enemy aliens were not allowed to enter restricted areas. Violators of these regulations were subject to "arrest, detention and internment for the duration of the war. ''
Executive Order 9066, signed by Franklin D. Roosevelt on February 19, 1942, authorized military commanders to designate "military areas '' at their discretion, "from which any or all persons may be excluded. '' These "exclusion zones, '' unlike the "alien enemy '' roundups, were applicable to anyone that an authorized military commander might choose, whether citizen or non-citizen. Eventually such zones would include parts of both the East and West Coasts, totaling about 1 / 3 of the country by area. Unlike the subsequent deportation and incarceration programs that would come to be applied to large numbers of Japanese Americans, detentions and restrictions directly under this Individual Exclusion Program were placed primarily on individuals of German or Italian ancestry, including American citizens.
These edicts included persons of part - Japanese ancestry as well. Anyone with at least one - sixteenth Japanese ancestry (equivalent to having one great - great grandparent) was subject to them. Korean - Americans and Taiwanese, classified as ethnically Japanese because both Korea and Taiwan were Japanese colonies at the time, were also included.
The deportation and incarceration were popular among many white farmers who resented the Japanese American farmers. "White American farmers admitted that their self - interest required removal of the Japanese. '' These individuals saw internment as a convenient means of uprooting their Japanese - American competitors. Austin E. Anson, managing secretary of the Salinas Vegetable Grower - Shipper Association, told the Saturday Evening Post in 1942:
We 're charged with wanting to get rid of the Japs for selfish reasons. We do. It 's a question of whether the white man lives on the Pacific Coast or the brown men. They came into this valley to work, and they stayed to take over... If all the Japs were removed tomorrow, we 'd never miss them in two weeks, because the white farmers can take over and produce everything the Jap grows. And we do not want them back when the war ends, either.
The Roberts Commission Report, prepared at President Franklin D. Roosevelt 's request, has been cited as an example of the fear and prejudice informing the thinking behind the internment program. The Report sought to link Japanese Americans with espionage activity, and to associate them with the bombing of Pearl Harbor. Columnist Henry McLemore, who wrote for the Hearst newspapers, reflected the growing public sentiment that was fueled by this report:
I am for the immediate removal of every Japanese on the West Coast to a point deep in the interior. I do n't mean a nice part of the interior either. Herd ' em up, pack ' em off and give ' em the inside room in the badlands... Personally, I hate the Japanese. And that goes for all of them.
Other California newspapers also embraced this view. According to a Los Angeles Times editorial,
A viper is nonetheless a viper wherever the egg is hatched... So, a Japanese American born of Japanese parents, nurtured upon Japanese traditions, living in a transplanted Japanese atmosphere... notwithstanding his nominal brand of accidental citizenship almost inevitably and with the rarest exceptions grows up to be a Japanese, and not an American... Thus, while it might cause injustice to a few to treat them all as potential enemies, I can not escape the conclusion... that such treatment... should be accorded to each and all of them while we are at war with their race.
State politicians joined the bandwagon; Leland Ford of Los Angeles demanded that "all Japanese, whether citizens or not, be placed in (inland) concentration camps. ''
Incarceration of Japanese Americans, who provided critical agricultural labor on the West Coast, created a labor shortage, which was exacerbated by the induction of many American laborers into the Armed Forces. This vacuum precipitated a mass immigration of Mexican workers into the United States to fill these jobs, under the banner of what became known as the Bracero Program. Many Japanese internees were temporarily released from their camps -- for instance, to harvest Western beet crops -- to address this wartime labor shortage.
Like many white American farmers, the white businessmen of Hawaii had their own motives for determining how to deal with the Japanese Americans, but they opposed internment. Instead, they secured passage of legislation that kept free the nearly 150,000 Japanese Americans who would otherwise have been sent to internment camps within Hawaii. As a result, only 1,200 to 1,800 Japanese Americans in Hawaii were interned.
The powerful businessmen of Hawaii concluded that imprisonment of such a large proportion of the islands ' population would adversely affect the economic prosperity of the territory. The Japanese represented "over 90 percent of the carpenters, nearly all of the transportation workers, and a significant portion of the agricultural laborers '' in the islands. General Delos Carleton Emmons, the military governor of Hawaii, also argued that Japanese labor was "' absolutely essential ' for rebuilding the defenses destroyed at Pearl Harbor ''. Recognizing the Japanese - American community 's contribution to the affluence of the Hawaiian economy, General Emmons fought against the internment of the Japanese Americans and had the support of most of the businessmen of Hawaii.
Coming to different conclusions about how to deal with the Japanese - American community, both the white farmers of the United States and the white businessmen of Hawaii placed priority on protecting their own economic interests.
Though internment was a generally popular policy in California, support was not universal. R.C. Hoiles, publisher of the Orange County Register, argued during the war that the internment was unethical and unconstitutional.
The Niihau Incident occurred in December 1941, just after the Japanese attack on Pearl Harbor. Three Japanese Americans on the Hawaiian island of Niihau assisted a Japanese pilot, Shigenori Nishikaichi, who crashed there. Despite the incident, the Territorial Governor of Hawaii rejected calls for the mass internment of the Japanese Americans living there. Nishikaichi is buried in his hometown, Hashihama, Japan; on his gravestone is written, ' His meritorious deed will live forever. '
In Magic: The Untold Story of U.S. Intelligence and the Evacuation of Japanese Residents From the West Coast During World War II, David Lowman, a former National Security Agency (NSA) operative, argues that Magic ("Magic '' was the code - name for American code - breaking efforts) intercepts posed a "frightening specter of massive espionage nets, '' thus justifying internment. Lowman contended that incarceration served to ensure the secrecy of U.S. code - breaking efforts, because effective prosecution of Japanese Americans might necessitate disclosure of secret information. If U.S. code - breaking technology were revealed in the context of trials of individual spies, the Japanese Imperial Navy would change its codes, thus undermining U.S. strategic wartime advantage.
Some scholars have criticized or dismissed Lowman 's reasoning that "disloyalty '' among some individual Japanese Americans could legitimize "incarcerating 120,000 people, including infants, the elderly, and the mentally ill ''. Lowman 's reading of the contents of the Magic cables has also been challenged, as some scholars contend that the cables demonstrate that Japanese Americans were not heeding the overtures of Imperial Japan to spy against the United States. According to one critic, Lowman 's book has long since been "refuted and discredited ''.
The controversial conclusions drawn by Lowman were defended by conservative commentator Michelle Malkin in her book In Defense of Internment: The Case for ' Racial Profiling ' in World War II and the War on Terror (2004). Malkin 's defense of Japanese internment was due in part to reaction to what she describes as the "constant alarmism from Bush - bashers who argue that every counter-terror measure in America is tantamount to the internment ''. She criticized academia 's treatment of the subject, and suggested that academics critical of Japanese internment had ulterior motives. Her book was widely criticized, particularly with regard to her reading of the "Magic '' cables. Daniel Pipes, also drawing on Lowman, has defended Malkin, and said that Japanese American internment was "a good idea '' which offers "lessons for today ''.
A letter by General DeWitt and Colonel Bendetsen expressing racist bias against Japanese Americans was circulated and then hastily redacted in 1943 -- 1944. DeWitt 's final report stated that, because of their race, it was impossible to determine the loyalty of Japanese Americans, thus necessitating internment. The original version was so offensive -- even in the atmosphere of the wartime 1940s -- that Bendetsen ordered all copies to be destroyed.
In 1980, a copy of the original Final Report: Japanese Evacuation from the West Coast -- 1942 was found in the National Archives, along with notes showing the numerous differences between the original and redacted versions. This earlier, racist and inflammatory version, as well as the FBI and Office of Naval Intelligence (ONI) reports, led to the coram nobis retrials which overturned the convictions of Fred Korematsu, Gordon Hirabayashi and Minoru Yasui on all charges related to their refusal to submit to exclusion and internment. The courts found that the government had intentionally withheld these reports and other critical evidence, at trials all the way up to the Supreme Court, which proved that there was no military necessity for the exclusion and internment of Japanese Americans. In the words of Department of Justice officials writing during the war, the justifications were based on "willful historical inaccuracies and intentional falsehoods. ''
In May 2011, U.S. Solicitor General Neal Katyal, after a year of investigation, found that his wartime predecessor Charles Fahy had intentionally withheld the Ringle Report, drafted by the Office of Naval Intelligence, in order to justify the Roosevelt administration 's actions in the cases of Hirabayashi v. United States and Korematsu v. United States. The report would have undermined the administration 's position on the military necessity for such action, as it concluded that most Japanese Americans were not a national security threat and that allegations of communications espionage had been found to be without basis by the FBI and Federal Communications Commission.
Editorials from major newspapers at the time were generally supportive of the internment of the Japanese by the United States.
A Los Angeles Times editorial dated February 19, 1942, stated that: "Since Dec. 7 there has existed an obvious menace to the safety of this region in the presence of potential saboteurs and fifth columnists close to oil refineries and storage tanks, airplane factories, Army posts, Navy facilities, ports and communications systems. Under normal sensible procedure not one day would have elapsed after Pearl Harbor before the government had proceeded to round up and send to interior points all Japanese aliens and their immediate descendants for classification and possible internment. ''
An Atlanta Constitution editorial dated February 20, 1942, stated that: "The time to stop taking chances with Japanese aliens and Japanese - Americans has come... While Americans have an innate distaste for stringent measures, every one must realize this is a total war, that there are no Americans running loose in Japan or Germany or Italy and there is absolutely no sense in this country running even the slightest risk of a major disaster from enemy groups within the nation. ''
A Washington Post editorial dated February 22, 1942, stated that: "There is but one way in which to regard the Presidential order empowering the Army to establish "military areas '' from which citizens or aliens may be excluded. That is to accept the order as a necessary accompaniment of total defense. ''
A Los Angeles Times editorial dated February 28, 1942, stated that: "As to a considerable number of Japanese, no matter where born, there is unfortunately no doubt whatever. They are for Japan; they will aid Japan in every way possible by espionage, sabotage and other activity; and they need to be restrained for the safety of California and the United States. And since there is no sure test for loyalty to the United States, all must be restrained. Those truly loyal will understand and make no objection. ''
A Los Angeles Times editorial dated December 8, 1942, stated that: "The Japs in these centers in the United States have been afforded the very best of treatment, together with food and living quarters far better than many of them ever knew before, and a minimum amount of restraint. They have been as well fed as the Army and as well as or better housed... The American people can go without milk and butter, but the Japs will be supplied. ''
A Los Angeles Times editorial dated April 22, 1943, stated that: "As a race, the Japanese have made for themselves a record for conscienceless treachery unsurpassed in history. Whatever small theoretical advantages there might be in releasing those under restraint in this country would be enormously outweighed by the risks involved. ''
While this event is most commonly called the internment of Japanese Americans, the government operated several different types of camps holding Japanese Americans. The best known facilities were the military - run Wartime Civil Control Administration (WCCA) Assembly Centers and the civilian - run War Relocation Authority (WRA) Relocation Centers, which are generally (but unofficially) referred to as "internment camps. '' Scholars have urged dropping such euphemisms and referring to them as concentration camps and to the people as incarcerated. The Department of Justice (DOJ) operated camps officially called Internment Camps, which were used to detain those suspected of crimes or of "enemy sympathies. '' The government also operated camps for a limited number of German Americans and Italian Americans, who sometimes were assigned to share facilities with the Japanese Americans. The WCCA and WRA facilities were the largest and the most public. The WCCA Assembly Centers were temporary facilities that were first set up in horse racing tracks, fairgrounds and other large public meeting places to assemble and organize internees before they were transported to WRA Relocation Centers by truck, bus or train. The WRA Relocation Centers were semi-permanent camps that housed persons removed from the exclusion zone after March 1942, or until they were able to relocate elsewhere in the United States outside the exclusion zone.
Eight U.S. Department of Justice Camps (in Texas, Idaho, North Dakota, New Mexico, and Montana) held Japanese Americans, primarily non-citizens and their families. The camps were run by the Immigration and Naturalization Service, under the umbrella of the DOJ, and guarded by Border Patrol agents rather than military police. The population of these camps included approximately 3,800 of the 5,500 Buddhist and Christian ministers, Japanese - language school instructors, newspaper workers, fishermen, and community leaders who had been accused of fifth column activity and arrested by the FBI after Pearl Harbor. (The remaining 1,700 were released to WRA relocation centers.) Immigrants and nationals of German and Italian ancestry were also held in these facilities, often in the same camps as Japanese Americans. Approximately 7,000 German Americans and 3,000 Italian Americans from Hawai'i and the U.S. mainland were interned in DOJ camps, along with 500 German seamen already in custody after being rescued from the SS Columbus in 1938. In addition 2,264 ethnic Japanese, 4,058 ethnic Germans, and 288 ethnic Italians were deported from 19 Latin American countries for a later - abandoned hostage exchange program with Axis countries or confinement in DOJ camps.
Several U.S. Army internment camps held Japanese, Italian and German American men considered "potentially dangerous. '' Camp Lordsburg, in New Mexico, was the only site built specifically to confine Japanese Americans. In May 1943, the Army was given responsibility for the detention of prisoners of war and all civilian internees were transferred to DOJ camps.
Executive Order 9066 authorized the removal of all persons of Japanese ancestry from the West Coast; however, it was signed before there were any facilities completed to house the displaced Japanese Americans. After the voluntary evacuation program failed to result in many families leaving the exclusion zone, the military took charge of the now - mandatory evacuation. On April 9, 1942, the Wartime Civilian Control Administration (WCCA) was established by the Western Defense Command to coordinate the forced removal of Japanese Americans to inland concentration camps.
The relocation centers faced opposition from inland communities near the proposed sites who disliked the idea of their new "Jap '' neighbors. In addition, government forces were struggling to build what would essentially be self - sufficient towns in very isolated, undeveloped and harsh regions of the country; they were not prepared to house the influx of over 110,000 internees. Since Japanese Americans living in the restricted zone were considered too dangerous to conduct their daily business, the military decided it had to house them in temporary centers until the relocation centers were completed.
Under the direction of Colonel Karl Bendetsen, existing facilities had been designated for conversion to WCCA use in March 1942, and the Army Corps of Engineers finished construction on these sites on April 21, 1942. All but four of the 15 confinement sites (12 in California, and one each in Washington, Oregon and Arizona) had previously been racetracks or fairgrounds. The stables and livestock areas were cleaned out and hastily converted to living quarters for families of up to six, while wood and tarpaper barracks were constructed for additional housing, as well as communal latrines, laundry facilities and mess halls. A total of 92,193 Japanese Americans were transferred to these temporary detention centers from March to August 1942. (18,026 more had been taken directly to two "reception centers '' that were developed as the Manzanar and Poston WRA camps.) The WCCA was dissolved on March 15, 1943, when it became the War Relocation Authority and turned its attentions to the more permanent relocation centers.
The War Relocation Authority (WRA) was the U.S. civilian agency responsible for the relocation and detention. The WRA was created by President Roosevelt on March 18, 1942, with Executive Order 9102 and officially ceased to exist June 30, 1946. Milton S. Eisenhower, then an official of the Department of Agriculture, was chosen to head the WRA. In the 1943 US Government film Japanese Relocation he said, "This picture tells how the mass migration was accomplished. Neither the Army nor the War Relocation Authority relish the idea of taking men, women and children from their homes, their shops and their farms. So, the military and civilian agencies alike, determined to do the job as a democracy should - with real consideration for the people involved. '' Dillon S. Myer replaced Eisenhower three months later on June 17, 1942. Myer served as Director of the WRA until the centers were closed. Within nine months, the WRA had opened ten facilities in seven states, and transferred over 100,000 people from the WCCA facilities.
The WRA camp at Tule Lake, though initially like the other camps, eventually was used as a detention center for people believed to pose a security risk. Tule Lake also served as a "segregation center '' for individuals and families who were deemed "disloyal, '' and for those who were to be deported to Japan.
There were three types of camps. Civilian Assembly Centers were temporary camps, frequently located at horse tracks, where Japanese Americans were sent as they were removed from their communities. Eventually, most were sent to Relocation Centers, also known as internment camps. Detention camps housed Nikkei considered to be disruptive or of special interest to the government.
These detention camps often held German - American and Italian - American detainees in addition to Japanese Americans. The Citizen Isolation Centers were for those considered to be problem inmates. Detainees convicted of crimes, usually draft resistance, were sent to other sites, mostly federal prisons. Several other camps likewise held German and Italian detainees alongside Japanese Americans. Immigration detention stations held the roughly 5,500 men arrested immediately after Pearl Harbor, in addition to several thousand German and Italian detainees, and served as processing centers from which the men were transferred to DOJ or Army camps.
Somewhere between 110,000 and 120,000 people of Japanese ancestry were subject to this mass exclusion program, of whom about two - thirds were U.S. citizens. The remaining one - third were non-citizens subject to internment under the Alien Enemies Act; many of these "resident aliens '' had been inhabitants of the United States for decades but had been barred by law from becoming naturalized citizens. Also part of the West Coast removal were 101 children of Japanese descent taken from orphanages and foster homes within the exclusion zone.
Internees of Japanese descent were first sent to one of 17 temporary "Civilian Assembly Centers, '' where most awaited transfer to more permanent relocation centers being constructed by the newly formed War Relocation Authority (WRA). Some of those who reported to the civilian assembly centers were not sent to relocation centers, but were released under the condition that they remain outside the prohibited zone until the military orders were modified or lifted. Almost 120,000 Japanese Americans and resident Japanese aliens were eventually removed from their homes in California, the western halves of Oregon and Washington and southern Arizona as part of the single largest forced relocation in U.S. history.
Most of these camps / residences, gardens, and stock areas were placed on Native American reservations, for which the Native Americans were formally compensated. The Native American councils disputed the amounts negotiated in absentia by US government authorities. They later sued to gain relief and additional compensation for some items of dispute.
Under the National Student Council Relocation Program (supported primarily by the American Friends Service Committee), students of college age were permitted to leave the camps to attend institutions willing to accept students of Japanese ancestry. Although the program initially granted leave permits to a very small number of students, this eventually included 2,263 students by December 31, 1943.
On March 2, 1942, General John DeWitt, commanding general of the Western Defense Command, publicly announced the creation of two military restricted zones. Military Area No. 1 consisted of the southern half of Arizona and the western half of California, Oregon and Washington, as well as all of California south of Los Angeles. Military Area No. 2 covered the rest of those states. DeWitt 's proclamation informed Japanese Americans they would be required to leave Military Area 1, but stated that they could remain in the second restricted zone. Removal from Military Area No. 1 initially occurred through "voluntary evacuation. '' Japanese Americans were free to go anywhere outside of the exclusion zone or inside Area 2, with arrangements and costs of relocation to be borne by the individuals. The policy was short - lived; DeWitt issued another proclamation on March 27 that prohibited Japanese Americans from leaving Area 1. A night - time curfew, also initiated on March 27, 1942, placed further restrictions on the movements and daily lives of Japanese Americans.
Eviction from the West Coast began on March 24, 1942, with Civilian Exclusion Order No. 1, which gave the 227 Japanese American residents of Bainbridge Island, Washington six days to prepare for their "evacuation '' directly to Manzanar. Colorado governor Ralph Lawrence Carr was the only elected official to publicly denounce the internment of American citizens (an act that cost him his reelection, but gained him the gratitude of the Japanese American community, such that a statue of him was erected in the Denver Japantown 's Sakura Square). A total of 108 exclusion orders issued by the Western Defense Command over the next five months completed the removal of Japanese Americans from the West Coast in August 1942.
In 1943, Secretary of the Interior Harold L. Ickes wrote "the situation in at least some of the Japanese internment camps is bad and is becoming worse rapidly. '' The quality of life in the camps was heavily influenced by which government entity was responsible for them. INS camps were regulated by international treaty, and the legal difference between being interned and being relocated had significant effects on those locked up. INS camps were required to provide food and housing of a quality at least equal to that provided to the lowest - ranked person in the military.
According to a 1943 War Relocation Authority report, internees were housed in "tar paper - covered barracks of simple frame construction without plumbing or cooking facilities of any kind. '' The spartan facilities met international laws, but left much to be desired. Many camps were built quickly by civilian contractors during the summer of 1942 based on designs for military barracks, making the buildings poorly equipped for cramped family living. Throughout many camps, twenty - five people were forced to live in space built to contain four, leaving no room for privacy.
The Heart Mountain War Relocation Center in northwestern Wyoming was a barbed - wire - surrounded enclave with unpartitioned toilets, cots for beds, and a budget of 45 cents daily per capita for food rations.
Armed guards were posted at the camps, which were all in remote, desolate areas far from population centers. Internees were typically allowed to stay with their families, and were treated decently unless they violated the rules. There are documented instances of guards shooting internees who reportedly attempted to walk outside the fences. One such shooting, that of James Wakasa at Topaz, led to a re-evaluation of the security measures in the camps. Some camp administrations eventually allowed relatively free movement outside the marked boundaries of the camps. Nearly a quarter of the internees left the camps to live and work elsewhere in the United States, outside the exclusion zone. Eventually, some were authorized to return to their hometowns in the exclusion zone under supervision of a sponsoring American family or agency whose loyalty had been assured.
The phrase "shikata ga nai '' (loosely translated as "it can not be helped '') was commonly used to summarize the interned families ' resignation to their helplessness throughout these conditions. This was noticed by their children, as mentioned in the well - known memoir Farewell to Manzanar by Jeanne Wakatsuki Houston and James D. Houston. Further, it is noted that parents may have internalized these emotions to withhold their disappointment and anguish from affecting their children. Nevertheless, children still were cognizant of this emotional repression.
Before the war, 87 physicians and surgeons, 137 nurses, 105 dentists, 132 pharmacists, 35 optometrists, and 92 lab technicians provided healthcare to the Japanese American population, with most practicing in urban centers like Los Angeles, San Francisco and Seattle. As the eviction from the West Coast was carried out, the Wartime Civilian Control Administration worked with the United States Public Health Service and many of these professionals to establish infirmaries within the temporary assembly centers. An Issei doctor was appointed to manage each facility, and additional healthcare staff worked under his supervision, although the USPHS recommendation of one physician for every 1,000 inmates and one nurse to 200 inmates was not met. Overcrowded and unsanitary conditions forced assembly center infirmaries to prioritize inoculations over general care, obstetrics and surgeries; at Manzanar, for example, hospital staff performed over 40,000 immunizations against typhoid and smallpox. Food poisoning was common and also demanded significant attention. Those who were interned in Topaz, Minidoka, and Jerome experienced outbreaks of dysentery.
Facilities in the more permanent "relocation centers '' eventually surpassed the makeshift assembly center infirmaries, but in many cases these hospitals were incomplete when inmates began to arrive and were not fully functional for several months. Additionally, vital medical supplies such as medications and surgical and sterilization equipment were limited. The staff shortages suffered in the assembly centers continued in the WRA camps. The administration 's decision to invert the management structure and demote Japanese American medical workers to positions below white employees, while capping their pay at $20 per month, further exacerbated this problem. (At Heart Mountain, for example, Japanese American doctors received $19 per month compared to white nurses ' $150 per month.) The war had caused a shortage of healthcare professionals across the country, and the camps often lost potential recruits to outside hospitals that offered better pay and living conditions. When the WRA began to allow some Japanese Americans to leave camp, many Nikkei medical professionals resettled outside. Those who remained had little authority in the administration of the hospitals. These tensions, combined with the inequitable salaries paid to white and Japanese American employees, led to conflicts at several hospitals, and there were two Japanese American walk - outs at Heart Mountain in 1943.
Despite a shortage of healthcare workers, limited access to equipment, and tension between white administrators and Japanese American staff, these hospitals provided much needed medical care in camp. The extreme climates of the remote incarceration sites were hard on infants and elderly inmates. The frequent dust storms of the high desert locations led to increased cases of asthma and coccidioidomycosis, while the swampy, mosquito - infested Arkansas camps exposed residents to malaria, all of which were treated in camp. Almost 6,000 live deliveries were performed in these hospitals, and all mothers received pre - and postnatal care. The WRA recorded 1,862 deaths across the ten camps, with cancer, heart disease, tuberculosis, and vascular disease accounting for the majority.
Of the 110,000 Japanese Americans detained by the United States government during World War II, 30,000 were children. Most were school - age children, so educational facilities were set up in the camps. Allowing them to continue their education, however, did not erase the potential for traumatic experiences during their overall time in the camps. The government had not adequately planned for the camps, and no real budget or plan was set aside for the new camp educational facilities. Camp schoolhouses were crowded and lacked sufficient materials, books, notebooks, and desks for students. Instruction was entirely in English, yet the schools opened without books or supplies, and the state issued only a few books a month after they opened. Wood stoves were used to heat the buildings, and partitions, rather than separate rooms, divided the different kinds of activities. The camps also had no libraries (and consequently no library books), no writing armchairs or desks, and no science equipment. These ' schoolhouses ' were essentially prison blocks that contained few windows. In the Southwest, when temperatures rose and the schoolhouse filled, the rooms were sweltering and unbearable. Class sizes were immense. At the height of its attendance, the Rohwer Camp of Arkansas reached 2,339 students, with only 45 certified teachers. The student - to - teacher ratio in the camps was 48: 1 in elementary schools and 35: 1 in secondary schools, compared to the national average of 28: 1. This was due in part to a general teacher shortage in the US at the time and to the fact that teachers were required to live in the camps ' poor conditions themselves: "There was persistent mud or dust, heat, mosquitoes, poor food and living conditions, inadequate instructional supplies, and a half mile or more walk each day just to and from the school block ''. Despite a threefold salary increase in the internment camps, administrators were still unable to fill all the needed teaching positions with certified personnel, and in the end they hired non-certified detainees to assist the certified teachers.
The rhetorical curriculum of the schools was based mostly on the study of "the democratic ideal and to discover its many implications. '' English compositions from the Jerome and Rohwer camps in Arkansas focused on these ' American ideals ', and many of the compositions pertained to the camps. Responses were varied: schoolchildren of the Topaz camp were patriotic and believed in the war effort, but could not ignore the fact of their incarceration. To build patriotism, the Japanese language was banned in the camps, forcing the children to learn English and then go home and teach their parents.
Although life in the camps was very difficult, Japanese Americans formed many different sports teams, including baseball and football teams. In January 1942, President Franklin D. Roosevelt issued what came to be known as the "Green Light Letter, '' to MLB Commissioner Kenesaw Mountain Landis, which urged him to continue playing Major League Baseball games despite the ongoing war. In it Roosevelt said that "baseball provides a recreation, '' and this was true for Japanese American incarcerees as well. Over 100 baseball teams were formed in the Manzanar camp so that Japanese Americans could have some recreation, and some of the team names were carry - overs from teams formed before the incarceration.
Both men and women participated in the sports. In some cases, the Japanese American baseball teams from the camps traveled to outside communities to play other teams. Incarcerees from Idaho competed in the state tournament in 1943, and there were games between the prison guards and the Japanese American teams. Branch Rickey, who would be responsible for bringing Jackie Robinson into Major League Baseball in 1947, sent a letter to all of the WRA camps expressing interest in scouting some of the Nisei players. In the fall of 1943, three players tried out for the Brooklyn Dodgers in front of MLB scout George Sisler; however, none of them made the team.
Although most Nisei college students followed their families into camp, a small number tried to arrange for transfers to schools outside the exclusion zone in order to continue their education. Their initial efforts expanded as sympathetic college administrators and the American Friends Service Committee began to coordinate a larger student relocation program. The Friends petitioned WRA Director Milton Eisenhower to place college students in Eastern and Midwestern academic institutions. The National Japanese American Student Relocation Council was formed on May 29, 1942, and the AFSC administered the program. By September 1942, after the initial roundup of Japanese Americans, 250 students from assembly centers and WRA camps were back at school. Their tuition, book costs and living expenses were absorbed by the U.S. government, private foundations and church scholarships, in addition to significant fundraising efforts led by Issei parents in camp. Outside camp, the students took on the role of "ambassadors of good will, '' and the NJASRC and WRA promoted this image to soften anti-Japanese prejudice and prepare the public for the resettlement of Japanese Americans in their communities. At Earlham College, President William Dennis helped institute a program that enrolled several dozen Japanese - American students in order to spare them from incarceration. While this action was controversial in Richmond, Indiana, it helped strengthen the college 's ties to Japan and the Japanese - American community. At Oberlin College, about 40 evacuated Nisei students were enrolled. One of them, Kenji Okuda, was elected as student council president. In total, over 600 institutions east of the exclusion zone opened their doors to more than 4,000 college - age youth who had been placed behind barbed wire, many of whom were enrolled in West Coast schools prior to their removal. The NJASRC ceased operations on June 7, 1946.
In early 1943, War Relocation Authority officials, working with the War Department and the Office of Naval Intelligence, circulated a questionnaire in an attempt to determine the loyalty of incarcerated Nisei men they hoped to recruit into military service. The "Statement of United States Citizen of Japanese Ancestry '' was initially given only to Nisei who were eligible for service (or would have been, but for the 4 - C classification imposed on them at the start of the war). Authorities soon revised the questionnaire and required all adults in camp to complete the form. Most of the 28 questions were designed to assess the "Americanness '' of the respondent -- had they been educated in Japan or the U.S.? were they Buddhist or Christian? did they practice judo or play on a baseball team? The final two questions on the form, which soon came to be known as the "loyalty questionnaire, '' were more direct:
Question 27: Are you willing to serve in the armed forces of the United States on combat duty, wherever ordered?
Question 28: Will you swear unqualified allegiance to the United States of America and faithfully defend the United States from any and all attack by foreign or domestic forces, and forswear any form of allegiance or obedience to the Japanese emperor, or other foreign government, power or organization?
Across the camps, persons who answered No to both questions became known as "No Nos. ''
While most camp inmates simply answered "yes '' to both questions, several thousand -- 17 percent of the total respondents, 20 percent of the Nisei -- gave negative or qualified replies out of confusion, fear or anger at the wording and implications of the questionnaire. In regard to Question 27, many worried that expressing a willingness to serve would be equated with volunteering for combat, while others felt insulted at being asked to risk their lives for a country that had imprisoned them and their families. An affirmative answer to Question 28 brought up other issues. Some believed that renouncing their loyalty to Japan would suggest that they had at some point been loyal to Japan and disloyal to the United States. Many believed they were to be deported to Japan no matter how they answered; they feared an explicit disavowal of the Emperor would become known and make such resettlement extremely difficult.
On July 15, 1943, Tule Lake, the site with the highest number of "no '' responses to the questionnaire, was designated to house inmates whose answers suggested they were "disloyal ''. During the remainder of 1943 and into early 1944, more than 12,000 men, women and children were transferred from other camps to the maximum - security Tule Lake Segregation Center.
After these insults, the government passed the Renunciation Act of 1944, a law that made it possible for Nisei and Kibei to renounce their American citizenship. A total of 5,589 internees opted to do so; 5,461 of these were sent to Tule Lake. Of those who renounced US citizenship, 1,327 were repatriated to Japan. Those persons who stayed in the US faced discrimination from the Japanese - American community, both during and after the war, for having made that choice of renunciation. At the time, they had feared what their futures would hold were they to remain American and remain interned.
These renunciations of American citizenship have been highly controversial, for a number of reasons. Some apologists for internment have cited the renunciations as evidence that "disloyalty '' or anti-Americanism was well represented among the interned peoples, thereby justifying the internment. Many historians have dismissed the latter argument, for its failure to consider that the small number of individuals in question had been mistreated and persecuted by their own government at the time of the "renunciation '':
(T) he renunciations had little to do with "loyalty '' or "disloyalty '' to the United States, but were instead the result of a series of complex conditions and factors that were beyond the control of those involved. Prior to discarding citizenship, most or all of the renunciants had experienced the following misfortunes: forced removal from homes; loss of jobs; government and public assumption of disloyalty to the land of their birth based on race alone; and incarceration in a "segregation center '' for "disloyal '' ISSEI or NISEI...
Minoru Kiyota, who was among those who renounced his citizenship and soon came to regret the decision, has said that he wanted only "to express my fury toward the government of the United States, '' for his internment and for the mental and physical duress, as well as the intimidation, he was made to face.
(M) y renunciation had been an expression of momentary emotional defiance in reaction to years of persecution suffered by myself and other Japanese Americans and, in particular, to the degrading interrogation by the FBI agent at Topaz and being terrorized by the guards and gangs at Tule Lake.
Civil rights attorney Wayne M. Collins successfully challenged most of these renunciations as invalid, owing to the conditions of duress and intimidation under which the government obtained them. Many of the deportees were Issei (first generation) or Kibei, who often had difficulty with English and often did not understand the questions they were asked. Even among those Issei who had a clear understanding, Question 28 posed an awkward dilemma: Japanese immigrants were denied U.S. citizenship at the time, so when asked to renounce their Japanese citizenship, answering "Yes '' would have made them stateless persons.
When the government began seeking army volunteers from among the camps, only 6 % of military - aged male inmates volunteered to serve in the U.S. Armed Forces. Most of those who refused tempered that refusal with statements of willingness to fight if their rights as American citizens were restored. Eventually 20,000 Japanese - American men and many Japanese - American women served in the U.S. Army during World War II.
The 442nd Regimental Combat Team, which fought in Europe, was formed from those Japanese Americans who agreed to serve. This unit was the most highly decorated U.S. military unit of its size and duration. The 442nd 's Nisei segregated field artillery battalion, then on detached service within the U.S. Army in Bavaria, liberated at least one of the satellite labor camps of the Nazis ' original concentration camp at Dachau on April 29, 1945, and only days later, on May 2, halted a death march in southern Bavaria.
Many Nisei worked to prove themselves as loyal American citizens. Of the 20,000 Japanese Americans who served in the Army during World War II, "many Japanese - American soldiers had gone to war to fight racism at home '' and they were "proving with their blood, their limbs, and their bodies that they were truly American ''. Nor was it only men: some one hundred Nisei women volunteered for the WAC (Women 's Army Corps), where, after undergoing rigorous basic training, they had assignments as typists, clerks, and drivers. Satoshi Ito, an internment camp internee, reinforces the idea of the immigrants ' children striving to demonstrate their patriotism to the United States. He notes that his mother would tell him, "' you 're here in the United States, you need to do well in school, you need to prepare yourself to get a good job when you get out into the larger society ' ''. He said she would tell him, "' do n't be a dumb farmer like me, like us ' '' to encourage Ito to successfully assimilate into American society. As a result, he worked exceptionally hard to excel in school and later became a professor at the College of William & Mary. His story, along with those of the countless Japanese Americans willing to risk their lives in war, demonstrates the lengths to which many in their community went to prove their American patriotism.
As early as 1939, when war broke out in Europe and while armed conflict began to rage in East Asia, the FBI and branches of the Department of Justice and the armed forces began to collect information and surveil influential members of the Japanese community in the United States. These data were included in the Custodial Detention index (CDI). Agents in the Department of Justice 's Special Defense Unit classified the subjects into three groups: A, B and C, with A being "most dangerous, '' and C being "possibly dangerous. ''
After the Pearl Harbor attack, Roosevelt authorized his attorney general to put into motion a plan for the arrest of individuals on the potential enemy alien lists. Armed with a blanket arrest warrant, the FBI seized these men on the eve of December 8, 1941. They were held in municipal jails and prisons until they were moved to Department of Justice detention camps, separate from those of the Wartime Relocation Authority (WRA). These camps operated under far more stringent conditions and were subject to heightened, criminal - style guarding, despite the absence of criminal proceedings.
Crystal City, Texas, was one such camp where Japanese Americans, German Americans, Italian Americans, and a large number of U.S. - seized, Axis - descended nationals from several Latin - American countries were interned.
The Canadian government also confined citizens with Japanese ancestry during World War II (see Japanese Canadian internment), for much the same reasons of fear and prejudice. Some Latin American countries of the Pacific Coast, such as Peru, interned ethnic Japanese or sent them to the United States for internment. Brazil also restricted its Japanese Brazilian population.
Although Japanese Americans in Hawaii comprised more than one third of the population, businessmen resisted their being interned or deported to mainland concentration camps, as they recognized their contributions to the economy. In the hysteria of the time, some mainland Congressmen (Hawaii was only a U.S. territory at the time, and did not have a voting representative or senator in Congress) pushed for all Japanese Americans and Japanese immigrants to be removed from Hawaii, but they were unsuccessful. An estimated 1,200 to 1,800 Japanese nationals and American - born Japanese from Hawaii were interned, either in five camps on the islands or in one of the mainland internment camps, but this represented well under two percent of the total Japanese American residents in the islands. "No serious explanations were offered as to why... the internment of individuals of Japanese descent was necessary on the mainland, but not in Hawaii, where the large Japanese - Hawaiian population went largely unmolested. ''
The vast majority of Japanese Americans and their immigrant parents in Hawaii were not interned because the government had already declared martial law in Hawaii and this allowed it to significantly reduce the supposed risk of espionage and sabotage by residents of Japanese ancestry. Also, Japanese Americans comprised over 35 % of the territory 's population, with 157,905 of Hawaii 's 423,330 inhabitants at the time of the 1940 census, making them the largest ethnic group at that time; detaining so many people would have been enormously challenging in terms of logistics. Additionally, the whole of Hawaiian society was dependent on their productivity. According to intelligence reports at the time, "the Japanese, through a concentration of effort in select industries, had achieved a virtual stranglehold on several key sectors of the economy in Hawaii, '' and they "had access to virtually all jobs in the economy, including high - status, high - paying jobs (e.g., professional and managerial jobs). '' To imprison such a large percentage of the islands ' work force would have crippled the Hawaiian economy. Thus, the unfounded fear of Japanese Americans turning against the United States was overcome by the reality - based fear of massive economic loss.
Lieutenant General Delos C. Emmons, commander of the Hawaii Department, promised the local Japanese - American community that they would be treated fairly so long as they remained loyal to the United States. He succeeded in blocking efforts to relocate them to the outer islands or mainland by pointing out the logistical difficulties. Among the small number interned were community leaders and prominent politicians, including territorial legislators Thomas Sakakihara and Sanji Abe.
A total of five internment camps operated in the territory of Hawaii, referred to as the "Hawaiian Island Detention Camps ''. One camp was located at Sand Island at the mouth of Honolulu Harbor. This camp was prepared in advance of the war 's outbreak. All prisoners held here were "detained under military custody... because of the imposition of martial law throughout the Islands ''. Another Hawaiian camp was the Honouliuli Internment Camp, near Ewa, on the southwestern shore of Oahu; it was opened in 1943 to replace the Sand Island camp. Another was located on the island of Maui in the town of Haiku, in addition to the Kilauea Detention Center on Hawaii and Camp Kalaheo on Kauai.
During World War II, over 2,200 Japanese from Latin America were held in internment camps run by the Immigration and Naturalization Service, part of the Department of Justice. Beginning in 1942, Latin Americans of Japanese ancestry were rounded up and transported to American internment camps run by the INS and the U.S. Justice Department. The majority of these internees, approximately 1,800, came from Peru. An additional 250 were from Panama, Bolivia, Colombia, Costa Rica, Cuba, Ecuador, El Salvador, Mexico, Nicaragua, and Venezuela.
The first group of Japanese Latin Americans arrived in San Francisco on April 20, 1942, on board the Etolin along with 360 ethnic Germans and 14 ethnic Italians from Peru, Ecuador and Colombia. The 151 men -- ten from Ecuador, the rest from Peru -- had volunteered for deportation believing they were to be repatriated to Japan. They were denied visas by U.S. Immigration authorities and then detained on the grounds they had tried to enter the country illegally, without a visa or passport. Subsequent transports brought additional "volunteers, '' including the wives and children of men who had been deported earlier. A total of 2,264 Japanese Latin Americans, about two - thirds of them from Peru, were interned in facilities on the U.S. mainland during the war.
The United States originally intended to trade these Latin American internees as part of a hostage exchange program with Japan and other Axis nations. A thorough examination of the documents shows at least one trade occurred. Over 1,300 persons of Japanese ancestry were exchanged for a like number of non-official Americans in October 1943, at the port of Marmagao, India. Over half were Japanese Latin Americans (the rest being ethnic Germans and Italians) and of that number one - third were Japanese Peruvians.
On September 2, 1943, the Swedish ship MS Gripsholm departed the U.S. with just over 1,300 Japanese nationals (including nearly a hundred from Canada and Mexico) en route for the exchange location, Marmagao, the main port of the Portuguese colony of Goa on the west coast of India. After two more stops in South America to take on additional Japanese nationals, the passenger manifest reached 1,340. Of that number, Latin American Japanese numbered 55 percent of the Gripsholm 's travelers, 30 percent of whom were Japanese Peruvian. Arriving in Marmagao on October 16, 1943, the Gripsholm 's passengers disembarked and then boarded the Japanese ship Teia Maru. In return, "non-official '' Americans (secretaries, butlers, cooks, embassy staff workers, etc.) previously held by the Japanese Army boarded the Gripsholm while the Teia Maru headed for Tokyo. Because this exchange was done with those of Japanese ancestry officially described as "volunteering '' to return to Japan, no legal challenges were encountered. The U.S. Department of State was pleased with the first trade and immediately began to arrange a second exchange of non-officials for February 1944. This exchange would involve 1,500 non-volunteer Japanese who were to be exchanged for 1,500 Americans. The US was busy with Pacific Naval activity and future trading plans stalled. Further slowing the program were legal and political "turf '' battles between the State Department, the Roosevelt administration, and the DOJ, whose officials were not convinced of the legality of the program.
The completed October 1943 trade took place at the height of the Enemy Alien Deportation Program. Japanese Peruvians were still being "rounded up '' for shipment to the U.S. in previously unseen numbers. Despite logistical challenges facing the floundering prisoner exchange program, deportation plans were moving ahead. This is partly explained by an early - in - the - war revelation of the overall goal for Latin Americans of Japanese ancestry under the Enemy Alien Deportation Program. The goal: that the hemisphere was to be free of Japanese. Secretary of State Cordell Hull wrote to an agreeing President Roosevelt that the US must "continue our efforts to remove all the Japanese from these American Republics for internment in the United States. ''
"Native '' Peruvians expressed extreme animosity toward their Japanese citizens and expatriates, and Peru refused to accept the post-war return of Japanese Peruvians from the US. Although a small number asserting special circumstances, such as marriage to a non-Japanese Peruvian, did return, the majority were trapped. Their home country refused to take them back (a political stance Peru would maintain until 1950), they were generally Spanish speakers in the Anglo US, and in the postwar U.S., the Department of State started expatriating them to Japan. ACLU lawyer Wayne Collins filed injunctions on behalf of the remaining internees, helping them obtain "parole '' relocation to the labor - starved Seabrook Farms in New Jersey. He started a legal battle that would not be resolved until 1953, when, after working as undocumented immigrants for almost ten years, those Japanese Peruvians remaining in the U.S. were finally offered citizenship.
On December 18, 1944, the Supreme Court handed down two decisions on the legality of the incarceration under Executive Order 9066. Korematsu v. United States, a 6 -- 3 decision upholding a Nisei 's conviction for violating the military exclusion order, stated that, in general, the removal of Japanese Americans from the West Coast was constitutional. However, Ex parte Endo unanimously declared that loyal citizens of the United States, regardless of cultural descent, could not be detained without cause. In effect, the two rulings held that, while the eviction of U.S. citizens in the name of military necessity was legal, the subsequent incarceration was not -- thus paving the way for their release.
Although WRA Director Dillon Myer and others had pushed for an earlier end to the incarceration, the exclusion order was not rescinded until January 2, 1945 (postponed until after the November 1944 election, so as not to impede Roosevelt 's reelection campaign). Many younger internees had already "resettled '' in Midwest or Eastern cities to pursue work or educational opportunities. (For example, 20,000 were sent to Lake View, Chicago.) The remaining population began to leave the camps to try to rebuild their lives at home. Former inmates were given $25 and a train ticket to their pre-war places of residence, but many had little or nothing to return to, having lost their homes and businesses. Some emigrated to Japan, although many of these individuals were "repatriated '' against their will. The camps remained open for residents who were not ready to return (mostly elderly Issei and families with young children), but the WRA pressured stragglers to leave by gradually eliminating services in camp. Those who had not left by each camp 's close date were forcibly removed and sent back to the West Coast.
Nine of the ten WRA camps were shut down by the end of 1945, although Tule Lake, which held "renunciants '' slated for deportation to Japan, was not closed until March 20, 1946. Japanese Latin Americans brought to the U.S. from Peru and other countries, who were still being held in the DOJ camps at Santa Fe and Crystal City, took legal action in April 1946 in an attempt to avoid deportation to Japan.
Following recognition of the injustices done to the Japanese Americans, in 1992 Manzanar camp was designated a National Historic Site to "provide for the protection and interpretation of historic, cultural, and natural resources associated with the relocation of Japanese Americans during World War II '' (Public Law 102 - 248). In 2001, the site of the Minidoka War Relocation Center in Idaho was designated the Minidoka National Historic Site.
Many internees lost irreplaceable personal property due to restrictions that prohibited them from taking more than they could carry into the camps. These losses were compounded by theft and destruction of items placed in governmental storage. Leading up to their incarceration, Nikkei were prohibited from leaving the Military Zones or traveling more than 5 miles (8.0 km) from home, forcing those who had to travel for work, like truck farmers and residents of rural towns, to quit their jobs. Many others were simply fired for their "Jap '' heritage.
Alien land laws in the West Coast states barred the Issei from owning their pre-war homes and farms. Many had cultivated land for decades as tenant farmers, but they lost their rights to farm those lands when they were forced to leave. Other Issei (and Nisei who were renting or had not completed payments on their property) had found families willing to occupy their homes or tend their farms during their incarceration. However, those unable to strike a deal with caretakers had to sell their property, often in a matter of days and at great financial loss, to predatory land speculators who made huge profits.
In addition to these monetary and property losses, a number of persons died or suffered for lack of medical care in camp. Seven were shot and killed by sentries: Kanesaburo Oshima, 58, during an escape attempt from Fort Sill, Oklahoma; Toshio Kobata, 58, and Hirota Isomura, 59, during transfer to Lordsburg, New Mexico; James Ito, 17, and Katsuji James Kanegawa, 21, during the December 1942 Manzanar Riot; James Hatsuaki Wakasa, 65, while walking near the perimeter wire of Topaz; and Shoichi James Okamoto, 30, during a verbal altercation with a sentry at the Tule Lake Segregation Center.
Psychological injury was observed by Dillon S. Myer, director of the WRA camps. In June 1945, Myer described how the Japanese Americans had grown increasingly depressed, and overcome with feelings of helplessness and personal insecurity. Author Betty Furuta explains that the Japanese used gaman, loosely meaning "perseverance '', to overcome hardships; this was mistaken by non-Japanese as being introverted and lacking initiative.
Japanese Americans also encountered hostility and even violence when they returned to the West Coast. Concentrated largely in rural areas of Central California, there were dozens of reports of gun shots, fires, and explosions aimed at Japanese American homes, businesses and places of worship, in addition to non-violent crimes like vandalism and the defacing of Japanese graves. In one of the only cases to go to trial, four men were accused of attacking the Doi family of Placer County, California, setting off an explosion and starting a fire on the family 's farm in January 1945. Despite a confession from one of the men that implicated the others, the jury accepted their defense attorney 's framing of the attack as a justifiable attempt to keep California "a white man 's country '' and acquitted all four defendants.
To compensate former internees for their property losses, the US Congress, on July 2, 1948, passed the "American Japanese Claims Act, '' allowing Japanese Americans to apply for compensation for property losses which occurred as "a reasonable and natural consequence of the evacuation or exclusion. '' By the time the Act was passed, the IRS had already destroyed most of the internees ' 1939 -- 42 tax records. Due to the time pressure and strict limits on how much they could take to the camps, few were able to preserve detailed tax and financial records during the evacuation process. Therefore, it was extremely difficult for claimants to establish that their claims were valid. Under the Act, Japanese American families filed 26,568 claims totaling $148 million in requests; about $37 million was approved and disbursed.
The different placement for the interned had significant consequences for their lifetime outcomes. A 2016 study finds, using the random dispersal of internees into camps in seven different states, that the people assigned to richer locations did better in terms of income, education, socioeconomic status, house prices, and housing quality roughly half a century later.
Beginning in the 1960s, a younger generation of Japanese Americans, inspired by the Civil Rights Movement, began what is known as the "Redress Movement, '' an effort to obtain an official apology and reparations from the federal government for incarcerating their parents and grandparents during the war. They focused not on documented property losses but on the broader injustice and mental suffering caused by the internment. The movement 's first success was in 1976, when President Gerald Ford proclaimed that the internment was "wrong, '' and a "national mistake '' which "shall never again be repeated ''.
The campaign for redress was launched by Japanese Americans in 1978. The Japanese American Citizens League (JACL), which had cooperated with the administration during the war, became part of the movement. It asked for three measures: $25,000 to be awarded to each person who was detained, an apology from Congress acknowledging publicly that the U.S. government had been wrong, and the release of funds to set up an educational foundation for the children of Japanese - American families.
In 1980, Congress established the Commission on Wartime Relocation and Internment of Civilians (CWRIC) to study the matter. On February 24, 1983, the commission issued a report entitled Personal Justice Denied, condemning the internment as unjust and motivated by racism and xenophobic ideas rather than factual military necessity. Internment camp survivors sued the federal government for $24 million in property loss, but lost the case. However, the Commission recommended that $20,000 in reparations be paid to those Japanese Americans who had suffered internment.
The Civil Liberties Act of 1988 was the culmination of the Japanese American redress movement and the subject of a larger debate over the reparations bill. Its passage during the 1980s was uncertain, given that the federal budget deficit was at an all - time high and that Japanese Americans made up only about 1 % of the United States population. However, with the support of Democratic congressman Barney Frank and four influential Japanese American Democrats and Republicans in Congress who had war experience, the bill was made a top priority and passed Congress.
In 1988, U.S. President Ronald Reagan signed the Civil Liberties Act of 1988, which had been sponsored by Representative Norman Mineta and Senator Alan K. Simpson, who had met while Mineta was interned at Heart Mountain Relocation Center in Wyoming. It provided financial redress of $20,000 for each surviving detainee, totaling $1.2 billion. The question of to whom reparations should be given, how much, and even whether monetary reparations were appropriate were subjects of sometimes contentious debate within the Japanese American community and Congress.
On September 27, 1992, the Civil Liberties Act Amendments of 1992, appropriating an additional $400 million to ensure all remaining internees received their $20,000 redress payments, was signed into law by President George H.W. Bush. He issued another formal apology from the U.S. government on December 7, 1991, on the 50th anniversary of the Pearl Harbor attack, saying:
"In remembering, it is important to come to grips with the past. No nation can fully understand itself or find its place in the world if it does not look with clear eyes at all the glories and disgraces of its past. We in the United States acknowledge such an injustice in our history. The internment of Americans of Japanese ancestry was a great injustice, and it will never be repeated. ''
Over 81,800 people were qualified by 1998 and $1.6 billion was distributed among them.
Under the 2001 budget of the United States, Congress authorized that the ten detention sites are to be preserved as historical landmarks: "places like Manzanar, Tule Lake, Heart Mountain, Topaz, Amache, Jerome, and Rohwer will forever stand as reminders that this nation failed in its most sacred duty to protect its citizens against prejudice, greed, and political expediency ''.
On January 30, 2011, California first observed an annual "Fred Korematsu Day of Civil Liberties and the Constitution '', the first such commemoration for an Asian American in the U.S. On June 14, 2011, Peruvian president Alan García apologized for his country 's internment of Japanese immigrants during World War II, most of whom were transferred to the United States.
Several significant legal decisions arose out of Japanese - American internment, relating to the powers of the government to detain citizens in wartime. Among the cases which reached the US Supreme Court were Ozawa v. United States (1922), Yasui v. United States (1943), Hirabayashi v. United States (1943), ex parte Endo (1944), and Korematsu v. United States (1944). In Ozawa, the court established that peoples defined as ' white ' were specifically of Caucasian descent; in Yasui and Hirabayashi, the court upheld the constitutionality of curfews based on Japanese ancestry; in Korematsu, the court upheld the constitutionality of the exclusion order. In Endo, the court accepted a petition for a writ of habeas corpus and ruled that the WRA had no authority to subject a loyal citizen to its procedures.
Korematsu 's and Hirabayashi 's convictions were vacated in a series of coram nobis cases in the early 1980s. In the coram nobis cases, federal district and appellate courts ruled that newly uncovered evidence revealed an unfairness which, had it been known at the time, would likely have changed the Supreme Court 's decisions in the Yasui, Hirabayashi, and Korematsu cases.
These new court decisions rested on a series of documents recovered from the National Archives showing that the government had altered, suppressed, and withheld important and relevant information from the Supreme Court, including the Final Report by General DeWitt justifying the internment program. The Army had destroyed documents in an effort to hide alterations that had been made to the report to reduce its racist content. The coram nobis cases vacated the convictions of Korematsu and Hirabayashi (Yasui died before his case was heard, rendering it moot), and are regarded as part of the impetus to gain passage of the Civil Liberties Act of 1988.
The rulings of the US Supreme Court in the Korematsu and Hirabayashi cases, specifically in its expansive interpretation of government powers in wartime, have yet to be overturned. They are still the law of the land because a lower court can not overturn a ruling by the US Supreme Court. The coram nobis cases totally undermined the factual underpinnings of the 1944 cases, leaving the original decisions without much logical basis. As these 1944 decisions prevail, a number of legal scholars have expressed the opinion that the original Korematsu and Hirabayashi decisions have taken on renewed relevance in the context of the War on Terror.
Former Supreme Court Justice Tom C. Clark, who represented the US Department of Justice in the "relocation, '' writes in the epilogue to the 1992 book Executive Order 9066: The Internment of 110,000 Japanese Americans:
The truth is -- as this deplorable experience proves -- that constitutions and laws are not sufficient of themselves... Despite the unequivocal language of the Constitution of the United States that the writ of habeas corpus shall not be suspended, and despite the Fifth Amendment 's command that no person shall be deprived of life, liberty or property without due process of law, both of these constitutional safeguards were denied by military action under Executive Order 9066.
Since the end of World War II, there has been debate over the terminology used to refer to camps in which Americans of Japanese ancestry and their immigrant parents, were incarcerated by the United States Government during the war. These camps have been referred to as "War Relocation Centers, '' "relocation camps, '' "relocation centers, '' "internment camps, '' and "concentration camps, '' and the controversy over which term is the most accurate and appropriate continues.
James Hirabayashi, professor emeritus and former dean of ethnic studies at San Francisco State University, wrote an article in 1994 in which he stated that he wonders why euphemistic terms are still used to describe these camps.
In 1998, use of the term "concentration camps '' gained greater credibility prior to the opening of an exhibit about the American camps at Ellis Island. Initially, the American Jewish Committee (AJC) and the National Park Service, which manages Ellis Island, objected to the use of the term in the exhibit. However, during a subsequent meeting held at the offices of the AJC in New York City, leaders representing Japanese Americans and Jewish Americans reached an understanding about the use of the term.
The New York Times published an editorial supporting the use of "concentration camp '' in the exhibit. An article quoted Jonathan Mark, a columnist for The Jewish Week, who wrote, "Can no one else speak of slavery, gas, trains, camps? It 's Jewish malpractice to monopolize pain and minimize victims. '' AJC Executive Director David A. Harris stated during the controversy, "We have not claimed Jewish exclusivity for the term ' concentration camps. ' ''
The internment of Japanese Americans has been compared to the persecutions, expulsions, and dislocations of other ethnic minorities during World War II in Europe and Asia.
The Smithsonian Institution 's National Museum of American History has more than 800 artifacts from its A More Perfect Union collection available online. Archival photography, publications, original manuscripts, artworks, and handmade objects comprise the collection of items related to the Japanese American experience.
On October 1, 1987, the Smithsonian Institution National Museum of American History opened an exhibition called, "A More Perfect Union: Japanese Americans and the U.S. Constitution. '' The exhibition examined the Constitutional process by considering the experiences of Americans of Japanese ancestry before, during, and after World War II. On view were more than 1,000 artifacts and photographs relating to the experiences of Japanese Americans during World War II. The exhibition closed on January 11, 2004. On November 8, 2011, the National Museum of American History launched an online exhibition of the same name with shared content.
The elementary school at Poston Camp Unit 1, the only surviving school complex at one of the camps and the only major surviving element of the Poston camp, was designated a National Historic Landmark District in 2012.
On April 16, 2013, the Japanese American Internment Museum was opened in McGehee, Arkansas regarding the history of two internment camps.
In January 2015, the Topaz Museum opened in Delta, Utah. Its stated mission is "to preserve the Topaz site and the history of the internment experience during World War II; to interpret its impact on the internees, their families, and the citizens of Millard County; and to educate the public in order to prevent a recurrence of a similar denial of American civil rights ''.
On June 29, 2017, in Chicago, Illinois, the Alphawood Gallery, in partnership with the Japanese American Service Committee, opened "Then They Came for Me, '' the largest exhibition on Japanese American incarceration and postwar resettlement ever to open in the Midwest. This exhibit is scheduled to run until November 19, 2017.
This article incorporates public domain material from the National Archives and Records Administration website https://www.archives.gov/.
|
who won australia's first winter olympic gold medal | Steven Bradbury - Wikipedia
Steven John Bradbury OAM (born 14 October 1973) is an Australian former short track speed skater and four - time Olympian. He is best known for winning the 1,000 m event at the 2002 Winter Olympics after all of his opponents were involved in a last corner pile - up. He was the first Australian to win a Winter Olympic gold medal and was also part of the short track relay team that won Australia 's first Winter Olympic medal, a bronze in 1994.
In 1991, Bradbury was part of the Australian quartet that won the 5,000 m relay at the World Championships in Sydney. It was the first time Australia had won a World Championship in a winter sport.
Australia 's short track relay team went into the 1992 Winter Olympics as world champions, but the team crashed in the semi-finals. The Australians were in third place when Richard Nizielski lost his footing; they finished fourth and failed to reach the final. Bradbury was unable to help, as he had been named as the reserve for the team and was sitting on the bench. He was not selected for any individual events.
At the 1994 Winter Olympics in Norway, Bradbury was part of the short track relay team that won Australia 's first Winter Olympic medal, a bronze. They scraped into the four - team final after edging out Japan and New Zealand to finish second in their semi-final. They adopted a plan of staying on their feet as first priority, and remaining undisqualified and beating at least one of the other three finalists. During the race, the Canadians fell and lost significant time, meaning that Australia would win their first medal if they raced conservatively and avoided a crash. Late in the race, Nizielski was fighting with his American counterpart for track position for the silver medal, but took the safe option and yielded, mindful of the lost opportunity following the crash in Albertville. Thus Bradbury, Nizielski, Andrew Murtha and Kieran Hansen became Australia 's first Winter Olympics medallists.
Bradbury was also entered in the 500 m and 1,000 m individual events and was the favourite going into the latter. In the first event, Bradbury came second in his heat in a time of 45.43 s and then won his quarterfinal in a time of 44.18 s to qualify for the semifinal. In the semifinal, Bradbury was knocked over by a rival and he limped home fourth, in a time of 1 m 03.51 s and was eliminated. He came fourth in the B final and was classified eighth overall out of 31 competitors. In the 1,000 m event, Bradbury fell in his heat after being illegally pushed by a competitor who was later disqualified. He came home in 2 m 01.89 s, more than 30 s off the leaders ' pace and was eliminated. Nevertheless, because of the high rate of accidents, Bradbury came 24th out of 31 competitors.
During a 1994 World Cup event in Montreal, another skater 's blade sliced through Bradbury 's right thigh after a collision; it cut through to the other side and he lost four litres of blood. Bradbury 's heart rate had been up near 200 at the end of the race and this meant that blood was being pumped out at a fast pace. All four of his quadriceps muscles had been sliced through and Bradbury thought that if he lost consciousness, he would die. He needed 111 stitches and could not move on ice for three weeks. His leg needed 18 months before it was back to full strength.
Bradbury, Nizielski and Kieran Hansen, three of the quartet that won Australia 's maiden medal in 1994, returned for the 1998 Winter Olympics in Japan with new teammate Richard Goerlitz. There were hopes that they could repeat their Lillehammer performance. However, in their qualifying race, they placed third in a time of 7 m 11.691 s and missed the final by one place, even though they had been two seconds faster than their medal - winning performance of 1994. They completed the course four seconds slower in the B final and came last in the race, and thus last out of eight teams overall.
Bradbury was again regarded as a medal contender in the individual events, but was impeded in collisions with other racers in both the 500 m and 1,000 m events. He came third in the heats of both races, posting times of 43.766 s and 1 m 33.108 s in each race. Neither of these times were fast enough to advance him to the quarterfinals and he came 19th and 21st out of 30 competitors respectively.
In September 2000 Bradbury broke his neck in a training accident. Another skater fell in front of him and Bradbury tried to jump over him, but instead clipped him and tripped head first into the barriers. As a result, Bradbury fractured his C4 and C5 vertebrae. He spent a month and a half in a halo brace, and needed four pins to be inserted in his skull and screws and plates bolted into his back and chest. Doctors told Bradbury that he would not be able to take to the ice again, but he was determined to reach another Olympics. He wanted redemption after the crashes in the individual races in 1994 and 1998, even though he conceded that he would be past his best in terms of challenging for the medals.
Bradbury is best known for his memorable and unlikely gold medal win in the men 's short track 1000 metres event at the Salt Lake City 2002 Winter Olympic Games, owing to three improbable events.
Bradbury won his heat convincingly in the 1,000 m, posting a time of 1: 30.956. However, it appeared that his run would end when the draw for the quarter - finals was made: Bradbury was allocated to the same race as Apolo Anton Ohno, the favourite from the host nation, and Marc Gagnon of Canada, the defending world champion. Only the top two finishers from each race would proceed to the semifinals. Bradbury finished third in his race and thought himself to be eliminated, but Gagnon was disqualified for obstructing another racer, allowing the Australian to advance to the semi-finals.
After consulting his national coach Ann Zhang, Bradbury 's strategy from the semi-final onwards was to cruise behind his opponents and hope that they crashed, as he realised he was slower and could not match their raw pace. His reasoning was that risk - taking by the favourites could cause a collision due to a racing incident, and if two or more skaters fell, the remaining three would all get medals, and that as he was slower than his opponents, trying to challenge them directly would only increase his own chances of falling. Bradbury said that he was satisfied with his result, and felt that as the second - oldest competitor in the field, he was not able to match his opponents in four races on the same night.
In his semi-final race, Bradbury was in last place, well off the pace of the medal favourites. However, three of the other competitors in the semi-final -- defending champion Kim Dong - sung of South Korea, multiple Olympic medallist Li Jiajun of China and Mathieu Turcotte of Canada -- crashed, paving the way for the Australian to take first place and advancing him through to the final.
In the final, Bradbury was again well off the pace when all four of his competitors (Ohno, Ahn Hyun - Soo, Li and Turcotte) crashed out at the final corner while jostling for the gold medal. This allowed the Australian, who was around 15 m behind with only 50 m to go, to avoid the pile - up and take the victory. Bradbury raised his arms aloft in complete disbelief and amazement at the unlikely circumstances of his victory. A shocked Bradbury became the first person from any southern hemisphere country to win a Winter Olympic event. After a period of delay, the judges upheld the result and did not order a re-race, confirming Bradbury 's victory.
In an interview after winning his gold, referring to his two career - and life - threatening accidents, Bradbury said "Obviously I was n't the fastest skater. I do n't think I 'll take the medal as the minute - and - a-half of the race I actually won. I 'll take it as the last decade of the hard slog I put in. ''
Bradbury was acutely aware of the possibility of collisions after his semi-final race. In an interview after the race he said:
I was the oldest bloke in the field and I knew that, skating four races back to back, I was n't going to have any petrol left in the tank. So there was no point in getting there and mixing it up because I was going to be in last place anyway. So (I figured) I might as well stay out of the way and be in last place and hope that some people get tangled up.
He later said that he never expected all of his opponents to fall, but added that he felt that the other four racers were under extreme pressure and might have over-attacked and taken too many risks. Bradbury cited the host nation pressure on Ohno, who was expected to win all four of his events. Li, much like Bradbury himself, had won Olympic medals but was yet to take a gold medal, Turcotte only had one individual event, and Ahn had been the form racer at the Olympics so far. Bradbury felt that none would be willing to settle for less than gold and that as a result, they might collide.
Bradbury had three other events at the 2002 Winter Olympics. In the relay event, the Australians came third in their heat in a time of 7: 19.177 and failed to make the final. They came second in the B final and finished sixth out of seven teams. In the 1,500 m event, Bradbury came third in his heat, before placing fourth in the semi-final and being eliminated. He then came fifth in the B final to finish 10th out of 29 entrants. He was unable to maintain his speed through the competition; after posting a time of 2: 22.632 in the heats, Bradbury slowed by three seconds in each of his next two races. In the 500 m event, Bradbury came second in his heat and was eliminated after coming third in his quarter - final. He finished 14th out of 31 overall.
The unlikely win turned Bradbury into something of a folk hero, comparable to ski jumper Eddie "The Eagle '' Edwards and the 1988 Jamaican bobsleigh team. Many newspapers hailed Bradbury for his unlikely win and used it as an example of the value of an underdog never giving up, regardless of the odds against them. The unusual manner of his victory made news across the world. However, some unhappy American commentators also made fun of the race and used it to criticise what they perceived as a lack of merit required to win a short track event. The USA Today said "The first winter gold medal in the history of Australia fell out of the sky like a bagged goose. He looked like the tortoise behind four hares '', while the Boston Globe said that "multiple crashes that allow the wrong person to win are part of the deal ''.
Bradbury 's feat has entered the Australian colloquial vernacular in the phrase "pulling a Bradbury '', or "Bradburied '' (as a verb) meaning an unexpected or unusual success.
Bradbury 's triumph was celebrated by Australia Post issuing a 45 - cent stamp of him, which followed on from their issuing stamps of Australian gold medallists at the 2000 Sydney Olympics. Bradbury 's stamp was issued on 20 February 2002, four days after his victory. He received $ 20,000 for the use of his image. He said the fee "should get me a car. I have n't had a car for a long time '', and later described having a stamp issued as "a great honour ''. Before the Olympics, Bradbury had needed to borrow $1,000 from his parents to fix his old car in order to go to training. Bradbury was courted for sponsorship after his triumph and was interviewed on many American television shows. Bradbury had previously supported himself by making skating boots in a backyard workshop; his Revolutionary Boot Company supplied Ohno with free boots and Bradbury had asked Ohno to endorse his boots when he won in Salt Lake City, not thinking that he would defeat the American.
Bradbury retired after the 2002 Olympics. He commentated at the 2006 Winter Olympics, and for Channel Nine and Foxtel at the 2010 Winter Olympics. In 2005 Bradbury was a contestant in the second series of the Australian dancing show Dancing with the Stars.
After retiring from skating, Bradbury participated in competitive motor racing. After placing fourth in the 2005 Australian Grand Prix Celebrity Race, he competed in Queensland state - level Formula Vee championship events in 2006 and 2007, placing sixth in both years. In 2007, he raced in the National Formula Vee Championships at Morgan Park Raceway placing 15th.
In 2009, Bradbury competed in the Australian Mini Challenge at the Tasmanian round, and in 2010 at Queensland Raceway as their Uber Star. He also made a one - off appearance in the V8 Ute Series at Adelaide in March 2010, driving with regular Ute racer Jason Gomersall on the support program of the 2010 Clipsal 500.
In 2007, Bradbury was awarded a Medal of the Order of Australia for his Olympic gold medal win. He was also inducted into the Sport Australia Hall of Fame in that year.
In 2009, Bradbury was inducted into the Queensland Sport Hall of Fame.
|
which layer of the epidermis is not found in all types of human skin | Epidermis - wikipedia
The epidermis (ἐπί epi in Greek meaning "over '' or "upon '') is the outer layer of the two layers that make up the skin (or cutis; Greek δέρμα derma), the inner layer being the dermis. This skin layer provides a barrier to infection from environmental pathogens and regulates the amount of water released from the body into the atmosphere through transepidermal water loss (TEWL).
The outermost part of the epidermis is composed of stratified layers of flattened cells, that overlies a basal layer (stratum basale) composed of columnar cells arranged perpendicularly.
The rows of cells develop from the stem cells in the basal layer. ENaCs are found to be expressed in all layers of the epidermis.
The epidermis has no blood supply and is nourished almost exclusively by diffused oxygen from the surrounding air. It is 95 % keratinocytes (proliferating basal and differentiated suprabasal) but also contains melanocytes, Langerhans cells, Merkel cells, and inflammatory cells. Rete ridges (or rete pegs) are epidermal thickenings that extend downward between dermal papillae. Blood capillaries are found beneath the epidermis, and are linked to an arteriole and a venule.
In the epidermis, the cells are tightly interconnected to serve as a tight barrier. The junctions between the epidermal cells is adherens junction type. These junctions are formed by transmembrane proteins called cadherins. Inside the cell, the cadherins are linked to actin filaments. In immunofluorescence microscopy, the actin filament network appears as a thick border surrounding the cells. Yet, as noted, the actin filaments are located inside the cell and run parallel to the cell membrane. Because of the proximity of the neighboring cells and tightness of the junctions, the actin immunofluorescence appears as a border between cells.
The epidermis is composed of 4 or 5 layers, depending on the region of skin being considered. Those layers in descending order are: the cornified layer (stratum corneum); the clear or translucent layer (stratum lucidum, present only in thick skin such as that of the palms and soles); the granular layer (stratum granulosum); the spinous layer (stratum spinosum); and the basal or germinal layer (stratum basale).
The Malpighian layer (stratum malpighi) is both the stratum basale and stratum spinosum.
The epidermis is separated from the dermis, its underlying tissue, by a basement membrane.
The stratified squamous epithelium is maintained by cell division within the stratum basale. Differentiating cells delaminate from the basement membrane and are displaced outwards through the epidermal layers, undergoing multiple stages of differentiation until, in the stratum corneum, they lose their nuclei and fuse into squamous sheets, which are eventually shed from the surface (desquamation). Differentiated keratinocytes secrete keratin proteins, which contribute to the formation of an extracellular matrix and are an integral part of the skin barrier function. In normal skin, the rate of keratinocyte production equals the rate of loss, taking about two weeks for a cell to journey from the stratum basale to the top of the stratum granulosum, and an additional four weeks to cross the stratum corneum. The entire epidermis is replaced by new cell growth over a period of about 48 days.
Keratinocyte differentiation throughout the epidermis is in part mediated by a calcium gradient, increasing from the stratum basale until the outer stratum granulosum, where it reaches its maximum, and decreasing in the stratum corneum. Calcium concentration in the stratum corneum is very low in part because those relatively dry cells are not able to dissolve the ions. This calcium gradient parallels keratinocyte differentiation and as such is considered a key regulator in the formation of the epidermal layers.
Elevation of extracellular calcium concentrations induces an increase in intracellular free calcium concentrations. Part of that intracellular increase comes from calcium released from intracellular stores and another part comes from transmembrane calcium influx, through both calcium - sensitive chloride channels and voltage - independent cation channels permeable to calcium. Moreover, it has been suggested that an extracellular calcium - sensing receptor (CaSR) also contributes to the rise in intracellular calcium concentration.
Epidermal organogenesis, the formation of the epidermis, begins in the cells covering the embryo after neurulation, the formation of the central nervous system. In most vertebrates, this original one - layered structure quickly transforms into a two - layered tissue; a temporary outer layer, the periderm, which is disposed of once the inner basal layer or stratum germinativum has formed.
This inner layer is a germinal epithelium that gives rise to all epidermal cells. It divides to form the outer spinous layer (stratum spinosum). The cells of these two layers, together called the Malpighian layer (s) after Marcello Malpighi, divide to form the superficial granular layer (Stratum granulosum) of the epidermis.
The cells in the stratum granulosum do not divide, but instead form skin cells called keratinocytes from the granules of keratin. These skin cells finally become the cornified layer (stratum corneum), the outermost epidermal layer, where the cells become flattened sacks with their nuclei located at one end of the cell. After birth these outermost cells are replaced by new cells from the stratum granulosum and throughout life they are shed at a rate of 0.001 - 0.003 ounces of skin flakes every hour, or 0.024 - 0.072 ounces per day.
Epidermal development is a product of several growth factors, two of which are transforming growth factor alpha (TGF - α) and keratinocyte growth factor (KGF).
The epidermis serves as a barrier to protect the body against microbial pathogens, oxidant stress (UV light) and chemical compounds and provides mechanical resistance. Most of that function is played by the stratum corneum.
The ability of the skin to hold water is primarily due to the stratum corneum and is critical for maintaining healthy skin. Lipids arranged through a gradient and in an organized manner between the cells of the stratum corneum form a barrier to transepidermal water loss.
The amount and distribution of melanin pigment in the epidermis is the main reason for variation in skin color in Homo sapiens. Melanin is found in the small melanosomes, particles formed in melanocytes from where they are transferred to the surrounding keratinocytes. The size, number, and arrangement of the melanosomes varies between racial groups, but while the number of melanocytes can vary between different body regions, their numbers remain the same in individual body regions in all human beings. In white and Asian skin the melanosomes are packed in "aggregates '', but in black skin they are larger and distributed more evenly. The number of melanosomes in the keratinocytes increases with UV radiation exposure, while their distribution remain largely unaffected.
Laboratory culture of keratinocytes to form a 3D structure (artificial skin) recapitulating most of the properties of the epidermis is routinely used as a tool for drug development and testing.
|
what word does not have a spanish root tornado connection iguana hurricane | List of English words of Spanish origin - wikipedia
This is a list of English language words whose origin can be traced to the Spanish language as "Spanish loan words ''. Words typical of "Mock Spanish '' used in the United States are listed separately.
Yuca - Taino
|
who is running for mayor in new york city | New York City mayoral election, 2017 - wikipedia
Candidates: Bill de Blasio (Democratic) and Nicole Malliotakis (Republican). Mayor before election: Bill de Blasio (Democratic). Elected Mayor: Bill de Blasio (Democratic).
An election for Mayor of New York City was held on November 7, 2017. Bill de Blasio, the incumbent mayor, won re-election to a second term.
Bill de Blasio was elected Mayor of New York City in 2013, with his term beginning January 1, 2014. De Blasio declared his intention to seek the Democratic nomination for re-election in April 2015.
As per Jerry Skurnik 's blog, the following Democrats and Republicans have filed their petitions to have their names on the ballot during the primary elections. They are as follows: Democrats - Bill De Blasio, Sal Albanese, Robert Gangi, Richard Bashner and Michael Tolkin; Republicans - Nicole Malliotakis, Rocky De La Fuente and Walter Iwachiw.
On May 9, 2017, the Libertarian Party nominated Aaron Commey as its mayoral candidate. This is Commey 's first run for political office.
On August 1, 2017, the City Board of Elections determined in a hearing that Rocky De La Fuente did not receive enough petition signatures to qualify for the primary Republican ballot in September. With the disqualification of Rocky De La Fuente from the primary ballot and the remaining Republican candidate, Walter Iwachiw, not reporting any fundraising for this election, Nicole Malliotakis was left to secure the Republican nomination for NYC Mayor.
There were two Democratic primary debates to determine who would be the Democratic nominee for NYC Mayor. The candidates were incumbent Mayor Bill De Blasio and former NYC Council Member Sal Albanese. The two debates were held on August 23 and September 6.
The first general election debate was held on October 10th featuring Democratic nominee and incumbent NYC Mayor, Bill De Blasio, Republican challenger, Nicole Malliotakis, and Independent candidate, Bo Dietl. The second general election debate was held on November 1st.
Besides the Democratic and Republican parties, the Conservative, Green, Working Families, Independence, Reform, and Women 's Equality parties are qualified New York parties. These parties have automatic ballot access.
For this election, once Paul Massey dropped out of the NYC Mayoral race, the Independence Party failed to submit another entry for the election day ballot. There was no Independence Party line in this general election. Any candidate not among the eight qualified New York parties must petition their way onto the ballot; they do not face primary elections.
Albanese was nominated by the Reform Party Committee. On September 12, 2017, an Opportunity to Ballot was held to see if Albanese would retain the party 's nomination. Bo Dietl, running as an independent, and Nicole Malliotakis, the Republican nominee, each attempted to secure the party line. Albanese won the race by receiving approximately 57 percent of the vote, defeating the write - in campaigns.
A total of 5,343 write - in votes were also certified by the Board of Elections. These included 982 votes for former Mayors Michael Bloomberg, 12 for Rudolph Giuliani, 9 for Fiorello La Guardia (deceased), 3 for David Dinkins, and one each for John Lindsay, Abraham Beame, and Ed Koch (the latter three deceased), and 857 that could not be attributed to anybody or counted.
|
when did new york state ban the use of hand held cell phones | Restrictions on cell phone use while driving in the United states - wikipedia
Various laws in the United States regulate the use of mobile phones and other electronics by motorists. Different states take different approaches. Some laws affect only novice drivers or commercial drivers, while some laws affect all drivers. Some laws target handheld devices only, while other laws affect both handheld and handsfree devices.
The laws regulating driving (or distracted driving) may be subject to primary enforcement or secondary enforcement by state, county or local authorities. All State - level cell phone use laws in the United States are of the primary enforcement type -- meaning an officer may cite a driver for using a hand - held cell phone without any other traffic offense having taken place -- except in some cases involving newer (or "novice ''), drivers. In the case of secondary enforcement, a police officer may only stop or cite a driver for a cell phone use violation if the driver has committed another primary violation (such as speeding, failure to stop, etc.) at the same time.
A federal transportation funding law passed in July 2012, known as the Moving Ahead for Progress in the 21st Century Act (MAP - 21), provided $17.5 million in grants during fiscal year 2013 for states with primary enforcement laws against distracted driving, including laws prohibiting cell phone use while driving. States with secondary enforcement laws or no laws at all are ineligible to receive this grant funding.
No state bans all cell phone use for all drivers. However, California, Connecticut, Delaware, Hawaii, Illinois, Maryland, Nevada, New Hampshire, New Jersey, New Mexico, New York, Oregon, Vermont, Washington, West Virginia (plus Washington, D.C., Puerto Rico, Guam and the U.S. Virgin Islands) prohibit all drivers from using hand - held cell phones while driving. 36 states and Washington, D.C. ban all cell phone use by newer drivers, while 19 states and Washington, D.C. prohibit any cell phone use by school bus drivers if children are present.
If it is proven that a driver was "texting '' at the time of a fatal crash, the offense is deemed a Class C felony, punishable by up to 10 years in prison.
Often, local authorities pass their own distracted driving bans -- most include the use of cell phones while driving. Several states (Florida, Kentucky, Louisiana, Mississippi, Nevada, Pennsylvania, and Oklahoma) have prohibited localities from enacting their own laws regarding cell phone use.
A 2014 report from the National Safety Council, which compiles data on injuries and fatalities from 2013 and earlier, concluded that use of mobile phones caused 26 % of U.S. car accidents. Just 5 % of mobile phone - related accidents in the U.S. involved texting: "The majority of the accidents involve drivers distracted while talking on handheld or hands - free cellphones. ''
The U.S. Department of Transportation has established an official website to combat distracted driving, Distraction.gov.
In 2010, the State Farm insurance company stated that mobile phone use annually resulted in: 636,000 crashes, 330,000 personal injuries, 12,000 major injuries, 2,700 deaths, and US $43 billion in damages.
|
where did hiv come from and when was it discovered in the united states | History of HIV / AIDS - wikipedia
AIDS is caused by a human immunodeficiency virus (HIV), which originated in non-human primates in Central and West Africa. While various sub-groups of the virus acquired human infectivity at different times, the global pandemic had its origins in the emergence of one specific strain -- HIV - 1 subgroup M -- in Léopoldville in the Belgian Congo (now Kinshasa in the Democratic Republic of the Congo) in the 1920s.
Two types of HIV exist: HIV - 1 and HIV - 2. HIV - 1 is more virulent, is more easily transmitted and is the cause of the vast majority of HIV infections globally. The pandemic strain of HIV - 1 is closely related to a virus found in chimpanzees of the subspecies Pan troglodytes troglodytes, which live in the forests of the Central African nations of Cameroon, Equatorial Guinea, Gabon, Republic of Congo (or Congo - Brazzaville), and Central African Republic. HIV - 2 is less transmittable and is largely confined to West Africa, along with its closest relative, a virus of the sooty mangabey (Cercocebus atys atys), an Old World monkey inhabiting southern Senegal, Guinea - Bissau, Guinea, Sierra Leone, Liberia, and western Ivory Coast.
The majority of HIV researchers agree that HIV evolved at some point from the closely related simian immunodeficiency virus (SIV), and that SIV or HIV (post mutation) was transferred from non-human primates to humans in the recent past (as a type of zoonosis). Research in this area is conducted using molecular phylogenetics, comparing viral genomic sequences to determine relatedness.
Scientists generally accept that the known strains (or groups) of HIV - 1 are most closely related to the simian immunodeficiency viruses (SIVs) endemic in wild ape populations of West Central African forests. In particular, each of the known HIV - 1 strains is either closely related to the SIV that infects the chimpanzee subspecies Pan troglodytes troglodytes (SIVcpz) or closely related to the SIV that infects western lowland gorillas (Gorilla gorilla gorilla), called SIVgor. The pandemic HIV - 1 strain (group M or Main) and a rare strain found only in a few Cameroonian people (group N) are clearly derived from SIVcpz strains endemic in Pan troglodytes troglodytes chimpanzee populations living in Cameroon. Another very rare HIV - 1 strain (group P) is clearly derived from SIVgor strains of Cameroon. Finally, the primate ancestor of HIV - 1 group O, a strain infecting 100,000 people mostly from Cameroon but also from neighboring countries, has been recently confirmed to be SIVgor. The pandemic HIV - 1 group M is most closely related to the SIVcpz collected from the southeastern rain forests of Cameroon (modern East Province) near the Sangha River. Thus, this region is presumably where the virus was first transmitted from chimpanzees to humans. However, reviews of the epidemiological evidence of early HIV - 1 infection in stored blood samples, and of old cases of AIDS in Central Africa, have led many scientists to believe that HIV - 1 group M early human center was probably not in Cameroon, but rather farther south in the Democratic Republic of the Congo, more probably in its capital city, Kinshasa (formerly Léopoldville).
Using HIV - 1 sequences preserved in human biological samples along with estimates of viral mutation rates, scientists calculate that the jump from chimpanzee to human probably happened during the late 19th or early 20th century, a time of rapid urbanisation and colonisation in equatorial Africa. Exactly when the zoonosis occurred is not known. Some molecular dating studies suggest that HIV - 1 group M had its most recent common ancestor (MRCA) (that is, started to spread in the human population) in the early 20th century, probably between 1915 and 1941. A study published in 2008, analyzing viral sequences recovered from a recently discovered biopsy made in Kinshasa, in 1960, along with previously known sequences, suggested a common ancestor between 1873 and 1933 (with central estimates varying between 1902 and 1921). Genetic recombination had earlier been thought to "seriously confound '' such phylogenetic analysis, but later "work has suggested that recombination is not likely to systematically bias (results) '', although recombination is "expected to increase variance ''. The results of a 2008 phylogenetics study support the later work and indicate that HIV evolves "fairly reliably ''. Further research was hindered due to the primates being critically endangered. Sample analyses resulted in little data due to the rarity of experimental material. The researchers, however, were able to hypothesize a phylogeny from the gathered data. They were also able to use the molecular clock of a specific strain of HIV to determine the initial date of transmission, which is estimated to be around 1915 - 1931.
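As a rough illustration of the logic behind such molecular clock dating -- a minimal sketch with invented numbers, not the dated - sample, relaxed - clock models actually used in the studies cited above -- the time since two lineages diverged under a strict clock is approximately the genetic distance separating them divided by twice the per - lineage substitution rate:

```python
# Toy strict-molecular-clock estimate of divergence time.
# All numeric values here are illustrative assumptions, not data from the studies above.

def divergence_time_years(pairwise_distance, subs_per_site_per_year):
    """Estimate years since two sequences shared a common ancestor.

    pairwise_distance: fraction of sites that differ between the two sequences.
    subs_per_site_per_year: assumed substitution rate along a single lineage.
    Both lineages accumulate substitutions, hence the factor of 2.
    """
    return pairwise_distance / (2.0 * subs_per_site_per_year)

# Hypothetical inputs: 15% pairwise divergence, 0.002 substitutions/site/year.
estimate = divergence_time_years(0.15, 2.0e-3)
print(f"Estimated divergence time: about {estimate:.0f} years before the samples were taken")
# Prints roughly 38 years; real analyses combine many dated sequences and model
# rate variation, recombination, and sampling uncertainty, as noted in the text.
```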
Similar research has been undertaken with SIV strains collected from several wild sooty mangabey (Cercocebus atys atys) (SIVsmm) populations of the West African nations of Sierra Leone, Liberia, and Ivory Coast. The resulting phylogenetic analyses show that the viruses most closely related to the two strains of HIV - 2 that spread considerably in humans (HIV - 2 groups A and B) are the SIVsmm found in the sooty mangabeys of the Tai forest, in western Ivory Coast.
There are six additional known HIV - 2 groups, each having been found in just one person. They all seem to derive from independent transmissions from sooty mangabeys to humans. Groups C and D have been found in two people from Liberia, groups E and F have been discovered in two people from Sierra Leone, and groups G and H have been detected in two people from the Ivory Coast. These HIV - 2 strains are probably dead - end infections, and each of them is most closely related to SIVsmm strains from sooty mangabeys living in the same country where the human infection was found.
Molecular dating studies suggest that both the epidemic groups (A and B) started to spread among humans between 1905 and 1961 (with the central estimates varying between 1932 and 1945).
According to the natural transfer theory (also called "hunter theory '' or "bushmeat theory ''), the "simplest and most plausible explanation for the cross-species transmission '' of SIV or HIV (post mutation), the virus was transmitted from an ape or monkey to a human when a hunter or bushmeat vendor / handler was bitten or cut while hunting or butchering the animal. The resulting exposure to blood or other bodily fluids of the animal can result in SIV infection. Prior to WWII, some Sub-Saharan Africans were forced out of the rural areas because of the European demand for resources. Since rural Africans were reluctant to farm in the jungle, they turned to non-domesticated meat as their primary source of protein. This increased exposure to bushmeat, combined with unsafe butchering practices, increased blood - to - blood contact, which in turn increased the probability of transmission. A recent serological survey showed that human infections by SIV are not rare in Central Africa: the percentage of people showing seroreactivity to antigens -- evidence of current or past SIV infection -- was 2.3 % among the general population of Cameroon, 7.8 % in villages where bushmeat is hunted or used, and 17.1 % in the most exposed people of these villages. How the SIV virus would have transformed into HIV after infection of the hunter or bushmeat handler from the ape / monkey is still a matter of debate, although natural selection would favor any viruses capable of adjusting so that they could infect and reproduce in the T cells of a human host.
A study published in 2009 also suggested that bushmeat in other parts of the world, such as Argentina, may be a possible source of the disease 's origin. HIV - 1C, a subtype of HIV, was theorized to have originated in South America. The consumption of bushmeat is also the most probable cause for the emergence of HIV - 1C in South America. However, the types of apes shown to carry the SIV virus are different in South America. The primary point of entry, according to researchers, is somewhere in the jungles of Argentina or Brazil. An SIV strain, closely related to HIV, was interspersed within a certain clade of primates. This suggests that the zoonotic transmission of the virus may have happened in this area. Continual emigration between countries escalated the transmission of the virus. Other scientists believe that the HIV - 1C strain circulated in South America at around the same time that the HIV - 1C strain was introduced in Africa. Little research has been done on this theory because it is relatively new.
The discovery of the main HIV / SIV phylogenetic relationships makes it possible to explain the broad biogeography of HIV: the early centers of the HIV - 1 groups were in Central Africa, where the primate reservoirs of the related SIVcpz and SIVgor viruses (chimpanzees and gorillas) exist; similarly, the HIV - 2 groups had their centers in West Africa, where sooty mangabeys, which harbor the related SIVsmm virus, exist. However, these relationships do not explain more detailed patterns of biogeography, such as why epidemic HIV - 2 groups (A and B) only evolved in the Ivory Coast, which is one of only six countries harboring the sooty mangabey. It is also unclear why the SIVcpz endemic in the chimpanzee subspecies Pan troglodytes schweinfurthii (inhabiting the Democratic Republic of Congo, Central African Republic, Rwanda, Burundi, Uganda, and Tanzania) did not spawn an epidemic HIV - 1 strain in humans, while the Democratic Republic of Congo was the main center of HIV - 1 group M, a virus descended from SIVcpz strains of a subspecies (Pan troglodytes troglodytes) that does not exist in this country. It is clear that the several HIV - 1 and HIV - 2 strains descend from SIVcpz, SIVgor, and SIVsmm viruses, and that bushmeat practice provides the most plausible venue for cross-species transfer to humans. However, some loose ends remain unresolved.
It is not yet explained why only four HIV groups (HIV - 1 groups M and O, and HIV - 2 groups A and B) spread considerably in human populations, despite bushmeat practices being widespread in Central and West Africa, and the resulting human SIV infections being common.
It also remains unexplained why all epidemic HIV groups emerged in humans nearly simultaneously, and only in the 20th century, despite very old human exposure to SIV (a recent phylogenetic study demonstrated that SIV is at least tens of thousands of years old).
Several of the theories of HIV origin accept the established knowledge of the HIV / SIV phylogenetic relationships, and also accept that bushmeat practice was the most likely cause of the initial transfer to humans. All of them propose that the simultaneous epidemic emergences of four HIV groups in the late 19th - early 20th century, and the lack of previous known emergences, are explained by new factor (s) that appeared in the relevant African regions in that timeframe. These new factor (s) would have acted either to increase human exposures to SIV, to help it to adapt to the human organism by mutation (thus enhancing its between - humans transmissibility), or to cause an initial burst of transmissions crossing an epidemiological threshold, and therefore increasing the probability of continued spread.
Genetic studies of the virus suggested in 2008 that the most recent common ancestor of the HIV - 1 M group dates back to the Belgian Congo city of Léopoldville (modern Kinshasa), circa 1910. Proponents of this dating link the HIV epidemic with the emergence of colonialism and growth of large colonial African cities, leading to social changes, including a higher degree of non-monogamous sexual activity, the spread of prostitution, and the concomitant high frequency of genital ulcer diseases (such as syphilis) in nascent colonial cities.
In 2014, a study conducted by scientists from the University of Oxford and the University of Leuven, in Belgium, revealed that approximately one million people flowed every year through the prominent city of Kinshasa, which served as the origin of the first known HIV cases in the 1920s, and that passengers riding on the region 's Belgian railway trains were able to spread the virus over a wide area. The study also identified a thriving sex trade, rapid population growth and unsterilised needles used in health clinics as other factors which contributed to the emergence of the African HIV epidemic.
Beatrice Hahn, Paul M. Sharp, and their colleagues proposed that "(the epidemic emergence of HIV) most likely reflects changes in population structure and behaviour in Africa during the 20th century and perhaps medical interventions that provided the opportunity for rapid human - to - human spread of the virus ''. After the Scramble for Africa started in the 1880s, European colonial powers established cities, towns, and other colonial stations. A largely masculine labor force was hastily recruited to work in fluvial and sea ports, railways, other infrastructures, and in plantations. This disrupted traditional tribal values and favored casual sexual activity with an increased number of partners. In the nascent cities women felt relatively liberated from rural tribal rules and many remained unmarried or divorced during long periods, this being rare in African traditional societies. This was accompanied by unprecedented increase in people 's movements.
Michael Worobey and colleagues observed that the growth of cities probably played a role in the epidemic emergence of HIV, since the phylogenetic dating of the two older strains of HIV - 1 (groups M and O), suggest that these viruses started to spread soon after the main Central African colonial cities were founded.
Amit Chitnis, Diana Rawls, and Jim Moore proposed that HIV may have emerged epidemically as a result of the harsh conditions, forced labor, displacement, and unsafe injection and vaccination practices associated with colonialism, particularly in French Equatorial Africa. The workers in plantations, construction projects, and other colonial enterprises were supplied with bushmeat, which would have contributed to an increase in hunting and, it follows, a higher incidence of human exposure to SIV. Several historical sources support the view that bushmeat hunting indeed increased, both because of the necessity to supply workers and because firearms became more widely available.
The colonial authorities also administered many smallpox vaccinations and other injections, many of which would have been given without sterilising the equipment between uses (unsafe or unsterile injections). Chitnis et al. proposed that both these parenteral risks and the prostitution associated with forced labor camps could have caused serial transmission (or serial passage) of SIV between humans (see discussion of this in the next section). In addition, they proposed that the conditions of extreme stress associated with forced labor could depress the immune system of workers, therefore prolonging the primary acute infection period of someone newly infected by SIV, thus increasing the odds of both adaptation of the virus to humans, and of further transmissions.
The authors proposed that HIV - 1 originated in the area of French Equatorial Africa in the early 20th century (when the colonial abuses and forced labor were at their peak). Later research proved these predictions mostly correct: HIV - 1 groups M and O started to spread in humans in the late 19th or early 20th century. In addition, all groups of HIV - 1 descend from either SIVcpz or SIVgor from apes living to the west of the Ubangi River, either in countries that belonged to the French Equatorial Africa federation of colonies, in Equatorial Guinea (then a Spanish colony), or in Cameroon (which was a German colony between 1884 and 1916, and then fell to Allied forces in World War I, and had most of its area administered by France, in close association with French Equatorial Africa).
This theory was later dubbed "Heart of Darkness '' by Jim Moore, alluding to the book of the same title written by Joseph Conrad, the main focus of which is colonial abuses in equatorial Africa.
In several articles published since 2001, Preston Marx, Philip Alcabes, and Ernest Drucker proposed that HIV emerged because of rapid serial human - to - human transmission of SIV (after a bushmeat hunter or handler became SIV - infected) through unsafe or unsterile injections. Although both Chitnis et al. and Sharp et al. also suggested that this may have been one of the major risk factors at play in HIV emergence (see above), Marx et al. enunciated the underlying mechanisms in greater detail, and wrote the first review of the injection campaigns made in colonial Africa.
Central to the argument of Marx et al. is the concept of adaptation by serial passage (or serial transmission): an adventitious virus (or other pathogen) can increase its biological adaptation to a new host species if it is rapidly transmitted between hosts, while each host is still in the acute infection period. This process speeds the accumulation of adaptive mutations, therefore increasing the odds that a better adapted viral variant will appear in the host before the immune system suppresses the virus. Such a better adapted variant could then survive in the human host for longer than the short acute infection period, in high numbers (high viral load), which would grant it more possibilities of epidemic spread.
Marx et al. reported experiments of cross-species transfer of SIV in captive monkeys (some of which they performed themselves), in which the use of serial passage helped to adapt SIV to the new monkey species after passage through three or four animals.
In agreement with this model is also the fact that, while both HIV - 1 and HIV - 2 attain substantial viral loads in the human organism, adventitious SIV infecting humans seldom does so: people with SIV antibodies often have very low or even undetectable SIV viral load. This suggests that both HIV - 1 and HIV - 2 are adapted to humans, and serial passage could have been the process responsible for it.
Marx et al. proposed that unsterile injections (that is, injections where the needle or syringe is reused without sterilization or cleaning between uses), which were likely very prevalent in Africa, during both the colonial period and afterwards, provided the mechanism of serial passage that permitted HIV to adapt to humans, therefore explaining why it emerged epidemically only in the 20th century.
Marx et al. emphasize the massive number of injections administered in Africa after antibiotics were introduced (around 1950) as the factor most likely implicated in the origin of HIV because, by that time (roughly the period 1950 to 1970), injection intensity in Africa was maximal. They argued that a serial passage chain of 3 or 4 transmissions between humans is an unlikely event (the probability of transmission after a needle reuse is somewhere between 0.3 % and 2 %, and only a few people have an acute SIV infection at any time), and so HIV emergence may have required the very high frequency of injections of the antibiotic era.
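The improbability argument can be made concrete with a short calculation. The per - injection transmission probabilities below are the 0.3 % -- 2 % range quoted above and the chain length of four comes from the argument; treating each transmission as independent is a simplifying assumption of this sketch, not a claim made by Marx et al.

```python
# Rough probability that k successive needle-borne transmissions all occur,
# assuming each reuse transmits independently with probability p.
# Uses the 0.3 %-2 % per-reuse range quoted in the text; independence is assumed.

def chain_probability(p_per_injection, chain_length):
    """Probability that chain_length consecutive injections each transmit the virus."""
    return p_per_injection ** chain_length

for p in (0.003, 0.02):
    print(f"p = {p:.1%}: chain of 4 transmissions ~ {chain_probability(p, 4):.1e}")
# Even at the optimistic 2 % figure the chain comes out around 1.6e-07, which is why
# Marx et al. argue that only the very high injection frequency of the antibiotic era
# could plausibly have produced such chains.
```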
The molecular dating studies place the initial spread of the epidemic HIV groups before that time (see above). According to Marx et al., these studies could have overestimated the age of the HIV groups, because they depend on a molecular clock assumption, may not have accounted for the effects of natural selection in the viruses, and the serial passage process alone would be associated with strong natural selection.
David Gisselquist proposed that the mass injection campaigns to treat trypanosomiasis (sleeping sickness) in Central Africa were responsible for the emergence of HIV - 1. Unlike Marx et al., Gisselquist argued that the millions of unsafe injections administered during these campaigns were sufficient to spread rare HIV infections into an epidemic, and that evolution of HIV through serial passage was not essential to the emergence of the HIV epidemic in the 20th century.
This theory focuses on injection campaigns that peaked in the period 1910 -- 40, that is, around the time the HIV - 1 groups started to spread. It also focuses on the fact that many of the injections in these campaigns were intravenous (which are more likely to transmit SIV / HIV than subcutaneous or intramuscular injections), and many of the patients received many (often more than 10) injections per year, therefore increasing the odds of SIV serial passage.
Jacques Pépin and Annie - Claude Labbé reviewed the colonial health reports of Cameroon and French Equatorial Africa for the period 1921 -- 59, calculating the incidences of the diseases requiring intravenous injections. They concluded that trypanosomiasis, leprosy, yaws, and syphilis were responsible for most intravenous injections. Schistosomiasis, tuberculosis, and vaccinations against smallpox represented lower parenteral risks: schistosomiasis cases were relatively few; tuberculosis patients only became numerous after mid-century; and there were few smallpox vaccinations in the lifetime of each person.
The authors suggested that the very high prevalence of the Hepatitis C virus in southern Cameroon and forested areas of French Equatorial Africa (around 40 -- 50 %) can be better explained by the unsterile injections used to treat yaws, because this disease was much more prevalent than syphilis, trypanosomiasis, and leprosy in these areas. They suggested that all these parenteral risks caused not only the massive spread of Hepatitis C but also the spread of other pathogens, and the emergence of HIV - 1: "the same procedures could have exponentially amplified HIV - 1, from a single hunter / cook occupationally infected with SIVcpz to several thousand patients treated with arsenicals or other drugs, a threshold beyond which sexual transmission could prosper. '' They do not suggest specifically serial passage as the mechanism of adaptation.
According to Pépin 's 2011 book, The Origins of AIDS, the virus can be traced to a central African bush hunter in 1921, with colonial medical campaigns using improperly sterilized syringes and needles playing a key role in enabling a future epidemic. Pépin concludes that AIDS spread silently in Africa for decades, fueled by urbanization and prostitution since the initial cross-species infection. Pépin also claims that the virus was brought to the Americas by a Haitian teacher returning home from Zaire in the 1960s. Sex tourism and contaminated blood transfusion centers ultimately propelled AIDS into public consciousness in the 1980s and into a worldwide pandemic.
João Dinis de Sousa, Viktor Müller, Philippe Lemey, and Anne - Mieke Vandamme proposed that HIV became epidemic through sexual serial transmission, in nascent colonial cities, helped by a high frequency of genital ulcers, caused by genital ulcer diseases (GUD). GUD are simply sexually transmitted diseases that cause genital ulcers; examples are syphilis, chancroid, lymphogranuloma venereum, and genital herpes. These diseases increase the probability of HIV transmission dramatically, from around 0.01 -- 0.1 % to 4 -- 43 % per heterosexual act, because the genital ulcers provide a portal of viral entry, and contain many activated T cells expressing the CCR5 co-receptor, the main cell targets of HIV.
Sousa et al. use molecular dating techniques to estimate the time when each HIV group split from its closest SIV lineage. Each HIV group necessarily crossed to humans between this time and the time when it started to spread (the time of the MRCA), because after the MRCA all lineages were certainly already in humans, and before the split with the closest simian strain, the lineage was in a simian. HIV - 1 groups M and O split from their closest SIVs around 1931 and 1915, respectively. This information, together with the estimated dates of the HIV groups ' MRCAs, means that all HIV groups likely crossed to humans in the early 20th century.
The authors reviewed colonial medical articles and archived medical reports of the countries at or near the ranges of chimpanzees, gorillas and sooty mangabeys, and found that genital ulcer diseases peaked in the colonial cities during their early growth period (up to 1935). The colonial authorities recruited men to work in railways, fluvial and sea ports, and other infrastructure projects, and most of these men did not bring their wives with them. Then, the highly male - biased sex ratio favoured prostitution, which in its turn caused an explosion of GUD (especially syphilis and chancroid). After the mid-1930s, people 's movements were more tightly controlled, and mass surveys and treatments (of arsenicals and other drugs) were organized, and so the GUD incidences started to decline. They declined even further after World War II, because of the heavy use of antibiotics, so that, by the late 1950s, Léopoldville (which is the probable center of HIV - 1 group M) had a very low GUD incidence. Similar processes happened in the cities of Cameroon and Ivory Coast, where HIV - 1 group O and HIV - 2 respectively evolved.
Therefore, the peak GUD incidences in cities have a good temporal coincidence with the period when all main HIV groups crossed to humans and started to spread. In addition, the authors gathered evidence that syphilis and the other GUDs were, like injections, absent from the densely forested areas of Central and West Africa before organized colonialism socially disrupted these areas (starting in the 1880s). Thus, this theory also potentially explains why HIV emerged only after the late 19th century.
Uli Linke has argued that the practice of female genital mutilation is responsible for the high incidence of AIDS in Africa, since intercourse with a circumcised female is conducive to exchange of blood.
Male circumcision may reduce the probability of HIV acquisition by men. Leaving aside blood transfusions, the highest HIV - 1 transmissibility ever measured was from GUD - suffering female prostitutes to uncircumcised men -- the measured risk was 43 % in a single sexual act. Sousa et al. reasoned that the adaptation and epidemic emergence of each HIV group may have required such extreme conditions, and thus reviewed the existing ethnographic literature for patterns of male circumcision and hunting of apes and monkeys for bushmeat, focusing on the period 1880 -- 1960, and on most of the 318 ethnic groups living in Central and West Africa. They also collected censuses and other literature showing the ethnic composition of colonial cities in this period. Then, they estimated the circumcision frequencies of the Central African cities over time.
The charts of Sousa et al. reveal that male circumcision frequencies were much lower in several cities of western and central Africa in the early 20th century than they are currently. The reason is that many ethnic groups that did not practise circumcision at that time gradually adopted it, to imitate other ethnic groups and enhance the social acceptance of their boys (colonialism produced massive intermixing between African ethnic groups). About 15 -- 30 % of men in Léopoldville and Douala in the early 20th century are estimated to have been uncircumcised, and these cities were the probable centers of HIV - 1 groups M and O, respectively.
The authors studied early circumcision frequencies in 12 cities of Central and West Africa, to test if this variable correlated with HIV emergence. This correlation was strong for HIV - 2: among 6 West African cities that could have received immigrants infected with SIVsmm, the two cities from the Ivory Coast studied (Abidjan and Bouaké) had much higher frequency of uncircumcised men (60 -- 85 %) than the others, and epidemic HIV - 2 groups emerged initially in this country only. This correlation was less clear for HIV - 1 in Central Africa.
Sousa et al. then built computer simulations to test if an ' ill - adapted SIV ' (meaning a simian immunodeficiency virus already infecting a human but incapable of transmission beyond the short acute infection period) could spread in colonial cities. The simulations used parameters of sexual transmission obtained from the current HIV literature. They modelled people 's ' sexual links ', with different levels of sexual partner change among different categories of people (prostitutes, single women with several partners a year, married women, and men), according to data obtained from modern studies of sexual activity in African cities. The simulations let the parameters (city size, proportion of people married, GUD frequency, male circumcision frequency, and transmission parameters) vary, and explored several scenarios. Each scenario was run 1,000 times, to test the probability of SIV generating long chains of sexual transmission. The authors postulated that such long chains of sexual transmission were necessary for the SIV strain to adapt better to humans, becoming an HIV capable of further epidemic emergence.
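A minimal sketch of this kind of Monte Carlo experiment is given below. It is not the authors' model: the acute - phase length, per - act transmission probabilities, GUD prevalence and multiplier are placeholder assumptions, and the simple branching structure ignores the partner - network detail described above; the sketch only shows the mechanics of asking how often a virus transmissible only during a short acute phase produces a long chain of transmissions.

```python
import random

# Toy branching-process Monte Carlo: how often does a virus that is transmissible
# only during a short acute phase produce a long chain of human-to-human transmissions?
# All parameter values are placeholders, not those used by Sousa et al.

def chain_length(acts_per_acute_phase=20, p_per_act=0.001,
                 gud_prevalence=0.3, gud_multiplier=50, max_generations=50):
    """Simulate one outbreak seeded by a single acutely infected person;
    return the number of transmission generations before the chain dies out."""
    generations = 0
    infected = 1
    while infected and generations < max_generations:
        new_infections = 0
        for _ in range(infected):
            for _ in range(acts_per_acute_phase):
                # Per-act transmission probability is boosted when the exposed
                # partner has a genital ulcer (GUD).
                p = p_per_act * (gud_multiplier if random.random() < gud_prevalence else 1)
                if random.random() < p:
                    new_infections += 1
        infected = min(new_infections, 10_000)  # cap to keep runaway scenarios cheap
        if infected:
            generations += 1
    return generations

runs = 1000
long_chains = sum(chain_length() >= 10 for _ in range(runs))
print(f"{long_chains} of {runs} simulated outbreaks reached at least 10 generations")
```

In this toy setup, raising the GUD prevalence or multiplier raises the expected number of transmissions per case, which qualitatively mirrors the dependence on ulcer frequency that the study reports.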
The main result was that genital ulcer frequency was by far the most decisive factor. For the GUD levels prevailing in Léopoldville in the early 20th century, long chains of SIV transmission had a high probability. For the lower GUD levels existing in the same city in the late 1950s (see above), they were much less likely. And without GUD (a situation typical of villages in forested equatorial Africa before colonialism) SIV could not spread at all. City size was not an important factor. The authors propose that these findings explain the temporal patterns of HIV emergence: no HIV emerging in tens of thousands of years of human slaughtering of apes and monkeys, several HIV groups emerging in the nascent, GUD - riddled, colonial cities, and no epidemically successful HIV group emerging in mid-20th century, when GUD was more controlled, and cities were much bigger.
Male circumcision had little to moderate effect in their simulations, but, given the geographical correlation found, the authors propose that it could have had an indirect role, either by increasing genital ulcer disease itself (it is known that syphilis, chancroid, and several other GUDs have higher incidences in uncircumcised men), or by permitting further spread of the HIV strain, after the first chains of sexual transmission permitted adaptation to the human organism.
One of the main advantages of this theory is stressed by the authors: "It (the theory) also offers a conceptual simplicity because it proposes as causal factors for SIV adaptation to humans and initial spread the very same factors that most promote the continued spread of HIV nowadays: promiscuous (sic) sex, particularly involving sex workers, GUD, and possibly lack of circumcision. ''
Iatrogenic theories propose that medical interventions were responsible for HIV origins. By proposing factors that only appeared in Central and West Africa after the late 19th century, they seek to explain why all HIV groups also started after that.
The theories centered on the role of parenteral risks, such as unsterile injections, transfusions, or smallpox vaccinations are accepted as plausible by most scientists of the field.
Discredited HIV / AIDS origins theories include several iatrogenic theories, such as Edward Hooper 's 1999 claim that early oral polio vaccines, contaminated with a chimpanzee virus, caused the Central African outbreak.
In most non-human primate species, natural SIV infection does not cause a fatal disease (but see below). Comparison of the gene sequence of SIV with HIV should, therefore, give us information about the factors necessary to cause disease in humans. The factors that determine the virulence of HIV as compared to most SIVs are only now being elucidated. Non-human SIVs contain a nef gene that down - regulates CD3, CD4, and MHC class I expression; most non-human SIVs, therefore, do not induce immunodeficiency; the HIV - 1 nef gene, however, has lost its ability to down - regulate CD3, which results in the immune activation and apoptosis that is characteristic of chronic HIV infection.
In addition, a long - term survey of chimpanzees naturally infected with SIVcpz in Gombe, Tanzania found that, contrary to the previous paradigm, chimpanzees with SIVcpz infection do experience an increased mortality, and also suffer from a human AIDS - like illness. SIV pathogenicity in wild animals could exist in other chimpanzee subspecies and other primate species as well, and remain unrecognized because of a lack of relevant long - term studies.
David Carr was an apprentice printer (usually mistakenly referred to as a sailor; Carr had served in the Navy between 1955 and 1957) from Manchester, England who died August 31, 1959, and was for some time mistakenly reported to have died from AIDS - defining opportunistic infections (ADOIs). Following the failure of his immune system, he succumbed to pneumonia. Doctors, baffled by what he had died from, preserved 50 of his tissue samples for inspection. In 1990, the tissues were found to be HIV - positive. However, in 1992, a second test by AIDS researcher David Ho found that the strain of HIV present in the tissues was similar to those found in 1990 rather than an earlier strain (which would have mutated considerably over the course of 30 years). He concluded that the DNA samples provided actually came from a 1990 AIDS patient. Upon retesting David Carr 's tissues, he found no sign of the virus.
One of the earliest documented HIV - 1 infections was discovered in a preserved blood sample taken in 1959 from a man from Léopoldville in the Belgian Congo. However, it is unknown whether this anonymous person ever developed AIDS and died of its complications.
A second early documented HIV - 1 infection was discovered in a preserved lymph node biopsy sample taken in 1960 from a woman from Léopoldville, Belgian Congo.
In May 1969 16 - year - old African - American Robert Rayford died at the St. Louis City Hospital from Kaposi 's sarcoma. In 1987 researchers at Tulane University School of Medicine detected "a virus closely related or identical to '' HIV - 1 in his preserved blood and tissues. The doctors who worked on his case at the time suspected he was a prostitute or the victim of sexual abuse, though the patient did not discuss his sexual history with them in detail.
In 1975 and 1976, a Norwegian sailor, given the alias Arvid Noe, his wife, and his seven - year - old daughter died of AIDS. The sailor had first presented symptoms in 1969, eight years after he first spent time in ports along the West African coastline. A gonorrhea infection during his first African voyage showed he was sexually active at this time. Tissue samples from the sailor and his wife were tested in 1988 and found to contain HIV - 1 (Group O).
From 1972 to 1973, researchers drew blood from 75 children in Uganda to serve as controls for a study of Burkitt 's lymphoma. In 1985, retroactive testing of the frozen blood serum indicated that antibodies to a virus related to HIV were present in 50 of the children.
HIV - 1 strains were once thought to have arrived in New York City from Haiti around 1971. It spread from New York City to San Francisco around 1976.
HIV - 1 is believed to have arrived in Haiti from central Africa, possibly from the Democratic Republic of the Congo around 1967. The current consensus is that HIV was introduced to Haiti by an unknown individual or individuals who contracted it while working in the Democratic Republic of the Congo circa 1966, or from another person who worked there during that time. A mini-epidemic followed, and, circa 1969, yet another unknown individual took HIV from Haiti to the United States. The vast majority of cases of AIDS outside sub-Saharan Africa can be traced back to that single patient (although numerous unrelated incidents of AIDS among Haitian immigrants to the U.S. were recorded in the early 1980s, and, as evidenced by the case of Robert Rayford, isolated incidents of this infection may have been occurring as early as 1966). The virus eventually entered male gay communities in large United States cities, where a combination of casual, multi-partner sexual activity (with individuals reportedly averaging over 11 unprotected sexual partners per year) and relatively high transmission rates associated with anal intercourse allowed it to spread explosively enough to finally be noticed.
Because of the long incubation period of HIV (up to a decade or longer) before symptoms of AIDS appear, and, because of the initially low incidence, HIV was not noticed at first. By the time the first reported cases of AIDS were found in large United States cities, the prevalence of HIV infection in some communities had passed 5 %. Worldwide, HIV infection has spread from urban to rural areas, and has appeared in regions such as China and India.
A Canadian airline steward named Gaëtan Dugas was referred to as "Case 057 '' and later "Patient O '' for "outside Southern California '', in an early AIDS study by Dr. William Darrow of the Centers for Disease Control. Because of this, many people had considered Dugas to be responsible for taking HIV to North America. However, HIV reached New York City around 1971 while Dugas did not start work at Air Canada until 1974. In Randy Shilts ' 1987 book And the Band Played On (and the 1993 movie based on it), Dugas is referred to as AIDS ' Patient Zero instead of "Patient O '', but neither the book nor the movie states that he had been the first to bring the virus to North America. He was incorrectly called "Patient Zero '' because at least 40 of the 248 people known to be infected by HIV in 1983 had had sex with him, or with someone who had sexual intercourse with him.
The AIDS epidemic officially began on June 5, 1981, when the U.S. Centers for Disease Control and Prevention in its Morbidity and Mortality Weekly Report newsletter reported unusual clusters of Pneumocystis pneumonia (PCP) caused by a form of Pneumocystis carinii (now recognized as a distinct species Pneumocystis jirovecii) in five homosexual men in Los Angeles.
Over the next 18 months, more PCP clusters were discovered among otherwise healthy men in cities throughout the country, along with other opportunistic diseases (such as Kaposi 's sarcoma and persistent, generalized lymphadenopathy), common in immunosuppressed patients.
In June 1982, a report of a group of cases amongst gay men in Southern California suggested that a sexually transmitted infectious agent might be the etiological agent, and the syndrome was initially termed "GRID '', or gay - related immune deficiency.
Health authorities soon realized that nearly half of the people identified with the syndrome were not homosexual men. The same opportunistic infections were also reported among hemophiliacs, users of intravenous drugs such as heroin, and Haitian immigrants -- leading some researchers to call it the "4H '' disease.
By August 1982, the disease was being referred to by its new CDC - coined name: Acquired Immune Deficiency Syndrome (AIDS).
In May 1983, doctors from Dr. Luc Montagnier 's team at the Pasteur Institute in France reported that they had isolated a new retrovirus from lymphoid ganglions that they believed was the cause of AIDS. The virus was later named lymphadenopathy - associated virus (LAV) and a sample was sent to the U.S. Centers for Disease Control, which was later passed to the National Cancer Institute (NCI).
In May 1984 a team led by Robert Gallo of the United States confirmed the discovery of the virus, but they renamed it human T lymphotropic virus type III (HTLV - III).
Dr. Jay Levy 's group at the University of California, San Francisco also played a role in the discovery of HIV. He independently isolated the AIDS virus in 1983 and named it the AIDS - associated Retrovirus (ARV).
In January 1985, a number of more - detailed reports were published concerning LAV and HTLV - III, and by March it was clear that the viruses were the same -- indeed, it was later determined that the virus isolated by the Gallo lab was from the lymph nodes of the patient studied in the original 1983 report by Montagnier -- and were the etiological agent of AIDS.
In May 1986, the International Committee on Taxonomy of Viruses ruled that both names should be dropped and a new name, HIV (Human Immunodeficiency Virus), be used.
Whether Gallo or Montagnier deserves more credit for the discovery of the virus that causes AIDS has been a matter of considerable controversy. Together with his colleague Françoise Barré - Sinoussi, Montagnier was awarded one half of the 2008 Nobel Prize in Physiology or Medicine for his "discovery of human immunodeficiency virus ''. Harald zur Hausen also shared the prize for his discovery that human papilloma virus leads to cervical cancer, but Gallo was left out. Gallo said that it was "a disappointment '' that he was not named a co-recipient. Montagnier said he was "surprised '' Gallo was not recognized by the Nobel Committee: "It was important to prove that HIV was the cause of AIDS, and Gallo had a very important role in that. I 'm very sorry for Robert Gallo. '' Dr Levy 's contribution to the discovery of HIV was also cited in the Nobel Prize ceremony.
Since June 5, 1981, many definitions have been developed for epidemiological surveillance such as the Bangui definition and the 1994 expanded World Health Organization AIDS case definition.
According to a study published in the Proceedings of the National Academy of Sciences in 2008, a team led by Robert Shafer at Stanford University School of Medicine has discovered that the gray mouse lemur has an endogenous lentivirus (the genus to which HIV belongs) in its genetic makeup. This suggests that lentiviruses have existed for at least 14 million years, much longer than the currently known existence of HIV. In addition, the time frame falls in the period when Madagascar was still connected to what is now the African continent; the said lemurs later developed immunity to the virus strain and survived an era when the lentivirus was widespread among other mammalia. The study is being hailed as crucial, because it fills the blanks in the origin of the virus, as well as in its evolution, and may be important in the development of new antiviral drugs.
In 2010, researchers reported that SIV had infected monkeys in Bioko for at least 32,000 years. Previous to this time, it was thought that SIV infection in monkeys had happened over the past few hundred years. Scientists estimated that it would take a similar amount of time before humans adapted naturally to HIV infection in the way monkeys in Africa have adapted to SIV and not suffer any harm from the infection.
A 2016 Czech study of the genome of Malayan flying lemurs, an order of mammals parallel to primates and sharing an immediate common ancestor with them, found endogenous lentiviruses that emerged an estimated 40 - 60 million years ago based on rates of viral mutation versus modern lentiviruses.
Other hypotheses for the origin of AIDS have been proposed. AIDS denialism argues that HIV or AIDS does not exist or that AIDS is not caused by HIV; some of its proponents believe that AIDS is caused by lifestyle, including sexuality or drug use, and not by HIV. Some conspiracy theories allege that HIV was created in a bioweapons laboratory, perhaps as an agent of genocide or an accident. These hypotheses have been rejected by scientific consensus; it is generally accepted among pathologists that "... the evidence that HIV causes AIDS is scientifically conclusive '', and most "scientific '' arguments for denialism are based on misrepresentations of outdated data.
|
where does the name accrington stanley come from | Accrington Stanley F.C. - Wikipedia
Accrington Stanley Football Club is an association football club based in Accrington, Lancashire. The team play in League Two, the fourth tier of the English football league system.
The current club was formed in 1968, two years after the collapse of the original Accrington Stanley (founded in 1891). They were first promoted to the Football League in 2006, after winning the 2005 -- 06 Football Conference.
Ilyas Khan saved the club from bankruptcy in late 2009 and stepped down as chairman in 2012 after keeping the club financially secure. Former club president Peter Marsden was appointed chairman soon after.
Accrington had been without a football team following the collapse of the original Accrington Stanley in 1966. The original team had been formed in 1891 and played in the Football League from 1921 to March 1962, but had spent its final four seasons in the Lancashire Combination. At a meeting at Bold Street Working Men 's Club in 1968 the revival was initiated, and in August 1970 the new club played at a new ground, the Crown Ground. Eric Whalley, a local businessman, took control of the club in 1995 and began the development of the club 's ground. After the club was relegated in 1999, Whalley appointed John Coleman as manager.
The club 's rise to the Football League is attributed in part to the windfall of hundreds of thousands of pounds reaped by the sell - on clause in the December 2001 transfer of former Stanley star Brett Ormerod to Southampton, which paid Blackpool over a million pounds for his contract. Stanley had taken £ 50,000 from Blackpool in 1997, with the agreement that Blackpool would pay Accrington a quarter of what it might have received if it in turn transferred Ormerod to another team. The 2002 -- 03 Northern Premier League championship followed soon after Accrington received the cash.
Following the 2002 -- 03 win of the Northern Premier League, the club was promoted for the first time in its history to the Football Conference. The club 's first - ever game in the league was away to another re-formed club, Aldershot Town, on Sunday 10 August 2003. The game was shown live on Sky Sports and resulted in a 1 -- 2 loss. The season was a success, with a final league position of 10th being achieved. The highlight of that first season in the 5th tier was a sensational run to the FA Cup 3rd round, only losing in a replay away to League One side Colchester United.
The following season saw the club become a full - time professional outfit. The 2004 -- 05 season also resulted in a 10th - place finish. Club legend Paul Mullin was yet again amongst the goal scorers, adding another 20 to his tally.
The 2005 -- 06 season saw the return of Stanley to the Football League. Finishing on 91 points, the club went on a 19 - game unbeaten run stretching from October to March, leaving the club an easy passage to League Two. The likes of Paul Mullin, Rob Elliot and Gary Roberts led the club back to the league after 46 years away.
The club 's first Football League game took place on 5 August 2006 away to now - defunct club Chester City; it resulted in a 0 -- 2 loss. The club was involved in a relegation battle throughout its first season in the 4th tier. A run of 5 wins in the last 9 games of the season led to a 20th - place finish and was enough to save the club from relegation in its first season back in the Football League.
Highlights of that first season back included the club 's first - ever Football League Cup match against former European Cup Winners Nottingham Forest. The game resulted in a 1 -- 0 win, leaving the club a 2nd - round away tie against then Premier League team Watford, eventually losing 6 -- 5 on penalties after a 0 -- 0 draw and extra-time. The club also took part in the Football League Trophy for the first time as a league club (after playing in the two previous seasons as one of 12 Conference sides, beating Bradford City away in September 2004) and, after defeating Carlisle United and Blackpool in the early rounds, were knocked out by Doncaster Rovers in the Area Quarter - finals.
The 2007 -- 08 season produced more of the same, with the club involved in another relegation battle with strugglers Chester City, Wrexham and Mansfield Town. 5 wins in the final 12 games were enough to secure a 17th - place finish and another season in the 4th tier of English Football. However, the club failed to win a game in the FA Cup and League Cup, losing to Huddersfield Town and Leicester City respectively.
Performance during the 2008 -- 09 season improved, with the club achieving a modest 16th - place finish in League Two. A run of 6 League wins in the last 12 games provided a strong finish to the season. This season saw the emergence of young prospect Bobby Grant, who finally fulfilled the early promise seen in previous seasons. The club again failed to make it past the early rounds of any of the domestic cups, losing in the first round to Wolverhampton Wanderers in the League Cup and to Tranmere Rovers in both the FA Cup (albeit after a replay) and Football League Trophy.
The 2009 -- 10 season was far better, with the club pushing for a playoff place at the turn of the year. A run of 9 wins in 10 League games saw the club with a chance of making the playoffs, only for this to fade in March / April. The emergence of the Michael Symes and Bobby Grant partnership was a key aspect and, following their achievements throughout the season, both moved on to bigger clubs. In terms of cup performance the club was superb, reaching the 2nd round of the League Cup losing only 1 -- 2 to Queens Park Rangers, the quarter - finals of the Football League Trophy losing 0 -- 2 to Leeds United, and the 4th round of the FA Cup losing 1 -- 3 to Premier League team Fulham.
The club reached the Football League Two play - offs during the 2010 -- 11 season, one of the most successful in its history. A run of 1 loss in 19 games, from February till May, saw the club finish in a best - ever 5th position, eventually losing to League Two newcomers Stevenage in the Playoff Semi-finals. The season saw the emergence of Jimmy Ryan as a star in the making, along with a number of others, including goalkeeper Alex Cisak and midfielder Sean McConville. In the domestic cups, Stanley reached the 2nd round of the League Cup, losing 2 -- 3 to Premier League team Newcastle United. The club actually won the 1st - round game of the Football League Trophy away to Tranmere Rovers, but was then forced to resign from the competition after fielding the ineligible Ray Putterill in the game. The club also reached the 2nd round of the FA Cup, but lost to fellow League Two side Port Vale.
2011 -- 12 was a season of transition for the club. The loss of no fewer than six players from the previous season 's playoff - chasing side left a large gap to fill. Following a shaky start to the season, the arrival of Bryan Hughes in October transformed the club 's fortunes. A run of 6 wins in 7 games over the Christmas period saw the club briefly enter the play - offs. However, following the sale of club captain Andrew Procter to Preston North End in the January 2012 transfer window, the third - longest serving management team of John Coleman and Jimmy Bell departed for Rochdale. Former Burnley player and club favourite Paul Cook was brought in as manager, along with the promotion of Leam Richardson from caretaker manager to full - time assistant. Only 3 wins in the final 17 games made for a poor finish to the season, although the club still achieved a solid mid-table finish in 14th position. In terms of the domestic cups Stanley exited both the League Cup and FA Cup at the 1st round stage, losing to Scunthorpe United and Notts County respectively. The club reached the second round of the Football League Trophy, after knocking out holders Carlisle United, but lost to Tranmere Rovers in the 2nd round after an eventual replay. This followed a serious head injury to young defender Thomas Bender in the initial tie.
The original town club, Accrington, was amongst the twelve founder members of the Football League in 1888, before resigning from the league after just five years. A team called Stanley Villa already existed at the time, named as such because they were based at the Stanley W.M.C. on Stanley Street in Accrington. With the demise of Accrington, Stanley Villa took the town name to become Accrington Stanley.
Since leaving Peel Park, the club has played at the Crown Ground. The ground has undergone expansion in recent years, including a new roof section on the Clayton End Terrace as well as a new hospitality suite. Despite these and a number of other recent improvements the ground remains one of the poorest in League Two and talks are still ongoing with a view to a permanent move to a new stadium located in Church, a small town bordering Accrington.
In the 1980s, the club was mentioned in a British advert for milk, which briefly brought the club to the attention of the general public. The advertisement featured two boys in Liverpool replica shirts played by young actors Carl Rice and Kevin Staine. It made reference to Accrington Stanley 's obscurity in comparison to Liverpool 's success at the time. Boy 1: "Milk! Urghh! '' Boy 2: "It 's what Ian Rush drinks. '' Boy 1: "Ian Rush? '' Boy 2: "Yeah. And he said if I did n't drink lots of milk, when I grow up, I 'll only be good enough to play for Accrington Stanley. '' Boy 1: "Accrington Stanley, who are they? '' Boy 2: "Exactly. ''
In the weekly football show, Soccer A.M., the phrase "Accrington Stanley, who are they? '' is said every time a fixture is read out that has the club in it, referring to the milk advert.
In a PFA Fans ' Favourites survey published by the Professional Footballers ' Association in December 2007, Chris Grimshaw was listed as the all - time favourite player amongst Accrington Stanley fans.
Note: Flags indicate national team as defined under FIFA eligibility rules. Players may hold more than one non-FIFA nationality.
Note: Flags indicate national team as defined under FIFA eligibility rules. Players may hold more than one non-FIFA nationality.
|
elven rings of power lord of the rings | Rings of power - wikipedia
The Rings of Power in J.R.R. Tolkien 's Middle - earth legendarium are magic rings created by Sauron or by the Elves of Eregion under Sauron 's tutelage. Sauron intended Three of the rings to be worn by Elves, Seven by Dwarves, and Nine by Men, while the One Ring was forged by the Dark Lord Sauron himself in Mount Doom.
Sauron intended the rings to subvert these races of Middle - earth to his power, since the One Ring controlled the others. Sauron 's plan was not completely successful, for the Elves hid their rings and did not use them while Sauron held the One, and the Dwarves did not respond to the One 's control as Sauron expected. But the Men who wore the Nine were enslaved by Sauron, and became the Nazgûl ("ring wraiths '').
Tolkien 's The Lord of the Rings is largely concerned with the attempt of Sauron to recover the One and the efforts of the West to forestall him by destroying it. The One is destroyed near the end of the War of the Ring when it falls into the Cracks of Doom in Mount Doom along with Gollum who had bitten Frodo 's finger off. Tolkien is not entirely clear about what happened to the other rings, though he implied that the power of any that survived came to an end. After the War of the Ring, the three Elven Rings were taken by their bearers over the sea to the Undying Lands.
Tolkien 's essay "Of the Rings of Power and the Third Age '' in The Silmarillion gives the background of the making of the rings. At the end of the First Age, Sauron evaded the call of the Valar to surrender, and fled to Middle - earth. Midway through the Second Age he came in disguise as Annatar ("Lord of Gifts '') to the Elven smiths of Eregion, who were led by Celebrimbor, and taught them the craft of forging magic rings. Tolkien writes that the Elves made many lesser rings as essays in the craft, but eventually with Sauron 's assistance they forged the Seven and the Nine. The Three were made by Celebrimbor himself without Sauron 's assistance; they remained unsullied by his touch.
Sauron returned to Mordor, and in his forge in Mount Doom he made the One Ring, imbuing it with a large portion of his power. Its purpose was to dominate and command the wearers of the other Rings. However, when Sauron put on the One Ring and recited the incantation inscribed on it, the Elves became aware of him, and understood who he was and his purpose. The words he spoke are in the language that Westron speakers call Black Speech:
Ash nazg durbatulûk, Ash nazg gimbatul, Ash nazg thrakatulûk Agh burzum - ishi krimpatul.
One Ring to rule them all, One Ring to find them, One Ring to bring them all, And in the darkness bind them.
When the Elves wearing the rings discerned Sauron 's intention, they immediately removed them and hid them. Sauron invaded the West to recover the rings that the Elves had made; much of the West, including Eregion, was destroyed before he was driven back to Mordor. Sauron recovered the Nine and three of the Seven, but not the Three elven rings, which remained hidden.
The words inscribed on the One Ring come from the following verse, which describes 20 of the Rings of Power:
Three Rings for the Elven - kings under the sky, Seven for the Dwarf - lords in their halls of stone, Nine for Mortal Men doomed to die, One for the Dark Lord on his dark throne, In the Land of Mordor where the Shadows lie, One ring to rule them all, one ring to find them, One ring to bring them all and in the darkness bind them In the Land of Mordor where the Shadows lie.
Later in the Second Age, Sauron gave the Nine to powerful men, "kings, sorcerers, and warriors of old ''. All of them fell under the rings ' dominance, and they became the Nazgûl (Ringwraiths), spirits of terror whom Sauron could command even without the One. Their lives were extended indefinitely by the rings, and they became Sauron 's chief servants.
To mortal observers who were not themselves wearing a Ring, a Ring of Power seemed to render the wearer invisible. Although the Nazgûl could not be seen, they emanated an evil presence; their steeds, however, were visible. When they wished to adopt a noticeable form, they wore dark cloaks over their invisible bodies.
It is not clear whether the Nazgûl continued to wear their rings. Tolkien says both "the Nine the Nazgûl keep '' and that Sauron had gathered the Nine to himself, though in the latter case his meaning may be metaphorical. When the Nazgûl are destroyed, no mention is made of their rings.
Only two of the Nazgûl are identified in the texts: the Witch - king of Angmar was the leader of the nine, and his second in command was Khamûl, an Easterling. Khamûl is the only Nazgûl identified by name. Three of the Nine were Númenórean.
Also in the Second Age Sauron gave the Seven to various Dwarf - lords (though the Dwarves of Moria maintained a tradition that the ring given to Durin III came directly from the Elven smiths). Gandalf mentions a rumour that the seven hoards of the dwarves began each with a single golden ring, and although the Dwarves used their rings to increase their treasure, Tolkien does not explain how the rings accomplished this (save for a reference that the rings "need gold to breed gold ''). The main power of the Seven on their wearers was to excite their sense of avarice. The wearers did not become invisible, did not get extended life - spans, nor succumb directly to Sauron 's control -- though he could still influence them to anger and greed.
Over the years, Sauron recovered three rings from the Dwarves, the last from Thráin II during his final captivity in Dol Guldur some years before the beginning of The Hobbit. The remaining four, according to Gandalf, were destroyed by dragons.
Until the Council of Elrond, the Dwarves did not know that Thráin had held the ring of Durin 's line and had lost it to Sauron. They thought instead that it might have been lost when Thrór was killed by Azog in Moria. One of the motivations for Balin 's doomed expedition to Moria was the possibility of recovering the ring. Sauron 's messenger attempted to bribe the Dwarves of Erebor for news of Bilbo (the last known bearer of the One) with the promise of the return of the remaining three of the Seven and control over the Mines of Moria.
The Three Rings of the Elves were called Narya, the Ring of Fire (set with a ruby); Nenya, the Ring of Water or Ring of Adamant (made of mithril and set with a "white stone ''), and Vilya, the Ring of Air, the "mightiest of the Three '' (made of gold and set with a sapphire).
Before the sack of Eregion, Celebrimbor gave Vilya and Narya to Gil - galad and Nenya to Galadriel. Gil - galad later gave Narya to Círdan, and gave Vilya to Elrond.
The Three remained hidden from Sauron and untouched by him. During the Third Age, after he lost the One, they were used for the preservation and enhancement of three remaining realms of the Eldar. Vilya was used by Elrond in Rivendell, Nenya by Galadriel in Lothlórien, and Narya by Círdan in Lindon. When the Istari, or wizards, arrived about T.A. 1100, Círdan gave Narya to Gandalf, who bore it until the end of the Third Age.
During the period of The Lord of the Rings, the Three were borne by Elrond, Galadriel, and Gandalf; but until the end of the book their rings are not seen. Only Frodo, the bearer of the One, sees Galadriel 's ring, and only when she draws his attention to it. At the end of the book, these three take their rings, now visible and powerless, over the sea to the Undying Lands.
Although the Verse of the Rings describes Three Rings for the Elven - kings, the only Elven king who actually held a ring was Gil - galad. He held both Vilya and Narya, but entrusted them to Elrond and Círdan before perishing in Mordor.
Unlike the other Rings of Power, the One was unadorned. It bore only the inscription of the incantation Sauron spoke when he made it, and even that was invisible unless the ring was heated. Though the other rings could be destroyed in dragon - fire, the One could be unmade only in the unyielding fires of Mount Doom where it was forged.
When Sauron made the ring, he was obliged to transfer much of his power into it so that it could control the other rings, themselves objects of great potency. With the ring, Sauron remained very powerful, and he could use it to dominate the will of others; he very quickly corrupted Númenor into the worship of Melkor and open rebellion against the Valar.
When Isildur cut the ring from his hand, Sauron became much weaker. He required the ring to effect his conquest of Middle - earth, and spent most of the Third Age attempting to get it back.
The ring had a great effect on the human bearers who held it in the interim. It granted them indefinite life, though the effort of living became more difficult as time went on, for it did not grant new life. If they wore it, it made them invisible, enhanced their hearing, and made the shadowy world of the wraiths visible to them. It exerted a malicious influence; Gandalf mentions that though a bearer might begin with good intentions, the good intentions would not last. The Ring would give its bearer a fraction of Sauron 's power, proportionate to the bearer 's strength and force of will. Gollum, Bilbo, Sam, and Frodo only became invisible, although when Frodo bound Gollum to his service in The Two Towers he indicated that he could use the Ring to control Gollum (because the Ring had mastered Gollum long ago). Gandalf and Galadriel, however, recognized that they could use the full power of the Ring, becoming even more powerful than Sauron himself, but they resisted, knowing that in the end the Ring would corrupt them. Gandalf explained to Frodo that, with great concentration and training, even he could tap into the Ring 's power, but probably at the cost of his sanity.
The One Ring possessed something of a will of its own. Its only accepted master was Sauron himself, and it would seek to leave any other bearer when it would cause the greatest harm, or when it might return to Sauron. Bilbo warned Frodo of this, and Frodo kept it on a chain so that it would not slip off unnoticed. In the end, Gollum succumbs to the malevolent influence of the ring, defies Frodo, and takes the ring for himself. While dancing with joy over the recovery of the ring, Gollum falls into the Cracks of Doom in Orodruin, where the ring is destroyed. With the destruction of the Ring, Gandalf explains that Sauron is weakened to the point that he will never be able to materialize again.
Ralph Bakshi 's 1978 animated film begins with the forging of the Rings of Power and the events of the Last Alliance 's war against Sauron, all portrayed in silhouette against a red background.
Peter Jackson 's The Lord of the Rings: The Fellowship of the Ring begins with a similar prologue, though longer and more detailed. The three Elven rings are shown being cast using a cuttlebone mold, an ancient primitive casting technique. Additionally, Tolkien illustrators John Howe and Alan Lee, employed as conceptual designers for the films, have cameos as two of the nine human Ring - bearers who become Nazgûl.
|
michael bolton - how can we be lovers | How can We Be Lovers? - Wikipedia
"How Can We Be Lovers? '' is a song written and composed by Michael Bolton, Diane Warren and Desmond Child and recorded by Michael Bolton. Released as a single from Bolton 's album, Soul Provider, it peaked at number three on the Billboard Hot 100 in May 1990.
|
when did the practice of only using women's names to name hurricanes end | History of tropical Cyclone naming - wikipedia
The practice of using names to identify tropical cyclones goes back several centuries, with storms named after places, saints or things they hit before the formal start of naming in each basin. Examples of such names are the 1911 Ship Cyclone, the 1928 Okeechobee hurricane and the 1938 New England hurricane. The system currently in place provides identification of tropical cyclones in a brief form that is easily understood and recognized by the public. The credit for the first usage of personal names for weather systems is given to the Queensland Government Meteorologist Clement Wragge, who named tropical cyclones and anticyclones between 1887 and 1907. This system of naming fell into disuse for several years after Wragge retired, until it was revived in the latter part of World War II for the Western Pacific. Over the following decades formal naming schemes were introduced for several tropical cyclone basins, including the North and South Atlantic, Eastern, Central, Western and Southern Pacific basins as well as the Australian region and Indian Ocean.
However, there has been controversy over the names used at various times, with names being dropped for religious and political reasons. Female names were exclusively used in the basins at various times between 1945 and 2000, and were the subject of several protests. At present tropical cyclones are officially named by one of eleven meteorological services and retain their names throughout their lifetimes. Due to the potential for longevity and multiple concurrent storms, the names reduce the confusion about what storm is being described in forecasts, watches and warnings. Names are assigned in order from predetermined lists once storms have one, three, or ten - minute sustained wind speeds of more than 65 km / h (40 mph), depending on the basin in which they originate. Standards vary from basin to basin, with some tropical depressions named in the Western Pacific, while gale - force winds near the center are required in the Southern Hemisphere. The names of significant tropical cyclones in the North Atlantic Ocean, Pacific Ocean and Australian region are retired from the naming lists and replaced with another name at meetings of the World Meteorological Organization 's various tropical cyclone committees.
The practice of using names to identify tropical cyclones goes back several centuries, with systems named after places, saints or things they hit before the formal start of naming in each basin. Examples include the 1526 San Francisco hurricane, the 1928 Okeechobee hurricane and the 1938 New England hurricane. Credit for the first usage of personal names for weather is generally given to the Queensland Government Meteorologist Clement Wragge, who named tropical cyclones and anticyclones between 1887 and 1907. Wragge used names drawn from the letters of the Greek alphabet, Greek and Roman mythology and female names to describe weather systems over Australia, New Zealand and the Antarctic. After the new Australian government had failed to create a federal weather bureau and appoint him director, Wragge started naming cyclones after political figures. This system of naming weather systems subsequently fell into disuse for several years after Wragge retired, until it was revived in the latter part of the Second World War. Despite falling into disuse, the naming scheme was occasionally mentioned in the press, with an editorial published in the Launceston Examiner newspaper on October 5, 1935 calling for its return. Wragge 's naming was also mentioned within Sir Napier Shaw 's "Manual of Meteorology '', which likened it to a "child naming waves ''.
After reading about Clement Wragge, George Stewart was inspired to write a novel, "Storm '', about a storm affecting California which was named Maria. The book was widely read after it was published in 1941 by Random House, especially by United States Army Air Corps and United States Navy (USN) meteorologists during World War II. During 1944, United States Army Air Forces (USAAF) forecasters at the newly established Saipan weather center started to informally name typhoons after their wives and girlfriends. This practice became popular amongst meteorologists from the United States Air Force and Navy, who found that it reduced confusion during map discussions, and in 1945 the United States Armed Services publicly adopted a list of women 's names for typhoons of the Pacific. However, they were not able to persuade the United States Weather Bureau (USWB) to start naming Atlantic hurricanes, as the Weather Bureau wanted to be seen as a serious enterprise, and thus felt that it was "not appropriate '' to name tropical cyclones while warning the United States public. They also felt that using women 's names was frivolous and that using the names in official communications would have made them look silly. During 1947 the Air Force Hurricane Office in Miami started using the Joint Army / Navy Phonetic Alphabet to name significant tropical cyclones in the North Atlantic Ocean. These names were used over the next few years in private / internal communications between weather centres and aircraft, and were not included in public bulletins.
During August and September 1950, three tropical cyclones (Hurricanes Baker, Dog and Easy) occurred simultaneously and impacted the United States, which led to confusion within the media and the public. As a result, during the next tropical cyclone (Fox), Grady Norton decided to start using the names in public statements and in the seasonal summary. This practice continued throughout the season, until the system was made official before the start of the next season. During 1952, a new International Phonetic Alphabet was introduced, as the old phonetic alphabet was seen as too Anglocentric. This led to some confusion with what names were being used, as some observers referred to Hurricane Charlie as "Cocoa. '' Ahead of the following season no agreement could be reached over which phonetic alphabet to use, before it was decided to start using a list of female names to name tropical cyclones. During the season the names were used in the press with only a few objections recorded, and as a result public reception to the idea seemed favourable. The same names were reused during 1954 with only one change: Gilda for Gail. However, as Hurricanes Carol, Edna, and Hazel affected the populated Northeastern United States, controversy raged with several protests over the use of women 's names as it was felt to be ungentlemanly or insulting to womanhood, or both. Letters were subsequently received that overwhelmingly supported the practice, with forecasters claiming that 99 % of correspondence received in the Miami Weather Bureau supported the use of women 's names for hurricanes.
Forecasters subsequently decided to continue with the current practice of naming hurricanes after women, but developed a new set of names ahead of the 1955 season with the names Carol, Edna and Hazel retired for the next ten years. However, before the names could be written, a tropical storm was discovered on January 2, 1955 and named Alice. The Representative T. James Tumulty subsequently announced that he intended to introduce legislation that would call on the USWB to abandon its practice of naming hurricanes after women, and suggested that they be named using descriptive terms instead. Until 1960, forecasters decided to develop a new set of names each year. By 1958, the Guam Weather Center had become the Fleet Weather Central / Typhoon Tracking Center on Guam, and had started to name systems as they became tropical storms rather than typhoons. Later that year during the 1958 -- 59 cyclone season, the New Caledonia Meteorological Office started to name tropical cyclones within the Southern Pacific. During 1959 the US Pacific Command Commander in Chief and the Joint Chiefs of Staff decided that the various US Navy and Air Force weather units would become one unit based on Guam entitled the Fleet Weather Central / Joint Typhoon Warning Center, which continued naming the systems for the Pacific basin.
In January 1960, a formal naming scheme was introduced for the South - West Indian Ocean by the Mauritius and Madagascan Weather Services, with the first cyclone being named Alix. Later that year, as meteorology entered a new era with the launching of the world 's first meteorological satellite TIROS - 1, eight lists of tropical cyclone names were prepared for use in the Atlantic and Eastern Pacific basins. In the Atlantic it was decided to rotate these lists every four years, while in the Eastern Pacific the names were designed to be used consecutively before being repeated. During 1963, the Philippine Weather Bureau adopted four sets of female Filipino nicknames ending in "ng '' from A to Y for use in its self - defined area of responsibility. Following the international practice of naming tropical cyclones, the Australian Bureau of Meteorology decided at a conference in October 1963 that they would start naming tropical cyclones after women at the start of the 1963 -- 64 cyclone season. The first Western Australian cyclone was subsequently named Bessie on January 6, 1964. In 1965, after two of the Eastern Pacific lists of names had been used, it was decided to start recycling the sets of names on an annual basis like in the Atlantic.
At its 1969 national conference, the National Organization for Women passed a motion that called for the National Hurricane Center (NHC) not to name tropical cyclones using only female names. Later that year, during the 1969 -- 70 cyclone season, the New Zealand Meteorological Service (NZMS) office in Fiji started to name tropical cyclones that developed within the South Pacific basin, with the first named Alice on January 4, 1970. Within the Atlantic basin the four lists of names were used until 1971, when the newly established United States National Oceanic and Atmospheric Administration decided to inaugurate a ten - year list of names for the basin. Roxcy Bolton subsequently petitioned the 1971, 1972 and 1973 interdepartmental hurricane conferences to stop the female naming; however, the National Hurricane Center responded by stating that there was a 20: 1 positive response to the usage of female names. In February 1975, the NZMS decided to incorporate male names into the naming lists for the South Pacific, from the following season after a request from the Fiji National Council of Women who considered the practice discriminatory. At around the same time the Australian Science Minister ordered that tropical cyclones within the Australian region should carry both men 's and women 's names, as the minister thought "that both sexes should bear the odium of the devastation caused by cyclones. '' Male names were subsequently added to the lists for the Southern Pacific and each of the three Australian tropical cyclone warning centres ahead of the 1975 -- 76 season.
During 1977 the World Meteorological Organization decided to form a hurricane committee, which held its first meeting during May 1978 and took control of the Atlantic hurricane naming lists. During 1978 the Secretary of Commerce Juanita Kreps ordered NOAA administrator Robert White to cease the sole usage of female names for hurricanes. Robert White subsequently passed the order on to the Director of NHC Neil Frank, who attended the first meeting of the hurricane committee and requested that both men 's and women 's names be used for the Atlantic. The committee subsequently decided to accept the proposal and adopted five new lists of male and female names to be used the following year. The lists also contained several Spanish and French names, so that they could reflect the cultures and languages used within the Atlantic Ocean. After an agreement was reached between Mexico and the United States, six new sets of male / female names were implemented for the Eastern Pacific basin during 1978. A new list was also drawn up during the year for the Western Pacific and was implemented after Typhoon Bess and the 1979 tropical cyclone conference.
As the dual sex naming of tropical cyclones started in the Northern Hemisphere, the NZMS considered adding ethnic Pacific names to the naming lists rather than the European names that were currently used. As a result of the many languages and cultures in the Pacific there was a lot of discussion surrounding this matter, with one name, "Oni, '' being dropped as it meant "the end of the world '' in one language. One proposal suggested that cyclones be named after the country nearest to which they formed; however, this was dropped when it was realized that a cyclone might be less destructive in its formative stage than later in its development. Eventually it was decided to combine names from all over the South Pacific into a single list at a training course, where each course member provided a list of names that were short, easily pronounced, culturally acceptable throughout the Pacific and did not contain any idiosyncrasies. These names were then collated, edited for suitability, and cross-checked with the group for acceptability. It was intended that the four lists of names should be alphabetical with alternating male and female names while using only ethnic names. However, it was not possible to complete the lists using only ethnic names. As a result, there was a scattering of European names in the final lists, which have been used by the Fiji Meteorological Service and NZMS since the 1980 -- 81 season. During October 1985 the Eastern Pacific Hurricane Center had to request an additional list, after the names preselected for that season were used up. As a result, the names Xina, York, Zelda, Xavier, Yolanda and Zeke were subsequently added to the naming lists, while a contingency plan of using the Greek alphabet if all of the names were used up was introduced.
During the 30th session of the ESCAP / WMO Typhoon Committee in November 1997, a proposal was put forward by Hong Kong to give Asian typhoons local names and to stop using the European and American names that had been used since 1945. The committee 's Training and Research Coordination Group was subsequently tasked to consult with members and work out the details of the scheme in order to present a list of names for approval at the 31st session. During August 1998, the group met and decided that each member of the committee would be invited to contribute ten names to the list and that five principles would be followed for the selection of names. It was also agreed that each name would have to be approved by each member and that a single objection would be enough to veto a name. A list of 140 names was subsequently drawn up and submitted to the Typhoon Committee 's 32nd session, who after a lengthy discussion approved the list and decided to implement it on January 1, 2000. It was also decided that the Japan Meteorological Agency would name the systems rather than the Joint Typhoon Warning Center.
During its annual session in 2000, the WMO / ESCAP Panel on North Indian Tropical Cyclones agreed in principle to start assigning names to cyclonic storms that developed within the North Indian Ocean. As a result of this, the panel requested that each member country submit a list of ten names to a rapporteur by the end of 2000. At the 2001 session, the rapporteur reported that of the eight countries involved, only India had refused to submit a list of names, as it had some reservations about assigning names to tropical cyclones. The panel then studied the names and felt that some of the names would not be appealing to the public or the media and thus requested that members submit new lists of names. During 2002 the rapporteur reported that there had been a poor response by member countries in resubmitting their lists of names. Over the next year, each country except India submitted a fresh list. By the 2004 session, India had still not submitted its list despite promising to do so. However, the rapporteur presented the lists of names that would be used with a gap left for India 's names. The rapporteur also recommended that the naming lists be used on an experimental basis during the season, starting in May or June 2004. The naming lists were then completed in May 2004, after India submitted their names. However, the lists were not used until September 2004, when the first tropical cyclone was named Onil by the India Meteorological Department (IMD).
At the 22nd hurricane committee in 2000 it was decided that tropical cyclones that moved from the Atlantic to the Eastern Pacific basin and vice versa would no longer be renamed. Ahead of the 2000 -- 01 season it was decided to start using male names, as well as female names for tropical cyclones developing in the South - West Indian Ocean. RSMC La Reunion subsequently proposed to the fifteenth session of the RA I Tropical Cyclone Committee for the South - West Indian Ocean during September 2001, that the basin adopt a single circular list of names. Along with the RA V Tropical Cyclone Committee, RSMC La Reunion also proposed to the session that a tropical cyclone have only one name during its lifetime. However, both of these proposals were rejected in favour of continuing an annual list of names and to rename systems when they moved across 90 ° E into the South - West Indian Ocean. During the 2002 Atlantic hurricane season the naming of subtropical cyclones restarted, with names assigned to systems from the main list of names drawn up for that year.
During March 2004, a rare tropical cyclone developed within the Southern Atlantic, about 1,010 km (630 mi) to the east - southeast of Florianópolis in southern Brazil. As the system was threatening the Brazilian state of Santa Catarina, a newspaper used the headline "Furacão Catarina, '' which was presumed to mean "furacão (hurricane) threatening (Santa) Catarina (the state) ''. However, when the international press started monitoring the system, it was assumed that "Furacão Catarina '' meant "Cyclone Catarina '' and that it had been formally named in the usual way. During the 2005 Atlantic hurricane season the names pre-assigned for the North Atlantic basin were exhausted and as a result names from the Greek alphabet were used. There were subsequently a couple of attempts to get rid of the Greek names, as they are seen to be inconsistent with the standard naming convention used for tropical cyclones, and are generally unknown and confusing to the public. However, none of the attempts have succeeded and thus the Greek alphabet will be used should the lists ever be used up again. Ahead of the 2007 hurricane season, the Central Pacific Hurricane Center (CPHC) and the Hawaii State Civil Defense requested that the hurricane committee retire eleven names from the Eastern Pacific naming lists. However, the committee declined the request and noted that its criteria for the retirement of names was "well defined and very strict. '' It was felt that while the systems may have had a significant impact on the Hawaiian Islands, none of the impacts were major enough to warrant the retirement of the names. It was also noted that the committee had previously not retired names for systems that had a greater impact than those that had been submitted. The CPHC also introduced a revised set of Hawaiian names for the Central Pacific, after they had worked with the University of Hawaii Hawaiian Studies Department to ensure the correct meaning and appropriate historical and cultural use of the names.
On April 22, 2008 the newly established tropical cyclone warning centre in Jakarta, Indonesia named its first system: Durga, before two sets of Indonesian names were established for their area of responsibility ahead of the 2008 -- 09 season. At the same time the Australian Bureau of Meteorology merged their three lists into one national list of names. The issue of tropical cyclones being renamed when they moved across 90 ° E into the South - West Indian Ocean was subsequently brought up during October 2008 at the 18th session of the RA I Tropical Cyclone Committee. However, it was decided to postpone the matter until the following committee meeting so that various consultations could take place. During the 2009 Tropical Cyclone RSMCs / TCWCs Technical Coordination Meeting, it was reaffirmed that a tropical cyclone name should be retained throughout a system 's lifetime, including when moving from one basin to another, to avoid confusion. As a result, it was proposed at the following year 's RA I tropical cyclone committee that systems stop being renamed when they moved into the South - West Indian Ocean from the Australian region. It was subsequently agreed that during an interim period, cyclones that moved into the basin would have a name attached to their existing name, before the renaming practice was stopped at the start of the 2012 -- 13 season. Tropical Cyclone Bruce was subsequently the first tropical cyclone not to be renamed when it moved into the South - West Indian Ocean during 2013 - 14. On March 12, 2010, public and private weather services in Southern Brazil decided to name a tropical storm Anita in order to avoid confusion in future references. A naming list was subsequently set up by the Brazilian Navy Hydrographic Center, with the names Arani, Bapo and Cari taken from that list during 2011 and 2015.
At its twenty - first session in 2015, the RA I Tropical Cyclone Committee reviewed the arrangements for naming tropical storms and decided that the procedure was in need of a "very urgent change ''. In particular it was noted that the procedure did not take into account any of the significant improvements in the science surrounding tropical cyclones and that it was biased due to inappropriate links with some national warning systems. As a result the committee decided to keep the current naming procedure for the next few years and form a task force in order to develop an alternative cyclone naming procedure.
At present tropical cyclones are officially named by one of eleven warning centres and retain their names throughout their lifetimes to provide ease of communication between forecasters and the general public regarding forecasts, watches, and warnings. Due to the potential for longevity and multiple concurrent storms, the names are thought to reduce the confusion about what storm is being described. Names are assigned in order from predetermined lists once storms have one, three, or ten - minute sustained wind speeds of more than 65 km / h (40 mph) depending on which basin it originates in. However, standards vary from basin to basin, with some tropical depressions named in the Western Pacific, while tropical cyclones have to have gale - force winds occurring near the center before they are named within the Southern Hemisphere.
Any member of the World Meteorological Organisation 's hurricane, typhoon and tropical cyclone committees can request that the name of a tropical cyclone be retired or withdrawn from the various tropical cyclone naming lists. A name is retired or withdrawn if a consensus or majority of members agree that the tropical cyclone has acquired a special notoriety, such as causing a large number of deaths, widespread damage or other major impacts, or for other special reasons. Any tropical cyclone names assigned by the Papua New Guinea National Weather Service are automatically retired regardless of any damage caused. A replacement name is then submitted to the committee concerned and voted upon, but these names can be rejected and replaced for various reasons. These reasons include the spelling and pronunciation of the name, its similarity to the name of a recent tropical cyclone or to one on another list of names, and the length of the name for modern communication channels such as social media. PAGASA also retires the names of significant tropical cyclones when they have caused at least ₱ 1 billion in damage and / or have caused at least 300 deaths. There are no names retired within the South - West Indian Ocean, as names that are used are automatically removed from the three naming lists used in that basin.
|
what does it mean when congress votes present | Nuclear option - wikipedia
The nuclear option (or constitutional option) is a parliamentary procedure that allows the United States Senate to override a rule -- specifically the 60 - vote rule to close debate -- by a simple majority of 51 votes, rather than the two - thirds supermajority normally required to amend the rules. The option is invoked when the majority leader raises a point of order that only a simple majority is needed to close debate on certain matters. The presiding officer denies the point of order based on Senate rules, but the ruling of the chair is then appealed and overturned by majority vote, establishing new precedent.
This procedure effectively allows the Senate to decide any issue by simple majority vote, regardless of existing procedural rules such as Rule XXII which requires the consent of 60 senators (out of 100) to end a filibuster for legislation, and 67 for amending a Senate rule. The term "nuclear option '' is an analogy to nuclear weapons being the most extreme option in warfare.
In November 2013, Senate Democrats led by Harry Reid used the nuclear option to eliminate the 60 - vote rule on executive branch nominations and federal judicial appointments, but not for the Supreme Court. In April 2017, Senate Republicans led by Mitch McConnell extended the nuclear option to Supreme Court nominations in order to end debate on the nomination of Neil Gorsuch.
As of January 2018, a three - fifths majority vote is still required to end debates on legislation.
Beginning with a rules change in 1806, the Senate has traditionally not restricted the total time allowed for debate. In 1917, Rule XXII was amended to allow for ending debate (invoking "cloture '') with a two - thirds majority, later reduced in 1975 to three - fifths of all senators "duly chosen and sworn '' (usually 60). Thus, although a bill might have majority support, a minority of 41 or more senators can still prevent a final vote through endless debate, effectively defeating the bill. This tactic is known as a filibuster.
Since the 1970s, the Senate has also used a "two - track '' procedure whereby Senate business may continue on other topics while one item is filibustered. Since filibusters no longer required the minority to actually hold the floor and bring all other business to a halt, the mere threat of a filibuster has gradually become normalized. In the modern Senate, this means that any controversial item now typically requires 60 votes to advance, unless a specific exception limiting the time for debate applies.
Changing Rule XXII to eliminate the 60 - vote rule is made difficult by the rules themselves. Rule XXII sec. 2 states that to end debate on any proposal "to amend the Senate rules... the necessary affirmative vote shall be two - thirds of the Senators present and voting. '' This is typically 67 senators assuming all are voting. Meanwhile, Rule V sec. 2 states that "[t]he rules of the Senate shall continue from one Congress to the next Congress unless they are changed as provided in these rules. '' Effectively, these provisions mean that the general 60 - vote cloture rule in Rule XXII can never be modified without the approval of 67 senators.
The "nuclear option '' is invoked when a simple majority of the Senate overrides the normal consequences of the rules above. Following a failed cloture vote, the majority leader raises a point of order that Rule XXII should be interpreted -- or disregarded on constitutional grounds -- to require only a simple majority to invoke cloture on a certain type of business, such as nominations. The presiding officer, relying on the advice of the Senate Parliamentarian, then denies the point of order based upon rules and precedent. But the ruling of the chair is then appealed, and is overturned by simple majority vote. For example, the option was invoked on November 21, 2013, as follows:
Mr. REID. I raise a point of order that the vote on cloture under rule XXII for all nominations other than for the Supreme Court of the United States is by majority vote.
The PRESIDENT pro tempore. Under the rules, the point of order is not sustained.
Mr. REID. I appeal the ruling of the Chair and ask for the yeas and nays.
(48 -- 52 vote on upholding ruling of the chair)
The PRESIDENT pro tempore. The decision of the Chair is not sustained.
The PRESIDENT pro tempore. * * * Under the precedent set by the Senate today, November 21, 2013, the threshold for cloture on nominations, not including those to the Supreme Court of the United States, is now a majority. That is the ruling of the Chair.
A new precedent is thus established allowing for cloture to be invoked by a simple majority on certain types of actions. These and other Senate precedents will then be relied upon by future Parliamentarians in advising the chair, effectively eliminating the 60 - vote barrier going forward. (Riddick 's Senate Procedure is a compilation by Senate parliamentarians of precedents established throughout the entire history of the Senate by direct rulings of the chair, actions relating to rulings of the chair, or direct Senate action.)
The legality of the nuclear option has been challenged. For example, former Senate Parliamentarian Alan Frumin expressed opposition to the nuclear option in 2005. It 's been reported that a Congressional Research Service report "leaves little doubt '' that the nuclear option would not be based on previous precedents of the Senate. However, its validity has not been seriously challenged since being invoked by both parties in 2013 and 2017, at least with regard to invoking cloture on judicial nominations by simple majority vote.
Senator Ted Stevens (R - Alaska) suggested using a ruling of the chair to defeat a filibuster of judicial nominees in February 2003. The code word for the plan was "Hulk ''. Weeks later, Sen. Trent Lott (R - Miss.) coined the term nuclear option in March 2003 because the maneuver was seen as a last resort with possibly major consequences for both sides. The metaphor of a nuclear strike refers to the majority party unilaterally imposing a change to the filibuster rule, which might provoke retaliation by the minority party.
The alternative term "constitutional option '' is often used with particular regard to confirmation of executive and judicial nominations, on the rationale that the United States Constitution requires these nominations to receive the "advice and consent '' of the Senate. Proponents of this term argue that the Constitution implies that the Senate can act by a majority vote unless the Constitution itself requires a supermajority, as it does for certain measures such as the ratification of treaties. By effectively requiring a supermajority of the Senate to fulfill this function, proponents believe that the current Senate practice prevents the body from exercising its constitutional mandate, and that the remedy is therefore the "constitutional option. ''
The first set of Senate rules included a procedure to limit debate called "moving the previous question. '' This rule was dropped in 1806 in the misunderstanding that it was redundant. Starting in 1837, senators began taking advantage of this gap in the rules by giving lengthy speeches so as to prevent specific measures they opposed from being voted on, a procedure called filibustering.
In 1890, Senator Nelson Aldrich (R - RI) threatened to break a Democratic filibuster of a Federal Election Bill (which would ban any prohibitions on the black vote) by invoking a procedure called "appeal from the chair. '' At this time, there was no cloture rule or other regular method to force an immediate vote. Aldrich 's plan was to demand an immediate vote by making a point of order. If, as expected, the presiding officer overrules the point, Aldrich would then appeal the ruling and the appeal would be decided by a majority vote of the Senate. (This plan would not work today because appeals from the chair are debatable under modern rules.) If a majority voted to limit debate, a precedent would have been established to allow debate to be limited by majority vote. Aldrich 's plan was procedurally similar to the modern option, but it stayed within the formal rules of the Senate and did not invoke the Constitution. In the end, the Democrats were able to muster a majority to table the bill, so neither Aldrich 's proposed point of order nor his proposed appeal was ever actually moved.
In 1892, the U.S. Supreme Court ruled in United States v. Ballin that both houses of Congress are parliamentary bodies, implying that they may make procedural rules by majority vote.
The history of the constitutional option can be traced to a 1917 opinion by Senator Thomas J. Walsh (Democrat of Montana). Walsh contended that the U.S. Constitution provided the basis by which a newly commenced Senate could disregard procedural rules established by previous Senates, and had the right to choose its own procedural rules based on a simple majority vote despite the two - thirds requirement in the rules. "When the Constitution says, ' Each House may determine its rules of proceedings, ' it means that each House may, by a majority vote, a quorum present, determine its rules, '' Walsh told the Senate. Opponents countered that Walsh 's constitutional option would lead to procedural chaos, but his argument was a key factor in the adoption of the first cloture rule later that year.
In 1957, Vice President Richard Nixon (and thus President of the Senate) wrote an advisory opinion that no Senate may constitutionally enact a rule that deprives a future Senate of the right to approve its own rules by the vote of a simple majority. (Nixon made clear that he was speaking for himself only, not making a formal ruling.) Nixon 's opinion, along with similar opinions by Hubert Humphrey and Nelson Rockefeller, has been cited as precedent to support the view that the Senate may amend its rules at the beginning of the session with a simple majority vote.
The option was officially moved by Senator Clinton P. Anderson (D - NM) (1963), Senator George McGovern (D - SD) (1967), and Senator Frank Church (D - ID) (1969), but was each time defeated or tabled by the Senate.
A series of votes in 1975 have been cited as a precedent for the nuclear option, although some of these were reconsidered shortly thereafter. According to one account, the option was arguably endorsed by the Senate three times in 1975 during a debate concerning the cloture requirement. A compromise was reached to reduce the cloture requirement from two - thirds of those voting (67 votes if 100 Senators were present) to three - fifths of the current Senate (60 votes if there were no current vacancies) and also to approve a point of order revoking the earlier three votes in which the Constitutional option had been invoked. (This was an effort to reverse the precedent that had been set for cloture by majority vote).
Senator Robert Byrd (D - WV) was later able to effect changes in Senate procedures by majority vote four times when he was majority leader without the support of two - thirds of senators present and voting (which would have been necessary to invoke cloture on a motion for an amendment to the Rules): to ban post-cloture filibustering (1977), to adopt a rule to limit amendments to an appropriations bill (1979), to allow a senator to make a non-debatable motion to bring a nomination to the floor (1980), and to ban filibustering during a roll call vote (1987). However, none of these procedural changes affected the ultimate ability of a 41 - vote minority to block final action on a matter before the Senate via filibuster.
The maneuver was brought to prominence in 2005 when Majority Leader Bill Frist (Republican of Tennessee) threatened its use to end Democratic - led filibusters of judicial nominees submitted by President George W. Bush. In response to this threat, Democrats threatened to shut down the Senate and prevent consideration of all routine and legislative Senate business. The ultimate confrontation was prevented by the Gang of 14, a group of seven Democratic and seven Republican Senators, all of whom agreed to oppose the nuclear option and oppose filibusters of judicial nominees, except in extraordinary circumstances. Several of the blocked nominees were brought to the floor, voted upon and approved as specified in the agreement, and others were dropped and did not come up for a vote, as implied by the agreement.
In 2011, with a Democratic majority in the Senate (but not a supermajority), Senators Jeff Merkley (D - Ore.) and Tom Udall (D-N. M.) proposed "a sweeping filibuster reform package '' to be implemented via the constitutional option but Majority Leader Harry Reid dissuaded them from pushing it forward. In October 2011, however, Reid triggered a more modest change in Senate precedents. In a 51 - 48 vote, the Senate prohibited any motion to waive the rules after a filibuster is defeated, although this change did not affect the ultimate ability of a 41 - vote minority to block final action via an initial filibuster.
The nuclear option was raised again following the congressional elections of 2012, this time with Senate Democrats in the majority (but short of a supermajority). The Democrats had been the majority party in the Senate since 2007, but only briefly did they have the 60 votes necessary to halt a filibuster. The Hill reported that Democrats would "likely '' use the nuclear option in January 2013 to effect filibuster reform, but the two parties ultimately reached a compromise, avoiding the need for the nuclear option.
In the end, negotiation between the two parties resulted in two packages of "modest '' amendments to the rules on filibusters that were approved by the Senate on January 24, 2013, without triggering the nuclear option. Changes to the standing orders affecting just the 2013 - 14 Congress were passed by a vote of 78 to 16, eliminating the minority party 's right to filibuster a bill as long as each party had been permitted to present at least two amendments to it. Changes to the permanent Senate rules were passed by a vote of 86 to 9.
In July 2013, the nuclear option was raised as nominations were being blocked by Senate Republicans as Senate Democrats prepared to push through a change to the chamber 's filibuster rule. On July 16, the Senate Democratic majority came within hours of using the nuclear option to win confirmation of seven of President Obama 's long - delayed executive branch appointments. The confrontation was avoided when the White House withdrew two of the nominations in exchange for the other five being brought to the floor for a vote, where they were confirmed.
On November 21, 2013, the Senate voted 52 -- 48, with all Republicans and three Democrats voting against, to rule that "the vote on cloture under Rule XXII for all nominations other than for the Supreme Court of the United States is by majority vote, '' even though the text of the rule requires "three - fifths of the senators duly chosen and sworn '' to end debate. This ruling 's precedent eliminated the 60 - vote requirement to end a filibuster against all executive branch nominees and judicial nominees other than to the Supreme Court. The text of Rule XXII was never changed. A 3 / 5 supermajority was still required to end filibusters unrelated to those nominees, such as for legislation and Supreme Court nominees.
The Democrats ' stated motivation for this change was expansion of filibustering by Republicans during the Obama administration, in particular blocking three nominations to the United States Court of Appeals for the District of Columbia Circuit. Republicans had asserted that the D.C. Circuit was underworked, and also cited the need for cost reduction by reducing the number of judges in that circuit. At the time of the vote, 59 executive branch nominees and 17 judicial nominees were awaiting confirmation.
Prior to November 21, 2013, in the entire history of the nation there had been only 168 cloture motions filed (or reconsidered) with regard to nominations. Nearly half of them (82) had been during the Obama Administration, but those cloture motions were often filed merely to speed things along, rather than in response to any filibuster. In contrast, there were just 38 cloture motions on nominations during the preceding eight years under President George W. Bush. Most of those cloture votes successfully ended debate, and therefore most of those nominees cleared the hurdle. Obama won Senate confirmation for 30 out of 42 federal appeals court nominations, compared with Bush 's 35 out of 52.
Regarding Obama 's federal district court nominations, the Senate approved 143 out of 173 as of November 2013, compared to George W. Bush 's first term 170 of 179, Bill Clinton 's first term 170 of 198, and George H.W. Bush 's 150 of 195. Filibusters were used on 20 Obama nominations to U.S. District Court positions, but Republicans had allowed confirmation of 19 out of the 20 before the nuclear option was invoked.
On April 6, 2017, Senate Republicans invoked the nuclear option to remove the Supreme Court exception created in 2013. This was after Senate Democrats filibustered the nomination of Neil Gorsuch to the Supreme Court of the United States, after the Senate Republicans had previously refused to take up Merrick Garland 's nomination by President Obama in 2016.
Following elimination of the 60 - vote rule for nominations in 2017, senators expressed concerns that the 60 - vote rule will eventually be eliminated for legislation via the nuclear option.
President Donald Trump has spoken out against the 60 - vote requirement for legislation on several occasions. On January 21, 2018, President Trump said on Twitter that if the shutdown stalemate continued, Republicans should consider the "nuclear option '' in the Senate.
Policy debates surrounding the nuclear option -- a tool to implement a rule change -- are closely related to arguments regarding the 60 - vote requirement imposed by Rule XXII. Issues include:
The U.S. Constitution does not explicitly address how many votes are required for passage of a bill or confirmation of a nominee. Regarding nominations, Article II, Section 2, of the U.S. Constitution says the president "shall nominate, and by and with the Advice and Consent of the Senate, shall appoint... Judges... '' The Constitution includes several explicit supermajority rules, including requiring a two - thirds majority in the Senate for impeachment, confirming treaties, expelling one of its members, and concurring in the proposal of Constitutional Amendments.
Supporters of a simple majority standard argue that the Constitution 's silence implies that a simple majority is sufficient; they contrast this with Article II 's language for Senate confirmation of treaties. Regarding nominations, they argue that the Appointments Clause 's lack of a supermajority requirement is evidence that the Framers consciously rejected such a requirement. They also argue that the general rule of parliamentary systems "is that majorities govern in a legislative body, unless another rule is expressly provided. ''
From this, supporters argue that a simple - majority rule would bring current practices into line with the Framers ' original intent -- hence supporters ' preferred nomenclature of the "constitutional option ''. They argue that the filibuster of presidential nominees effectively establishes a 60 - vote threshold for approval of judicial nominees instead of the 51 - vote standard implied by the Constitution. A number of existing Judges and Justices were confirmed with fewer than 60 votes, including Supreme Court Justice Clarence Thomas (confirmed in a 52 -- 48 vote in 1991).
Supporters have claimed that the minority party is engaged in obstruction. In 2005, Republicans argued that Democrats obstructed the approval of the president 's nominees in violation of the intent of the U.S. Constitution. President Bush had nominated forty - six candidates to federal appeals courts. Thirty - six were confirmed, ten were blocked, and seven were renominated in spring 2005. Democrats responded that 63 of President Clinton 's 248 nominees were blocked via procedural means at the committee level, denying them a confirmation vote and leaving the positions available for Bush to fill.
In 2005, pro-nuclear option Republicans argued that they had won recent elections and in a democracy the winners rule, not the minority. They also argued that while the Constitution requires supermajorities for some purposes (such as 2 / 3 needed to ratify a treaty), the Founders did not require a supermajority for confirmations, and that the Constitution thus presupposes a majority vote for confirmations.
Proponents of the 60 - vote rule point out that while the Constitution requires two - thirds majorities for actions such as treaty ratification and proposed constitutional amendments, it is silent on other matters. Instead, Article I, Section V of the Constitution permits and mandates that each house of Congress establish its own rules. Regarding nominations, they contend that the word "Advice '' in the Constitution refers to consultation between the Senate and the President with regard to the use of the President 's power to make nominations.
Supporters of the right to filibuster argue that the Senate has a long tradition of requiring broad support to do business, due in part to the threat of the filibuster, and that this protects the minority. Starting with the first Senate in 1789, the rules left no room for a filibuster; a simple majority could move to bring the matter to a vote. However, in 1806, the rule allowing a majority to bring the previous question ceased to exist. The filibuster became possible, and since any Senator could now block a vote, 100 % support was required to bring the matter to a vote. A rule change in 1917 introduced cloture, permitting a two - thirds majority of those present to end debate, and a further change in 1975 reduced the cloture requirement to three - fifths of the entire Senate.
Proponents of the 60 - vote rule have argued that the Senate is a less - than - democratic body that could conceivably allow a simple majority of senators, representing a minority of the national population, to enact legislation or confirm appointees lacking popular support.
In 2005, Democrats claimed the nuclear option was an attempt by Senate Republicans to hand confirmation power to themselves. Rather than require the President to nominate someone who will get broad support in the Senate, the nuclear option would allow Judges to not only be "nominated to the Court by a Republican president, but also be confirmed by only Republican Senators in party - line votes. ''
Of the nine U.S. Supreme Court Justices seated as of May 2005, six were confirmed with the support of 90 or more Senators, two were confirmed with the support of at least 60 Senators, and only one (Thomas) was confirmed with the support of fewer than 60 Senators. However, since John G. Roberts was confirmed, no nominee has received more than 68 votes. Conservative nominees for Appellate Courts that were given a vote through the "Gang of 14 '' were confirmed almost exclusively along party lines: Priscilla Owen was confirmed 55 -- 43, Janice Rogers Brown was confirmed 56 -- 43, and William Pryor was confirmed 53 -- 45.
In 2005, polling indicated public support for an active Senate role in its "advise and consent '' capacity. An Associated Press - Ipsos poll released May 20, 2005, found 78 percent of Americans believe the Senate should take an "assertive role '' examining judicial nominees rather than just give the president the benefit of the doubt. The agreement to stave off the "nuclear option '' reached by 14 moderate Senators supports a strong interpretation of "Advice and Consent '' from the Constitution:
We believe that, under Article II, Section 2, of the United States Constitution, the word "Advice '' speaks to consultation between the Senate and the President with regard to the use of the president 's power to make nominations. We encourage the Executive branch of government to consult with members of the Senate, both Democratic and Republican, prior to submitting a judicial nomination to the Senate for consideration.
Some fear that removing the 60 - vote rule for judicial nominations would allow the courts to be "packed '' by a party that controls the other two branches of the government. As of November 2013, Republican presidents have appointed five of the nine justices on the Supreme Court and all four of the chief justices since the Truman administration.
In 1937, Franklin Delano Roosevelt, a Democrat, sought to alter the court through the Judiciary Reorganization Bill of 1937 (a.k.a. "the court - packing plan ''). Because the Constitution does not specify the number of Supreme Court justices, the bill would have added a seat for every justice over the age of 70 1⁄2, creating a new majority on the Court. Roosevelt allowed the bill to be scuttled after Justice Owen Roberts began upholding the constitutionality of his New Deal programs.
Elimination of the 60 - vote rule is a significantly less drastic strategy, only allowing the majority to fill existing vacancies on the Court. However, if the two strategies are combined, a party that controls the Presidency and has a simple majority in the Senate, as FDR 's Democrats did in 1937, could quickly gain control of the Court as well.
In general, senators from both parties have been very opportunistic in making these policy arguments. Senators in the majority often argue for simple majority rule, especially for nominations, while senators in the minority nearly always defend the 60 - vote rule. However, even as a simple - majority rule for nominations was progressively adopted in 2013 and 2017, a significant bipartisan majority has remained opposed to eliminating the 60 - vote rule for legislation.
Examples of opportunism abound. In 2005 Republicans pointed out that several Democrats once opposed the filibuster on judicial nominees, and only recently changed their views as they had no other means of stopping Bush 's judicial appointees.
However, Republicans were staunch supporters of the filibuster when they were a minority party and frequently employed it to block legislation. Republicans continued to support the filibuster for general legislation -- the Republican leadership insisted that the proposed rule change would only affect judicial nominations. According to the Democrats, arguments that a simple majority should prevail apply equally well to all votes where the Constitution does not specify a three - fifths majority. Republicans stated that there is a difference between the filibustering of legislation -- which affects only the Senate 's own constitutional prerogative to consider new laws -- and the filibustering of a President 's judicial or executive nominees, which arguably impinges on the constitutional powers of the Executive branch.
Beyond the specific context of U.S. federal judicial appointments, the term "nuclear option '' has come to be used generically for a procedural maneuver with potentially serious consequences, to be used as a last resort to overcome political opposition. In a recent legal ruling on the validity of the Hunting Act 2004 the UK House of Lords used "nuclear option '' to describe the events of 1832, when the then - government threatened to create hundreds of new Whig peers to force the Tory - dominated Lords to accept the Reform Act 1832. (Nuclear weapons were not theorized until the 20th century, so the government 's threat was not labeled as "nuclear '' at the time.)
The term is also used in connection with procedural maneuvers in various state senates.
The nuclear option is not to be confused with reconciliation, which allows issues related to the annual budget to be decided by a majority vote without the possibility of filibuster.
|
when do you use the moist adiabatic rate | Lapse rate - wikipedia
The lapse rate is the rate at which Earth 's atmospheric temperature decreases with increasing altitude (equivalently, the rate at which it increases with decreasing altitude). The term arises from the word lapse, in the sense of a gradual change.
Although this concept is most often applied to Earth 's troposphere, lapse rate can be extended to any gravitationally supported parcel of gas.
A formal definition from the Glossary of Meteorology is:
In general, a lapse rate is the negative of the rate of temperature change with altitude change, thus:
Γ = − dT / dz
where Γ is the lapse rate given in units of temperature divided by units of altitude, T = temperature, and z = altitude.
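As a minimal illustration of this definition (the station readings below are hypothetical, not taken from the source), a short Python sketch computes the lapse rate from two temperature measurements at different altitudes:

def lapse_rate(t_lower_c, t_upper_c, z_lower_m, z_upper_m):
    """Lapse rate in degrees C per km, defined as the negative of dT / dz."""
    d_temp = t_upper_c - t_lower_c  # temperature change, degrees C
    d_alt_km = (z_upper_m - z_lower_m) / 1000.0  # altitude change, km
    return -d_temp / d_alt_km

# Hypothetical soundings: 15.0 degrees C at 200 m and 8.5 degrees C at 1,200 m
# give a lapse rate of 6.5 degrees C per km.
print(lapse_rate(15.0, 8.5, 200.0, 1200.0))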
The temperature profile of the atmosphere is a result of an interaction between radiation and convection. Sunlight hits the ground and heats it. The ground then heats the air at the surface. If radiation were the only way to transfer heat from the ground to space, the greenhouse effect of gases in the atmosphere would keep the ground at roughly 333 K (60 ° C; 140 ° F), and the temperature would decay exponentially with height.
However, when air is hot, it tends to expand, which lowers its density. Thus, hot air tends to rise and transfer heat upward. This is the process of convection. Convection comes to equilibrium when a parcel of air at a given altitude has the same density as the other air at the same elevation.
When a parcel of air expands, it pushes on the air around it, doing thermodynamic work. Since the parcel does work but gains no heat, it loses internal energy so that its temperature decreases. The process of expanding and contracting without exchanging heat is an adiabatic process. The term adiabatic means that no heat transfer occurs into or out of the parcel. Air has low thermal conductivity, and the bodies of air involved are very large, so transfer of heat by conduction is negligibly small.
The adiabatic process for air has a characteristic temperature - pressure curve, so the process determines the lapse rate. When the air contains little water, this lapse rate is known as the dry adiabatic lapse rate: the rate of temperature decrease is 9.8 ° C / km (5.38 ° F per 1,000 ft) (3.0 ° C / 1,000 ft). The reverse occurs for a sinking parcel of air.
Note that only the troposphere (up to approximately 12 kilometres (39,000 ft) of altitude) in the Earth 's atmosphere undergoes convection: the stratosphere does not generally convect. However, some exceptionally energetic convection processes -- notably volcanic eruption columns and overshooting tops associated with severe supercell thunderstorms -- may locally and temporarily inject convection through the tropopause and into the stratosphere.
These calculations use a very simple model of an atmosphere, either dry or moist, within a still vertical column at equilibrium.
Thermodynamics defines an adiabatic process as:

P dV = − (V dP) / γ

the first law of thermodynamics can be written as

m c_v dT − (α m / γ) dP = 0

Also, since α = V / m and γ = c_p / c_v, we can show that:

c_p dT − α dP = 0

where c_p is the specific heat at constant pressure and α is the specific volume.

Assuming an atmosphere in hydrostatic equilibrium:

dP = − ρ g dz

where g is the standard gravity and ρ is the density. Combining these two equations to eliminate the pressure, one arrives at the result for the dry adiabatic lapse rate (DALR),

Γ_d = − dT / dz = g / c_p ≈ 9.8 °C / km
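As a quick numerical check of this result, the Python lines below evaluate Γ_d = g / c_p. The constants are standard textbook values assumed for this sketch, not values given in the text.

g = 9.81       # standard gravity, m/s^2 (assumed textbook value)
c_p = 1004.0   # specific heat of dry air at constant pressure, J/(kg K)

dalr = g / c_p                   # kelvin per metre
print(round(dalr * 1000.0, 2))   # ~9.77 K/km, matching the quoted 9.8 degC/km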
The presence of water within the atmosphere complicates the process of convection. Water vapor contains latent heat of vaporization. As a parcel of air rises and cools, it eventually becomes saturated; that is, the vapor pressure of water in equilibrium with liquid water has decreased (as temperature has decreased) to the point where it is equal to the actual vapor pressure of water. With further decrease in temperature the water vapor in excess of the equilibrium amount condenses, forming cloud, and releasing heat (latent heat of condensation). Before saturation, the rising air follows the dry adiabatic lapse rate. After saturation, the rising air follows the moist adiabatic lapse rate. The release of latent heat is an important source of energy in the development of thunderstorms.
While the dry adiabatic lapse rate is a constant 9.8 ° C / km (5.38 ° F per 1,000 ft, 3 ° C / 1,000 ft), the moist adiabatic lapse rate varies strongly with temperature. A typical value is around 5 ° C / km (9 ° F / km, 2.7 ° F / 1,000 ft, 1.5 ° C / 1,000 ft). The formula for the moist adiabatic lapse rate is given by:

Γ_w = g (1 + (H_v r) / (R_sd T)) / (c_pd + (H_v² r) / (R_sw T²))

where Γ_w is the wet (moist) adiabatic lapse rate, g is the acceleration due to gravity, H_v is the heat of vaporization of water, r is the mixing ratio of the mass of water vapor to the mass of dry air, R_sd and R_sw are the specific gas constants of dry air and of water vapour respectively, c_pd is the specific heat of dry air at constant pressure, and T is the temperature.
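As a rough check of the typical value quoted above, the Python sketch below evaluates this expression for a parcel near 15 °C with an assumed mixing ratio of 10 g of vapour per kg of dry air; all constants are standard textbook values assumed for this sketch rather than values given in the text.

def moist_lapse_c_per_km(temp_k, mixing_ratio):
    """Moist adiabatic lapse rate from the expression above, in degrees C per km."""
    g = 9.81          # gravity, m/s^2
    h_v = 2.501e6     # latent heat of vaporization of water, J/kg
    r_sd = 287.0      # specific gas constant of dry air, J/(kg K)
    r_sw = 461.5      # specific gas constant of water vapour, J/(kg K)
    c_pd = 1004.0     # specific heat of dry air at constant pressure, J/(kg K)
    numerator = 1.0 + (h_v * mixing_ratio) / (r_sd * temp_k)
    denominator = c_pd + (h_v ** 2 * mixing_ratio) / (r_sw * temp_k ** 2)
    return g * numerator / denominator * 1000.0

print(round(moist_lapse_c_per_km(288.15, 0.010), 1))   # ~4.8 degC/km, near the quoted 5 degC/km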
The environmental lapse rate (ELR), is the rate of decrease of temperature with altitude in the stationary atmosphere at a given time and location. As an average, the International Civil Aviation Organization (ICAO) defines an international standard atmosphere (ISA) with a temperature lapse rate of 6.49 K / km (3.56 ° F or 1.98 ° C / 1,000 ft) from sea level to 11 km (36,090 ft or 6.8 mi). From 11 km up to 20 km (65,620 ft or 12.4 mi), the constant temperature is − 56.5 ° C (− 69.7 ° F), which is the lowest assumed temperature in the ISA. The standard atmosphere contains no moisture. Unlike the idealized ISA, the temperature of the actual atmosphere does not always fall at a uniform rate with height. For example, there can be an inversion layer in which the temperature increases with altitude.
The varying environmental lapse rates throughout the Earth 's atmosphere are of critical importance in meteorology, particularly within the troposphere. They are used to determine if the parcel of rising air will rise high enough for its water to condense to form clouds, and, having formed clouds, whether the air will continue to rise and form bigger shower clouds, and whether these clouds will get even bigger and form cumulonimbus clouds (thunder clouds).
As unsaturated air rises, its temperature drops at the dry adiabatic rate. The dew point also drops (as a result of decreasing air pressure) but much more slowly, typically about − 2 ° C per 1,000 m. If unsaturated air rises far enough, eventually its temperature will reach its dew point, and condensation will begin to form. This altitude is known as the lifting condensation level (LCL) when mechanical lift is present and the convective condensation level (CCL) when mechanical lift is absent, in which case, the parcel must be heated from below to its convective temperature. The cloud base will be somewhere within the layer bounded by these parameters.
The difference between the dry adiabatic lapse rate and the rate at which the dew point drops is around 8 ° C per 1,000 m. Given a difference in temperature and dew point readings on the ground, one can easily find the LCL by multiplying the difference by 125 m / ° C.
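A minimal Python sketch of the rule of thumb just described; the 125 m per °C factor comes from the paragraph above, and the sample surface readings are invented for illustration.

def lcl_height_m(temperature_c, dew_point_c):
    """Approximate height of the lifting condensation level above ground, in metres."""
    return 125.0 * (temperature_c - dew_point_c)

# Surface air at 30 degC with a 14 degC dew point puts the cloud base near 2 km.
print(lcl_height_m(30.0, 14.0))   # 2000.0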
If the environmental lapse rate is less than the moist adiabatic lapse rate, the air is absolutely stable -- rising air will cool faster than the surrounding air and lose buoyancy. This often happens in the early morning, when the air near the ground has cooled overnight. Cloud formation in stable air is unlikely.
If the environmental lapse rate is between the moist and dry adiabatic lapse rates, the air is conditionally unstable -- an unsaturated parcel of air does not have sufficient buoyancy to rise to the LCL or CCL, and it is stable to weak vertical displacements in either direction. If the parcel is saturated it is unstable and will rise to the LCL or CCL, and either be halted due to an inversion layer of convective inhibition, or if lifting continues, deep, moist convection (DMC) may ensue, as a parcel rises to the level of free convection (LFC), after which it enters the free convective layer (FCL) and usually rises to the equilibrium level (EL).
If the environmental lapse rate is larger than the dry adiabatic lapse rate, it has a superadiabatic lapse rate, the air is absolutely unstable -- a parcel of air will gain buoyancy as it rises both below and above the lifting condensation level or convective condensation level. This often happens in the afternoon mainly over land masses. In these conditions, the likelihood of cumulus clouds, showers or even thunderstorms is increased.
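The three stability cases above reduce to a comparison of the environmental lapse rate against the dry and moist adiabatic rates. The Python sketch below assumes a fixed moist rate of 5 °C/km for readability, whereas the real moist adiabatic rate varies strongly with temperature, so this only illustrates the logic rather than serving as a forecasting tool.

DRY_ADIABATIC = 9.8     # degC per km
MOIST_ADIABATIC = 5.0   # degC per km, a typical value; in reality it varies with temperature

def classify_stability(environmental_lapse_c_per_km):
    """Classify static stability by comparing the ELR with the adiabatic rates."""
    if environmental_lapse_c_per_km > DRY_ADIABATIC:
        return "absolutely unstable (superadiabatic)"
    if environmental_lapse_c_per_km > MOIST_ADIABATIC:
        return "conditionally unstable"
    return "absolutely stable"

print(classify_stability(6.5))   # -> conditionally unstable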
Meteorologists use radiosondes to measure the environmental lapse rate and compare it to the predicted adiabatic lapse rate to forecast the likelihood that air will rise. Charts of the environmental lapse rate are known as thermodynamic diagrams, examples of which include Skew - T log - P diagrams and tephigrams. (See also Thermals).
The difference in moist adiabatic lapse rate and the dry rate is the cause of foehn wind phenomenon (also known as "Chinook winds '' in parts of North America). The phenomenon exists because warm moist air rises through orographic lifting up and over the top of a mountain range or large mountain. The temperature decreases with the dry adiabatic lapse rate, until it hits the dew point, where water vapor in the air begins to condense. Above that altitude, the adiabatic lapse rate decreases to the moist adiabatic lapse rate as the air continues to rise. Condensation is also commonly followed by precipitation on the top and windward sides of the mountain. As the air descends on the leeward side, it is warmed by adiabatic compression at the dry adiabatic lapse rate. Thus, the foehn wind at a certain altitude is warmer than the corresponding altitude on the windward side of the mountain range. In addition, because the air has lost much of its original water vapor content, the descending air creates an arid region on the leeward side of the mountain.
|
who sings for the first time in frozen | For the first Time in Forever - wikipedia
"For the First Time in Forever '' is a song from Disney 's 2013 animated feature film Frozen, with music and lyrics composed by Kristen Anderson - Lopez and Robert Lopez. It is reprised later in the musical. Both versions are sung by sisters Princess Anna (Kristen Bell) and Queen Elsa (Idina Menzel).
The song was composed relatively late in the production process, in June 2013, only five months before the film 's November 27, 2013 release date; at that point the filmmakers were scrambling to make the film work after realizing in February that it still was n't working.
The original version of the song contained a line about "I hope that I do n't vomit in his face, '' which was deemed unacceptable by Disney as a reference to bodily fluids. The Lopezes ' daughter, Katie, came up with the replacement line that ended up in the film: "I wan na stuff some chocolate in my face. ''
As for the reprise, there was originally a different confrontation lyric for the scene where Elsa strikes Anna with her powers entitled "Life 's Too Short '' (the premise being that life is too short to waste it with someone who does n't understand them), which itself would have been reprised later when the sisters realize that life 's too short to live life alone. As the characters evolved throughout the writing process (specifically Elsa was turned from a villain to a tragic hero), the song was deemed too vindictive and was instead replaced with a reprise of this song, to create a motif. "Life 's Too Short '' survives as a demo track on the Deluxe Edition of the movie soundtrack, and part of the melody was reused in Frozen Fever for the song "Making Today A Perfect Day ''.
When the necessity of a reprise dawned upon Anderson - Lopez, she wrote it in only about 20 minutes, and then successfully pitched it on her own to the Disney production team, as Lopez was already with the team in Los Angeles trying to fix "Do You Want to Build a Snowman? ''
In the first version, the song shows Anna 's happiness and naive optimism when preparing for Elsa 's coronation. During the third verse, Elsa sings a counterpoint melody (with some of the same lyrics that are later used as the first verse of "Let It Go ''), in which she expresses her fear of accidentally revealing her ice powers and her anxiety about opening the gates. During her solo, Elsa practices her role in the coronation with a box and candlestick in her room. Elsa orders the guards to open the gates, and Anna joyfully wanders down a causeway into the town against the flow of guests arriving. The song is cut off mid-note when Anna crashes into Hans 's horse, and subsequently falls into a rowboat. This version goes up a half - step with each verse, starting in F major and ending in G major for the finale.
In the reprise, Anna has arrived at Elsa 's ice palace to try to get her to unfreeze the kingdom, after she unknowingly sparked an eternal winter. She also wants Elsa to come back so that they can rekindle their once close relationship as sisters. However, Elsa refuses because she feels she can not control her powers and that she is better off alone where she ca n't hurt anyone. As Anna tries to reason with her sister, Elsa 's fear intensifies, resulting in her being covered in a blizzard of ice particles as a physical manifestation of her emotions, and she blocks out Anna 's calming words. At one point Elsa turns her back to her sister to form a "two shot west '', a blocking technique normally used in American soap operas. Finally, paranoid and lost, Elsa lets out a yell, and accidentally blasts Anna in the heart with the accumulated ice particles, thereby freezing it (an act which Pabbie and the trolls note to be fatal).
The reprise uses a different melody from the original. Namely, Anna 's parts are in a major key while Elsa 's counterpoint is in a minor key, highlighting the opposite emotions the two characters have at this point in time. After Anna is inadvertently struck by Elsa 's magic, the percussion includes part of the music from "Frozen Heart ''.
Both iterations have received very positive reviews. NeonTommy described it as "A classic "I want '' song (think Part of Your World or When Will My Life Begin?) with a sprinkle of self - awareness ", and said "this song puts a nice new spin on a familiar form... Lopez and Anderson - Lopez keep the tune fresh, and Kristen Bell 's charming and bright delivery of the peppy lyrics is endearing. '' GeeksOfDoom said "Who would have guessed that Kristen Bell and Idina Menzel would make such a nice duo? Bell adds some humor with her effervescent spirit and amusing lyrics, whereas Menzel lends the signature Broadway voice. You know a song provides further significance when it moves the story, as opposed to stops the film completely, and this one perfectly represents the former. "First Time '' conveys Anna 's hopefulness and openness, contrasting with Elsa 's close - minded and fearful vibe. '' In a negative review, SputnikMusic said ""For the First Time in Forever, '' with its lyrical clunkers like "Do n't know if I 'm elated or gassy / But I 'm somewhere in that zone '' and poor performance decisions like the ham - fisted pause before Elsa "opens the gates '' and Anna 's meaningless harmonization shortly thereafter, represents the downhill slide and subsequent face - first mud landing of the soundtrack over the course of its runtime ". The Hollywood Reporter described it as a "big number '', and "the centerpiece of the original songs ''. StitchKingdom said: "The ' I Want ' song, the composition and lyrics feed off Anna 's frenetic and anxious energy and awkwardness, a classic example of mixing sophistication with silliness ''. Rochester City Newspaper wrote "For the First Time in Forever suffers from a fairly run - of - the - mill chorus tune, but smartly makes up for it with catchy verses, amusing lyrics ("Do n't know if I 'm elated or gassy / But I 'm somewhere in that zone! '') and a great performance from Kristen Bell, showing off protagonist Princess Anna 's quirky side while still longing for a ball, a man, and some basic human interaction. ''
NeonTommy wrote "This song balances really well between long, powerful phrases and banter - like recitative, and is a great illustration of the dynamic between Anna and Elsa. It 's also the first time where we get to hear Anna and Elsa sing as equals (the earlier version of this song is more about Anna than it is about Elsa), so it 's quite fun to hear this song between two sisters. '' GeeksOfDoom wrote "The reoccurrence of the "sister song '' signifies how Elsa has changed, much unlike Anna, who still sees the potential of their relationship. The song incorporates polyphony and intensifies their emotions as it builds to a crescendo. While it 's not a substantial addition -- the scene could have played out just as well without music -- it 's still entertaining ". StitchKingdom wrote "The words and melody are just about the only thing this song has in common with its namesake. Anna 's desperate plea to Elsa, this song also features one of the most complex arrangements found on the soundtrack, giving it a haunting and to a professional effect in a way seldom seen on the stage, let alone in family films. The song also treads dangerously along the operetta line at times which puts a unique spin on it. ''
Several other language versions of the song have also been successful. The Japanese - language version called "Umarete Hajimete '' (生まれ て はじめて, "For the first time in life '') was sung by Takako Matsu and Sayaka Kanda, who played Elsa and Anna respectively. It appeared on the Billboard Japan Hot 100 between April and June 2014, peaking at number 19, and was popular enough to be certified platinum for 250,000 digital downloads by the RIAJ in September 2014. The Korean - language version, sung by Park Ji - Yun and Park Hyena, reached number 129 on the Gaon Singles Chart, being downloaded 14,000 times, while the reprise version peaked at 192 with 8,000 downloads.
Since 2013, some local TV stations and independent studios have been dubbing the movie in their local languages, creating some unofficial dubs. Namely: Albanian, Arabic TV, Karachay - Balkar, Persian and Tagalog.
Kristen Bell and Idina Menzel performed both songs together at the Vibrato Grill Jazz Club in Los Angeles to celebrate the film.
|
head of department gauteng department of agriculture and rural development | Gauteng Department of Agriculture and Rural Development - Wikipedia
The Gauteng Department of Agriculture and Rural Development (GDARD) is a department of the Gauteng provincial government in South Africa. It is responsible for agricultural affairs, environmental protection and nature conservation within Gauteng. It was formerly known as the Gauteng Department of Agriculture, Conservation and Environment (GDACE).
|
which state is the home to the longest beach in the united states | Long Beach Peninsula - wikipedia
The Long Beach Peninsula is an arm of land in western Washington state. It is bounded on the west by the Pacific Ocean, the south by the Columbia River, and the east by Willapa Bay. Leadbetter Point State Park and Willapa National Wildlife Refuge are at the northern tip of the peninsula, Cape Disappointment State Park, formerly known as Fort Canby State Park is at the southern end, and in between is Pacific Pines State Park.
Cape Disappointment, part of the Lewis and Clark National and State Historical Parks west of Ilwaco, was the westernmost terminus for the Lewis and Clark Expedition, and a monument designed by Maya Lin as part of the Confluence Project was dedicated there in 2005.
The Long Beach Peninsula is known for its continuous sand beaches on the Pacific Ocean side, 28 miles (45 km) in extent, claimed to be the longest beach in the United States. Because of the fine beaches, it is a popular vacation destination for people from Seattle, Washington, 165 miles (266 km) distant, and Portland, Oregon, 115 miles (185 km) distant.
The peninsula is located entirely within Pacific County, Washington.
The principal industry of the Long Beach Peninsula has become tourism, though fishing, crabbing, oyster farming, and cranberry farming are also important components of the local economy. The Long Beach Peninsula is located on the west side of the Willapa Bay, considered the number one producer of farmed oysters in the United States and among the top five producers worldwide.
The Long Beach Peninsula has become one of the most popular tourism destinations in the State of Washington and has attracted visitors from all over North America. As one of the final destinations of the Lewis and Clark Expedition, several television specials have brought publicity to this area. A multitude of events and festivals are held throughout the year including the Washington State International Kite Festival, the Sandsations sandcastle sculpting competition, and the annual The Rod Run to the End of the World, which attracts thousands of visitors on the weekend following Labor Day.
Coordinates: 46 ° 27 ′ 36 '' N 124 ° 02 ′ 35 '' W / 46.46000 ° N 124.04306 ° W / 46.46000; - 124.04306
|
the good doctor season 1 episode 2 release date | The Good Doctor (TV series) - wikipedia
The Good Doctor is an American medical drama television series based on the 2013 award - winning South Korean series of the same name. The actor Daniel Dae Kim first noticed the series and bought the rights for his production company. He began adapting the series and in 2015 eventually shopped it to CBS, his home network. CBS decided against creating a pilot. Because Kim felt so strongly about the series, he bought back the rights from CBS. Eventually, Sony Pictures Television and Kim worked out a deal and brought on David Shore, creator of the Fox hit medical drama, House, to develop the series.
The show is produced by Sony Pictures Television and ABC Studios, in association with production companies Shore Z Productions, 3AD, and Entermedia. David Shore serves as showrunner and Daniel Dae Kim is an executive producer for the show.
The series stars Freddie Highmore as Shaun Murphy, a young savant surgical resident at San Jose St. Bonaventure Hospital who lives with the challenges of autism. Antonia Thomas, Nicholas Gonzalez, Beau Garrett, Hill Harper, Richard Schiff, and Tamlyn Tomita also star in the show. The series received a put pilot commitment at ABC after a previous attempted series did not move forward at CBS Television Studios in 2015; The Good Doctor was ordered to series in May 2017. On October 3, 2017, ABC picked up the series for a full season of 18 episodes. The series is primarily filmed in Vancouver, British Columbia.
The series debuted on September 25, 2017. The Good Doctor has received mixed to positive reviews from critics, with particular praise given to Highmore 's performance, and strong television ratings. In March 2018, ABC renewed the series for a second season, which premiered on September 24, 2018.
The series follows Shaun Murphy, a young surgeon with autism and savant syndrome from the mid-size city of Casper, Wyoming, where he had a troubled childhood. He relocates to San Jose, California, to work at the prestigious San Jose St. Bonaventure Hospital.
In May 2014, CBS Television Studios began development on an American remake of the hit South Korean medical drama Good Doctor with Daniel Dae Kim as producer. Kim explained the appeal of adapting the series as "something that can fit into a recognizable world, with a breadth of characters that can be explored in the long run ''. The story of a pediatric surgeon with autism was to be set in Boston and projected to air in August 2015. However, CBS did not pick up the project and it moved to Sony Pictures Television, with a put pilot commitment from ABC in October 2016. The series is developed by David Shore, who is executive producing alongside Kim, Sebastian Lee, and David Kim. ABC officially ordered the series to pilot in January 2017.
On May 11, 2017, ABC ordered the show to series as a co-production with Sony Pictures Television and ABC Studios, and it was officially picked up for a full season of 18 episodes on October 3, 2017. On March 7, 2018, ABC renewed the series for a second season. It will premiere in the fall of 2018.
On February 17, 2017, Antonia Thomas was cast as Dr. Claire Browne, a strong - willed and talented doctor who forms a special connection with Shaun. A week later, Freddie Highmore was cast in the lead role as Dr. Shaun Murphy, a young surgeon with autism; and Nicholas Gonzalez was cast as Dr. Neil Melendez, the boss of the surgical residents at the hospital. The next month, Chuku Modu was cast as resident Dr. Jared Kalu (originally Dr. Jared Unger); Hill Harper as head of surgery Dr. Marcus Andrews (originally Dr. Horace Andrews); Irene Keng as resident Dr. Elle McLean; and Richard Schiff was cast as Dr. Aaron Glassman (originally Dr. Ira Glassman), the hospital president and Shaun 's mentor. Schiff was shortly followed by Beau Garrett as hospital board member Jessica Preston and a friend of Dr. Glassman. In September 2017, Tamlyn Tomita was promoted to the principal cast as Allegra Aoki.
In April 2018, it was revealed that Will Yun Lee, Fiona Gubelmann, Christina Chang, and Paige Spara had been promoted to series regulars for the second season, after recurring in the first as Alex, Morgan, Audrey, and Lea, respectively. In addition, it was announced that Chuku Modu would not return for the second season. On September 19, 2018, it was announced that Beau Garrett had left the series ahead of the second season premiere.
Production on the pilot took place from March 21 to April 6, 2017, in Vancouver, British Columbia. Filming for the rest of the season began on July 26, 2017, and concluded on March 1, 2018. Filming for season two began on June 27, 2018, and is set to conclude on February 12, 2019.
Emmy nominated Dan Romer serves as the primary composer for the series. He won an ASCAP Screen Music Awards for his work on the show.
The Good Doctor began airing on September 25, 2017, on ABC in the United States, and on CTV in Canada. Sky Living acquired the broadcast rights for the United Kingdom and Ireland. Seven Network airs the series in Australia. Wowow, the largest private satellite and pay - per - view television network in Japan, acquired the rights to broadcast the series beginning in April 2018. In the Netherlands, the series began airing on January 29, 2018, on RTL 4 and on the video - on - demand service Videoland. In Italy, the series premiered on Rai 1 on July 17, 2018, setting a record of 5.2 million total viewers from 9.30 pm to 11.45 pm, reaching a share of 31.7 % in the third episode and entering the Top 10 of Most Watched Foreign TV Series in Italy at No. 5, a notable event since that list had not changed since an episode of House entered it on November 14, 2007. In Brazil, the series was the first international production to be released on Rede Globo 's video - on - demand service Globoplay. On August 27, the first two episodes were aired on the Globo free - to - air television network to announce the launch of the series on the streaming service.
A full - length trailer was released for ABC 's May 2017 Upfront presentation, and / Film 's Ethan Anderton described the concept as feeling like "House meets Rain Man, that just might be enough to make it interesting ''. However, he questioned "how long can audiences be entranced by both the brilliance of (Highmore 's) character 's savant skills and the difficulties that come from his autism in the workplace. '' Daniel Fienberg of The Hollywood Reporter felt the trailer was "both kinda progressive and really dated ''. He added, "Too much felt on - the - nose -- especially Hill Harper as the main character 's detractor and Richard Schiff as his noble defender '', while also commenting that "On - the - nose / premise is how you have to trailer a show like this, and maybe spaced out over 43 minutes it wo n't grate. '' Ben Travers and Steve Greene for IndieWire called it "a serious trailer for a serious subject. The first glimpse of Highmore 's character hints that they 're toeing the line between presenting a thoughtful depiction of his condition and using his perceptive abilities as a kind of secret weapon. '' The trailer had been viewed over 25.4 million times after a week of its release, including over 22 million views on Facebook.
The pilot was screened at ABC 's PaleyFest event on September 9, 2017. On March 22, 2018, members of the cast as well as executive producers Shore and Kim attended the 35th annual PaleyFest LA to promote the series, along with a screening of the season finale of the first season.
The series premiere earned a 2.2 / 9 rating in the 18 - to 49 - year - old demographic, with 11.22 million total viewers, making it the most watched Monday drama debut on ABC in 21 years, since Dangerous Minds in September 1996, and the highest rated Monday drama in the 18 -- 49 demographic in 8.5 years, since Castle in March 2009. Factoring live plus seven - day ratings, the pilot was watched by a total of 19.2 million viewers and set a record for DVR viewers with 7.9 million, surpassing the record of 7.67 million set by the pilot of Designated Survivor in 2016. According to TV Guide 's November 13 -- 26 issue, the October 9 episode attracted 18.2 million viewers, beating out both high - rated CBS shows NCIS and The Big Bang Theory for the most viewed primetime show that week.
The review aggregator website Rotten Tomatoes reported a 60 % approval rating with an average rating of 5.62 / 10 based on 40 reviews. The website 's consensus reads, "The Good Doctor 's heavy - handed bedside manner undermines a solid lead performance, but under all the emotionally manipulative gimmickry, there 's still plenty of room to improve. '' Metacritic, which uses a weighted average, assigned a score of 53 out of 100 based on 15 critics, indicating "mixed or average reviews ''.
Giving his first impression of the series ' pilot for TVLine, Matt Webb Mitovich stated, "The Good Doctor boasts great DNA... (and) has the potential to be a refreshingly thought - provoking hospital drama, based on the buttons pushed in the pilot alone. '' He enjoyed the "warm dynamic '' of Schiff and Highmore, while describing Thomas ' character as "our emotional ' in ' to Shaun 's distinct, distant world ''. He noted that "it takes a while to build up momentum '', but concluded that "the very final scene packs quite a punch, as Dr. Murphy unwittingly puts a colleague on notice ''.
The New York Times television critic, James Poniewozik, notes in his Critic 's Notebook column that for the most part the drama is a "hospital melodrama with whiz - bang medical science, a dash of intra-staff romance and shameless sentimentality. '' Discussing the main characters of Dr. Aaron Glassman (Richard Schiff) and Dr. Shaun Murphy (Freddie Highmore), however, Poniewozik writes that "Mr. Schiff is convincing in the role and Mr. Highmore is striking in his. ''
Speaking of Freddie Highmore 's Golden Globe nomination on Monday, December 11, 2017, for his role in The Good Doctor, Laura Bradley, writing for Vanity Fair says: "... Freddie Highmore received the awards recognition that has long and unjustly eluded him... '' Bradley feels that Highmore 's performance has been "the central key '' to the show 's enormous success and while the show had lukewarm reviews, most critics have praised Highmore 's work.
|
when do you use see in legal citations | Citation signal - wikipedia
In law, a citation or introductory signal is a set of phrases or words used to clarify the authority (or significance) of a legal citation as it relates to a proposition. It is used in citations to present authorities and indicate how those authorities relate to propositions in statements. Legal writers use citation signals to tell readers how the citations support (or do not support) their propositions, organizing citations in a hierarchy of importance so the reader can quickly determine the relative weight of a citation. Citation signals help a reader to discern meaning or usefulness of a reference when the reference itself provides inadequate information.
Citation signals have different meanings in different U.S. citation - style systems. The two most prominent citation manuals are The Bluebook: A Uniform System of Citation and the ALWD Citation Manual. Some state - specific style manuals also provide guidance on legal citation. The Bluebook citation system is the most comprehensive and the most widely used system by courts, law firms and law reviews.
Most citation signals are placed in front of the citation to which they apply. In the paragraph
When writing a legal argument, it is important to refer to primary sources. To assist readers in locating these sources, it is desirable to use a standardized citation format. See generally Harvard Law Review Association, The Bluebook: A Uniform System of Citation (18th ed. 2005). Note, however, that some courts may require any legal papers that are submitted to them to conform to a different citation format.
the signal is "see generally '', which indicates that The Bluebook: A Uniform System of Citation (18th ed. 2005) provides background information on the topic.
When writers do not signal a citation, the cited authority states the proposition, is the source of the cited quotation or identifies an authority referred to in the text; for example, a court points out that "the proper role of the trial and appellate courts in the federal system in reviewing the size of jury verdicts is a matter of federal law '' or "Bilida was prosecuted in state court for the misdemeanor offense of possessing the raccoon without a permit ''.
This signal, an abbreviation of the Latin exempli gratia, means "for example ''. It tells the reader that the citation supports the proposition; although other authorities also support the proposition, their citation (s) may not be useful or necessary. This signal may be used in combination with other signals, preceded by an italicized comma. The comma after e.g., is not italicized when attached to another signal at the end (whether supportive or not), but is italicized when e.g. appears alone. Examples: Parties challenging state abortion laws have sharply disputed in some courts the contention that a purpose of these laws, when enacted, was to protect prenatal life. See, e.g., Abele v. Markle, 342 F. Supp. 800 (D. Conn. 1972), appeal docketed, No. 72 - 56. Unfortunately, hiring undocumented laborers is a widespread industry practice. E.g., Transamerica Ins. Co. v. Bellefonte Ins. Co., 548 F. Supp. 1329, 1331 (E.D. Pa. 1982).
"Accord '' is used when two or more sources state or support the proposition, but the text quotes (or refers to) only one; the other sources are then introduced by "accord ''. Legal writers often use accord to indicate that the law of one jurisdiction is in accord with that of another jurisdiction. Examples: "(N) ervousness alone does not justify extended detention and questioning about matters not related to the stop. '' United States v. Chavez - Valenzuela, 268 F. 3d 719,725 (9th Cir. 2001); accord United States v. Beck, 140 F. 3d 1129, 1139 (8th Cir. 1998); United States v. Wood, 106 F. 3d 942, 248 (10th Cir. 1997); United States v. Tapia, 912 F. 2d 1367, 1370 (11th Cir. 1990). "... The term ' Fifth Amendment ' in the context of our time is commonly regarded as being synonymous with the privilege against self - incrimination ''. Quinn v. United States, 349 U.S. 155, 163, 75 S. Ct. 668, 99 L. Ed. 964 (1955); accord In re Johnny V., 85 Cal. App. 3d 120, 149 Cal. Rptr. 180, 184, 188 (Cal. Ct. App. 1978) (holding that the statement "I 'll take the fifth '' was an assertion of the Fifth Amendment privilege).
"See '' indicates that the cited authority supports, but does not directly state, the proposition given. Used similarly to no signal, to indicate that the proposition follows from the cited authority. It may also be used to refer to a cited authority which supports the proposition. For example, before 1997 the IDEA was silent on the subject of private school reimbursement, but courts had granted such reimbursement as "appropriate '' relief under principles of equity pursuant to 20 U.S.C. § 1415 (i) (2) (C). See Burlington, 471 U.S. at 370, 105 S. Ct. 1996 ("(W) e are confident that by empowering the court to grant ' appropriate ' relief Congress meant to include retroactive reimbursement to parents as an available remedy in a proper case. ''); 20 U.S.C. § 1415 (i) (2) (C) ("In any action brought under this paragraph, the court... shall grant such relief as the court determines is appropriate. '').
This indicates that the cited authority constitutes additional material which supports the proposition less directly than that indicated by "see '' or "accord ''. "See also '' may be used to introduce a case supporting the stated proposition which is distinguishable from previously - cited cases. It is sometimes used to refer readers to authorities supporting a proposition when other supporting authorities have already been cited or discussed. A parenthetical explanation of the source 's relevance, after a citation introduced by "see also '', is encouraged. For example, "... Omitting the same mental element in a similar weapons possession statute, such as RCW 9.41. 040, strongly indicates that the omission was purposeful and that strict liability was intended. See generally State v. Alvarez, 74 Wash. App. 250, 260, 872 P. 2d 1123 (1994) (omission of "course of conduct '' language in criminal counterpart to civil antiharassment act indicated "Legislature consciously chose to criminalize a single act rather than a course of conduct. '') aff 'd, 128 Wash. 2d 1, 904 P. 2d 754 (1995); see also State v. Roberts, 117 Wash. 2d 576, 586, 817 P. 2d 855 (1991) (use of certain statutory language in one instance, and different language in another, evinces different legislative intent) (citing cases). '' Source: State v. Anderson, 141 Wash. 2d 357, 5 P. 3d 1247, 1253 (2000).
From the Latin confer ("compare ''), this signals that a cited proposition differs from the main proposition but is sufficiently analogous to lend support. An explanatory parenthetical note is recommended to clarify the citation 's relevance. For example, it is precisely this kind of conjecture and hair - splitting that the Supreme Court wanted to avoid when it fashioned the bright - line rule in Miranda. Cf. Davis, 512 U.S. at 461 (noting that where the suspect asks for counsel, the benefit of the bright - line rule is the "clarity and ease of application '' that "can be applied by officers in the real world without unduly hampering the gathering of information '' by forcing them "to make difficult judgment calls '' with a "threat of suppression if they guess wrong '').
This signal indicates that the cited authority presents background material relevant to the proposition. Legal scholars generally encourage the use of parenthetical explanations of the source material 's relevance following each authority using "see generally '', and this signal can be used with primary and secondary sources. For example, it is a form of "discrimination '' because the complainant is being subjected to differential treatment. See generally Olmstead v. L. C., 527 U.S. 581, 614, 144 L. Ed. 2d 540, 119 S. Ct. 2176 (1999) (Kennedy, J., concurring in judgment) (the "normal definition of discrimination '' is "differential treatment '').
This signals that the cited authority directly contradicts a given point. Contra is used where no signal would be used for support. For example: "Before Blakely, courts around the country had found that "statutory minimum '' was the maximum sentence allowed by law for the crime, rather than the maximum standard range sentence. See, e.g., State v. Gore, 143 Wash. 2d 288, 313 - 14, 21 P. 3d 262 (2001), overruled by State v. Hughes, 154 Wash. 2d 118, 110 P. 3d 192 (2005); contra Blakely, 124 S. Ct. at 2536 - 37. ''
The cited authority contradicts the stated proposition, directly or implicitly. "But see '' is used in opposition where "see '' is used for support. For example: "Specifically, under Roberts, there may have been cases in which courts erroneously determined that testimonial statements were reliable. But see Bockting v. Bayer, 418 F. 3d at 1058 (O'Scannlain, J., dissenting from denial of rehearing en banc). ''
The cited authority contradicts the stated proposition by analogy; a parenthetical explanation of the source 's relevance is recommended. For example: But cf. 995 F. 2d, at 1137 (observing that "(i) n the ordinary tort claim arising when a government driver negligently runs into another car, jury trial is precisely what is lost to a plaintiff when the government is substituted for the employee '').
"But '' should be omitted from "but see '' and "but cf. '' when the signal follows another negative signal: Contra Blake v. Kiline, 612 F. 2d 718, 723 - 24 (3d Cir. 1979); see CHARLES ALAN WRIGHT, LAW OF FEDERAL COURTS 48 (4th ed. 1983).
This signal compares two or more authorities who reach different outcomes for a stated proposition. Because the relevance of the comparison may not be readily apparent to the reader, The Bluebook recommends adding a parenthetical explanation after each authority. Either "compare '' or "with '' may be followed by more than one source, using "and '' between each. Legal writers italicize "compare '', "with '' and "and ''. "Compare '' is used with "with '', with the "with '' preceded by a comma. If "and '' is used, it is also preceded by a comma. For example: To characterize the first element as a "distortion '', however, requires the concurrence to second - guess the way in which the state court resolved a plain conflict in the language of different statutes. Compare Fla. Stat. 102.166 (2001) (foreseeing manual recounts during the protest period), with 102.111 (setting what is arguably too short a deadline for manual recounts to be conducted); compare 102.112 (1) (stating that the Secretary "may '' ignore late returns), with 102.111 (1) (stating that the Secretary "shall '' ignore late returns).
In footnotes, signals may function as verbs in sentences; this allows material which would otherwise be included in a parenthetical explanation to be integrated. When used in this manner, signals should not be italicized. See Christina L. Anderson, Comment, Double Jeopardy: The Modern Dilemma for Juvenile Justice, 152 U. Pa. L. Rev. 1181, 1204 - 07 (2004) (discussing four main types of restorative justice programs) becomes: See Christina L. Anderson, Comment, Double Jeopardy: The Modern Dilemma for Juvenile Justice, 152 U. Pa. L. Rev. 1181, 1204 - 07 (2004), for a discussion of restorative justice as a reasonable replacement for retributive sanctions. "Cf. '' becomes "compare '' and "e.g. '' becomes "for example '' when the signals are used as verbs.
The first letter of a signal should be capitalized when it begins a citation sentence. If it is in a citation clause or sentence, it should not be capitalized.
One space should separate an introductory signal from the rest of the citation, with no punctuation between. For example, See American Trucking Associations v. United States EPA, 195 F. 3d 4 (D.C. Cir. 1999).
Do not italicize a signal used as a verb; for example, for a discussion of the Environmental Protection Agency 's failure to interpret a statute to provide intelligible principles, see American Trucking Associations v. United States EPA, 195 F. 3d 4 (D.C. Cir. 1999).
When one or more signals are used, the signals should appear in the following order:
When multiple signals are used, they must be consistent with this order. Signals of the same basic type - supportive, comparative, contradictory or background - are strung together in a single citation sentence, separated by semicolons. Signals of different types should be grouped in different citation sentences. For example:
"See Mass. Bd. of Ret. v. Murgia, 427 U.S. 307 (1976) (per curiam); cf. Palmer v. Ticcione, 433 F. Supp. 653 (E.D.N.Y 1977) (upholding a mandatory retirement age for kindergarten teachers). But see Gault v. Garrison, 569 F. 2d 993 (7th Cir. 1977) (holding that a classification of public school teachers based on age violated equal protection absent a showing of justifiable and rational state purpose). See generally Comment, O'Neill v. Baine: Application of Middle - Level Scrutiny to Old - Age Classifications, 127 U. Pa. L. Rev. 798 (1979) (advocating a new constitutional approach to old - age classifications). ''
When e.g. is combined with another signal, the placement of the combined signal is determined by the non-e.g. signal; the combined signal "see, e.g. '' should be placed where the "see '' signal would normally be. In a citation clause, citation strings may contain different types of signals; these signals are separated by semicolons.
Authorities in a signal are separated by semicolons. If an authority is more helpful or authoritative than others cited in a signal, it should precede them. Otherwise, authorities are cited in the following order:
Parentheticals, as needed, explain the relevance of an authority to the proposition in the text. Parenthetical information is recommended when the relevance of a cited authority might not otherwise be clear to the reader. Explanatory information takes the form of a present - participle phrase, a quoted sentence or a short statement appropriate in context. Unlike the other signals, it immediately follows the full citation. Usually brief (about one sentence), it quickly explains how the citation supports or disagrees with the proposition. For example: Brown v. Board of Education, 347 U.S. 483 (1954) (overruling Plessy v. Ferguson, 163 U.S. 537 (1896)).
Explanatory parenthetical phrases not directly quoting the authority usually begin with a present participle and should not begin with a capital letter: See generally John Copeland Nagle & J.B. Ruhl, The Law of Biodiversity and Ecosystem Management 227 - 45 (2002) (detailing the ESA 's prohibition on the possession of protected species). When a complete participial phrase is unnecessary in context, a shorter parenthetical may be substituted: Such standards have been adopted to address a variety of environmental problems. See, e.g., H.B. Jacobini, The New International Sanitary Regulations, 46 Am. J. INT'L L. 727, 727 - 28 (1952) (health - related water quality); Robert L. Meyer, Travaux Preparatoires for the UNESCO World Heritage Convention, 2 EARTH L.J. 45, 45 - 81 (1976) (conservation of protected areas).
If the parenthetical quotes one or more full sentences, it begins with a capital letter and ends with punctuation: See Committee Note to Interim Rule 8001 (f) ("Given the short time limit to file the petition with the circuit clerk, subdivision (f) (1) provides that entry of a certification on the docket does not occur until an effective appeal is taken under Rule 8003 (a) or (b). ''). Insert a space before the opening parenthesis of the explanatory parenthetical. If the parenthetical does not contain a complete sentence, the writer should not place final punctuation (such as a period) inside it.
Place a parenthetical included as part of a citation before an explanatory parenthetical: Fed. R. Civ. P. 30 (1) (emphasis added) (also indicating that "(a) party may instruct a deponent not to answer... when necessary to preserve a privilege ''). Shorter parenthetical phrases may be used if a complete participial phrase is unnecessary in the context of the citation: The Florida Supreme court recently declared that "where the seller of a home knows facts materially affecting the value of the property which are not readily observable and are not known to the buyer, the seller is under a duty to disclose them to the buyer. '' Johnson v. Davis, 480 So. 2d 625, 629 (Fla. 1985) (defective roof in three - year - old home). If a source directly quotes or supports an argument (no signal or "see '' before a citation), no parenthetical is necessary.
If a cited case has subsequent history or other relevant authority, it follows the parenthetical: Anderson v. Terhune, 467 F. 3d 1208 (9th Cir. 2006) (claiming that a police officer 's continued questioning violated due process rights), reh'g en banc granted, 486 F. 3d 1115 (9th Cir. 2007).
Portions of text, footnotes, and groups of authorities within the piece are cited with supra or infra. Supra refers to material already in the piece, and infra to material appearing later in the piece. "Note '' and "Part '' refer to footnotes and parts (when parts are specifically designed) in the same piece; "p. '' and "pp. '' are used to refer to other pages in the same piece. These abbreviations should be used sparingly to avoid repeating a lengthy footnote or to cross-reference a nearby footnote.
|
what three things were the new england colonies based on | New England colonies - wikipedia
The New England Colonies of British America included Connecticut Colony, Colony of Rhode Island and Providence Plantations, Massachusetts Bay Colony, and Province of New Hampshire, as well as a few smaller short - lived colonies. The New England colonies were part of the Thirteen Colonies and eventually became five of the six states in New England. Captain John Smith was the author of the 1616 work A Description of New England which first applied the term "New England '' to the coastal lands from Long Island Sound to Newfoundland.
France, England, and other countries made several attempts to colonize New England early in the 17th century, and those nations were often in contention for lands in the New World. French nobleman Pierre Dugua Sieur de Monts established a settlement on Saint Croix Island, Maine in June 1604 under the authority of the King of France. The small St. Croix River Island is located on the northern boundary of present - day Maine. Nearly half the settlers perished due to the harsh winter and scurvy, and the survivors moved north out of New England to Port - Royal of Nova Scotia (see symbol "R '' on map to the right) in the spring of 1605.
King James I of England recognized the need for a permanent settlement in New England, and he granted competing royal charters to the Plymouth Company and the London Company. The Plymouth Company ships arrived at the mouth of the Kennebec River (then called the Sagadahoc River) in August 1607 where they established a settlement named Sagadahoc Colony, better known as Popham Colony (see symbol "Po '' on map to right) to honor financial backer Sir John Popham. The colonists faced a harsh winter, the loss of supplies following a storehouse fire, and mixed relations with the indigenous tribes.
Colony leader Captain George Popham died, and Raleigh Gilbert decided to return to England to take up an inheritance left by the death of an older brother -- at which point, all of the colonists decided to return to England. It was around August 1608 when they left on the ship Mary and John and a new ship built by the colony named Virginia of Sagadahoc. The 30 - ton Virginia was the first sea - going ship ever built in North America.
Conflict over land rights continued through the early 17th century, with the French constructing Fort Pentagouet near present - day Castine, Maine in 1613. The fort protected a trading post and a fishing station and was considered the first longer - term settlement in New England. It changed hands multiple times throughout the 17th century among the English, French, and Dutch colonists.
In 1614, Dutch explorer Adriaen Block traveled along the coast of Long Island Sound and then up the Connecticut River to present - day Hartford, Connecticut. By 1623, the Dutch West India Company regularly traded for furs there and, ten years later, they fortified it for protection from the Pequot Indians and named the site "House of Hope '' (also identified as "Fort Hoop, '' "Good Hope, '' and "Hope '').
A group of Puritans known as the Pilgrims arrived on the Mayflower from England and the Netherlands to establish Plymouth Colony in modern - day Massachusetts, the second successful English colony in North America, following Jamestown, Virginia. About half of the one hundred - plus passengers on the Mayflower died that first winter, mostly because of diseases contracted on the voyage followed by a harsh winter. In 1621, an Indian named Squanto taught the colonists how to grow corn and where to catch eels and fish. His assistance was invaluable and helped the Pilgrims to survive the early years of the colonization. The Pilgrims lived on the same site where Squanto 's Patuxet tribe had established a village before they were wiped out from diseases.
The Plymouth settlement faced great hardships and earned few profits, but it enjoyed a positive reputation in England and may have sown the seeds for further immigration. Edward Winslow and William Bradford published an account of their adventures in 1622 called Mourt 's Relation. This book was only a small glimpse of the hardships and adventures encountered by the Pilgrims, but it may have served to encourage other Puritans to emigrate during the Great Migration.
The Puritans in England first sent smaller groups in the mid-1620s to establish colonies, buildings, and food supplies, learning from the Pilgrims ' harsh experiences of winter in the Plymouth Colony. In 1623, the Plymouth Council for New England (successor to the Plymouth Company) established a small fishing village at Cape Ann under the supervision of the Dorchester Company. The first group of Puritans moved to a new town at nearby Naumkeag after the Dorchester Company dropped support, and fresh financial support was found by Rev. John White. Other settlements were started in nearby areas; however, the overall Puritan population remained small through the 1620s.
A larger group of Puritans arrived in 1630, leaving England because they desired to worship in a manner that differed from the Church of England. Their views were in accord with those of the Pilgrims who arrived on the Mayflower, except that the Pilgrims were known as "separatists '' because they felt that they needed to separate themselves from the Church of England, whereas the later Puritans were content to remain under the umbrella of the Church of England. The separate colonies were governed independent of each other until 1691, when Plymouth Colony was absorbed into the Massachusetts Bay Colony to form the Province of Massachusetts Bay.
Early dissenters of the Puritan laws were often banished from the Massachusetts Bay Colony. The Connecticut Colony was started after Puritan minister Thomas Hooker left Massachusetts Bay with around 100 followers in search of greater religious and political freedom. Puritan minister Roger Williams left Massachusetts Bay to found the Rhode Island Colony, while John Wheelwright left with his followers to a colony in present - day New Hampshire and shortly thereafter on to present day Maine. The Puritans also established the American public school system for the express purpose of ensuring that future generations would be able to read the Bible for themselves, which was a central tenet of Puritan worship.
It was the dead of winter in January 1636 when Salem minister Roger Williams had been banished from the Massachusetts Bay Colony because of theological differences. In Salem, he preached that government and religion should be separate; he also believed that the Wampanoag and Narragansett tribes had been treated unfairly. That winter, the tribes helped Williams to survive and sold him land for a new colony in present - day Providence, Rhode Island which he named Providence Plantation in recognition of the intervention of Divine Providence in establishing the new colony. It was unique in its day in expressly providing for religious freedom and a separation of church from state.
Roger Williams returned to England two times to prevent the attempt of other colonies to take over Providence and to charter or incorporate Providence and other nearby communities into the Colony of Rhode Island and Providence Plantations.
Later in 1636, Thomas Hooker left Massachusetts with one hundred followers and founded a new English settlement just north of the Dutch Fort Hoop that became Connecticut Colony. The community was first named Newtown then renamed Hartford shortly afterwards to honor the English town of Hertford. One of the reasons why Hooker left was that only members of the church could vote and participate in the government in Massachusetts Bay, which he believed should include any adult male owning property. The Connecticut Colony was not the first settlement in Connecticut (the Dutch were first) or even the first English settlement (Windsor was first in 1633). Thomas Hooker obtained a royal charter and established Fundamental Orders, considered to be one of the first constitutions in North America. Other colonies later merged into the royal charter for the Connecticut Colony, including New Haven and Saybrook.
The earliest colonies in New England were usually fishing villages or farming communities on the more fertile land along the rivers. The rocky soil in the New England Colonies was not as fertile as the Middle or Southern Colonies, but the land provided rich resources, including lumber that was valued for building homes and ships. Lumber was also a resource that could be exported back to England, where there was a shortage of wood. In addition, the hunting of wildlife provided furs to be traded and food for the table.
In Massachusetts, the economy was heavily based on small industries, with people working as porters, tanners, shopkeepers and metalworkers. Merchants carried raw materials such as furs, timber, and the tar and pitch needed for shipbuilding. Trade went on directly between the English colonists and the European Caribbean islands.
The New England Colonies were located near the ocean where there was an abundance of whales, fish, and other marketable sea life. Excellent harbors and some inland waterways offered protection for ships and were also valuable for fresh water fishing. By the end of the 17th century, New England colonists had created an Atlantic trade network that connected them to the English homeland as well as the West African slave coast, the Caribbean 's plantation islands, and the Iberian Peninsula. Colonists relied upon British and European imports for glass, linens, hardware, machinery, and other items for the colonial household.
The Southern Colonies could produce tobacco, rice, and indigo in exchange for imports, whereas New England 's colonies could not offer much to England beyond fish, furs, and lumber. Inflation was a major issue in the economy. During the 18th century, shipbuilding drew upon the abundant lumber and revived the economy, often under the direction of the British Crown.
Enslavement of enemies defeated in war was a common practice in European nations at this time. This was a policy that had been going on for decades in Ireland, particularly since the time of Elizabeth I and during the mid-17th century Cromwell wars in Britain and Ireland, where large numbers of Irish, Welsh, and Scots prisoners of war were sent as slaves to plantations in the West Indies, especially to Barbados and Jamaica.
The practice of selling enemy combatants into slavery was begun in the American colonies during the Pequot War and King Philip 's War. Military leader Benjamin Church spoke out at that time against enslavement of Indians. (Church 's militia company was responsible for killing King Philip in August 1676.) In the summer of 1675, he described the enslavement of Indian combatants as "an action so hateful... that (I) opposed it to the loss of the good will and respect of some that before were (my) good friends. '' This said, Church was an owner of African slaves himself, like many Englishmen in the colony.
During King Philip 's War (1675 - 78), ships carrying captured combatants began to leave New England ports for distant places late in 1675, and by the next summer the shipping out of slaves had turned into a regular process which continued for the three years of the war. The policy concerning war enemies was that "no male captive above the age of fourteen years should reside in the colony. ''
It is estimated that at least a thousand New England Indian warriors were sold as slaves during King Philip 's War, with over half of those coming from Plymouth. By the end of the war, villages that were once crowded Indian population centers were empty of inhabitants, because the male Indian population had risen in arms against their English neighbors.
In the New England Colonies, the first settlements of Pilgrims and the other Puritans who came later taught their children how to read and write in order that they might read and study the Bible for themselves. Depending upon social and financial status, education was provided by parents home - schooling their children, by public grammar schools, and by private governesses, and covered subjects ranging from reading and writing to Latin and Greek.
|
when do we look at number 2 but say 10 | Look - and - say sequence - wikipedia
In mathematics, the look - and - say sequence is the sequence of integers beginning as follows: 1, 11, 21, 1211, 111221, 312211, 13112221, 1113213211, ...
To generate a member of the sequence from the previous member, read off the digits of the previous member, counting the number of digits in groups of the same digit. For example, 1 is read off as "one 1 '', giving 11; 11 is read off as "two 1s '', giving 21; 21 is read off as "one 2, one 1 '', giving 1211; and 1211 is read off as "one 1, one 2, two 1s '', giving 111221.
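The rule lends itself to a very short program. The following minimal sketch is in Python; the language choice and the names next_term and term are illustrative only and are not taken from the article.

from itertools import groupby

def next_term(term):
    # "Read off" the previous term: each maximal run of identical digits
    # becomes its length followed by the digit, e.g. "1211" -> "111221".
    return "".join(str(len(list(run))) + digit for digit, run in groupby(term))

term = "1"
for _ in range(8):
    print(term)
    term = next_term(term)
# prints: 1, 11, 21, 1211, 111221, 312211, 13112221, 1113213211

Grouping consecutive equal digits is exactly the run - length counting the rule describes, which is why itertools.groupby is a natural fit here.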
The look - and - say sequence was introduced and analyzed by John Conway.
The idea of the look - and - say sequence is similar to that of run - length encoding.
If we start with any digit d from 0 to 9 then d will remain indefinitely as the last digit of the sequence. For d different from 1, the sequence starts as follows:
Ilan Vardi has called this sequence, starting with d = 3, the Conway sequence (sequence A006715 in the OEIS). (for d = 2, see A006751)
The sequence grows indefinitely. In fact, any variant defined by starting with a different integer seed number will (eventually) also grow indefinitely, except for the degenerate sequence: 22, 22, 22, 22,... (sequence A010861 in the OEIS)
No digits other than 1, 2, and 3 appear in the sequence, unless the seed number contains such a digit or a run of more than three of the same digit.
Conway 's cosmological theorem asserts that every sequence eventually splits ("decays '') into a sequence of "atomic elements '', which are finite subsequences that never again interact with their neighbors. There are 92 elements containing the digits 1, 2, and 3 only, which John Conway named after the chemical elements up to Uranium, calling the sequence audioactive. There are also two "transuranic '' elements for each digit other than 1, 2, and 3.
The terms eventually grow in length by about 30 % per generation. In particular, if $L_n$ denotes the number of digits of the $n$-th member of the sequence, then the limit of the ratio $L_{n+1}/L_n$ exists and is given by $\lim_{n \to \infty} \frac{L_{n+1}}{L_n} = \lambda$,
where λ = 1.303577269034... (sequence A014715 in the OEIS) is an algebraic number of degree 71. This fact was proven by Conway, and the constant λ is known as Conway 's constant. The same result also holds for every variant of the sequence starting with any seed other than 22.
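This growth rate can be checked empirically. The self - contained Python sketch below (again illustrative, not part of the original article) iterates the look - and - say rule and prints the ratio of successive term lengths, which drifts toward Conway's constant λ ≈ 1.3036 as the number of generations grows.

from itertools import groupby

def next_term(term):
    # One look-and-say step: each run of equal digits becomes "<count><digit>".
    return "".join(str(len(list(run))) + digit for digit, run in groupby(term))

term = "1"
lengths = [len(term)]
for _ in range(50):
    term = next_term(term)
    lengths.append(len(term))

# Ratios of successive term lengths; they slowly approach lambda ~ 1.3036.
for n in (10, 25, 50):
    print(n, lengths[n] / lengths[n - 1])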
Conway 's constant is the unique positive real root of the following polynomial: (sequence A137275 in the OEIS)
In his original article, Conway gives an incorrect value for this polynomial, writing $-$ instead of $+$ in front of $x^{35}$. However, the value of λ given in his article is correct.
The look - and - say sequence is also popularly known as the Morris Number Sequence, after cryptographer Robert Morris, and the puzzle "What is the next number in the sequence 1, 11, 21, 1211, 111221? '' is sometimes referred to as the Cuckoo 's Egg, from a description of Morris in Clifford Stoll 's book The Cuckoo 's Egg.
There are many possible variations on the rule used to generate the look - and - say sequence. For example, to form the "pea pattern '' one reads the previous term and counts all instances of each digit, listed in order of their first appearance, not just those occurring in a consecutive block. Thus, beginning with the seed 1, the pea pattern proceeds 1, 11 ("one 1 ''), 21 ("two 1s ''), 1211 ("one 2 and one 1 ''), 3112 (three 1s and one 2), 132112 ("one 3, two 1s and one 2 ''), 311322 ("three 1s, one 3 and two 2s ''), etc. This version of the pea pattern eventually forms a cycle with the two terms 23322114 and 32232114.
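A sketch of this "pea pattern '' variant follows, again as an illustrative Python fragment (the function name pea_next is ours): every digit in the term is counted, and the counts are written out in order of each digit's first appearance. Iterating from the seed 1 does settle into the two - term cycle quoted above.

from collections import Counter

def pea_next(term):
    # Count every digit in the whole term, then write "<count><digit>" for
    # each digit in order of its first appearance, e.g. "3112" -> "132112".
    counts = Counter(term)
    first_seen = dict.fromkeys(term)   # dict preserves first-appearance order
    return "".join(str(counts[d]) + d for d in first_seen)

term = "1"
for _ in range(20):
    print(term)
    term = pea_next(term)
# ...eventually alternates between 23322114 and 32232114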
Other versions of the pea pattern are also possible; for example, instead of reading the digits as they first appear, one could read them in ascending order instead. In this case, the term following 21 would be 1112 ("one 1, one 2 '') and the term following 3112 would be 211213 ("two 1s, one 2 and one 3 '').
These sequences differ in several notable ways from the look - and - say sequence. Notably, unlike the Conway sequences, a given term of the pea pattern does not uniquely define the preceding term. Moreover, for any seed the pea pattern produces terms of bounded length. This bound will not typically exceed 2 * radix + 2 digits and may only exceed 3 * radix digits in length for degenerate long initial seeds ("100 ones, etc ''). For these maximum bounded cases, individual elements of the sequence take the form a0b1c2d3e4f5g6h7i8j9 for decimal where the letters here are placeholders for the digit counts from the preceding element of the sequence. Given that this sequence is infinite and the length is bounded, it must eventually repeat due to the pigeonhole principle. As a consequence, these sequences are always eventually periodic.
|
when does ty come back in heartland season 2 | List of Heartland episodes - wikipedia
Heartland is a Canadian family drama television series which debuted on CBC on October 14, 2007. Heartland follows sisters Amy and Lou Fleming, their grandfather Jack Bartlett, and Ty Borden, through the highs and lows of life at the ranch.
Heartland airs in Canada on the regional CBC channels at 7 pm (7: 30 pm in Newfoundland) on Sundays. Beginning April 23, 2017, the 10th season of Heartland airs in the United States on the Up TV network on Sunday evenings at 8: 00 pm Eastern. From its first episode the plot focuses on Amy, who, after a tragic accident that led to big changes in everyone 's lives, inherited her mother 's gift of being able to heal abused and damaged horses.
The show became the longest - running one - hour scripted drama in the history of Canadian television, when it surpassed the 124 episodes of Street Legal on October 19, 2014. As of February 4, 2018, 189 episodes of Heartland have aired. Season 11 began on Sunday, September 24, 2017 at its usual time. Filming for Season 12 began on May 31, 2018.
|
who secured the first number one in 2018 | List of UK top 10 singles in 2018 - wikipedia
The UK Singles Chart is one of many music charts compiled by the Official Charts Company that calculates the best - selling singles of the week in the United Kingdom. Since 2004 the chart has been based on the sales of both physical singles and digital downloads, with airplay figures excluded from the official chart. Since 2014, the singles chart has been based on both sales and streaming, with the ratio altered in 2017 so that 300 streams count as one sale and only three singles by the same artist are eligible for the chart. From July 2018, video streams from YouTube Music and Spotify among others began to be counted for the Official Charts. This list shows singles that peaked in the Top 10 of the UK Singles Chart during 2018, as well as singles which peaked in 2017 but were in the top 10 in 2018. The entry date is when the song appeared in the top 10 for the first time (week ending, as published by the Official Charts Company, which is six days after the chart is announced).
Sixty - nine singles have been in the top ten this year, as of 23 August 2018 (week ending). Twelve singles from 2017 remained in the top 10 for several weeks at the beginning of the year. "Fairytale of New York '' by The Pogues featuring Kirsty MacColl, "Last Christmas '' by Wham!, "Man 's Not Hot '' by Big Shaq, "17 '' by MK, "Let You Down '' by NF, "River '' by Eminem featuring Ed Sheeran and "I Miss You '' by Clean Bandit featuring Julia Michaels were the singles from 2017 to reach their peak in 2018. Fourteen artists have scored multiple entries in the top 10 in 2018, as of 23 August 2018 (week ending). Cardi B, Keala Settle, Ramz, Chris Stapleton, Sigrid and Portugal. The Man are among the many artists who have achieved their first UK charting top 10 single in 2018.
"Three Lions '' by Baddiel, Skinner and The Lightning Seeds set two new chart records this year. The single first released ahead of Euro 96 returned to number - one for the fourth time during the 2018 FIFA World Cup, the only single to date to top the chart four times by the same artists. It also suffered the sharpest fall from the top spot, dropping 96 places the full chart week after England were knocked out of the tournament.
The 2017 Christmas number - one, "Perfect '' by Ed Sheeran, remained at number - one for the first three weeks of 2018. The first new number - one single of the year was "River '' by Eminem featuring Ed Sheeran. Eight different singles have peaked at number - one in 2018, with Drake (3) having the most singles hit that position. An asterisk (*) in the "Weeks in Top 10 '' column shows that the song is currently in the top 10.
Sixty - nine singles have charted in the top 10 in 2018 (as of 17 May 2018), with sixty - four singles reaching their peak this year (including the re-entries "Do They Know It 's Christmas? '', "Fairytale of New York '', "Last Christmas '', "Merry Christmas Everyone '' and "Rockin ' Around the Christmas Tree '' which charted in previous years but reached peaks on their latest chart run).
In June 2018, the Official Charts Company announced that official video streams from YouTube Music, Spotify, Apple Music and Tidal among other providers would become eligible for the chart from the following month alongside audio streams. "Shotgun '' by George Ezra was the first single to reach number - one under the new rules, topping the chart on 5 July 2018 (week ending). His combined sales included around 3 million views of the music video.
"Girls Like You '' by Maroon 5 and Cardi B was another single to benefit from the Official Charts Company 's inclusion of video streams, rising from 13 to number 10 as the most streamed video of that week.
Drake also claimed his third chart - topper of the year thanks in part to the chart alterations, after "In My Feelings '' became the subject of a viral craze, with the public and celebrities recreating dance moves from the music video.
"Three Lions '' made chart history as the first song to reach number - one on four occasions with the same line - up. The football anthem by Baddiel, Skinner and The Lightning Seeds renewed popularity was fuelled by a young England football team 's unexpected success in reaching the FIFA World Cup semi-finals for the first time since 1990. The song 's refrain "It 's Coming Home '' was a soundtrack to the tournament and led to demand for the track.
The single led the iTunes sales chart and Spotify Top 50 chart in the days leading up to the game, but was only announced as number - one on 13 July 2018, two days after the team were eliminated against Croatia.
As a result, the popularity of the song rapidly faded and it set another new record as the fastest falling number - one single in history, dropping ninety - six places to number 97 the following week. This was the steepest chart decline from number - one since "A Bridge over You '' by The Lewisham and Greenwich NHS Choir went from Christmas number - one down to number 29 at the end of 2015. It was also only the fifth song in history to fall straight out of the top 10 from the top spot. Besides "A Bridge Over You '', the Elvis Presley reissues "One Night '' / "I Got Stung '' and "It 's Now or Never '' from 2005, and McFly 's "Baby 's Coming Back '' / "Transylvania '' from 2007 were the other singles to suffer that fate.
Twenty - five artists have achieved their first top 10 single in 2018 (as of 20 September 2018, week ending), either as a lead or featured artist. Cardi B has had two other entries in her breakthrough year.
The following table does not include acts who had previously charted as part of a group and secured their first top 10 solo single.
Ina Wroldsen had previously had a top 10 entry as an uncredited artist when Calvin Harris and Disciples remixed a song she had written and sang on, "How Deep Is Your Love ''. The song went on to peak at number 2 in 2015. Macklemore joined forces with Rudimental, Jess Glynne and Dan Caplen for the number - one single "These Days ''. All his previous top 10 singles were alongside Ryan Lewis in the duo Macklemore & Ryan Lewis.
Along with Quavo and Takeoff, Offset is part of the collective Migos whose first top 10 credit came on Calvin Harris ' hit single "Slide '' in 2017. Benny Blanco took on a lead artist tag for the first time on "Eastside '', with Halsey and Khalid providing vocals, after years of chart success as a songwriter (beginning with Katy Perry 's "I Kissed a Girl '' in 2008).
Original songs from various films entered the top 10 throughout the year. These included "This Is Me '' (from The Greatest Showman), "For You '' (Fifty Shades Freed) and "All the Stars '' (Black Panther), with "Shallow '' (from A Star Is Born) being the only one to have peaked at number one so far.
The following table shows artists who have achieved two or more top 10 entries in 2018, including singles that reached their peak in 2017. The figures include both main artists and featured artists, while appearances on ensemble charity records are also counted for each artist. The total number of weeks an artist spent in the top ten in 2018 is also shown.
|
what are the three different states that water occurs as during the water cycle | Water cycle - wikipedia
The water cycle, also known as the hydrological cycle or the hydrologic cycle, describes the continuous movement of water on, above and below the surface of the Earth. The mass of water on Earth remains fairly constant over time but the partitioning of the water into the major reservoirs of ice, fresh water, saline water and atmospheric water is variable depending on a wide range of climatic variables. The water moves from one reservoir to another, such as from river to ocean, or from the ocean to the atmosphere, by the physical processes of evaporation, condensation, precipitation, infiltration, surface runoff, and subsurface flow. In doing so, the water goes through different forms: liquid, solid (ice) and vapor.
The water cycle involves the exchange of energy, which leads to temperature changes. For instance, when water evaporates, it takes up energy from its surroundings and cools the environment. When it condenses, it releases energy and warms the environment. These heat exchanges influence climate.
The evaporative phase of the cycle purifies water which then replenishes the land with freshwater. The flow of liquid water and ice transports minerals across the globe. It is also involved in reshaping the geological features of the Earth, through processes including erosion and sedimentation. The water cycle is also essential for the maintenance of most life and ecosystems on the planet.
The sun, which drives the water cycle, heats water in oceans and seas. Water evaporates as water vapor into the air. Ice and snow can sublimate directly into water vapour. Evapotranspiration is water transpired from plants and evaporated from the soil. The water vapour molecule H2O has less density compared to the major components of the atmosphere, nitrogen (N2) and oxygen (O2). Due to the significant difference in molecular mass, water vapor in gas form gains height in open air as a result of buoyancy. However, as altitude increases, air pressure decreases and the temperature drops (see Gas laws). The lowered temperature causes water vapour to condense into tiny liquid water droplets which are heavier than the air, such that they fall unless supported by an updraft. A huge concentration of these droplets over a large space up in the atmosphere becomes visible as a cloud. Fog is formed if the water vapour condenses near ground level, as a result of moist air colliding with cool air or an abrupt reduction in air pressure. Air currents move water vapour around the globe; cloud particles collide, grow, and fall out of the upper atmospheric layers as precipitation. Some precipitation falls as snow or hail, sleet, and can accumulate as ice caps and glaciers, which can store frozen water for thousands of years. Most water falls back into the oceans or onto land as rain, where the water flows over the ground as surface runoff. A portion of runoff enters rivers in valleys in the landscape, with streamflow moving water towards the oceans. Runoff and water emerging from the ground (groundwater) may be stored as freshwater in lakes. Not all runoff flows into rivers; much of it soaks into the ground as infiltration. Some water infiltrates deep into the ground and replenishes aquifers, which can store freshwater for long periods of time. Some infiltration stays close to the land surface and can seep back into surface - water bodies (and the ocean) as groundwater discharge. Some groundwater finds openings in the land surface and comes out as freshwater springs. In river valleys and floodplains, there is often continuous water exchange between surface water and ground water in the hyporheic zone. Over time, the water returns to the ocean, to continue the water cycle.
The water cycle thus involves many intermediate processes.
The residence time of a reservoir within the hydrologic cycle is the average time a water molecule will spend in that reservoir. It is a measure of the average age of the water in that reservoir.
Groundwater can spend over 10,000 years beneath Earth 's surface before leaving. Particularly old groundwater is called fossil water. Water stored in the soil remains there very briefly, because it is spread thinly across the Earth, and is readily lost by evaporation, transpiration, stream flow, or groundwater recharge. After evaporating, the residence time in the atmosphere is about 9 days before condensing and falling to the Earth as precipitation.
The major ice sheets - Antarctica and Greenland - store ice for very long periods. Ice from Antarctica has been reliably dated to 800,000 years before present, though the average residence time is shorter.
In hydrology, residence times can be estimated in two ways. The more common method relies on the principle of conservation of mass and assumes the amount of water in a given reservoir is roughly constant. With this method, residence times are estimated by dividing the volume of the reservoir by the rate by which water either enters or exits the reservoir. Conceptually, this is equivalent to timing how long it would take the reservoir to become filled from empty if no water were to leave (or how long it would take the reservoir to empty from full if no water were to enter).
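As a concrete illustration of the first method, the back - of - the - envelope Python calculation below divides the ocean's volume (the storage figure quoted later in this article) by an annual evaporation flux; the flux value is an assumed, order - of - magnitude number used purely for illustration, not a figure from the article.

# Mass-balance sketch: residence time = reservoir volume / throughput rate.
ocean_volume_km3 = 1_338_000_000      # ocean storage in cubic kilometres, as quoted later in the article
evaporation_km3_per_year = 434_000    # assumed annual evaporation from the oceans (illustrative value)

residence_time_years = ocean_volume_km3 / evaporation_km3_per_year
print(f"Estimated ocean residence time: {residence_time_years:,.0f} years")
# roughly 3,000 years under these assumptions

Under these assumptions the oceans turn over on a timescale of a few thousand years, which is consistent with the long residence times discussed in this section.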
An alternative method to estimate residence times, which is gaining in popularity for dating groundwater, is the use of isotopic techniques. This is done in the subfield of isotope hydrology.
The water cycle describes the processes that drive the movement of water throughout the hydrosphere. However, much more water is "in storage '' for long periods of time than is actually moving through the cycle. The storehouses for the vast majority of all water on Earth are the oceans. It is estimated that of the 332,500,000 cubic miles (1,386,000,000 cubic kilometres) of the world 's water supply, about 321,000,000 cubic miles (1,338,000,000 cubic kilometres) is stored in oceans, or about 97 %. It is also estimated that the oceans supply about 90 % of the evaporated water that goes into the water cycle.
During colder climatic periods more ice caps and glaciers form, and enough of the global water supply accumulates as ice to lessen the amounts in other parts of the water cycle. The reverse is true during warm periods. During the last ice age glaciers covered almost one - third of Earth 's land mass, with the result being that the oceans were about 122 m (400 ft) lower than today. During the last global "warm spell, '' about 125,000 years ago, the seas were about 5.5 m (18 ft) higher than they are now. About three million years ago the oceans could have been up to 50 m (165 ft) higher.
The scientific consensus expressed in the 2007 Intergovernmental Panel on Climate Change (IPCC) Summary for Policymakers is for the water cycle to continue to intensify throughout the 21st century, though this does not mean that precipitation will increase in all regions. In subtropical land areas -- places that are already relatively dry -- precipitation is projected to decrease during the 21st century, increasing the probability of drought. The drying is projected to be strongest near the poleward margins of the subtropics (for example, the Mediterranean Basin, South Africa, southern Australia, and the Southwestern United States). Annual precipitation amounts are expected to increase in near - equatorial regions that tend to be wet in the present climate, and also at high latitudes. These large - scale patterns are present in nearly all of the climate model simulations conducted at several international research centers as part of the 4th Assessment of the IPCC. There is now ample evidence that increased hydrologic variability and change in climate has had and will continue to have a profound impact on the water sector through the hydrologic cycle, water availability, water demand, and water allocation at the global, regional, basin, and local levels. Research published in 2012 in Science based on surface ocean salinity over the period 1950 to 2000 confirms this projection of an intensified global water cycle, with salty areas becoming more saline and fresher areas becoming fresher over the period:
Fundamental thermodynamics and climate models suggest that dry regions will become drier and wet regions will become wetter in response to warming. Efforts to detect this long - term response in sparse surface observations of rainfall and evaporation remain ambiguous. We show that ocean salinity patterns express an identifiable fingerprint of an intensifying water cycle. Our 50 - year observed global surface salinity changes, combined with changes from global climate models, present robust evidence of an intensified global water cycle at a rate of 8 ± 5 % per degree of surface warming. This rate is double the response projected by current - generation climate models and suggests that a substantial (16 to 24 %) intensification of the global water cycle will occur in a future 2 ° to 3 ° warmer world.
An instrument carried by the SAC - D satellite launched in June, 2011 measures global sea surface salinity but data collection began only in June, 2011.
Glacial retreat is also an example of a changing water cycle, where the supply of water to glaciers from precipitation can not keep up with the loss of water from melting and sublimation. Glacial retreat since 1850 has been extensive.
Human activities that alter the water cycle include:
The water cycle is powered by solar energy. 86 % of the global evaporation occurs from the oceans, reducing their temperature by evaporative cooling. Without the cooling effect of evaporation, the greenhouse effect would lead to a much higher surface temperature of 67 ° C (153 ° F), and a warmer planet.
Aquifer drawdown or overdrafting and the pumping of fossil water increases the total amount of water in the hydrosphere, and has been postulated to be a contributor to sea - level rise.
While the water cycle is itself a biogeochemical cycle, flow of water over and beneath the Earth is a key component of the cycling of other biogeochemicals. Runoff is responsible for almost all of the transport of eroded sediment and phosphorus from land to waterbodies. The salinity of the oceans is derived from erosion and transport of dissolved salts from the land. Cultural eutrophication of lakes is primarily due to phosphorus, applied in excess to agricultural fields in fertilizers, and then transported overland and down rivers. Both runoff and groundwater flow play significant roles in transporting nitrogen from the land to waterbodies. The dead zone at the outlet of the Mississippi River is a consequence of nitrates from fertilizer being carried off agricultural fields and funnelled down the river system to the Gulf of Mexico. Runoff also plays a part in the carbon cycle, again through the transport of eroded rock and soil.
The hydrodynamic wind within the upper portion of a planet 's atmosphere allows light chemical elements such as Hydrogen to move up to the exobase, the lower limit of the exosphere, where the gases can then reach escape velocity, entering outer space without impacting other particles of gas. This type of gas loss from a planet into space is known as planetary wind. Planets with hot lower atmospheres could result in humid upper atmospheres that accelerate the loss of hydrogen.
In ancient times, it was widely thought that the land mass floated on a body of water, and that most of the water in rivers has its origin under the earth. Examples of this belief can be found in the works of Homer (circa 800 BCE).
In the ancient Near East, Hebrew scholars observed that even though the rivers ran into the sea, the sea never became full (Ecclesiastes 1: 7). Some scholars conclude that the water cycle was described completely during this time in this passage: "The wind goeth toward the south, and turneth about unto the north; it whirleth about continually, and the wind returneth again according to its circuits. All the rivers run into the sea, yet the sea is not full; unto the place from whence the rivers come, thither they return again '' (Ecclesiastes 1: 6 - 7, KJV). Scholars are not in agreement as to the date of Ecclesiastes, though most scholars point to a date during the time of Solomon, the son of David and Bathsheba, about three thousand years ago; there is some agreement that the time period is 962 - 922 BCE. Furthermore, it was also observed that when the clouds were full, they emptied rain on the earth (Ecclesiastes 11: 3). In addition, during 793 - 740 BC a Hebrew prophet, Amos, stated that water comes from the sea and is poured out on the earth (Amos 5: 8, 9: 6).
In the Adityahridayam (a devotional hymn to the Sun God) of Ramayana, a Hindu epic dated to the 4th century BC, it is mentioned in the 22nd verse that the Sun heats up water and sends it down as rain. By roughly 500 BCE, Greek scholars were speculating that much of the water in rivers can be attributed to rain. The origin of rain was also known by then. These scholars maintained the belief, however, that water rising up through the earth contributed a great deal to rivers. Examples of this thinking included Anaximander (570 BCE) (who also speculated about the evolution of land animals from fish) and Xenophanes of Colophon (530 BCE). Chinese scholars such as Chi Ni Tzu (320 BC) and Lu Shih Ch'un Ch'iu (239 BCE) had similar thoughts. The idea that the water cycle is a closed cycle can be found in the works of Anaxagoras of Clazomenae (460 BCE) and Diogenes of Apollonia (460 BCE). Both Plato (390 BCE) and Aristotle (350 BCE) speculated about percolation as part of the water cycle.
In the Biblical Book of Job, dated between 7th and 2nd centuries BCE, there is a description of precipitation in the hydrologic cycle, "For he maketh small the drops of water: they pour down rain according to the vapour thereof; Which the clouds do drop and distil upon man abundantly '' (Job 36: 27 - 28, KJV).
Up to the time of the Renaissance, it was thought that precipitation alone was insufficient to feed rivers, for a complete water cycle, and that underground water pushing upwards from the oceans were the main contributors to river water. Bartholomew of England held this view (1240 CE), as did Leonardo da Vinci (1500 CE) and Athanasius Kircher (1644 CE).
The first published thinker to assert that rainfall alone was sufficient for the maintenance of rivers was Bernard Palissy (1580 CE), who is often credited as the "discoverer '' of the modern theory of the water cycle. Palissy 's theories were not tested scientifically until 1674, in a study commonly attributed to Pierre Perrault. Even then, these beliefs were not accepted in mainstream science until the early nineteenth century.
|
who did lords look to for protection after the fall of the roman empire | History of Western civilization - wikipedia
Western civilization traces its roots back to Western Europe and the Western Mediterranean. It is linked to the Roman Empire and to Medieval Western Christendom, which emerged from the Middle Ages to experience such transformative episodes as the Renaissance, the Reformation, the Enlightenment, the Industrial Revolution, the scientific revolution, and the development of liberal democracy. The civilizations of Classical Greece, Ancient Rome and Ancient Israel are considered seminal periods in Western history; a few cultural contributions also emerged from the pagan peoples of pre-Christian Europe, such as the Celts and Germans. Christianity and Roman Catholicism have played a prominent role in the shaping of Western civilization, which throughout most of its history has been nearly equivalent to Christian culture. (There were Christians outside of the West, such as in China, India, Russia, Byzantium and the Middle East.) Western civilization has spread to produce the dominant cultures of the modern Americas and Oceania, and has had immense global influence in recent centuries in many ways.
Following the 5th century Fall of Rome, Western Europe entered the Middle Ages, during which period the Catholic Church filled the power vacuum left in the West by the fall of the Western Roman Empire, while the Eastern Roman Empire (or Byzantine Empire) endured in the East for centuries, becoming a Hellenic Eastern contrast to the Latin West. By the 12th century, Western Europe was experiencing a flowering of art and learning, propelled by the construction of cathedrals and the establishment of medieval universities. Christian unity was shattered by the Reformation from the 14th century. A merchant class grew out of city states, initially in the Italian peninsula (see Italian city - states), and Europe experienced the Renaissance from the 14th to the 17th century, heralding an age of technological and artistic advance and ushering in the Age of Discovery which saw the rise of such global European Empires as those of Spain, Portugal and Britain.
The Industrial Revolution began in Britain in the 18th century. Under the influence of the Enlightenment, the Age of Revolution emerged from the United States and France as part of the transformation of the West into its industrialised, democratised modern form. The lands of North and South America, South Africa, Australia and New Zealand became first part of European Empires and then home to new Western nations, while Africa and Asia were largely carved up between Western powers. Laboratories of Western democracy were founded in Britain 's colonies in Australasia from the mid-19th century, while South America largely created new autocracies. In the 20th century, absolute monarchy disappeared from Europe, and despite episodes of Fascism and Communism, by the close of the century, virtually all of Europe was electing its leaders democratically. Most Western nations were heavily involved in the First and Second World Wars and protracted Cold War. World War II saw Fascism defeated in Europe, and the emergence of the United States and Soviet Union as rival global powers and a new "East - West '' political contrast. Other than in Russia, the European Empires disintegrated after World War II and civil rights movements and widescale multi-ethnic, multi-faith migrations to Europe, the Americas and Oceania lowered the earlier predominance of ethnic Europeans in Western culture. European nations moved towards greater economic and political co-operation through the European Union. The Cold War ended around 1990 with the collapse of Soviet imposed Communism in Central and Eastern Europe. In the 21st century, the Western World retains significant global economic power and influence. The West has contributed a great many technological, political, philosophical, artistic and religious aspects to modern international culture: having been a crucible of Catholicism, Protestantism, democracy, industrialisation; the first major civilisation to seek to abolish slavery during the 19th century, the first to enfranchise women (beginning in Australasia at the end of the 19th century) and the first to put to use such technologies as steam, electric and nuclear power. The West invented cinema, television, the personal computer and the Internet; produced artists such as Shakespeare, Michelangelo, Mozart and The Beatles; developed sports such as soccer, cricket, golf, tennis, rugby and basketball; and transported humans to an astronomical object for the first time with the 1969 Apollo 11 Moon Landing.
While the Roman Empire and Christian religion survived in an increasingly Hellenised form in the Byzantine Empire centered at Constantinople in the East, Western civilization suffered a collapse of literacy and organization following the fall of Rome in AD 476. Gradually however, the Christian religion re-asserted its influence over Western Europe.
After the Fall of Rome, the papacy served as a source of authority and continuity. In the absence of a magister militum living in Rome, even the control of military matters fell to the pope. Gregory the Great (c 540 -- 604) administered the church with strict reform. A trained Roman lawyer and administrator, and a monk, he represents the shift from the classical to the medieval outlook and was a father of many of the structures of the later Roman Catholic Church. According to the Catholic Encyclopedia, he looked upon Church and State as co-operating to form a united whole, which acted in two distinct spheres, ecclesiastical and secular, but by the time of his death, the papacy was the great power in Italy.
According to tradition, it was a Romanized Briton, Saint Patrick who introduced Christianity to Ireland around the 5th century. Roman legions had never conquered Ireland, and as the Western Roman Empire collapsed, Christianity managed to survive there. Monks sought out refuge at the far fringes of the known world: like Cornwall, Ireland, or the Hebrides. Disciplined scholarship carried on in isolated outposts like Skellig Michael in Ireland, where literate monks became some of the last preservers in Western Europe of the poetic and philosophical works of Western antiquity.
By around 800 they were producing illuminated manuscripts such as the Book of Kells. The missions of Gaelic monasteries led by monks like St Columba spread Christianity back into Western Europe during the Middle Ages, establishing monasteries initially in northern Britain, then through Anglo - Saxon England and the Frankish Empire during the Middle Ages. Thomas Cahill, in his 1995 book How the Irish Saved Civilization, credited Irish Monks with having "saved '' Western Civilization during this period. According to art historian Kenneth Clark, for some five centuries after the fall of Rome, virtually all men of intellect joined the Church and practically nobody in western Europe outside of monastic settlements had the ability to read or write.
Around AD 500, Clovis I, the King of the Franks, became a Christian and united Gaul under his rule. Later in the 6th century, the Byzantine Empire restored its rule in much of Italy and Spain. Missionaries sent from Ireland by the Pope helped to convert England to Christianity in the 6th century as well, restoring that faith as the dominant one in Western Europe.
Muhammed, the founder and Prophet of Islam, was born in Mecca in AD 570. Working as a trader, he encountered the ideas of Christianity and Judaism on the fringes of the Byzantine Empire, and around 610 began preaching a new monotheistic religion, Islam; in 622 he became the civil and spiritual leader of Medina, and soon afterwards conquered Mecca in 630. After Muhammed 's death in 632, his new creed conquered first the Arabian tribes, then the great Byzantine cities of Damascus in 635 and Jerusalem in 636. A multiethnic Islamic empire was established across the formerly Roman Middle East and North Africa. By the early 8th century, Iberia and Sicily had fallen to the Muslims. By the 9th century, Sardinia, Malta, Cyprus and Crete had fallen -- and for a time the South of France and Italy.
Only in 732 was the Muslim advance into Europe stopped by the Frankish leader Charles Martel, saving Gaul and the rest of the West from conquest by Islam. From this time, the "West '' became synonymous with Christendom, the territory ruled by Christian powers, as Oriental Christianity fell to dhimmi status under the Muslim Caliphates. The cause to liberate the "Holy Land '' remained a major focus throughout medieval history, fueling many consecutive crusades, only the first of which was successful (although it resulted in many atrocities, in Europe as well as elsewhere).
Charlemagne ("Charles the Great '' in English) became king of the Franks. He conquered Gaul (modern day France), northern Spain, Saxony, and northern and central Italy. In 800, Pope Leo III crowned Charlemagne Holy Roman Emperor. Under his rule, his subjects in non-Christian lands like Germany converted to Christianity.
After his reign, the empire he created broke apart into the kingdom of France (from Francia, meaning "land of the Franks ''), the Holy Roman Empire and the kingdom in between (containing modern - day Switzerland, northern Italy, eastern France and the Low Countries).
Starting in the late 8th century, the Vikings began seaborne attacks on the towns and villages of Europe. Eventually, they turned from raiding to conquest, and conquered Ireland, most of England, and northern France (Normandy). These conquests were not long - lasting, however. Alfred the Great halted the Viking advance in England, and in 954 the last Viking king of York was driven out, leaving England united under Alfred 's successors; Viking rule in Ireland ended as well. In Normandy the Vikings adopted French culture and language, became Christians and were absorbed into the native population.
By the beginning of the 11th century Scandinavia was divided into three kingdoms, Norway, Sweden, and Denmark, all of which were Christian and part of Western civilization. Norse explorers reached Iceland, Greenland, and even North America; however, only Iceland was permanently settled by the Norse. A period of warm temperatures from around 1000 - 1200 enabled the establishment of a Norse outpost in Greenland in 985, which survived for some 400 years as the most westerly outpost of Christendom. From here, Norsemen attempted a short - lived European colony in North America, five centuries before Columbus.
In the 10th century another marauding group of warriors swept through Europe, the Magyars. They eventually settled in what is today Hungary, converted to Christianity and became the ancestors of the Hungarian people.
A West Slavic people, the Poles, formed a unified state by the 10th century and adopted Christianity in the same century, though a pagan uprising followed in the 11th century.
By the start of the second millennium AD, the West had become divided linguistically into three major groups. The Romance languages, based on Latin, the language of the Romans, the Germanic languages, and the Celtic languages. The most widely spoken Romance languages were French, Italian, Portuguese and Spanish. Four widely spoken Germanic languages were English, German, Dutch, and Danish. Irish and Scots Gaelic were two widely spoken Celtic languages in the British Isles.
Art historian Kenneth Clark wrote that Western Europe 's first "great age of civilisation '' was ready to begin around the year 1000. From 1100, he wrote: "every branch of life -- action, philosophy, organisation, technology (experienced an) extraordinary outpouring of energy, an intensification of existence ''. Upon this period rest the foundations of many of Europe 's subsequent achievements. By Clark 's account, the Catholic Church was very powerful, essentially internationalist and democratic in its structures and run by monastic organisations generally following Benedictine rule. Men of intelligence usually joined religious orders and those of intellectual, administrative or diplomatic skill could advance beyond the usual restraints of society -- leading churchmen from faraway lands were accepted in local bishoprics, linking European thought across wide distances. Complexes like the Abbey of Cluny became vibrant centres with dependencies spread throughout Europe. Ordinary people also trekked vast distances on pilgrimages to express their piety and pray at the site of holy relics. Monumental abbeys and cathedrals were constructed and decorated with sculptures, hangings, mosaics and works belonging to one of the greatest epochs of art, providing stark contrast to the monotonous and cramped conditions of ordinary living. Abbot Suger of the Abbey of St. Denis is considered an influential early patron of Gothic architecture and believed that love of beauty brought people closer to God: "The dull mind rises to truth through that which is material ''. Clark calls this "the intellectual background of all the sublime works of art of the next century and in fact has remained the basis of our belief of the value of art until today ''.
By the year 1000 feudalism had become the dominant social, economic and political system. At the top of society was the monarch, who gave land to nobles in exchange for loyalty. The nobles gave land to vassals, who served as knights to defend their monarch or noble. Under the vassals were the peasants or serfs. The feudal system thrived as long as peasants needed protection by the nobility from invasions originating inside and outside of Europe. So as the 11th century progressed, the feudal system declined along with the threat of invasion.
In 1054, after centuries of strained relations, the Great Schism occurred over differences in doctrine, splitting the Christian world between the Catholic Church, centered in Rome and dominant in the West, and the Orthodox Church, centered in Constantinople, capital of the Byzantine Empire. The last pagan land in Europe was converted to Christianity with the conversion of the Baltic peoples in the High Middle Ages, bringing them into Western civilization as well.
As the Medieval period progressed, the aristocratic military ideal of Chivalry and institution of knighthood based around courtesy and service to others became culturally important. Large Gothic cathedrals of extraordinary artistic and architectural intricacy were constructed throughout Europe, including Canterbury Cathedral in England, Cologne Cathedral in Germany and Chartres Cathedral in France (called the "epitome of the first great awakening in European civilisation '' by Kenneth Clark). The period produced ever more extravagant art and architecture, but also the virtuous simplicity of such as St Francis of Assisi (expressed in the Prayer of St Francis) and the epic poetry of Dante 's Divine Comedy. As the Church grew more powerful and wealthy, many sought reform. The Dominican and Franciscan Orders were founded, which emphasized poverty and spirituality.
Women were in many respects excluded from political and mercantile life, however, leading churchwomen were an exception. Medieval abbesses and female superiors of monastic houses were powerful figures whose influence could rival that of male bishops and abbots: "They treated with kings, bishops, and the greatest lords on terms of perfect equality;... they were present at all great religious and national solemnities, at the dedication of churches, and even, like the queens, took part in the deliberation of the national assemblies... ''. The increasing popularity of devotion to the Virgin Mary (the mother of Jesus) secured maternal virtue as a central cultural theme of Catholic Europe. Kenneth Clark wrote that the ' Cult of the Virgin ' in the early 12th century "had taught a race of tough and ruthless barbarians the virtues of tenderness and compassion ''.
In 1095, Pope Urban II called for a Crusade to re-conquer the Holy Land from Muslim rule, when the Seljuk Turks prevented Christians from visiting the holy sites there. For centuries prior to the emergence of Islam, Asia Minor and much of the Mid East had been a part of the Roman and later Byzantine Empires. The Crusades were originally launched in response to a call from the Byzantine Emperor for help to fight the expansion of the Turks into Anatolia. The First Crusade succeeded in its task, but at a serious cost on the home front, and the crusaders established rule over the Holy Land. However, Muslim forces reconquered the land by the 13th century, and subsequent crusades were not very successful. The specific crusades to restore Christian control of the Holy Land were fought over a period of nearly 200 years, between 1095 and 1291. Other campaigns in Spain and Portugal (the Reconquista), and Northern Crusades continued into the 15th century. The Crusades had major far - reaching political, economic, and social impacts on Europe. They further served to alienate Eastern and Western Christendom from each other and ultimately failed to prevent the march of the Turks into Europe through the Balkans and the Caucasus.
Cathedral schools began in the Early Middle Ages as centers of advanced education, some of them ultimately evolving into medieval universities. During the High Middle Ages, Chartres Cathedral operated the famous and influential Chartres Cathedral School. The medieval universities of Western Christendom were well - integrated across all of Western Europe, encouraged freedom of enquiry and produced a great variety of fine scholars and natural philosophers, including Robert Grosseteste of the University of Oxford, an early expositor of a systematic method of scientific experimentation, and Saint Albert the Great, a pioneer of biological field research. The Italian University of Bologna is considered the oldest continually operating university.
Philosophy in the High Middle Ages focused on religious topics. Christian Platonism, which modified Plato 's idea of the separation between the ideal world of the forms and the imperfect world of their physical manifestations to the Christian division between the imperfect body and the higher soul was at first the dominant school of thought. However, in the 12th century the works of Aristotle were reintroduced to the West, which resulted in a new school of inquiry known as scholasticism, which emphasized scientific observation. Two important philosophers of this period were Saint Anselm and Saint Thomas Aquinas, both of whom were concerned with proving God 's existence through philosophical means. The Summa Theologica by Aquinas was one of the most influential documents in medieval philosophy and Thomism continues to be studied today in philosophy classes. Theologian Peter Abelard wrote in 1122 "I must understand in order that I may believe... by doubting we come to questioning, and by questioning we perceive the truth ''.
In Normandy, the Vikings adopted French culture and language, mixed with the native population of mostly Frankish and Gallo - Roman stock and became known as the Normans. They played a major political, military, and cultural role in medieval Europe and even the Near East. They were famed for their martial spirit and Christian piety. They quickly adopted the Romance language of the land they settled in, their dialect becoming known as Norman, an important literary language. The Duchy of Normandy, which they formed by treaty with the French crown, was one of the great large fiefs of medieval France. The Normans are famed both for their culture, such as their unique Romanesque architecture, and their musical traditions, as well as for their military accomplishments and innovations. Norman adventurers established a kingdom in Sicily and southern Italy by conquest, and a Norman expedition on behalf of their duke led to the Norman Conquest of England. Norman influence spread from these new centres to the Crusader States in the Near East, to Scotland and Wales in Great Britain, and to Ireland.
Relations between the major powers in Western society: the nobility, monarchy and clergy, sometimes produced conflict. If a monarch attempted to challenge church power, condemnation from the church could mean a total loss of support among the nobles, peasants, and other monarchs. Holy Roman Emperor Henry IV, one of the most powerful men of the 11th century, stood three days bare - headed in the snow at Canossa in 1077, in order to reverse his excommunication by Pope Gregory VII. As monarchies centralized their power as the Middle Ages progressed, nobles tried to maintain their own authority. The sophisticated Court of Holy Roman Emperor Frederick II was based in Sicily, where Norman, Byzantine, and Islamic civilization had intermingled. His realm stretched through Southern Italy, through Germany and in 1229, he crowned himself King of Jerusalem. His reign saw tension and rivalry with the Papacy over control of Northern Italy. A patron of education, Frederick founded the University of Naples.
Plantagenet kings first ruled the Kingdom of England in the 12th century. Henry V left his mark with a famous victory against larger numbers at the Battle of Agincourt, while Richard the Lionheart, who had earlier distinguished himself in the Third Crusade, was later romanticised as an iconic figure in English folklore. A distinctive English culture emerged under the Plantagenets, encouraged by some of the monarchs who were patrons of the "father of English poetry '', Geoffrey Chaucer. The Gothic architecture style was popular during the time, with buildings such as Westminster Abbey remodelled in that style. King John 's sealing of the Magna Carta was influential in the development of common law and constitutional law. The 1215 Charter required the King to proclaim certain liberties, and accept that his will was not arbitrary -- for example by explicitly accepting that no "freeman '' (non-serf) could be punished except through the law of the land, a right which is still in existence today. Political institutions such as the Parliament of England and the Model Parliament originate from the Plantagenet period, as do educational institutions including the universities of Cambridge and Oxford.
From the 12th century onward inventiveness had re-asserted itself outside of the Viking north and the Islamic south of Europe. Universities flourished, mining of coal commenced, and crucial technological advances such as the lock, which enabled sail ships to reach the thriving Belgian city of Bruges via canals, and the deep sea ship guided by magnetic compass and rudder were invented.
A cooling in temperatures after about 1150 saw leaner harvests across Europe and consequent shortages of food and flax material for clothing. Famines increased and in 1316 serious famine gripped Ypres. In 1410, the last of the Greenland Norsemen abandoned their colony to the ice. From Central Asia, Mongol invasions progressed towards Europe throughout the 13th century, resulting in the vast Mongol Empire, which by 1300 had become the largest contiguous land empire in history and ruled over a large share of the world 's population.
The Papacy had its court at Avignon from 1305 - 78. This arose from the conflict between the Papacy and the French crown. A total of seven popes reigned at Avignon; all were French, and all were increasingly under the influence of the French crown. Finally in 1377 Gregory XI, in part because of the entreaties of the mystic Saint Catherine of Siena, restored the Holy See to Rome, officially ending the Avignon papacy. However, in 1378 the breakdown in relations between the cardinals and Gregory 's successor, Urban VI, gave rise to the Western Schism -- which saw another line of Avignon Popes set up as rivals to Rome (subsequent Catholic history does not grant them legitimacy). The period helped weaken the prestige of the Papacy in the buildup to the Protestant Reformation.
In the Later Middle Ages the Black Plague struck Europe, arriving in 1348. Europe was overwhelmed by the outbreak of bubonic plague, probably brought to Europe by the Mongols. The fleas hosted by rats carried the disease and it devastated Europe. Major cities like Paris, Hamburg, Venice and Florence lost half their population. Around 20 million people -- up to a third of Europe 's population -- died from the plague before it receded. The plague periodically returned over coming centuries.
The last centuries of the Middle Ages saw the waging of the Hundred Years ' War between England and France. The war began in 1337 when the king of France laid claim to English - ruled Gascony in southern France, and the king of England claimed to be the rightful king of France. At first, the English conquered half of France and seemed likely to win the war, until the French were rallied by a peasant girl, who would later become a saint, Joan of Arc. Although she was captured and executed by the English, the French fought on and won the war in 1453. After the war, France recovered all the English - held lands on the continent except the city of Calais, which it finally took in 1558.
Following the Mongols from Central Asia came the Ottoman Turks. By 1400 they had captured most of modern - day Turkey and extended their rule into Europe through the Balkans and as far as the Danube, surrounding even the fabled city of Constantinople. Finally, in 1453, one of Europe 's greatest cities fell to the Turks. The Ottomans, under the command of Sultan Mehmed II, fought a vastly outnumbered defending army commanded by Emperor Constantine XI -- the last "Emperor of the Eastern Roman Empire '' -- and blasted down the ancient walls with the terrifying new weaponry of the cannon. The Ottoman conquests sent refugee Greek scholars westward, contributing to the revival of the West 's knowledge of the learning of Classical Antiquity.
Probably the first clock in Europe was installed in a Milan church in 1335, hinting at the dawning mechanical age. By the 14th century, the middle class in Europe had grown in influence and number as the feudal system declined. This spurred the growth of towns and cities in the West and improved the economy of Europe. This, in turn, helped begin a cultural movement in the West known as the Renaissance, which began in Italy. Italy was dominated by city - states, many of which were nominally part of the Holy Roman Empire and were ruled by wealthy aristocrats like the Medicis, or in some cases, by the pope.
The Renaissance, originating from Italy, ushered in a new age of scientific and intellectual inquiry and appreciation of ancient Greek and Roman civilizations. The merchant cities of Florence, Genoa, Ghent, Nuremberg, Geneva, Zürich, Lisbon and Seville provided patrons of the arts and sciences and unleashed a flurry of activity.
The Medici became the leading family of Florence and fostered and inspired the birth of the Italian Renaissance along with other families of Italy, such as the Visconti and Sforza of Milan, the Este of Ferrara, and the Gonzaga of Mantua. Great artists like Brunelleschi, Botticelli, Da Vinci, Michelangelo, Giotto, Donatello, Titian and Raphael produced inspired works -- their painting was more realistic than that of Medieval artists, and their marble statues rivalled and sometimes surpassed those of Classical Antiquity. Michelangelo carved his masterpiece David from marble between 1501 and 1504.
The humanist historian Leonardo Bruni divided history into antiquity, the Middle Ages and the modern period.
Churches began to be built in the Romanesque style for the first time in centuries. While art and architecture flourished in Italy and then the Netherlands, religious reformers flowered in Germany and Switzerland; printing was establishing itself in the Rhineland and navigators were embarking on extraordinary voyages of discovery from Portugal and Spain.
Around 1450, Johannes Gutenberg developed a printing press, which allowed works of literature to spread more quickly. Secular thinkers like Machiavelli re-examined the history of Rome to draw lessons for civic governance. Theologians revisited the works of St Augustine. Important thinkers of the Renaissance in Northern Europe included the Catholic humanists Desiderius Erasmus, a Dutch theologian, and the English statesman and philosopher Thomas More, who wrote the seminal work Utopia in 1516. Humanism was an important development to emerge from the Renaissance. It placed importance on the study of human nature and worldly topics rather than religious ones. Important humanists of the time included the writers Petrarch and Boccaccio, who wrote both in Latin, as had been done in the Middle Ages, and in the vernacular, in their case Tuscan Italian.
As the calendar reached the year 1500, Europe was blossoming -- with Leonardo da Vinci painting his Mona Lisa portrait not long after Christopher Columbus reached the Americas (1492), Amerigo Vespucci showing that the Americas were not part of Asia (the New World 's name, "America '', derives from his), the Portuguese navigator Vasco da Gama sailing around Africa into the Indian Ocean, and Michelangelo completing his paintings of Old Testament themes on the ceiling of the Sistine Chapel in Rome (the expense of such artistic exuberance did much to spur the likes of Martin Luther in Northern Europe in their protests against the Church of Rome).
For the first time in European history, events north of the Alps and on the Atlantic coast were taking centre stage. Important artists of this period included Bosch, Dürer, and Bruegel. In Spain, Miguel de Cervantes wrote the novel Don Quixote; other important works of literature in this period were the Canterbury Tales by Geoffrey Chaucer and Le Morte d'Arthur by Sir Thomas Malory. The most famous playwright of the era was the Englishman William Shakespeare, whose sonnets and plays (including Hamlet, Romeo and Juliet and Macbeth) are considered some of the finest works ever written in the English language.
Meanwhile, the Christian kingdoms of northern Iberia continued their centuries - long fight to reconquer the peninsula from its Muslim rulers. In 1492, the last Islamic stronghold, Granada, fell, and Iberia was divided between the Christian kingdoms of Spain and Portugal. Iberia 's Jewish and Muslim minorities were forced to convert to Catholicism or be exiled. The Portuguese immediately looked to expand outward, sending expeditions to explore the coasts of Africa and engage in trade with the mostly Muslim powers on the Indian Ocean, making Portugal wealthy. In 1492, a Spanish expedition led by Christopher Columbus reached the Americas during an attempt to find a western route to East Asia.
From the East, however, the Ottoman Turks under Suleiman the Magnificent continued their advance into the heart of Christian Europe -- besieging Vienna in 1529.
The 16th century saw the flowering of the Renaissance in the rest of the West. In the Polish - Lithuanian Commonwealth, astronomer Nicolaus Copernicus deduced that the geocentric model of the universe was incorrect, and that in fact the planets revolve around the sun. Galileo, often called the father of modern science, developed telescope technology. Advances in medicine and understanding of the human anatomy also increased in this time. Gerolamo Cardano invented or improved several machines and introduced important mathematical ideas. Later, in England, Sir Isaac Newton pioneered the science of physics. These events led to the so - called scientific revolution, which emphasized experimentation.
The other major movement in the West in the 16th century was the Reformation, which would profoundly change the West and end its religious unity. The Reformation began in 1517 when the Catholic monk Martin Luther wrote his 95 Theses, which denounced the wealth and corruption of the church, as well as many Catholic beliefs, including the institution of the papacy and the belief that, in addition to faith in Christ, "good works '' were also necessary for salvation. Luther drew on the beliefs of earlier church critics, like the Bohemian Jan Hus and the Englishman John Wycliffe. Luther 's beliefs eventually ended in his excommunication from the Catholic Church and the founding of a church based on his teachings: the Lutheran Church, which became the majority religion in northern Germany. Soon other reformers emerged, and their followers became known as Protestants. In 1525, Ducal Prussia became the first Lutheran state.
In the 1540s the Frenchman John Calvin founded a church in Geneva which forbade alcohol and dancing, and which taught that God had selected those destined to be saved from the beginning of time. His Calvinist Church gained about half of Switzerland, and churches based on his teachings became dominant in the Netherlands (the Dutch Reformed Church) and Scotland (the Presbyterian Church). In England, when the Pope refused to annul King Henry VIII 's marriage, the king declared himself head of the Church in England (founding what would evolve into today 's Church of England and Anglican Communion). Some Englishmen felt the church was still too similar to the Catholic Church and formed the more radical Puritanism. Many other small Protestant sects were formed, including the Zwinglians, Anabaptists and Mennonites. Although they were different in many ways, Protestants generally called their religious leaders ministers instead of priests, and believed that only the Bible, and not Tradition, offered divine revelation.
Britain and the Dutch Republic allowed Protestant dissenters to migrate to their North American colonies -- thus the future United States found its early Protestant ethos -- while Protestants were forbidden to migrate to the Spanish colonies (thus South America retained its Catholic hue). A more democratic organisational structure within some of the new Protestant movements -- as in the Calvinists of New England -- did much also to foster a democratic spirit in Britain 's American colonies.
The Catholic Church responded to the Reformation with the Counter - Reformation. Some of Luther and Calvin 's criticisms were heeded: the selling of indulgences was reined in by the Council of Trent in 1562. But exuberant baroque architecture and art were embraced as an affirmation of the faith, and new seminaries and orders were established to lead missions to far - off lands. An important leader in this movement was Saint Ignatius of Loyola, founder of the Society of Jesus (the Jesuit Order), which gained many converts and sent such famous missionaries as Matteo Ricci to China, Saint Francis Xavier to India and Saint Peter Claver to the Americas.
As princes, kings and emperors chose sides in religious debates and sought national unity, religious wars erupted throughout Europe, especially in the Holy Roman Empire. Emperor Charles V was able to arrange the Peace of Augsburg between the warring Catholic and Protestant nobility. However, in 1618, the Thirty Years ' War began between Protestants and Catholics in the empire, which eventually involved neighboring countries like France. The devastating war finally ended in 1648. In the Peace of Westphalia ending the war, Lutheranism, Catholicism and Calvinism were all granted toleration in the empire. The two major centers of power in the empire after the war were Protestant Prussia in the north and Catholic Austria in the south. The Dutch, who were ruled by the Spanish at the time, revolted and gained independence, founding a Protestant country. In 1588 the staunchly Catholic Spanish attempted to conquer Protestant England with a large fleet of ships (the Spanish Armada); however, the fleet was scattered by English attacks and storms, bringing a famous victory to Queen Elizabeth I of England. The defeat of the Spanish Armada associated her name forever with what is popularly viewed as one of the greatest victories in English history. The Elizabethan era is famous above all for the flourishing of English drama, led by playwrights such as William Shakespeare, and for the seafaring prowess of English adventurers such as Sir Francis Drake. Her 44 years on the throne provided welcome stability and helped forge a sense of national identity. One of her first moves as queen was to support the establishment of an English Protestant church, of which she became the Supreme Governor and which was to become the Church of England.
By 1650, the religious map of Europe had been redrawn: Scandinavia, Iceland, north Germany, part of Switzerland, the Netherlands and Britain were Protestant, while the rest of the West remained Catholic. A byproduct of the Reformation was increasing literacy, as Protestant powers pursued the aim of educating more people to be able to read the Bible.
From its dawn until modern times, the West had suffered invasions from Africa, Asia, and non-Western parts of Europe. By 1500, Westerners had begun to take advantage of their new technologies and sally forth into unknown waters, expanding their power as the Age of Discovery began, with Western explorers from seafaring nations like Portugal and Castile (later Spain), and later Holland, France and England, setting forth from the "Old World '' to chart faraway shipping routes and discover "new worlds ''.
In 1492, the Genoese - born mariner Christopher Columbus set out under the auspices of the Crown of Castile to seek an oversea route to the East Indies via the Atlantic Ocean. Rather than Asia, Columbus landed in the Bahamas, in the Caribbean. Spanish colonization followed and Europe established Western civilization in the Americas. The Portuguese explorer Vasco da Gama led the first sailing expedition directly from Europe to India in 1497 - 1499, by the Atlantic and Indian oceans, opening up the possibility of trade with the East other than via perilous overland routes like the Silk Road. Ferdinand Magellan, a Portuguese explorer working for the Spanish Crown (under the Crown of Castile), led an expedition in 1519 -- 1522 which became the first to sail from the Atlantic Ocean into the Pacific Ocean and the first to cross the Pacific. It also completed the first circumnavigation of the Earth (although Magellan himself was killed in the Philippines).
The Americas were deeply affected by European expansion, due to conquest, sickness, and the introduction of new technologies and ways of life. The Spanish Conquistadors conquered most of the Caribbean islands and overran the two great New World empires: the Aztec Empire of Mexico and the Inca Empire of Peru. From there, Spain conquered about half of South America, the rest of Central America and much of North America. Portugal also expanded in the Americas, first attempting to establish some relatively short - lived fishing colonies in northern North America and then conquering roughly half of South America, calling its colony Brazil. These Western powers were aided not only by superior technology like gunpowder, but also by Old World diseases which they inadvertently brought with them, and which wiped out large segments of the Amerindian population. The native populations, called Indians by Columbus, since he originally thought he had landed in Asia (but often called Amerindians by scholars today), were converted to Catholicism and adopted the language of their rulers, either Spanish or Portuguese. They also adopted much of Western culture. Many Iberian settlers arrived, and many of them intermarried with the Amerindians, resulting in a so - called Mestizo population, which became the majority of the population of Spain 's American empires.
Other powers to arrive in the Americas were the Swedes, Dutch, English, and French. The Dutch, English, and French all established colonies in the Caribbean and each established a small South American colony. The French established two large colonies in North America, Louisiana in the center of the continent and New France in the northeast of the continent. The French were not as intrusive as the Iberians were and had relatively good relations with the Amerindians, although there were areas of relatively heavy settlement like New Orleans and Quebec. Many French missionaries were successful in converting Amerindians to Catholicism. On North America 's Atlantic coast, the Swedes established New Sweden. This colony was eventually conquered by the nearby Dutch colony of New Netherland (including New Amsterdam). New Netherland itself was eventually conquered by England and renamed New York. Although England 's American empire began in what is today Canada, they soon focused their attention to the south, where they established thirteen colonies on North America 's Atlantic coast. The English were unique in that rather than attempting to convert the Amerindians, they simply settled their colonies with Englishmen and pushed the Amerindians off their lands.
In the Americas, it seems that only the most remote peoples managed to stave off complete assimilation by Western and Western - fashioned governments. These include some of the northern peoples (i.e., Inuit), some peoples in the Yucatán, Amazonian forest dwellers, and various Andean groups. Of these, the Quechua people, Aymara people, and Maya people are the most numerous - at around 10 - 11 million, 2 million, and 7 million, respectively. Bolivia is the only American country with a majority Amerindian population.
Contact between the Old and New Worlds produced the Columbian Exchange, named after Columbus. It involved the transfer of goods unique to one hemisphere to another. Westerners brought cattle, horses, and sheep to the New World, and from the New World Europeans received tobacco, potatoes, and bananas. Other items becoming important in global trade were the sugarcane and cotton crops of the Americas, and the gold and silver brought from the Americas not only to Europe but elsewhere in the Old World.
Much of the land of the Americas was uncultivated, and Western powers were determined to make use of it. At the same time, tribal West African rulers were eager to trade their prisoners of war, and even members of their own tribes, as slaves to the West. The West began purchasing slaves in large numbers and sending them to the Americas. This slavery was unique in world history for several reasons. Firstly, since only black Africans were enslaved, a racial component entered into Western slavery which had not existed in any other society to the extent it did in the West. Another important difference between slavery in the West and slavery elsewhere was the treatment of slaves: unlike in some other cultures, slaves in the West were used primarily as field workers. Western empires differed in how often manumission was granted to slaves, with it being rather common in Spanish colonies, for example, but rare in English ones. Many Westerners did eventually come to question the morality of slavery. This early anti-slavery movement, mostly among clergy and political thinkers, was countered by pro-slavery forces, who introduced the idea that blacks were inferior to European whites, mostly because they were non-Christians, and that it was therefore acceptable to treat them without dignity. This idea fostered racism in the West, as people began feeling all blacks were inferior to whites, regardless of their religion. Once in the Americas, blacks adopted much of Western culture and the languages of their masters. They also converted to Christianity.
After trading with African rulers for some time, Westerners began establishing colonies in Africa. The Portuguese conquered ports in North, West and East Africa and inland territory in what is today Angola and Mozambique. They also established relations with the Kingdom of Kongo in central Africa, and eventually the Kongolese converted to Catholicism. The Dutch established colonies in modern - day South Africa, which attracted many Dutch settlers. Western powers also established colonies in West Africa. However, most of the continent remained unknown to Westerners and their colonies were restricted to Africa 's coasts.
Westerners also expanded in Asia. The Portuguese controlled port cities in the East Indies, India, the Persian Gulf, Sri Lanka, Southeast Asia and China. During this time, the Dutch began their colonisation of the Indonesian archipelago, which became the Dutch East Indies in the early 19th century, and gained port cities in Sri Lanka, Malaysia and India. Spain conquered the Philippines and converted the inhabitants to Catholicism. Missionaries from Iberia (including some from Italy and France) gained many converts in Japan until Christianity was outlawed by Japan 's rulers. Some Chinese also became Christian, although most did not. Much of India eventually came under the influence of England and France.
As Western powers expanded they competed for land and resources. In the Caribbean, pirates attacked each other and the navies and colonial cities of rival countries, in hopes of stealing gold and other valuables from a ship or city. This was sometimes supported by governments: England, for example, supported the pirate Sir Francis Drake in raids against the Spanish. Between 1652 and 1678, the Anglo - Dutch Wars were fought, which England won, gaining New Netherland; the Dutch Cape Colony in South Africa passed to Britain only later, during the Napoleonic Wars. In 1756, the Seven Years ' War (known in North America as the French and Indian War) began. It involved several powers fighting on several continents. In North America, English soldiers and colonial troops defeated the French, and in India the French were also defeated by England. In Europe Prussia defeated Austria. When the war ended in 1763, New France and eastern Louisiana were ceded to England, while western Louisiana was given to Spain. France 's lands in India were ceded to England. Prussia was given rule over more territory in what is today Germany.
The Dutch navigator Willem Janszoon had been the first documented Westerner to land in Australia, in 1606. Another Dutchman, Abel Tasman, later touched mainland Australia and mapped Tasmania and New Zealand for the first time, in the 1640s. The English navigator James Cook became the first to map the east coast of Australia in 1770. Cook 's extraordinary seamanship greatly expanded European awareness of far shores and oceans: his first voyage reported favourably on the prospects of colonisation of Australia; his second voyage ventured almost to Antarctica (disproving long - held European hopes of an undiscovered Great Southern Continent); and his third voyage explored the Pacific coasts of North America and Siberia and brought him to Hawaii, where an ill - advised return after a lengthy stay saw him clubbed to death by natives.
Europe 's period of expansion in early modern times greatly changed the world. New crops from the Americas improved European diets. This, combined with an improved economy thanks to Europe 's new network of colonies, led to a demographic revolution in the West, with infant mortality dropping, and Europeans marrying younger and having more children. The West also became more sophisticated economically, adopting mercantilism, under which trading companies were chartered and protected by the state and colonies existed for the good of the mother country.
The West in the early modern era went through great changes as the traditional balance between monarchy, nobility and clergy shifted. With the feudal system all but gone, nobles lost their traditional source of power. Meanwhile, in Protestant countries, the church was now often headed by a monarch, while in Catholic countries, conflicts between monarchs and the Church rarely occurred and monarchs were able to wield greater power than they ever had in Western history. Under the doctrine of the Divine right of kings, monarchs believed they were only answerable to God: thus giving rise to absolutism.
At the opening of the 15th century, tensions persisted between Islam and Christianity. Europe, dominated by Christians, remained under threat from the Muslim Ottoman Turks. The Turks had migrated from central to western Asia and converted to Islam centuries earlier. Their capture of Constantinople in 1453, extinguishing the Eastern Roman Empire, was a crowning achievement for the new Ottoman Empire. They continued to expand across the Middle East, North Africa and the Balkans. Under the leadership of the Spanish, a Christian coalition destroyed the Ottoman navy at the Battle of Lepanto in 1571, ending their naval control of the Mediterranean. However, the Ottoman threat to Europe was not ended until a Polish - led coalition defeated the Ottomans at the Battle of Vienna in 1683. The Turks were driven out of Buda (the eastern part of Budapest, which they had occupied for a century), Belgrade, and Athens -- though Athens was to be recaptured and held until 1829.
The 16th century is often called Spain 's Siglo de Oro (golden century). From its colonies in the Americas it gained large quantities of gold and silver, which helped make Spain the richest and most powerful country in the world. One of the greatest Spanish monarchs of the era was Charles I (1516 -- 1556), who also held the title of Holy Roman Emperor as Charles V. His attempt to unite these lands was thwarted by the divisions caused by the Reformation and the ambitions of local rulers and rival rulers from other countries. Another great monarch was Philip II (1556 -- 1598), whose reign was marked by several Reformation conflicts, like the loss of the Netherlands and the defeat of the Spanish Armada. These events and an excess of spending would lead to a great decline in Spanish power and influence by the 17th century.
After Spain began to decline in the 17th century, the Dutch, by virtue of their sailing ships, became the greatest world power, leading the 17th century to be called the Dutch Golden Age. The Dutch followed Portugal and Spain in establishing an overseas colonial empire -- often under the corporate colonialism model of the East India and West India Companies. After the Anglo - Dutch Wars, France and England emerged as the two greatest powers in the 18th century.
Louis XIV became king of France in 1643. His reign was one of the most opulent in European history. He built a large palace in the town of Versailles.
The Holy Roman Emperor exerted no great influence on the lands of the Holy Roman Empire by the end of the Thirty Years ' War. In the north of the empire, Prussia emerged as a powerful Protestant nation. Under gifted rulers such as King Frederick the Great, Prussia expanded its power and repeatedly defeated its rival Austria in war. Ruled by the Habsburg dynasty, Austria became a great empire, expanding at the expense of the Ottoman Empire and Hungary.
One land where absolutism did not take hold was England, which had trouble with revolutionaries. Elizabeth I, daughter of Henry VIII, had left no direct heir to the throne. The rightful heir was James VI of Scotland, who was crowned James I of England. James 's son, Charles I, resisted the power of Parliament. When Charles attempted to shut down Parliament, the Parliamentarians rose up and soon all of England was involved in a civil war. The English Civil War ended in 1649 with the defeat and execution of Charles I. Parliament declared a kingless commonwealth but soon appointed the anti-absolutist leader and staunch Puritan Oliver Cromwell as Lord Protector. Cromwell enacted many unpopular Puritan religious laws in England, like outlawing alcohol and theaters, although religious diversity may have grown. (It was Cromwell, after all, who invited the Jews back into England after the Edict of Expulsion.) After his death, the monarchy was restored under Charles 's son, who was crowned Charles II. His brother, James II, succeeded him. James and his infant son were Catholics. Not wanting to be ruled by a Catholic dynasty, Parliament invited James 's daughter Mary and her husband William of Orange to rule as co-monarchs. They agreed on the condition James would not be harmed. Realizing he could not count on the Protestant English army to defend him, James fled the country following the Glorious Revolution of 1688 and was treated as having abdicated. Before William III and Mary II were crowned, however, Parliament forced them to sign the English Bill of Rights, which guaranteed some basic rights to all Englishmen, granted religious freedom to non-Anglican Protestants, and firmly established the rights of Parliament. In 1707, the Acts of Union were passed by the parliaments of Scotland and England, merging Scotland and England into a single Kingdom of Great Britain with a single parliament. This new kingdom also controlled Ireland, which had previously been conquered by England. Following the Irish Rebellion of 1798, in 1801 Ireland was formally merged with Great Britain to form the United Kingdom of Great Britain and Ireland. Ruled by the Protestant Ascendancy, Ireland eventually became an English - speaking land, though the majority population preserved distinct cultural and religious outlooks, remaining predominantly Catholic except in parts of Ulster and Dublin. By then, the British experience had already contributed to the American Revolution.
The Polish - Lithuanian Commonwealth was an important European center for the development of modern social and political ideas. It was famous for its rare quasi-democratic political system, praised by philosophers such as Erasmus; and, during the Counter-Reformation, was known for near - unparalleled religious tolerance, with peacefully coexisting Catholic, Jewish, Eastern Orthodox, Protestant and Muslim communities. With its political system the Commonwealth gave birth to political philosophers such as Andrzej Frycz Modrzewski (1503 -- 1572), Wawrzyniec Grzymała Goślicki (1530 -- 1607) and Piotr Skarga (1536 -- 1612). Later, works by Stanisław Staszic (1755 -- 1826) and Hugo Kołłątaj (1750 -- 1812) helped pave the way for the Constitution of 3 May 1791, which historian Norman Davies calls "the first constitution of its kind in Europe ''. The Commonwealth 's constitution enacted revolutionary political principles for the first time on the European continent. The Komisja Edukacji Narodowej, Polish for Commission of National Education, formed in 1773, was the world 's first national Ministry of Education and an important achievement of the Polish Enlightenment.
The intellectual movement called the Age of Enlightenment began in this period as well. Its proponents opposed the absolute rule of the monarchs, and instead emphasized the equality of all individuals and the idea that governments should derive their existence from the consent of the governed. Enlightenment thinkers called philosophes (French for philosophers) idealized Europe 's classical heritage. They looked at Athenian democracy and the Roman republic as ideal governments. They believed reason held the key to creating an ideal society.
The Englishman Francis Bacon espoused the idea that the senses should be the primary means of knowing, while the Frenchman René Descartes advocated using reason over the senses. In his works, Descartes was concerned with using reason to prove his own existence and the existence of the external world, including God. Another belief system became popular among the philosophes: Deism, which taught that a single god had created but did not interfere with the world. This belief system never gained popular support and largely died out by the early 19th century.
Thomas Hobbes was an English philosopher, best known today for his work on political philosophy. His 1651 book Leviathan established the foundation for most of Western political philosophy from the perspective of social contract theory. The theory was examined also by John Locke (Second Treatise of Government (1689)) and Rousseau (Du contrat social (1762)). Social contract arguments examine the appropriate relationship between government and the governed and posit that individuals unite into political societies by a process of mutual consent, agreeing to abide by common rules and accept corresponding duties to protect themselves and one another from violence and other kinds of harm.
In 1690 John Locke wrote that people have certain natural rights like life, liberty and property and that governments were created in order to protect these rights. If they did not, according to Locke, the people had a right to overthrow their government. The French philosopher Voltaire criticized the monarchy and the Church for what he saw as hypocrisy and for their persecution of people of other faiths. Another Frenchman, Montesquieu, advocated division of government into executive, legislative and judicial branches. The French author Rousseau stated in his works that society corrupted individuals. Many monarchs were affected by these ideas, and they became known to history as the enlightened despots. However, most only supported Enlightenment ideas that strengthened their own power.
The Scottish Enlightenment was a period in 18th century Scotland characterised by an outpouring of intellectual and scientific accomplishments. Scotland reaped the benefits of establishing Europe 's first public education system and a growth in trade which followed the Act of Union with England of 1707 and expansion of the British Empire. Important modern attitudes towards the relationship between science and religion were developed by the philosopher / historian David Hume. Adam Smith developed and published The Wealth of Nations, the first work in modern economics. He believed competition and private enterprise could increase the common good. The celebrated bard Robert Burns is still widely regarded as the national poet of Scotland.
European cities like Paris, London, and Vienna grew into large metropolises in early modern times. France became the cultural center of the West. The middle class grew even more influential and wealthy. Great artists of this period included El Greco, Rembrandt, and Caravaggio.
By this time, many around the world wondered how the West had become so advanced; among them were the Orthodox Christian Russians, who had risen to power after throwing off the Mongols who had conquered Kiev in the Middle Ages. They began westernizing under Czar Peter the Great, although Russia remained uniquely part of its own civilization. The Russians became involved in European politics, dividing up the Polish - Lithuanian Commonwealth with Prussia and Austria.
In the late 18th and early 19th centuries, much of the West experienced a series of revolutions that would change the course of history, resulting in new ideologies and changes in society.
The first of these revolutions began in North America. Britain 's 13 American colonies had by this time developed their own sophisticated economy and culture, largely based on Britain 's. The majority of the population was of British descent, while significant minorities included people of Irish, Dutch and German descent, as well as some Amerindians and many black slaves. Most of the population was Anglican; others were Congregationalist or Puritan, while minorities included other Protestant churches like the Society of Friends and the Lutherans, as well as some Roman Catholics and Jews. The colonies had their own great cities and universities and continually welcomed new immigrants, mostly from Britain. After the expensive Seven Years ' War, Britain needed to raise revenue, and felt the colonists should bear the brunt of the new taxation it felt was necessary. The colonists greatly resented these taxes and protested that they could be taxed by Britain while having no representation in its government.
After Britain 's King George III refused to seriously consider colonial grievances raised at the first Continental Congress, some colonists took up arms. Leaders of a new pro-independence movement were influenced by Enlightenment ideals and hoped to bring an ideal nation into existence. On 4 July 1776, the colonies declared independence with the signing of the United States Declaration of Independence. Drafted primarily by Thomas Jefferson, the document 's preamble eloquently outlines the principles of governance that would come to increasingly dominate Western thinking over the ensuing century and a half:
George Washington led the new Continental Army against the British forces, who had many successes early in this American Revolution. After years of fighting, the colonists formed an alliance with France and defeated the British at Yorktown, Virginia in 1781. The treaty ending the war granted independence to the colonies, which became the United States of America.
The other major Western revolution at the turn of the 19th century was the French Revolution. In 1789 France faced an economic crisis. The King called, for the first time in more than two centuries, the Estates General, an assembly of representatives of each estate of the kingdom: the First Estate (the clergy), the Second Estate (the nobility), and the Third Estate (the middle class and peasants), in order to deal with the crisis. French society had been won over by the same Enlightenment ideals that led to the American Revolution, in which many Frenchmen, such as Lafayette, had taken part; representatives of the Third Estate, joined by some representatives of the lower clergy, created the National Assembly, which, unlike the Estates General, provided the common people of France with a voice proportionate to their numbers.
The people of Paris feared the King would try to stop the work of the National Assembly, and Paris was soon consumed with riots, anarchy, and widespread looting. The mobs soon had the support of the French Guard, including arms and trained soldiers, because the royal leadership essentially abandoned the city. On 14 July 1789 a mob stormed the Bastille, a prison fortress, which led the King to accept the changes. On 4 August 1789 the National Constituent Assembly abolished feudalism, sweeping away both the seigneurial rights of the Second Estate and the tithes gathered by the First Estate. It was the first time such a thing had happened in Europe, where feudalism had been the norm for centuries. In the course of a few hours, nobles, clergy, towns, provinces, companies, and cities lost their special privileges.
At first, the revolution seemed to be turning France into a constitutional monarchy, but the other continental European powers feared a spread of the revolutionary ideals and eventually went to war with France. In 1792 King Louis XVI was imprisoned after he had been captured fleeing Paris, and the Republic was declared. The Imperial and Prussian armies threatened retaliation on the French population should it resist their advance or the reinstatement of the monarchy. As a consequence, King Louis was seen as conspiring with the enemies of France. His execution on 21 January 1793 led to more wars with other European countries. During this period France effectively became a dictatorship after the parliamentary coup of the radical leaders, the Jacobins. Their leader, Robespierre, oversaw the Reign of Terror, in which thousands of people deemed disloyal to the republic were executed. Finally, in 1794, Robespierre himself was arrested and executed, and more moderate deputies took power. This led to a new government, the French Directory. In 1799, a coup overthrew the Directory and General Napoleon Bonaparte seized power as dictator, eventually crowning himself emperor in 1804.
Liberté, égalité, fraternité (French for "Liberty, equality, fraternity ''), now the national motto of France, had its origins during the French Revolution, though it was only later institutionalised. It remains another iconic motto of the aspirations of Western governance in the modern world.
Some influential intellectuals came to reject the excesses of the revolutionary movement. Political theorist Edmund Burke had supported the American Revolution, but turned against the French Revolution and developed a political theory which opposed governing based on abstract ideas, and preferred ' organic ' reform. He is remembered as a father of modern Anglo - conservatism. In response to such critiques, the American revolutionary Thomas Paine published his book The Rights of Man in 1791 as a defence of the ideals of the French Revolution. The spirit of the age also produced early works of feminist philosophy -- notably Mary Wollstonecraft 's 1792 book: A Vindication of the Rights of Woman.
The Napoleonic Wars were a series of conflicts involving Napoleon 's French Empire and changing sets of European allies by opposing coalitions that ran from 1803 to 1815. As a continuation of the wars sparked by the French Revolution of 1789, they revolutionized European armies and played out on an unprecedented scale, mainly due to the application of modern mass conscription. French power rose quickly, conquering most of Europe, but collapsed rapidly after France 's disastrous invasion of Russia in 1812. Napoleon 's empire ultimately suffered complete military defeat resulting in the restoration of the Bourbon monarchy in France. The wars resulted in the dissolution of the Holy Roman Empire and sowed the seeds of nascent nationalism in Germany and Italy that would lead to the two nations ' consolidation later in the century. Meanwhile, the Spanish Empire began to unravel as French occupation of Spain weakened Spain 's hold over its colonies, providing an opening for nationalist revolutions in Spanish America. As a direct result of the Napoleonic wars, the British Empire became the foremost world power for the next century, thus beginning Pax Britannica.
France had to fight on multiple battlefronts against the other European powers. A nationwide conscription was voted to reinforce the old royal army made up of noble officers and professional soldiers. With this new kind of army, Napoleon was able to beat the European allies and dominate Europe. The revolutionary ideals, based no longer on feudalism but on the concept of a sovereign nation, spread all over Europe. When Napoleon was eventually defeated and the monarchy reinstated in France, these ideals survived and led to the revolutionary waves of the 19th century that brought democracy to many European countries.
With the success of the American Revolution, the Spanish Empire also began to crumble as its American colonies sought independence as well. In 1808, when Joseph Bonaparte was installed as the Spanish King by the Napoleonic French, the Spanish resistance resorted to governing Juntas. When the Supreme Central Junta of Seville fell to the French in 1810, the Spanish American colonies formed their own governing Juntas in the name of the deposed King Ferdinand VII (under the concept known as the "Retroversion of the Sovereignty to the People ''). As this process led to open conflicts between independentists and loyalists, the Spanish American wars of independence immediately ensued, resulting, by the 1820s, in the definitive loss for the Spanish Empire of all its American territories, with the exception of Cuba and Puerto Rico.
The years following Britain 's victory in the Napoleonic Wars were a period of expansion for the United Kingdom and its former American colonies, which now made up the United States. This period of expansion would help establish Anglicanism as the dominant religion, English as the dominant language, and English and Anglo - American culture as the dominant culture of two continents and many other lands outside the British Isles.
Possibly the greatest change in the English - speaking world and the West as a whole following the Napoleonic Wars was the Industrial Revolution. The revolution began in Britain, where Thomas Newcomen developed a steam engine in 1712 to pump seeping water out of mines. Early machinery had been driven by water wheels, but steam engines fueled by coal and wood gradually took over. Steam power had first been explored by the ancient Greeks, but it was the British who first learned to use it effectively. In 1804, the first steam - powered railroad locomotive was developed in Britain, which allowed goods and people to be transported at faster speeds than ever before in history. Soon, large numbers of goods were being produced in factories. This resulted in great societal changes, and many people settled in the cities where the factories were located. Factory work could often be brutal. With no safety regulations, people became sick from contaminants in the air in textile mills, for example. Many workers were also horribly maimed by dangerous factory machinery. Since workers relied only on their small wages for sustenance, entire families were forced to work, including children. These and other problems caused by industrialism resulted in some reforms by the mid-19th century. The economic model of the West also began to change, with mercantilism being replaced by capitalism, in which companies, and later, large corporations, were run by individual investors.
New ideological movements began as a result of the Industrial Revolution, including the Luddite movement, which opposed machinery, feeling it did not benefit the common good, and the socialists, whose beliefs usually included the elimination of private property and the sharing of industrial wealth. Unions were founded among industrial workers to help secure better wages and rights. Another result of the revolution was a change in societal hierarchy, especially in Europe, where nobility still occupied a high level on the social ladder. Capitalists emerged as a new powerful group, with educated professionals like doctors and lawyers under them, and the various industrial workers at the bottom. These changes were often slow however, with Western society as a whole remaining primarily agricultural for decades.
From 1837 until 1901, Queen Victoria reigned over the United Kingdom and the ever - expanding British Empire. The Industrial Revolution had begun in Britain, and during the 19th century it became the most powerful Western nation. Britain also enjoyed relative peace and stability from 1815 until 1914; this period is often called the Pax Britannica, from the Latin for "British Peace ''. This period also saw the evolution of the British constitutional monarchy, with the monarch being more a figurehead and symbol of national identity than actual head of government, that role being taken over by the Prime Minister, the leader of the ruling party in Parliament. The two dominant parties emerging in Parliament in this time were the Conservative Party and the Liberal Party. The Liberal constituency was made up mostly of businessmen, as many Liberals supported the idea of a free market. Conservatives were supported by the aristocracy and farmers. Control of Parliament switched between the parties over the 19th century, but overall the century was a period of reform. In 1832 more representation was granted to new industrial cities, and laws barring Catholics from serving in Parliament were repealed, although discrimination against Catholics, especially Irish Catholics, continued. Other reforms granted near - universal manhood suffrage, and state - supported elementary education for all Britons. More rights were granted to workers as well.
Ireland had been ruled from London since the Middle Ages. After the Protestant Reformation the British establishment began a campaign of discrimination against the Roman Catholic and Presbyterian Irish, who lacked many rights under the Penal Laws, and the majority of the agricultural land was owned by the Protestant Ascendancy. Great Britain and Ireland became a single nation ruled from London, without the autonomous Parliament of Ireland, after the Act of Union of 1800 was passed, creating the United Kingdom of Great Britain and Ireland. In the mid-19th century, Ireland suffered a devastating potato famine, which killed around 10 % of the population and led to massive emigration: see Irish diaspora.
Throughout the 19th century, Britain 's power grew enormously and the sun quite literally "never set '' on the British Empire, for it had outposts on every occupied continent. It consolidated control over such far flung territories as Canada and British Guiana in the Americas, Australia and New Zealand in Oceania; Malaya, Hong Kong and Singapore in the Far East and a line of colonial possessions from Egypt to the Cape of Good Hope through Africa. All of India was under British rule by 1870.
In 1804, the emperor of the declining Mughal Empire had formally accepted the protection of the British East India Company. Many Britons settled in India, establishing a ruling class. They then expanded into neighbouring Burma. Among the British born in India were the immensely influential writers Rudyard Kipling (born 1865) and George Orwell (born 1903).
In the Far East, Britain went to war with the ruling Qing Dynasty of China when it tried to stop Britain from selling the dangerous drug opium to the Chinese people. The First Opium War (1839 -- 1842) ended in a British victory, and China was forced to remove barriers to British trade and cede several ports and the island of Hong Kong to Britain. Soon, other powers sought these same privileges with China and China was forced to agree, ending Chinese isolation from the rest of the world. In 1853 an American expedition opened up Japan to trade with first the U.S., and then the rest of the world.
In 1833 Britain outlawed slavery throughout its empire after a successful campaign by abolitionists, and Britain had a great deal of success attempting to get other powers to outlaw the practice as well.
As British settlement of southern Africa continued, the descendants of the Dutch in southern Africa, called the Boers or Afrikaners, whom Britain had ruled since taking the Cape Colony during the Napoleonic Wars, migrated northward, disliking British rule. Explorers and missionaries like David Livingstone became national heroes. Cecil Rhodes founded Rhodesia, and a British army under Lord Kitchener secured control of Sudan at the 1898 Battle of Omdurman.
Following the American Revolution, many Loyalists to Britain fled north to what is today Canada (where they were called United Empire Loyalists). Joined by mostly British colonists, they helped establish early colonies like Ontario and New Brunswick. British settlement in North America increased, and soon there were several colonies both north and west of the early ones in the northeast of the continent, these new ones included British Columbia and Prince Edward Island. Rebellions broke out against British rule in 1837, but Britain appeased the rebels ' supporters in 1867 by confederating the colonies into the Dominion of Canada, with its own Prime Minister. Although Canada was still firmly within the British Empire, its people now enjoyed a great degree of self - rule. Canada was unique in the British Empire in that it had a French - speaking province, Quebec, which Britain had gained rule over in the Seven Years ' War.
The First Fleet of British convicts arrived at New South Wales, Australia in 1788 and established a British outpost and penal colony at Sydney Cove. These convicts were often petty ' criminals ', and represented the population spill - over of Britain 's Industrial Revolution, as a result of the rapid urbanisation and dire crowding of British cities. Other convicts were political dissidents, particularly from Ireland. The establishment of a wool industry and the enlightened governorship of Lachlan Macquarie were instrumental in transforming New South Wales from a notorious prison outpost into a budding civil society. Further colonies were established around the perimeter of the continent and European explorers ventured deep inland. A free colony was established at South Australia in 1836 with a vision for a province of the British Empire with political and religious freedoms. The colony became a cradle of democratic reform. The Australian gold rushes increased prosperity and cultural diversity and autonomous democratic parliaments began to be established from the 1850s onward.
The native inhabitants of Australia, called the Aborigines, lived as hunter - gatherers before European arrival. The population, never large, was largely dispossessed through the 19th century, without treaty agreements or compensation, by the expansion of European agriculture, and, as had occurred when Europeans arrived in North and South America, faced superior European weaponry and suffered greatly from exposure to Old World diseases such as smallpox, to which they had no biological immunity.
From the early 19th century, New Zealand was being visited by explorers, sailors, missionaries, traders and adventurers and was administered by Britain from the nearby colony at New South Wales. In 1840 Britain signed the Treaty of Waitangi with the natives of New Zealand, the Māori, in which Britain gained sovereignty over the archipelago. As British settlers arrived, clashes resulted and the British fought several wars before defeating the Māori. By 1870, New Zealand had a population made up mostly of Britons and their descendants.
Following independence from Britain, the United States began expanding westward, and soon a number of new states had joined the union. In 1803, the United States purchased the Louisiana Territory from France, whose ruler, Napoleon Bonaparte, had regained it from Spain. Soon, America 's growing population was settling the Louisiana Territory, which geographically doubled the size of the country. At the same time, a series of revolutions and independence movements in Spain and Portugal 's American empires resulted in the liberation of nearly all of Latin America, as the region composed of South America, most of the Caribbean, and North America from Mexico south became known. At first Spain and its allies seemed ready to try to reconquer the colonies, but the U.S. and Britain opposed this, and the reconquest never took place. From 1821 on, the U.S. bordered the newly independent nation of Mexico. An early problem faced by the Mexican republic was what to do with its sparsely populated northern territories, which today make up a large part of the American West. The government decided to try to attract Americans looking for land. Americans arrived in such large numbers that both the provinces of Texas and California attracted large white, English - speaking populations. This led to a culture clash between these provinces and the rest of Mexico. When Mexico became a dictatorship under General Antonio López de Santa Anna, the Texans declared independence. After several battles, Texas gained independence from Mexico, although Mexico later claimed it still had a right to Texas. After existing as a republic modeled after the U.S. for several years, Texas joined the United States in 1845. This led to border disputes between the U.S. and Mexico, resulting in the Mexican -- American War. The war ended with an American victory, and Mexico had to cede all its northern territories, including California (where American settlers had revolted against Mexico during the war), to the United States. In 1850, California joined the union as a state. In 1846, the U.S. and Britain resolved a border dispute over territory on the Pacific coast, called the Oregon Country, by giving Britain the northern part and the U.S. the southern part. In 1867, the U.S. expanded again, purchasing the Russian colony of Alaska in northwestern North America.
Politically, the U.S. became more democratic with the abolition of property requirements in voting, although voting remained restricted to white males. By the mid-19th century, the most important issue was slavery. The Northern states generally had outlawed the practice, while the Southern states not only had kept it legal but came to feel it was essential to their way of life. As new states joined the union, lawmakers clashed over whether they should be slave states or free states. In 1860, the anti-slavery candidate Abraham Lincoln was elected president. Fearing he would try to outlaw slavery in the whole country, several southern states seceded, forming the Confederate States of America, electing their own president and raising their own army. Lincoln countered that secession was illegal and raised an army to crush the rebel government; thus began the American Civil War (1861 -- 65). The Confederates had a skilled military that even succeeded in invading the northern state of Pennsylvania. However, the tide of the war turned with the defeat of the Confederates at Gettysburg, Pennsylvania, and at Vicksburg, which gave the Union control of the important Mississippi River. Union forces invaded deep into the South, and the Confederacy 's greatest general, Robert E. Lee, surrendered to Ulysses S. Grant of the Union in 1865. After that, the South came under Union occupation, ending the American Civil War. Lincoln was assassinated in 1865, but his dream of ending slavery, expressed in the wartime Emancipation Proclamation, was carried out by his Republican Party, which outlawed slavery and granted blacks equality and black males voting rights via constitutional amendments. However, although the abolition of slavery would not be challenged, equal treatment for blacks would be.
The Gettysburg Address, Lincoln 's most famous speech and one of the most quoted political speeches in United States history, was delivered at the dedication of the Soldiers ' National Cemetery in Gettysburg, Pennsylvania on 19 November 1863, during the Civil War, four and a half months after the Battle of Gettysburg. Describing America as a "nation conceived in Liberty and dedicated to the proposition that all men are created equal '', Lincoln famously called on those gathered:
In the early 19th century, missionaries, mostly from America, converted the Hawaiians to Christianity. They were followed by American entrepreneurs, who established sugar and pineapple plantations and a well - developed economy on the islands, becoming a new ruling class, although the native Hawaiian monarchy continued to rule. Eventually, English - speaking Americans and their descendants made up the majority of Hawaii 's population.
The years following the Napoleonic Wars were a time of change in Europe. The Industrial Revolution, nationalism, and several political revolutions transformed the continent.
Industrial technology was imported from Britain. The first lands affected by this were France, the Low Countries, and western Germany. Eventually the Industrial Revolution spread to other parts of Europe. Many people in the countryside migrated to major cities like Paris, Berlin, and Amsterdam, which were connected like never before by railroads. Europe soon had its own class of wealthy industrialists, and large numbers of industrial workers. New ideologies emerged as a reaction against perceived abuses of industrial society. Among these ideologies were socialism and the more radical communism, created by the German Karl Marx. According to communism, history was a series of class struggles, and at the time industrial workers were pitted against their employers. Inevitably the workers would rise up in a worldwide revolution and abolish private property, according to Marx. Communism was also atheistic, since, according to Marx, religion was simply a tool used by the dominant class to keep the oppressed class docile.
Several revolutions occurred in Europe following the Napoleonic Wars. The goal of most of these revolutions was to establish some form of democracy in a particular nation. Many were successful for a time, but their effects were often eventually reversed. Examples of this occurred in Spain, Italy, and Austria. Several European nations stood steadfastly against revolution and democracy, including Austria and Russia. Two successful revolts of the era were the Greek and Serbian wars of independence, which freed those nations from Ottoman rule. Another successful revolution occurred in the Low Countries. After the Napoleonic Wars, the Netherlands was given control of modern - day Belgium, which had been part of the Holy Roman Empire. The Dutch found it hard to rule the Belgians, due to their Catholic religion and French language. In the 1830s, the Belgians successfully overthrew Dutch rule, establishing the Kingdom of Belgium. In 1848 a series of revolutions occurred in Prussia, Austria, and France. In France, the king, Louis - Philippe, was overthrown and a republic was declared. Louis Napoleon, nephew of Napoleon I, was elected the republic 's first president. Extremely popular, Louis Napoleon was made Emperor of the French as Napoleon III (Napoleon I 's son being reckoned as Napoleon II) by a vote of the French people, ending France 's Second Republic. Revolutionaries in Prussia and Italy focused more on nationalism, and most advocated the establishment of unified German and Italian states, respectively.
In the city - states of Italy, many argued for a unification of all the Italian kingdoms into a single nation. Obstacles to this included the many Italian dialects spoken by the people of Italy, and the Austrian presence in the north of the peninsula. Unification of the peninsula began in 1859. The powerful Kingdom of Sardinia (also called Savoy or Piedmont) formed an alliance with France and went to war with Austria in that year. The war ended with a Sardinian victory, and Austrian forces left Italy. Plebiscites were held in several cities, and the majority of people voted for union with Sardinia, creating the Kingdom of Italy under Victor Emmanuel II. In 1860, the Italian nationalist Garibaldi led revolutionaries in an overthrow of the government of the Kingdom of the Two Sicilies. A plebiscite held there resulted in a unification of that kingdom with Italy. Italian forces seized the eastern Papal States in 1861. In 1866 Venetia became part of Italy after Italy 's ally, Prussia, defeated that kingdom 's rulers, the Austrians, in the Austro - Prussian War. In 1870, Italian troops conquered the Papal States, completing unification. Pope Pius IX refused to recognize the Italian government or negotiate a settlement for the loss of Church land.
Prussia in the middle and late parts of the 19th century was ruled by its king, Wilhelm I, and its skilled chancellor, Otto von Bismarck. In 1864, Prussia went to war with Denmark and gained several German - speaking lands as a result. In 1866, Prussia went to war with the Austrian Empire and won, and created a confederation of it and several German states, called the North German Confederation, setting the stage for the 1871 formation of the German Empire.
After years of dealing with Hungarian revolutionaries, whose kingdom Austria had conquered centuries earlier, the Austrian emperor, Franz Joseph, agreed to divide the empire into two parts, Austria and Hungary, and to rule as both Emperor of Austria and King of Hungary. The new Austro - Hungarian Empire was created in 1867. The two peoples were united in loyalty to the monarch and to Catholicism.
There were changes throughout the West in science, religion and culture between 1815 and 1870. Europe in 1870 differed greatly from its state in 1815. Most Western European nations had some degree of democracy, and two new national states had been created, Italy and Germany. Political parties were formed throughout the continent and with the spread of industrialism, Europe 's economy was transformed, although it remained very agricultural.
The 19th and early 20th centuries saw important contributions to the process of modernisation of Western art and literature and a continuing evolution in the role of religion in Western societies.
Napoleon re-established the Catholic Church in France through the Concordat of 1801. The end of the Napoleonic wars, signaled by the Congress of Vienna, brought Catholic revival and the return of the Papal States. In 1801, a new political entity was formed, the United Kingdom of Great Britain and Ireland, which merged the kingdoms of Great Britain and Ireland, thus increasing the number of Catholics in the new state. Pressure for abolition of anti-Catholic laws grew and in 1829 Parliament passed the Roman Catholic Relief Act 1829, giving Catholics almost equal civil rights, including the right to vote and to hold most public offices. While remaining a minority religion in the British Empire, a steady stream of new Catholics would continue to convert from the Church of England and Ireland, notably John Henry Newman and the poets Gerard Manley Hopkins and Oscar Wilde. The Anglo - Catholic movement began, emphasizing the Catholic traditions of the Anglican Church. New churches like the Methodist, Unitarian, and LDS Churches were founded. Many Westerners became less religious in this period, although a majority of people still held traditional Christian beliefs.
The 1859 publication of On the Origin of Species, by the English naturalist Charles Darwin, provided an alternative explanation for the development and diversification of life to the traditional scriptural account known as Creationism. According to Darwin, only the organisms most able to adapt to their environment survived while others became extinct. Adaptations resulted in changes in certain populations of organisms which could eventually cause the creation of new species. Modern genetics started with Gregor Johann Mendel, a German - Czech Augustinian monk who studied the nature of inheritance in plants. In his 1865 paper "Versuche über Pflanzenhybriden '' ("Experiments on Plant Hybridization ''), Mendel traced the inheritance patterns of certain traits in pea plants and described them mathematically. Louis Pasteur and Joseph Lister made discoveries about bacteria and their effects on humans. Geologists at the time made discoveries indicating the world was far older than most believed it to be. Early batteries were developed and a telegraph system was invented, allowing global communication. In 1869 the Russian chemist Dmitri Mendeleev published his Periodic table. The success of Mendeleev 's table came from two decisions he made: the first was to leave gaps in the table when it seemed that the corresponding element had not yet been discovered. The second was to occasionally ignore the order suggested by the atomic weights and switch adjacent elements, such as cobalt and nickel, to better classify them into chemical families. At the end of the 19th century, a number of discoveries were made in physics which paved the way for the development of modern physics -- including Maria Skłodowska - Curie 's work on radioactivity.
In Europe by the 19th century, fashion had shifted away from artistic styles such as Mannerism, Baroque and Rococo, which had followed the Renaissance, and sought to revert to the earlier, simpler art of the Renaissance by creating Neoclassicism. Neoclassicism complemented the intellectual movement known as the Enlightenment, which was similarly idealistic. Ingres, Canova, and Jacques - Louis David are among the best - known neoclassicists.
Just as Mannerism rejected Classicism, so did Romanticism reject the ideas of the Enlightenment and the aesthetic of the Neoclassicists. Romanticism emphasized emotion and nature, and idealized the Middle Ages. Important Romantic composers were Franz Schubert, Pyotr Tchaikovsky, Richard Wagner, and Fryderyk Chopin. Romantic art focused on the use of color and motion in order to portray emotion, but like classicism used Greek and Roman mythology and tradition as an important source of symbolism. Another important aspect of Romanticism was its emphasis on nature and portraying the power and beauty of the natural world. Romanticism was also a large literary movement, especially in poetry. Among the greatest Romantic artists were Eugène Delacroix, Francisco Goya, Karl Bryullov, J.M.W. Turner, John Constable, Caspar David Friedrich, Ivan Aivazovsky, Thomas Cole, and William Blake. Romantic poetry emerged as a significant genre, with leading exponents including William Wordsworth, Samuel Taylor Coleridge, Robert Burns, Edgar Allan Poe and John Keats. Other Romantic writers included Sir Walter Scott, Lord Byron, Alexander Pushkin, Victor Hugo, and Goethe.
Some of the best regarded writers of the era were women. Mary Wollstonecraft had written one of the first works of feminist philosophy, A Vindication of the Rights of Woman (1792), which called for equal education for women, and her daughter, Mary Shelley, became an accomplished author best known for her 1818 novel Frankenstein, which examined some of the frightening potential of the rapid advances of science.
In mid-19th - century Europe, in response to industrialization, the movement of Realism emerged. Realism sought to accurately portray the conditions and hardships of the poor in the hopes of changing society. In contrast with Romanticism, which was essentially optimistic about mankind, Realism offered a stark vision of poverty and despair. Similarly, while Romanticism glorified nature, Realism portrayed life in the depths of an urban wasteland. Like Romanticism, Realism was a literary as well as an artistic movement. The great Realist painters include Jean - Baptiste - Siméon Chardin, Gustave Courbet, Jean - François Millet, Camille Corot, Honoré Daumier, Édouard Manet, Edgar Degas (both also associated with Impressionism), Ilya Repin, and Thomas Eakins, among others.
Writers also sought to come to terms with the new industrial age. The works of the Englishman Charles Dickens (including his novels Oliver Twist and A Christmas Carol) and the Frenchman Victor Hugo (including Les Misérables) remain among the best known and most widely influential. The first great Russian novelist was Nikolai Gogol (Dead Souls). Then came Ivan Goncharov, Nikolai Leskov and Ivan Turgenev. Leo Tolstoy (War and Peace, Anna Karenina) and Fyodor Dostoevsky (Crime and Punishment, The Idiot, The Brothers Karamazov) soon became internationally renowned, to the point that many scholars such as F.R. Leavis have described one or the other as the greatest novelist ever. In the second half of the century Anton Chekhov excelled in writing short stories and became perhaps the leading dramatist internationally of his period. American literature also progressed with the development of a distinct voice: Mark Twain produced his masterpieces Tom Sawyer and Adventures of Huckleberry Finn. In Irish literature, the Anglo - Irish tradition produced Bram Stoker and Oscar Wilde writing in English, while a Gaelic Revival had emerged by the end of the 19th century. The poetry of William Butler Yeats prefigured the emergence of the 20th - century Irish literary giants James Joyce, Samuel Beckett and Patrick Kavanagh. In Britain 's Australian colonies, bush balladeers such as Henry Lawson and Banjo Paterson brought the character of a new continent to the pages of world literature.
The response of architecture to industrialisation, in stark contrast to the other arts, was to veer towards historicism. The railway stations built during this period are often called "the cathedrals of the age ''. Architecture during the Industrial Age witnessed revivals of styles from the distant past, such as the Gothic Revival -- in which style the iconic Palace of Westminster in London was re-built to house the mother parliament of the British Empire. Notre Dame de Paris Cathedral in Paris was also restored in the Gothic style, following its desecration during the French Revolution.
Out of the naturalist ethic of Realism grew a major artistic movement, Impressionism. The Impressionists pioneered the use of light in painting, attempting to capture it as seen by the human eye. Edgar Degas, Édouard Manet, Claude Monet, Camille Pissarro, and Pierre - Auguste Renoir were all involved in the Impressionist movement. As a direct outgrowth of Impressionism came the development of Post-Impressionism. Paul Cézanne, Vincent van Gogh, Paul Gauguin, and Georges Seurat are the best known Post-Impressionists. In Australia the Heidelberg School was expressing the light and colour of the Australian landscape with a new insight and vigour.
The Industrial Revolution which began in Britain in the 18th century brought increased leisure time, leading to more time for citizens to attend and follow spectator sports, greater participation in athletic activities, and increased accessibility. The bat and ball sport of cricket was first played in England during the 16th century and was exported around the globe via the British Empire. A number of popular modern sports were devised or codified in Britain during the 19th century and obtained global prominence -- these include table tennis, modern tennis, Association Football, Netball and Rugby. The United States also developed popular international sports during this period. English migrants took antecedents of baseball to America during the colonial period. American football resulted from several major divergences from rugby, most notably the rule changes instituted by Walter Camp. Basketball was invented in 1891 by James Naismith, a Canadian physical education instructor working in Springfield, Massachusetts in the United States. Baron Pierre de Coubertin, a Frenchman, instigated the modern revival of the Olympic Games, with the first modern Olympics held at Athens in 1896.
The years between 1870 and 1914 saw the expansion of Western power. By 1914, the Western empires, together with some Asian and Eurasian empires like the Empire of Japan, the Russian Empire, the Ottoman Empire, and Qing China, dominated the entire planet. The major Western players in this New Imperialism were Britain, Russia, France, Germany, Italy, and the United States. The Empire of Japan was the only non-Western power involved in this new era of imperialism.
Although the West had had a presence in Africa for centuries, its colonies were limited mostly to Africa 's coast. Europeans, including the Britons Mungo Park and David Livingstone, the German Johannes Rebmann, and the Frenchman René Caillié, explored the interior of the continent, allowing greater European expansion in the later 19th century. The period between 1870 and 1914 is often called the Scramble for Africa, due to the competition between European nations for control of Africa. In 1830, France occupied Algeria in North Africa. Many Frenchmen settled on Algeria 's Mediterranean coast. In 1882 Britain occupied Egypt. France eventually conquered most of Morocco and Tunisia as well. Libya was conquered by the Italians. Spain gained a small part of Morocco and modern - day Western Sahara. West Africa was dominated by France, although Britain ruled several smaller West African colonies. Germany also established two colonies in West Africa, and Portugal had one as well. Central Africa was dominated by the Belgian Congo. At first the colony was ruled by Belgium 's king, Leopold II; however, his regime was so brutal that the Belgian government took over the colony. The Germans and French also established colonies in Central Africa. The British and Italians were the two dominant powers in East Africa, although France also had a colony there. Southern Africa was dominated by Britain. Tensions between the British Empire and the Boer republics led to the Boer Wars, fought on and off between the 1880s and 1902, ending in a British victory. In 1910 Britain united its South African colonies with the former Boer republics and established the Union of South Africa, a dominion of the British Empire. The British established several other colonies in Southern Africa. The Portuguese and Germans also established a presence in Southern Africa. The French conquered the island of Madagascar. By 1914, Africa had only two independent nations, Liberia, a nation founded in West Africa by free black Americans earlier in the 19th century, and the ancient kingdom of Ethiopia in East Africa. Many Africans, like the Zulus, resisted European rule, but in the end Europe succeeded in conquering and transforming the continent. Missionaries arrived and established schools, while industrialists helped establish rubber, diamond and gold industries on the continent. Perhaps the most ambitious change by Europeans was the construction of the Suez Canal in Egypt, allowing ships to travel from the Atlantic to the Indian Ocean without having to go all the way around Africa.
In Asia, China was defeated by Britain in the Opium War and later by Britain and France in the Arrow War, forcing it to open up to trade with the West. Soon every major Western power as well as Russia and Japan had spheres of influence in China, although the country remained independent. Southeast Asia was divided between French Indochina and British Burma. One of the few independent nations in this region at the time was Siam. The Dutch continued to rule their colony of the Dutch East Indies, while Britain and Germany also established colonies in Oceania. India remained an integral part of the British Empire, with Queen Victoria being crowned Empress of India. The British even built a new capital in India, New Delhi. The Middle East remained largely under the rule of the Ottoman Empire and Persia. Britain, however, established a sphere of influence in Persia and a few small colonies in Arabia and coastal Mesopotamia.
The Pacific islands were conquered by Germany, the U.S., Britain, France, and Belgium. In 1893, the ruling class of colonists in Hawaii overthrew the Hawaiian monarchy of Queen Liliuokalani and established a republic. Since most of the leaders of the overthrow were Americans or descendants of Americans, they asked to be annexed by the United States, which agreed to the annexation in 1898.
Latin America was largely free from foreign rule throughout this period, although the United States and Britain had a great deal of influence over the region. Britain had two colonies on the Latin American mainland, while the United States, following 1898, had several in the Caribbean. The U.S. supported the independence of Cuba and Panama, but gained a small territory in central Panama and intervened in Cuba several times. Other countries also faced American interventions from time to time, mostly in the Caribbean and southern North America.
Competition over control of overseas colonies sometimes led to war between Western powers, and between Western powers and non-Westerners. At the turn of the 20th century, Britain fought several wars with Afghanistan to prevent it from falling under the influence of Russia, which ruled all of Central Asia excluding Afghanistan. Britain and France nearly went to war over control of Africa. In 1898, the United States and Spain went to war after an American naval ship was sunk in the Caribbean. Although today it is generally held that the sinking was an accident, at the time the U.S. held Spain responsible and soon American and Spanish forces clashed everywhere from Cuba to the Philippines. The U.S. won the war and gained several Caribbean colonies including Puerto Rico and several Pacific islands, including Guam and the Philippines. Important resistance movements to Western Imperialism included the Boxer Rebellion, fought against the colonial powers in China, and the Philippine -- American War, fought against the United States, both of which failed.
The Russo - Turkish War (1877 -- 78) left the Ottoman Empire little more than an empty shell, but the failing empire was able to hang on into the 20th century, until its final partition, which left the British and French colonial empires in control of much of the formerly Ottoman - ruled Arab lands of the Middle East (the British Mandate of Palestine, the British Mandate of Mesopotamia, the French Mandate of Syria and the French Mandate of Lebanon, in addition to the British occupation of Egypt from 1882). Even though this happened centuries after the West had given up its futile attempts to conquer the "Holy Land '' under religious pretexts, it fuelled resentment against the "Crusaders '' in the Islamic world, which, together with the nationalisms hatched under Ottoman rule, contributed to the development of Islamism.
The expanding Western powers greatly changed the societies they conquered. Many connected their empires via railroad and telegraph and constructed churches, schools, and factories.
By the late 19th century, the world was dominated by a few great powers, including Great Britain, the United States, and Germany. France, Russia, Austria - Hungary, and Italy were also great powers.
Western inventors and industrialists transformed the West in the late 19th century and early 20th century. The American Thomas Edison pioneered electric lighting and motion picture technology. Other American inventors, the Wright brothers, completed the first successful airplane flight in 1903. The first automobiles were also invented in this period. Petroleum became an important commodity after the discovery that it could be used to power machines. A process for mass - producing steel was developed in Britain by Henry Bessemer. This very strong metal, combined with the invention of elevators, allowed people to construct very tall buildings, called skyscrapers. In the late 19th century, the Italian Guglielmo Marconi was able to communicate across distances using radio. In 1876, the first telephone was invented by Alexander Graham Bell, a British expatriate living in America. Many became very wealthy from this Second Industrial Revolution, including the American entrepreneurs Andrew Carnegie and John D. Rockefeller. Unions continued to fight for the rights of workers, and by 1914 laws limiting working hours and outlawing child labor had been passed in many Western countries.
Culturally, the English - speaking nations were in the midst of the Victorian Era, named for Britain 's queen. In France, this period is called the Belle Epoque, a period of many artistic and cultural achievements. The suffragette movement, which sought to gain voting rights for women, began in this period, with the New Zealand and Australian parliaments granting women 's suffrage in the 1890s. However, by 1914, only a dozen U.S. states had given women this right, although women were treated more and more as equals of men before the law in many countries.
Cities grew as never before between 1870 and 1914. This led at first to unsanitary and crowded living conditions, especially for the poor. However, by 1914, municipal governments were providing police and fire departments and garbage removal services to their citizens, leading to a drop in death rates. Unfortunately, pollution from burning coal and wastes left by thousands of horses that crowded the streets worsened the quality of life in many urban areas. Paris, lit up by gas and electric light, and containing the tallest structure in the world at the time, the Eiffel Tower, was often looked to as an ideal modern city, and served as a model for city planners around the world.
Following the American Civil War, great changes occurred in the United States. After the war, the former Confederate States were put under federal occupation and federal lawmakers attempted to gain equality for blacks by outlawing slavery and giving them citizenship. After several years, however, Southern states began rejoining the Union as their populations pledged loyalty to the United States government, and in 1877 Reconstruction, as this period was called, came to an end. After being re-admitted to the Union, Southern lawmakers passed segregation laws and laws preventing blacks from voting, resulting in blacks being regarded as second - class citizens for decades to come.
Another great change beginning in the 1870s was the settlement of the western territories by Americans. The population growth in the American West led to the creation of many new western states, and by 1912 all the land of the contiguous U.S. was part of a state, bringing the total to 48. As whites settled the West, however, conflicts occurred with the Amerindians. After several Indian Wars, the Amerindians were forcibly relocated to small reservations throughout the West and by 1914 whites were the dominant ethnic group in the American West. As the farming and cattle industries of the American West matured and new technology allowed goods to be refrigerated and brought to other parts of the country and overseas, people 's diets greatly improved and contributed to increased population growth throughout the West.
America 's population greatly increased between 1870 and 1914, due largely to immigration. The U.S. had been receiving immigrants for decades, but at the turn of the 20th century the numbers greatly increased, due partly to large population growth in Europe. Immigrants often faced discrimination, because many differed from most Americans in religion and culture. Despite this, most immigrants found work and enjoyed a greater degree of freedom than in their home countries. Major immigrant groups included the Irish, Italians, Russians, Scandinavians, Germans, Poles and Jews. The vast majority, at least by the second generation, learned English and adopted American culture, while at the same time contributing to it, for example through the celebration of ethnic holidays and the introduction of foreign cuisines to America. These new groups also changed America 's religious landscape. Although it remained mostly Protestant, Catholics especially, as well as Jews and Orthodox Christians, increased in number.
The U.S. became a major military and industrial power during this time, gaining a colonial empire from Spain and surpassing Britain and Germany to become the world 's major industrial power by 1900. Despite this, most Americans were reluctant to get involved in world affairs, and American presidents generally tried to keep the U.S. out of foreign entanglement.
The years between 1870 and 1914 saw the rise of Germany as the dominant power in Europe. By the late 19th century, Germany had surpassed Britain to become the world 's greatest industrial power. It also had the mightiest army in Europe. From 1870 to 1871, Prussia was at war with France. Prussia won the war and gained two border territories, Alsace and Lorraine, from France. In 1871, Wilhelm took the title Kaiser, derived from the Roman title caesar, and the German Empire was proclaimed, with all the German states other than Austria uniting with this new nation under the leadership of Chancellor Otto von Bismarck.
After the Franco - Prussian War, Napoleon III was dethroned and France was proclaimed a republic. During this time, France was increasingly divided between Catholic and monarchist forces on one side and anticlerical and republican forces on the other. In 1905, church and state were officially separated in France, although the majority of the population remained Catholic. France also found itself weakened industrially following its war with Prussia due to its loss of iron and coal mines after the war. In addition, France 's population was smaller than Germany 's and was hardly growing. Despite all this, France 's strong sense of nationhood among other things kept the country together.
Between 1870 and 1914, Britain continued to peacefully switch between Liberal and Conservative governments, and maintained its vast empire, the largest in world history. Two problems faced by Britain in this period were the resentment of British rule in Ireland and Britain 's falling behind Germany and the United States in industrial production.
The European populations of Canada, Australia, New Zealand and South Africa all continued to grow and thrive in this period and evolved democratic Westminster system parliaments.
Canada united as a dominion of the British Empire under the Constitution Act, 1867 (British North America Acts). The colony of New Zealand gained its own parliament (called a "general assembly '') and home rule in 1852, and in 1907 it was proclaimed the Dominion of New Zealand. Britain began to grant its Australian colonies autonomy beginning in the 1850s, and during the 1890s the colonies of Australia voted to unite. In 1901 they were federated as an independent nation under the British Crown, known as the Commonwealth of Australia, with a wholly elected bicameral parliament. The Constitution of Australia had been drafted in Australia and approved by popular consent. Thus Australia is one of the few countries established by a popular vote. The Second Boer War (1899 -- 1902) ended with the conversion of the Boer republics of South Africa into British colonies, and these colonies later formed part of the Union of South Africa in 1910.
From the 1850s, Canada, Australia and New Zealand had become laboratories of democracy. By the 1870s, they had already granted voting rights to their citizens in advance of most other Western nations. In 1893, New Zealand became the first self - governing nation to extend the right to vote to women and, in 1895, the women of South Australia also became the first to obtain the right to stand for Parliament.
In the second half of the 19th century, Australia also saw such milestones as the pioneering of the secret ballot, the introduction of a minimum wage and the election of the world 's first Labor Party government, prefiguring the emergence of Social Democratic governments in Europe. The old age pension was established in Australia and New Zealand by 1900.
From the 1880s, the Heidelberg School of art adapted Western painting techniques to Australian conditions, while writers like Banjo Paterson and Henry Lawson introduced the character of a new continent into English literature and antipodean artists such as the opera singer Dame Nellie Melba began to influence the European arts.
The late 19th century saw the creation of several alliances in Europe. Germany, Italy, and Austria - Hungary formed a secret defensive alliance called the Triple Alliance. France and Russia also developed strong relations with one another, due to the financing of Russia 's Industrial Revolution by French capitalists. Although it did not have a formal alliance, Russia supported the Slavic Orthodox nations of the Balkans and the Caucasus, which had been created in the 19th century after several wars and revolutions against the Ottoman Empire, which by now was in decline and ruled only parts of the southern Balkan Peninsula. This Russian policy, called Pan-Slavism, led to conflicts with the Ottoman and Austro - Hungarian Empires, which had many Slavic subjects. Franco - German relations were also tense in this period due to France 's defeat and loss of land at the hands of Prussia in the Franco - Prussian War. Also in this period, Britain ended its policy of isolation from the European continent and formed an alliance with France, called the Entente Cordiale. Rather than achieve greater security for the nations of Europe, however, these alliances increased the chances of a general European war breaking out. Other factors that would eventually lead to World War I were the competition for overseas colonies, the military buildups of the period, most notably Germany 's, and the feeling of intense nationalism throughout the continent.
When the war broke out, much of the fighting was between Western powers, and the immediate casus belli was an assassination. The victim was the heir to the Austro - Hungarian throne, Franz Ferdinand, and he was assassinated on 28 June 1914 by a Yugoslav nationalist named Gavrilo Princip in the city of Sarajevo, at the time part of the Austro - Hungarian Empire. Although Serbia agreed to all but one point of the Austrian ultimatum (it balked only at the demand that Austro - Hungarian officials take part in the investigation on Serbian soil, while declaring itself ready to hand over any subject on its territory found to be involved), Austria - Hungary was more than eager to declare war, attacked Serbia and effectively began World War I. Fearing the conquest of a fellow Slavic Orthodox nation, Russia declared war on Austria - Hungary. Germany responded by declaring war on Russia as well as on Russia 's ally France. To reach France, Germany invaded neutral Belgium in August, leading Britain to declare war on Germany. The war quickly stalemated, with trenches being dug from the North Sea to Switzerland. The war also made use of new and relatively new technology and weapons, including machine guns, airplanes, tanks, battleships, and submarines. Even chemical weapons were used at one point. The war also involved other nations, with Romania and Greece joining the British Empire and France, and Bulgaria and the Ottoman Empire joining Germany. The war spread throughout the globe, with colonial armies clashing in Africa and Pacific nations such as Japan and Australia, allied with Britain, attacking German colonies in the Pacific. In the Middle East, the Australian and New Zealand Army Corps landed at Gallipoli in 1915 in a failed bid to support an Anglo - French capture of the Ottoman capital, Constantinople (modern Istanbul). Unable to secure an early victory in 1915, British Empire forces later attacked from further south after the beginning of the Arab Revolt, supporting the revolt against the Ottomans centered in the Arabian Peninsula and conquering Mesopotamia and Palestine with the help of local Arab rebels.
1916 saw some of the most ferocious fighting in human history with the Somme Offensive on the Western Front alone resulting in 500,000 German casualties, 420,000 British and Dominion, and 200,000 French casualties.
1917 was a crucial year in the war. The United States had followed a policy of neutrality in the war, feeling it was a European conflict. However, during the course of the war many Americans had died on board British ocean liners sunk by the Germans, leading to anti-German feelings in the U.S. There had also been incidents of sabotage on American soil, including the Black Tom explosion. What finally led to American involvement in the war, however, was the discovery of the Zimmermann Telegram, in which Germany offered to help Mexico conquer part of the United States if it formed an alliance with Germany. In April, the U.S. declared war on Germany. The same year the U.S. entered the war, Russia withdrew. After the deaths of many Russian soldiers and hunger in Russia, a revolution occurred against the Czar, Nicholas II. Nicholas abdicated and a liberal provisional government was set up. In October, Russian communists, led by Vladimir Lenin, rose up against the government, resulting in a civil war. Eventually, the communists won and Lenin became premier. Feeling World War I was a capitalist conflict, Lenin signed a peace treaty with Germany in which Russia gave up a great deal of its Central and Eastern European lands.
Although Germany and its allies no longer had to focus on Russia, the large numbers of American troops and weapons reaching Europe turned the tide against Germany, and after more than a year of fighting, Germany surrendered.
The treaties which ended the war, including the famous Versailles Treaty, dealt harshly with Germany and its former allies. The Austro - Hungarian Empire was completely abolished and Germany was greatly reduced in size. Many nations gained or regained their independence, including Poland, Czechoslovakia, and Yugoslavia. The last Austro - Hungarian emperor abdicated, and two new republics, Austria and Hungary, were created. The last Ottoman sultan was overthrown by the Turkish nationalist revolutionary Mustafa Kemal Atatürk and the Ottoman homeland of Turkey was declared a republic. Germany 's kaiser also abdicated and Germany was declared a republic. Germany was also forced to give up the lands it had gained in the Franco - Prussian War to France, accept responsibility for the war, reduce its military and pay reparations to Britain and France.
In the Middle East, Britain gained Palestine, Transjordan (modern - day Jordan), and Mesopotamia as colonies. France gained Syria and Lebanon. An independent kingdom consisting of most of the Arabian peninsula, Saudi Arabia, was also established. Germany 's colonies in Africa, Asia, and the Pacific were divided between the British and French Empires.
The war had cost millions of lives and led many in the West to develop a strong distaste for war. Few were satisfied with, and many despised, the agreements made at the end of the war. Japanese and Italians were angry they had not been given any new colonies after the war, and many Americans felt the war had been a mistake. Germans were outraged at the state of their country following the war. Also, democracy did not flourish in the world in the post-war period, as many, in the United States for example, had hoped it would. The League of Nations, an international organization proposed by American president Woodrow Wilson to prevent another great war from breaking out, proved ineffective, especially because the isolationist U.S. ended up not joining.
After World War I, most Americans regretted getting involved in world affairs and desired a "return to normalcy ''. The 1920s were a period of economic prosperity in the United States. Many Americans bought cars, radios, and other appliances with the help of installment payments. Movie theaters sprang up throughout the country, although at first they did not have sound. Many Americans also invested in the stock market as a source of income. During the same decade, alcoholic beverages were outlawed in the United States, and women were granted the right to vote throughout the country. Although the United States was arguably the most powerful nation in the post-war period, Americans remained isolationist and elected several conservative presidents in the 1920s.
In October 1929 the New York stock market crashed, leading to the Great Depression. Many lost their life 's savings and the resulting decline in consumer spending led millions to lose their jobs as banks and businesses closed. In the Midwestern United States, a severe drought destroyed many farmers ' livelihoods. In 1932, Americans elected Franklin D. Roosevelt president. Roosevelt followed a series of policies which regulated the stock market and banks, and created many public works programs aimed at providing the unemployed with work. Roosevelt 's policies helped alleviate the worst effects of the Depression, although by 1941 the Great Depression was still ongoing. Roosevelt also instituted pensions for the elderly and provided money to those who were unemployed. Roosevelt was also one of the most popular presidents in U.S. history, earning re-election in 1936, and also in 1940 and 1944, becoming the only U.S. president to serve more than two terms.
Europe was relatively unstable following World War I. Although many prospered in the 1920s, Germany was in a deep financial and economic crisis. Also, France and Britain owed the U.S. a great deal of money. When the United States went into Depression, so did Europe. There were perhaps 30 million people around the world unemployed during the Depression. Many governments helped to alleviate the suffering of their citizens, and by 1937 the economy had improved, although the lingering effects of the Depression remained. The Depression also led to the spread of radical left - wing and right - wing ideologies, like Communism and Fascism.
In 1919 -- 1921 the Polish - Soviet War took place. After the Russian Revolution of 1917, Russia sought to spread communism to the rest of Europe. This is evidenced by the well - known daily order by Marshal Tukhachevsky to his troops: "Over the corpse of Poland leads the road to the world 's fire. Towards Wilno, Minsk, Warsaw go! ''. Poland, whose statehood had just been re-established by the Treaty of Versailles following the Partitions of Poland in the late 18th century, achieved an unexpected and decisive victory at the Battle of Warsaw. In the wake of the Polish advance eastward, the Soviets sued for peace and the war ended with a ceasefire in October 1920. A formal peace treaty, the Peace of Riga, was signed on 18 March 1921. According to the British historian A.J.P. Taylor, the Polish -- Soviet War "largely determined the course of European history for the next twenty years or more. (...) Unavowedly and almost unconsciously, Soviet leaders abandoned the cause of international revolution. '' It would be twenty years before the Bolsheviks would send their armies abroad to ' make revolution '. According to the American sociologist Alexander Gella, "the Polish victory had gained twenty years of independence not only for Poland, but at least for an entire central part of Europe ''.
In 1916 militant Irish republicans staged a rising and proclaimed a republic. The rising was suppressed after six days and its leaders were executed. This was followed by the Irish War of Independence in 1919 -- 1921 and the Irish Civil War (1922 -- 1923). After the civil war, the island was divided. Northern Ireland remained part of the United Kingdom, while the rest of the island became the Irish Free State. In 1927 the United Kingdom renamed itself the United Kingdom of Great Britain and Northern Ireland.
In the 1920s the UK granted women the right to vote on equal terms with men.
The relationship between Britain and its Empire evolved significantly over the period. In 1919, the British Empire was represented at the all - important Versailles Peace Conference by delegates from its dominions, which had each suffered large casualties during the War. The Balfour Declaration at the 1926 Imperial Conference stated that Britain and its dominions were "equal in status, in no way subordinate one to another in any aspect of their domestic or external affairs, though united by common allegiance to the Crown, and freely associated as members of the British Commonwealth of Nations ''. These aspects of the relationship were eventually formalised by the Statute of Westminster in 1931 -- a British law which, at the request and with the consent of the dominion parliaments, clarified the independent powers of the dominion parliaments and granted the former colonies full legal freedom except in areas where they chose to remain subordinate. Previously the British Parliament had had residual ill - defined powers, and overriding authority, over dominion legislation. The statute applied to the six dominions which existed in 1931: Canada, Australia, the Irish Free State, the Dominion of Newfoundland, New Zealand, and the Union of South Africa. Each of the dominions remained within the British Commonwealth, retained close political and cultural ties with Britain and continued to recognize the British monarch as head of their own independent nations. Australia, New Zealand, and Newfoundland had to ratify the statute for it to take effect; Australia and New Zealand did so in 1942 and 1947 respectively. Newfoundland united with Canada in 1949, and the Irish Free State came to an end in 1937, when the citizens voted by referendum to replace its 1922 constitution. It was succeeded by the entirely sovereign modern state of Ireland.
The inter-war years saw the establishment of the first totalitarian regimes in world history. The first was established in Russia following the revolution of 1917; the Russian Empire was subsequently renamed the Union of Soviet Socialist Republics, or Soviet Union. The government controlled every aspect of its citizens ' lives, from maintaining loyalty to the Communist Party to persecuting religion. Lenin helped to establish this state, but it was brought to a new level of brutality under his successor, Joseph Stalin.
The first totalitarian state in the West was established in Italy. Unlike the Soviet Union, however, this would be a Fascist rather than a Communist state. Fascism is a less organized ideology than Communism, but generally it is characterized by a total rejection of humanism and liberal democracy, as well as very intense nationalism, with government headed by a single all - powerful dictator. Following World War I, the Italian politician Benito Mussolini established the Fascist Party, from which Fascism derives its name. The Fascists won the support of many disillusioned Italians, angry over Italy 's treatment after the war, and also employed violence and intimidation against their political enemies. In 1922 Mussolini seized power by threatening to lead his followers on a march on Rome if he was not named Prime Minister. Although he had to share some power with the monarchy, Mussolini ruled as a dictator. Under his rule, Italy 's military was built up and democracy became a thing of the past. One important diplomatic achievement of his reign, however, was the Lateran Treaty between Italy and the Pope, in which a small part of Rome where St. Peter 's Basilica and other Church property were located was given independence as Vatican City, and the Pope was reimbursed for lost Church property. In exchange, the Pope recognized the Italian government.
Another Fascist party, the Nazis, would take power in Germany. The Nazis were similar to Mussolini 's Fascists but held many views of their own. Nazis were obsessed with racial theory, believing Germans to be part of a master race destined to dominate the inferior races of the world. The Nazis were especially hateful of Jews. Another unique aspect of Nazism was its connection with a small movement that supported a return to ancient Germanic paganism. Adolf Hitler, a World War I veteran, became leader of the party in 1921. By gaining support from many disillusioned Germans and using intimidation against its enemies, the Nazi party had gained a great deal of power by the early 1930s. In 1933, Hitler was named Chancellor and seized dictatorial power. Hitler built up Germany 's military in violation of the Versailles Treaty and stripped Jews of all rights in Germany. Eventually, the regime Hitler created would lead to the Second World War.
In Spain, a republic had been set up following the abdication of the king. After a series of elections, a coalition of republicans, socialists, Marxists, and anticlericals was brought to power. The army, joined by Spanish conservatives, rose up against the republic. In 1939 the Spanish Civil War ended, and General Francisco Franco became dictator. Franco supported the governments of Italy and Germany, although he was not as strongly committed to Fascism as they were and instead focused more on restoring traditionalism and Catholicism to dominance in Spain.
The late 1930s saw a series of violations of the Versailles Treaty by Germany; however, France and Britain refused to act. In 1938, Hitler annexed Austria in an attempt to unite all German - speakers under his rule. Next, he annexed a German - speaking area of Czechoslovakia. Britain and France agreed to recognize his rule over that land and in exchange Hitler agreed not to expand his empire further. In a matter of months, however, Hitler broke the pledge and annexed the rest of Czechoslovakia. Despite this, the British and French chose to do nothing, wanting to avoid war at any cost. Hitler then concluded a non-aggression pact with the Soviet Union, with a secret protocol dividing Eastern Europe between them, despite the fact that the Soviet Union was Communist and Germany was Nazi. Also in the 1930s, Italy conquered Ethiopia, and the Soviets began annexing neighboring countries. Japan, too, began taking aggressive actions towards China. After Japan opened itself to trade with the West in the mid-19th century, its leaders learned to take advantage of Western technology and industrialized their country by the end of the century. By the 1930s, Japan 's government was under the control of militarists who wanted to establish an empire in the Asia - Pacific region. In 1937, Japan invaded China.
In 1939, German forces invaded Poland, and soon the country was divided between the Soviet Union and Germany. France and Britain declared war on Germany; World War II had begun. The war featured the use of new technologies and improvements on existing ones. Airplanes called bombers were capable of travelling great distances and dropping bombs on targets. Submarine, tank and battleship technology also improved. Most soldiers were equipped with hand - held machine guns and armies were more mobile than ever before. Also, the British invention of radar would revolutionize tactics. German forces invaded and conquered the Low Countries and by June 1940 had even conquered France. In 1940 Germany, Italy and Japan formed an alliance and became known as the Axis Powers. Germany next turned its attention to Britain. Hitler attempted to defeat the British using only air power. In the Battle of Britain, German bombers attacked British airfields and bombed many British cities. Led by their Prime Minister, the defiant Winston Churchill, the British refused to give up and launched air attacks on Germany. Eventually, Hitler turned his attention from Britain to the Soviet Union. In June 1941, German forces invaded the Soviet Union and soon reached deep into Russia, besieging Leningrad and advancing on Moscow and, later, Stalingrad. Hitler 's invasion came as a total surprise to Stalin, even though Hitler had always held that sooner or later Soviet Communism and the Slavic peoples he deemed "inferior '' had to be wiped out.
The United States attempted to remain neutral early in the war. However, a growing number of Americans feared the consequences of a Fascist victory, so President Roosevelt began sending weapons and support to the British, Chinese, and Soviets. The U.S. also placed an embargo against the Japanese as they continued their war with China and occupied French Indochina, whose home country was now under German rule. Japan responded by launching a surprise attack on Pearl Harbor, an American naval base in Hawaii, in December 1941. The U.S. responded by declaring war on Japan. The next day, Germany and Italy declared war on the United States. The United States, the British Commonwealth, and the Soviet Union now constituted the Allies, dedicated to destroying the Axis Powers. Other allied nations included Canada, Australia, New Zealand, South Africa and China.
In the Pacific War, British, Indian and Australian troops made a disorganised last stand at Singapore, before surrendering on 15 February 1942. The defeat was the worst in British military history. Around 15,000 Australian soldiers alone became prisoners of war. Allied prisoners died in their thousands interned at Changi Prison or working as slave labourers on such projects as the infamous Burma Railway and the Sandakan Death Marches. Australian cities and bases -- notably Darwin -- suffered air raids, and Sydney suffered naval attack. U.S. General Douglas MacArthur, based in Melbourne, Australia, became "Supreme Allied Commander of the South West Pacific '', and the foundations of the post-war Australia - New Zealand - United States Alliance were laid. In May 1942, the Royal Australian Navy and U.S. Navy engaged the Japanese in the Battle of the Coral Sea and halted the Japanese fleet headed for Australian waters. The Battle of Midway in June effectively defeated the Japanese navy. In August 1942, Australian forces inflicted the first land defeat on advancing Japanese forces at the Battle of Milne Bay in the Australian Territory of New Guinea.
By 1942, German and Italian armies ruled Norway, the Low Countries, France, the Balkans, Central Europe, part of Russia, and most of North Africa. Japan by this year ruled much of China, Southeast Asia, Indonesia, the Philippines, and many Pacific Islands. Life in these empires was cruel -- especially in Germany 's, where the Holocaust was perpetrated. Eleven million people -- six million of them Jews -- were systematically murdered by the German Nazis by 1945.
From 1943 on, the Allies gained the upper hand. American and British troops first liberated North Africa from the Germans and Italians. Next they invaded Italy, where Mussolini was deposed by the king and later killed by Italian partisans. Italy surrendered and came under Allied occupation. After the invasion of Italy, American, British, and Canadian troops crossed the English Channel and liberated Normandy, France, from German rule after great loss of life. The Western Allies were then able to liberate the rest of France and move towards Germany. During these campaigns in Africa and Western Europe, the Soviets fought off the Germans, pushing them out of the Soviet Union altogether and driving them out of Eastern and East - Central Europe. In 1945 the Western Allies and Soviets invaded Germany itself. The Soviets captured Berlin and Hitler committed suicide. Germany surrendered unconditionally and came under Allied occupation. The war against Japan continued, however. American forces from 1943 on had worked their way across the Pacific, liberating territory from the Japanese. The British also fought the Japanese in such places as Burma. By 1945, the U.S. had surrounded Japan, but the Japanese refused to surrender. Fearing a land invasion would cost up to one million American lives, the U.S. used a new weapon against Japan, the atomic bomb, developed in the United States after years of work by an international team of scientists, including émigrés from Germany. These atomic bombings of Hiroshima and Nagasaki, combined with a Soviet invasion of many of Japan 's occupied territories in the east, led Japan to surrender.
After the war the U.S., Britain and the Soviet Union attempted to cooperate. German and Japanese military leaders responsible for atrocities in their regimes were put on trial and many were executed. A new international organization, the United Nations, was created; its goal was to prevent wars from breaking out as well as to provide the people of the world with security, justice and rights. The period of post-war cooperation ended, however, when the Soviet Union rigged elections in the occupied nations of Central and Eastern Europe to allow for Communist victories. Soon, all of Eastern and much of Central Europe had become a series of Communist dictatorships, all staunchly allied with the Soviet Union. Following the war, Germany had been occupied by British, American, French, and Soviet forces. Unable to agree on a new government, the occupying powers divided the country into a democratic west and a Communist east. Berlin itself was also divided, with West Berlin becoming part of West Germany and East Berlin becoming part of East Germany. Meanwhile, the other former Axis nations soon had their sovereignty restored, with Italy and Japan regaining independence.
World War II had cost millions of lives and devastated many others. Entire cities lay in ruins and economies were in shambles. However, in the Allied countries, the people were filled with pride at having stopped Fascism from dominating the globe, and after the war, Fascism was all but extinct as an ideology. The world 's balance of power also shifted, with the United States and Soviet Union being the world 's two superpowers.
Following World War II, the great colonial empires established by the Western powers beginning in early modern times began to collapse. There were several reasons for this. Firstly, World War II had devastated European economies and had forced governments to spend great sums of money, making the cost of colonial administration increasingly hard to bear. Secondly, the two new superpowers following the war, the United States and the Soviet Union, were both opposed to imperialism, so the now weakened European empires could generally not look to the outside for help. Thirdly, Westerners were increasingly uninterested in maintaining empires, and many came to oppose their existence. The fourth reason was the rise of independence movements following the war. The future leaders of these movements had often been educated at colonial schools run by Westerners, where they adopted Western ideas like freedom, equality, self - determination and nationalism, which turned them against their colonial rulers.
The first colonies to gain independence were in Asia. In 1946, the U.S. granted independence to the Philippines, its only large overseas colony. In British India, Mahatma Gandhi led his followers in non-violent resistance to British rule. By the late 1940s Britain found itself unable to work with Indians in ruling the colony; this, combined with sympathy around the world for Gandhi 's non-violent movement, led Britain to grant independence in 1947, dividing the colony into the largely Hindu country of India and the smaller, largely Muslim nation of Pakistan. In 1948 Burma gained independence from Britain, and in 1945 Indonesian nationalists declared Indonesian independence, which the Netherlands recognised in 1949 after a four - year armed and diplomatic struggle. Independence for French Indochina came only after a great conflict. After the withdrawal of Japanese forces from the colony following World War II, France regained control but found it had to contend with an independence movement that had fought against the Japanese. The movement was led by the Vietnamese Ho Chi Minh, leader of the Vietnamese Communists. Because of this, the U.S. supplied France with arms and support, fearing Communists would dominate South - east Asia. In the end, though, France gave in and granted independence, creating Laos, Cambodia, Communist North Vietnam, and South Vietnam.
A chaotic part of the world in this period was the Middle East. Following World War II, Britain had granted independence to the formerly Ottoman territories of Mesopotamia, which became Iraq, Kuwait, and Transjordan, which became Jordan. France also granted independence to Syria and Lebanon. British Palestine, however, presented a unique challenge. After Britain gained the territory following World War I, Jewish and Arab national aspirations came into conflict, and the UN eventually proposed dividing Mandatory Palestine into a Jewish state and an Arab state. The Arabs objected, Britain withdrew, and the Zionists declared the state of Israel on 14 May 1948.
The other major center of colonial power, Africa, was also freed from colonial rule following World War II. Egypt gained independence from Britain, soon followed by Ghana and Tunisia. One of the most violent independence struggles of the era was fought in Algeria, where rebel attacks also claimed the lives of French civilians. In 1962, however, Algeria gained independence from France. By the 1970s the entire continent had become independent of European rule, although a few southern countries remained under the rule of white colonial minorities.
By the close of the 20th century, the European colonial empires had ceased to exist as significant global entities. Sunset for the British Empire came when Britain 's lease on the great trading port of Hong Kong was brought to an end and political control was transferred to the People 's Republic of China in 1997. Soon after, in 1999, the transfer of sovereignty over Macau was concluded between Portugal and China, bringing to a close six centuries of Portuguese colonialism. Britain remained culturally linked to its former empire through the voluntary association of the Commonwealth of Nations, and 14 British Overseas Territories (formerly known as Crown colonies) remained, consisting mainly of scattered island outposts. Currently, 16 independent Commonwealth realms retain the British monarch as their head of state. Canada, Australia and New Zealand emerged as vibrant and prosperous migrant nations. The once vast French colonial empire had lost its major possessions, though scattered territories remained as Overseas departments and territories of France. The shrunken Dutch Empire retained a few Caribbean islands as constituent countries of the Kingdom of the Netherlands. Spain had lost its overseas possessions, but its legacy was vast, with Latin culture remaining throughout South and Central America. Along with Portugal and France, Spain had made Catholicism a global religion.
Of Europe 's empires, only the Russian Empire remained a significant geo - political force into the late 20th century, having morphed into the Soviet Union and Warsaw Pact. Drawing on the writings of the German Karl Marx, it established a socialist economic model under Communist dictatorship, which ultimately collapsed in the early 1990s. Adaptations of Marxism continued as the stated inspiration for governments in Central America and Asia into the 21st century, though only a handful survived the end of the Cold War.
The end of the Western Empires greatly changed the world. Although many newly independent nations attempted to become democracies, many slipped into military and autocratic rule. Amid power vacuums and newly determined national borders, civil war also became a problem, especially in Africa, where the introduction of firearms to ancient tribal rivalries exacerbated problems.
The loss of overseas colonies also led many Western nations, particularly in continental Europe, to focus more on European, rather than global, politics as the European Union rose as an important entity. Though gone, the colonial empires left a formidable cultural and political legacy, with English, French, Spanish, Portuguese, Russian and Dutch being spoken by peoples across far - flung corners of the globe. European technologies were now global technologies, and religions like Catholicism and Anglicanism, founded in the West, were booming in post-colonial Africa and Asia. Parliamentary (or presidential) democracies, as well as rival Communist - style one - party states invented in the West, had replaced traditional monarchies and tribal government models across the globe. Modernity, for many, was equated with Westernisation.
From the end of World War II almost until the start of the 21st century, Western and world politics were dominated by the state of tension and conflict between the world 's two superpowers, the United States and the Soviet Union. In the years following World War II, the Soviets established satellite states throughout Central and Eastern Europe, including historically and culturally Western nations like Poland and Hungary. Following the division of Germany, the East Germans constructed the Berlin Wall to prevent East Berliners from escaping to the "freedom '' of West Berlin. The Berlin Wall would come to represent the Cold War around the world.
Rather than revert to isolationism, the United States took an active role in global politics following World War II to halt Communist expansion. After the war, Communist parties in Western Europe increased in prestige and number, especially in Italy and France, leading many to fear the whole of Europe would become Communist. The U.S. responded to this with the Marshall Plan, in which the U.S. financed the rebuilding of Western Europe and poured money into its economy. The Plan was a huge success and soon Europe was prosperous again, with many Europeans enjoying a standard of living close to that in the U.S. (following World War II, the U.S. became very prosperous and Americans enjoyed the highest standard of living in the world). National rivalries ended in Europe and most Germans and Italians, for example, were happy to be living under democratic rule, regretting their Fascist pasts. In 1949, the North Atlantic Treaty was signed, creating the North Atlantic Treaty Organization, or NATO. The treaty was signed by the United States, Canada, the Low Countries, Norway, Denmark, Iceland, Portugal, Italy, France, and Britain. NATO members agreed that if any one of them were attacked, they would all consider themselves attacked and retaliate. NATO would expand as the years went on and other nations joined, including Greece, Turkey, and West Germany. The Soviets responded with the Warsaw Pact, an alliance which bound Central and Eastern Europe to fight against the United States and its allies in the event of war.
One of the first actual conflicts of the Cold War took place in China. Following the withdrawal of Japanese troops after World War II, China was plunged into civil war, pitting Chinese Communists against Nationalists, who opposed Communism. The Soviets supported the Communists while the Americans supported the Nationalists. In 1949, the Communists were victorious, proclaiming the People 's Republic of China. However, the Nationalists continued to rule the island of Taiwan off the coast. With American guarantees of protection for Taiwan, China did not make an attempt to take over the island. A major political change in East Asia in this period was Japan 's becoming a tolerant, democratic society and an ally of the United States. In 1950, another conflict broke out in Asia, this time in Korea. The peninsula had been divided between a Communist North and non-Communist South in 1948 following the withdrawal of American and Soviet troops. In 1950, the North Koreans invaded South Korea, wanting to unite the peninsula under Communism. The UN condemned the action, and, because the Soviets were boycotting the organization at the time and therefore had no influence on it, the UN sent forces to liberate South Korea. Many nations sent troops, but most were from America. UN forces were able to liberate the South and even attempted to conquer the North. However, fearing the loss of North Korea, Communist China sent troops to the North. The U.S. did not retaliate against China, fearing war with the Soviet Union, so the war stalemated. In 1953 the two sides agreed to a return to the pre-war borders and a de-militarization of the border area.
Throughout the Cold War the world lived in constant fear of World War III. Seemingly any conflict involving Communism might lead to a war between the Warsaw Pact countries and the NATO countries. The prospect of a third world war was made even more frightening by the fact that it would almost certainly be a nuclear war. In 1949 the Soviets developed their first atomic bomb, and soon both the United States and the Soviet Union had enough to destroy the world several times over. With the development of missile technology, the stakes were raised, as either country could launch weapons from great distances across the globe to their targets. Eventually, Britain, France, and China would also develop nuclear weapons. It is believed that Israel developed nuclear weapons as well.
One major event that nearly brought the world to the brink of war was the Cuban Missile Crisis. In the 1950s a revolution in Cuba had brought the only Communist regime in the Western Hemisphere to power. In 1962, the Soviets began constructing missile sites in Cuba and shipping nuclear missiles to the island. Because of Cuba 's close proximity to the U.S., the U.S. demanded the Soviets withdraw their missiles. The U.S. and Soviet Union came very close to attacking one another, but in the end came to a secret agreement in which NATO withdrew missiles from Turkey in exchange for a Soviet withdrawal of missiles from Cuba.
The next great Cold War conflict occurred in Southeast Asia. In the 1960s, North Vietnam invaded South Vietnam, hoping to unite all of Vietnam under Communist rule. The U.S. responded by supporting the South Vietnamese. In 1964, American troops were sent to "save '' South Vietnam from conquest, which many Americans feared would lead to Communist dominance in the entire region. The war lasted many years, and at first most Americans felt the North Vietnamese would be defeated in time. Despite American technological and military superiority, however, by 1968 the war showed no signs of ending and most Americans wanted U.S. forces to end their involvement. The U.S. undercut support for the North by getting the Soviets and Chinese to stop supporting North Vietnam, in exchange for recognition of the legitimacy of mainland China 's Communist government, and began withdrawing troops from Vietnam. In 1973, the last American troops left Vietnam and in 1975 South Vietnam fell to the North. In the following years Communism took power in neighboring Laos and Cambodia.
By the 1970s global politics were becoming more complex. For example, France 's president proclaimed France was a great power in and of itself. However, France did not seriously threaten the U.S. for supremacy in the world or even Western Europe. In the Communist world, there was also division, with the Soviets and Chinese differing over how Communist societies should be run. Soviet and Chinese troops even engaged in border skirmishes, although full - scale war never occurred.
The last great armed conflict of the Cold War took place in Afghanistan. In 1979, Soviet forces invaded that country, hoping to establish Communism. Muslims from throughout the Islamic world travelled to Afghanistan to defend that Muslim nation from conquest, calling it a jihad, or holy war. The U.S. supported the jihadists and Afghan resisters, despite the fact that the jihadists were vehemently anti-Western. By 1989 Soviet forces were forced to withdraw and Afghanistan fell into civil war, with an Islamic fundamentalist movement, the Taliban, taking over much of the country.
The late 1970s had seen a lessening of tensions between the U.S. and Soviet Union, called Détente. By the 1980s, however, Détente had ended with the invasion of Afghanistan. In 1981, Ronald Reagan became President of the United States and sought to defeat the USSR by leveraging the United States ' capitalist economic system to outproduce the communist Soviets. The United States military was in a state of low morale after its loss in the Vietnam War, and President Reagan began a huge effort to out - produce the Soviets in military production and technology. In 1985, a new Soviet leader, Mikhail Gorbachev, took power. Gorbachev, knowing that the Soviet Union could no longer compete economically with the United States, implemented a number of reforms granting his citizens freedom of speech and introducing some capitalist measures. Gorbachev and America 's staunch anti-Communist president Ronald Reagan were even able to negotiate treaties limiting each side 's nuclear weapons. Gorbachev also ended the policy of imposing Communism in Central and Eastern Europe. In the past Soviet troops had crushed attempts at reform in places like Hungary and Czechoslovakia; now, however, Eastern Europe was freed from Soviet domination. In Poland, the Round Table Talks between the government and the Solidarity - led opposition led to semi-free elections in 1989, in which anti-communist candidates won a striking victory, sparking off a succession of peaceful anti-communist revolutions in Central and Eastern Europe known as the Revolutions of 1989. Soon, Communist regimes throughout Europe collapsed. In Germany, after calls from Reagan to Gorbachev to tear down the Berlin Wall, the people of East and West Berlin tore down the wall and East Germany 's Communist government was voted out. East and West Germany unified to create the country of Germany, with its capital in the reunified Berlin. The changes in Central and Eastern Europe led to calls for reform in the Soviet Union itself. A failed coup by hard - liners led to greater instability in the Soviet Union, and the Soviet legislature, long subservient to the Communist Party, voted to abolish the Soviet Union in 1991. What had been the Soviet Union was divided into many republics. Although many slipped into authoritarianism, most became democracies. These new republics included Russia, Ukraine, and Kazakhstan. By the early 1990s, the West and Europe as a whole were finally free from Communism.
Following the end of the Cold War, Communism largely died out as a major political movement. The United States was now left as the world 's only superpower.
Following World War II, there was an unprecedented period of prosperity in the United States. The majority of Americans entered the middle class and moved from the cities into surrounding suburbs, buying homes of their own. Most American households owned at least one car, as well as the relatively new invention, the television. The American population also greatly increased as part of the so - called "baby boom '' following the war. For the first time, large numbers of non-wealthy Americans were able to attend college.
Following the war, black Americans started what has become known as the Civil Rights Movement in the United States. After roughly a century of second - class citizenship following the abolition of slavery, blacks began seeking full equality. This was helped by the Supreme Court 's 1954 decision outlawing segregation in schools, which was common in the South. Dr. Martin Luther King, a black minister from the South, led many blacks, and whites who supported their cause, in non-violent protests against discrimination. Eventually, the Civil Rights Act of 1964 and the Voting Rights Act of 1965 were passed, outlawing segregation and discrimination in the U.S. and banning measures that had prevented blacks from voting.
In politics, the Democratic and Republican parties remained dominant. In 1945, the Democratic party relied on Southerners, whose support went back to the days when Democrats defended a state 's right to own slaves, and on Northeasterners and industrial Mid-Westerners, who supported the pro-labor and pro-immigrant policies of the Democrats. Republicans tended to rely on middle - class Protestants from elsewhere in the country. As the Democrats began championing civil rights, however, Southern Democrats felt betrayed and began voting Republican. Presidents from this period were Harry Truman, Dwight Eisenhower, John Kennedy, Lyndon Johnson, Richard Nixon, Gerald Ford, and Jimmy Carter. The years 1945 -- 1980 saw the expansion of federal power and the establishment of programs to help the elderly and poor pay for medical expenses.
By 1980, many Americans had become pessimistic about their country. Despite its status as one of only two superpowers, the Vietnam War as well as the social upheavals of the 1960s and an economic downturn in the 1970s had left America a much less confident nation.
At the close of the war, much of Europe lay in ruins with millions of homeless refugees. A souring of relations between the Western Allies and the Soviet Union then saw Europe split by an Iron Curtain, dividing the continent between West and East. In Western Europe, democracy had survived the challenge of Fascism and began a period of intense rivalry with Eastern Communism, which was to continue into the 1980s. France and Britain secured themselves permanent positions on the newly formed United Nations Security Council, but Western European Empires did not long survive the war, and no one Western European nation would ever again be the paramount power in world affairs.
Despite these immense challenges, however, Western Europe again rose as an economic and cultural powerhouse. Assisted first by the Marshall Plan of financial aid from the United States, and later through closer economic integration in the European Common Market, Western Europe quickly re-emerged as a global economic power. The vanquished nations of Italy and West Germany became leading economies and allies of the United States. So marked was their recovery that historians refer to an Italian economic miracle and, in the case of West Germany and Austria, the Wirtschaftswunder (German for economic miracle).
Facing a new power balance between the Soviet East and American West, Western European nations moved closer together. In 1957, Belgium, France, the Netherlands, West Germany, Italy and Luxembourg signed the landmark Treaty of Rome, creating the European Economic Community, a common market free of customs duties and tariffs, and allowing the rise of a new European geo - political force. Eventually, this organization grew into the European Union (EU), and many other nations joined, including Britain, Ireland, and Denmark. The EU worked toward economic and political cooperation among European nations.
Between 1945 and 1980, Europe became increasingly socialist. Most European countries became welfare states, in which governments provided a large number of services to their people through taxation. By 1980, most of Europe had universal healthcare and pensions for the elderly. The unemployed were also guaranteed income from the government, and European workers were guaranteed long vacation time. Many other entitlements were established, leading many Europeans to enjoy a very high standard of living. By the 1980s, however, the economic problems of the welfare state were beginning to emerge.
Europe had many important political leaders during this time. Charles de Gaulle, leader of the French government in exile during World War II, served as France 's president for many years. He sought to carve out for France a great power status in the world.
Although Europe as a whole was relatively peaceful in this period, both Britain and Spain suffered from acts of terrorism. In Britain, The Troubles saw Irish republicans battle Unionists loyal to Britain. In Spain, ETA, a Basque separatist group, began committing acts of terror against Spaniards, hoping to gain independence for the Basques, an ethnic minority in northern Spain. Both these terrorist campaigns failed, however.
For Greece, Spain and Portugal, ideological battles between left and right continued and the emergence of parliamentary democracy was troubled. Greece experienced civil war, coup and counter-coup into the 1970s. Portugal, since the 1930s under a quasi-Fascist regime and among the poorest nations in Europe, fought a rearguard action against independence movements in its empire until a 1974 coup. The last authoritarian dictatorship in Western Europe fell in 1975, when Francisco Franco, dictator of Spain, died. Franco had helped to modernize the country and improve the economy. His successor, King Juan Carlos, transformed the country into a constitutional monarchy. By 1980, all Western European nations were democracies.
Between 1945 and 1980, the British Empire was transformed from its centuries - old position as a global colonial power into a voluntary association known as the Commonwealth of Nations, only some of whose members retained any formal political links to Britain or its monarchy. Some former British colonies or protectorates disassociated themselves entirely from Britain.
The popular wartime leader Winston Churchill was swept from office at the 1945 election, and the Labour Government of Clement Attlee introduced a program of nationalisation of industry and wide - ranging social welfare. Britain 's finances had been ravaged by the war, and John Maynard Keynes was sent to Washington to negotiate the massive Anglo - American loan on which Britain relied to fund its post-war reconstruction.
India was granted independence in 1947 and Britain 's global influence rapidly declined as decolonisation proceeded. Though the USSR and the United States now stood as the post-war superpowers, Britain and France launched the ill - fated Suez intervention in the 1950s, and Britain committed troops to the Korean War.
From the 1960s The Troubles afflicted Northern Ireland, as British Unionist and Irish Republican paramilitaries conducted campaigns of violence in support of their political goals. The conflict at times spilled into Ireland and England and continental Europe. Paramilitaries such as the IRA (Irish Republican Army) wanted union with the Republic of Ireland while the UDA (Ulster Defence Association) were supporters of Northern Ireland remaining within the United Kingdom.
In 1973, Britain entered the European Common Market, stepping away from imperial and Commonwealth trade ties. Inflation and unemployment contributed to a growing sense of economic decline, partly offset by the exploitation of North Sea oil from 1974. In 1979, the electorate turned to Conservative Party leader Margaret Thatcher, who became Britain 's first female prime minister. Thatcher launched a radical program of economic reform and remained in power for over a decade. In 1982, Thatcher dispatched a British fleet to the Falkland Islands, which successfully repelled an Argentine invasion of the British territory, demonstrating that Britain could still project power across the globe.
Canada continued to evolve its own national identity in the post-war period. Although it was an independent nation, it remained part of the British Commonwealth and recognized the British monarch as the Canadian monarch as well. Following the war, French and English were recognized as co-equal official languages in Canada, and French became the only official language in the French - speaking province of Quebec. Referenda were held in 1980 and 1995, in both of which Quebecers voted not to secede from the union. Other cultural changes Canada faced were similar to those in the United States. Racism and discrimination largely disappeared in the post-war years, and dual - income families became the norm. Also, there was a rejection of traditional Western values by many in Canada. The government also established universal health care for its citizens following the war.
Following World War II, Australia and New Zealand enjoyed a great deal of prosperity along with the rest of the West. Both countries remained constitutional monarchies within the evolving Commonwealth of Nations and continued to recognise the British monarch as their own head of state. However, following British defeats by the Japanese in World War II, the post-war decline of the British Empire, and the entry of Britain into the European Economic Community in 1973, the two nations re-calibrated defence and trade relations with the rest of the world. Following the Fall of Singapore in 1942, Australia turned to the United States for military aid against the Japanese Empire, and Australia and New Zealand joined the United States in the ANZUS military alliance in the early 1950s and contributed troops to anti-communist conflicts in South - East Asia in the 1950s, 1960s and 1970s. The two nations also established multicultural immigration programs, with waves of economic and refugee migrants establishing bases for large Southern European, East Asian, Middle Eastern, and South Pacific islander communities. Trade integration with Asia expanded, particularly through good post-war relations with Japan. The Maori and Australian Aborigines had been largely dispossessed and disenfranchised during the 19th and early 20th centuries, but relations between the descendants of European settlers and the Indigenous peoples of Australia and New Zealand began to improve through legislative and social reform over the post-war period, corresponding with the civil rights movement in North America. In the 1970s, Australia was a vocal critic of white - minority rule in the former British colonies of South Africa and Rhodesia.
The arts also diversified and flourished over the period -- with Australian cinema, literature and musical artists expanding their nation 's profile internationally. The iconic Sydney Opera House opened in 1973 and Australian Aboriginal Art began to find international recognition and influence.
The West went through a series of great cultural and social changes between 1945 and 1980. Mass media created a global culture that could ignore national frontiers. Literacy became almost universal, encouraging the growth of books, magazines and newspapers. The influence of cinema and radio remained, while televisions became near essentials in every home. A new pop culture also emerged, with rock and roll and pop stars at its heart.
Religious observance declined in most of the West. Protestant churches began focusing more on the social gospel rather than doctrine, and the ecumenical movement, which supported co-operation among Christian churches, gained ground. The Catholic Church changed many of its practices in the Second Vatican Council, including allowing masses to be said in the vernacular rather than Latin. The counterculture of the 1960s (and early 1970s) began in the United States as a reaction against the conservative social norms of the 1950s, the political conservatism (and perceived social repression) of the Cold War period, and the US government 's extensive military intervention in Vietnam.
With the abolition of laws treating most non-whites as second - class citizens, institutional racism largely disappeared from the West. Although the United States failed to secure the formal constitutional equality of women with men (the Equal Rights Amendment passed Congress but was never ratified by enough states), women continued working outside the home, and by 1980 the double - income family had become commonplace in Western society. Beginning in the 1960s, many began rejecting traditional Western values and there was a decline in emphasis on church and the family.
Rock and roll music and the spread of technological innovations such as television dramatically altered the cultural landscape of Western civilisation. Many of the most influential artists of the 20th century worked in these new, technology - driven art forms.
Rock and roll emerged from the United States in the 1950s to become a quintessential 20th - century art form. Artists such as Elvis Presley, Roy Orbison and Johnny Cash developed the new genre in the Southern United States, and later acts such as The Beach Boys carried it forward. Cash became an icon of the also newly emerging popular genre of country music. British rock and roll emerged later, with bands like The Beatles and The Rolling Stones rising to unparalleled success during the 1960s. From Australia emerged the mega pop band The Bee Gees and hard rock band AC / DC, who carried the genre in new directions through the 1970s. These musical artists were icons of radical social changes which saw many traditional notions of Western culture alter dramatically.
Hollywood, California became synonymous with film during the 20th century and American Cinema continued a period of immense global influence in the West after World War II. American cinema played a role in adjusting community attitudes through the 1940s to 1980 with seminal works like John Ford 's 1956 Western The Searchers, starring John Wayne, providing a sympathetic view of the Native American experience; and 1962 's To Kill a Mockingbird, based on the Pulitzer Prize - winning novel by Harper Lee and starring Gregory Peck, challenging racial prejudice. The advent of television challenged the status of cinema and the artform evolved dramatically from the 1940s through the age of glamorous icons like Marilyn Monroe and directors like Alfred Hitchcock to the emergence of such directors as Stanley Kubrick, George Lucas and Steven Spielberg, whose body of work reflected the emerging Space Age and immense technological and social change.
The 1980s were a period of economic growth in the West, though the 1987 stock market crash saw much of the West enter the 1990s in a downturn. The 1990s and the turn of the century in turn saw a period of prosperity throughout the West. The World Trade Organization was formed to assist in the organisation of world trade. Following the collapse of Soviet Communism, Central and Eastern Europe began a difficult readjustment towards market economies and parliamentary democracy. In the post - Cold War environment, new co-operation emerged between the West and former rivals like Russia and China, but Islamism declared itself a mortal enemy of the West, and wars were launched in Afghanistan and the Middle East in response. The economic cycle turned again with the 2008 Global Financial Crisis, but amidst a new economic paradigm the effect on the West was uneven: Europe and the United States suffered deep recession, while Pacific economies like Australia and Canada largely avoided the downturn, benefitting from a combination of rising trade with Asia, good fiscal management and banking regulation. In the early 21st century, Brazil, Russia, India and China (the BRIC nations) were re-emerging as drivers of economic growth from outside North America and Western Europe.
In the early stages after the Cold War, Russian president Boris Yeltsin stared down an attempted restoration of Sovietism in Russia, and pursued closer relations with the West. Amid economic turmoil a class of oligarchs emerged at the summit of the Russian economy. Yeltsin 's chosen successor, the former spy, Vladimir Putin, tightened the reins on political opposition, opposed separatist movements within the Russian Federation, and battled pro-Western neighbour states like Georgia, contributing to a challenging climate of relations with Europe and America. Former Soviet satellites joined NATO and the European Union, leaving Russia again isolated in the East. Under Putin 's long reign, the Russian economy profited from a resource boom in the global economy, and the political and economic instability of the Yeltsin era was brought to an end.
Elsewhere, both within and without the West, democracy and capitalism were in the ascendant. Even Communist holdouts like mainland China and (to a lesser extent) Cuba and Vietnam, while retaining one - party government, experimented with market liberalisation, a process which accelerated after the fall of European Communism, enabling the re-emergence of China as an alternative centre of economic and political power standing outside the West.
Free trade agreements were signed by many countries. The European nations broke down trade barriers with one another in the EU, and the United States, Canada, and Mexico signed the North American Free Trade Agreement (NAFTA). Although free trade has helped businesses and consumers, it has had the unintended consequence of leading companies to outsource jobs to areas where labor is cheapest. Today, the West 's economy is largely service - and information - based, with many factories closing or relocating to China and India.
European countries have had very good relations with each other since 1980. The European Union has become increasingly powerful, taking on roles traditionally reserved for the nation - state. Although real power still exists in the individual member states, one major achievement of the Union was the introduction of the Euro, a currency adopted by most EU countries.
Australia and New Zealand continued their large multi-ethnic immigration programs and became more integrated in the Asia - Pacific region. While the two countries remain constitutional monarchies within the Commonwealth, their distance from Britain has grown, spurred on by Britain 's entry into the European Common Market. Australia and New Zealand have integrated their own economies via a free trade agreement. While political and cultural ties with North America and Europe remain strong, economic reform and commodities trade with the booming economies of Asia have set the South Pacific nations on a new economic trajectory, with Australia largely avoiding a downturn in the Financial crisis of 2007 -- 2008, which unleashed severe economic loss through North America and Western Europe.
Today Canada remains part of the Commonwealth, and relations between French and English Canada have continued to present problems. In a referendum held in Quebec in 1980, however, Quebecers voted to remain part of Canada.
In 1990, the white - minority government of the Republic of South Africa, led by F.W. de Klerk, began negotiations to dismantle its racist apartheid legislation and the former British colony held its first universal elections in 1994, which the African National Congress Party of Nelson Mandela won by an overwhelming majority. The country has rejoined the Commonwealth of Nations.
Since 1991, the United States has been regarded as the world 's only superpower. Politically, the United States is dominated by the Republican and Democratic parties. Presidents of the United States between 1980 and 2006 have been Ronald Reagan, George H.W. Bush, Bill Clinton, and George W. Bush. Since 1980, Americans have become far more optimistic about their country than they were in the 1970s. Since the 1960s, a large number of immigrants have been coming into the U.S., mostly from Asia and Latin America, with the largest single group being Mexicans. Large numbers from those areas have also been coming illegally, and the solution to this problem has produced much debate in the U.S.
On 11 September 2001, the United States suffered the worst terrorist attack in its history. Four planes were hijacked by Islamic extremists and crashed into the World Trade Center, the Pentagon, and a field in Pennsylvania.
The late - 2000s financial crisis, considered by many economists to be the worst financial crisis since the Great Depression of the 1930s, was triggered by a liquidity shortfall in the United States banking system, and has resulted in the collapse of large financial institutions, the bailout of banks by national governments, and downturns in stock markets throughout much of the West. The United States and Britain faced serious downturn, while Portugal, Greece, Ireland and Iceland faced major debt crises. Almost uniquely among Western nations, Australia avoided recession off the back of strong Asian trade and 25 years of economic reform and low levels of government debt.
Evidence of the major demographic and social shifts which have taken place within Western society since World War II can be found in the elections of national level leaders: the United States (Barack Obama was elected president in 2008, becoming the first African - American to hold that office), France (Nicolas Sarkozy, the first president of France of Eastern European immigrant descent, with Jewish ancestry on his mother 's side), Germany (Angela Merkel, the first female leader of that nation), and Australia (Julia Gillard, also the first female leader of that nation).
Following 1991, Western nations provided troops and aid to many war - torn areas of the world. Some of these missions were unsuccessful, like the attempt by the United States to provide relief in Somalia in the early 1990s. A very successful peace - making operation was conducted in the Balkans in the late 1990s, however. After the Cold War, Yugoslavia broke up into several countries along ethnic lines, and soon countries and ethnic groups within countries of the former Yugoslavia began fighting one another. Eventually, NATO troops arrived in 1999 and ended the conflict. Australia led a United Nations mission into East Timor in 1999 (INTERFET) to restore order during that nation 's transition to democracy and independence from Indonesia.
The greatest war fought by the West in the 1990s, however, was the Persian Gulf War. In 1990, the Middle Eastern nation of Iraq, under its brutal dictator Saddam Hussein, invaded the much smaller neighbouring country of Kuwait. After Iraq refused to withdraw its troops, the United Nations condemned the invasion and troops were sent to liberate Kuwait. American, British, French, Egyptian and Syrian troops all took part in the liberation. The war ended in 1991, with the withdrawal of Iraqi troops from Kuwait and Iraq 's agreement to allow United Nations inspectors to search for weapons of mass destruction in Iraq.
The West had become increasingly unpopular in the Middle East following World War II. The Arab states greatly disliked the West 's support for Israel. Many soon had a special hatred towards the United States, Israel 's greatest ally. Also, partly to ensure stability in the region and a steady supply of the oil the world economy needed, the United States supported many corrupt dictatorships in the Middle East. In 1979, an Islamic revolution in Iran overthrew the pro-Western Shah and established an anti-Western Shiite Islamic theocracy. Following the withdrawal of Soviet troops from Afghanistan, most of the country came under the rule of a Sunni Islamic theocracy, the Taliban. The Taliban offered shelter to the Islamic terrorist group Al - Qaeda, founded by the extremist Saudi Arabian exile Osama Bin Laden. Al - Qaeda launched a series of attacks on United States overseas interests in the 1990s and 2000. Following the September 11 attacks, however, the United States overthrew the Taliban government and captured or killed many Al - Qaeda leaders, including Bin Laden. In 2003, the United States led a controversial war in Iraq, because Saddam had never accounted for all his weapons of mass destruction. By May of that year, troops from the United States, Britain, Poland and other countries had defeated and occupied Iraq. Weapons of mass destruction, however, were never found. In both Afghanistan and Iraq, the United States and its allies established democratic governments. Following the Iraq war, however, an insurgency made up of a number of domestic and foreign factions cost many lives and made establishing a stable government very difficult.
In March 2011, a multi-state coalition led by NATO began a military intervention in Libya to implement United Nations Security Council Resolution 1973, which was adopted in response to threats made by the government of Muammar Gaddafi against the civilian population of Libya during the 2011 Libyan civil war.
In general, Western culture has become increasingly secular in Northern Europe, North America, Australia and New Zealand. Nevertheless, in a sign of the continuing status of the ancient Western institution of the Papacy in the early 21st century, the funeral of Pope John Paul II brought together the single largest gathering of heads of state in history outside the United Nations. It is likely to have been the largest single gathering of Christianity in history, with numbers estimated in excess of four million mourners gathering in Rome. He was followed by another non-Italian, Benedict XVI, whose near - unprecedented resignation from the papacy in 2013 ushered in the election of the Argentine Pope Francis, the first pope from the Americas, the new demographic heartland of Catholicism.
Personal computers emerged from the West as a new, society - changing phenomenon during this period. In the 1960s, experiments began on networks linking computers, and from these experiments grew the Internet and, later, the World Wide Web. The internet revolutionised global communications through the late 1990s and into the early 21st century and permitted the rise of new social media with profound consequences, linking the world as never before. In the West, the internet allowed free access to vast amounts of information, while outside the democratic West, as in China and in Middle Eastern nations, a range of censorship and monitoring measures were instigated, providing a new socio - political contrast between East and West.
|
who wrote the guitar riff to beat it | Beat It - wikipedia
"Beat It '' is a song written and performed by American singer Michael Jackson single from the singer 's sixth solo album, Thriller (1982). The song was produced by Quincy Jones together with Jackson. Following the successful chart performances of the Thriller singles "The Girl Is Mine '' and "Billie Jean '', "Beat It '' was released on February 14, 1983 as the album 's third single. The song is also notable for its famous video, which featured Jackson bringing two gangs together through the power of music and dance.
"Beat It '' received the 1984 Grammy Awards for Record of the Year and Best Male Rock Vocal Performance, as well as two American Music Awards. It was inducted into the Music Video Producers Hall of Fame. The single, along with its music video, helped propel Thriller into becoming the best - selling album of all time. The single was certified platinum in the United States in 1989. Rolling Stone placed "Beat It '' on the 344th spot of its list of "The 500 Greatest Songs of All Time ''. The song was also ranked number 81 on Rolling Stone 's "100 Greatest Guitar Songs of All Time ''.
In the decades since its release, "Beat It '' has been covered, parodied, and sampled by numerous artists including Pierce the Veil, Fall Out Boy, Pomplamoose, Justin Bieber, Alvin and the Chipmunks, Fergie, John 5, "Weird Al '' Yankovic and Eminem. The song was also featured in the National Highway Safety Commission 's anti-drunk driving campaign.
"Beat It '' was composed by Michael Jackson for his Thriller album. Producer Quincy Jones had wanted to include a rock and roll song in the vein of the Knack 's "My Sharona '', though Jackson reportedly had never previously shown an interest in the genre. Jackson later said of "Beat It '', "I wanted to write a song, the type of song that I would buy if I were to buy a rock song... That is how I approached it and I wanted the children to really enjoy it -- the school children as well as the college students. '' Jermaine Jackson has suggested the inspiration of "Beat It '' and its video came from the Jackson family experiencing gang activity in Gary, Indiana. "From our front window, we witnessed, about three bad rumbles between rival gangs. ''
Upon hearing the first recorded vocals, Jones stated that it was exactly what he was looking for. The song begins with seven distinct synthesizer notes played on the Synclavier digital synthesizer, with Tom Bahler credited for the Synclavier performance on the song. The intro is taken note for note from a demo LP released the year before, "The Incredible Sounds of Synclavier II '', first published in 1981 by Denny Jaeger Creative Services, Inc. and sold by New England Digital, makers of the Synclavier.
Eddie Van Halen, lead guitarist of hard rock band Van Halen, was asked to add a guitar solo. When initially contacted by Jones, Van Halen thought he was receiving a prank call. Having established that the call was genuine, Van Halen borrowed an amp from Allan Holdsworth and recorded his guitar solo free of any charge. "I did it as a favor '', the musician later said. "I was a complete fool, according to the rest of the band, our manager and everyone else. I was not used. I knew what I was doing -- I do n't do something unless I want to do it. '' Van Halen recorded his contribution following Jones and Jackson arriving at the guitarist 's house with a "skeleton version '' of the song. Fellow guitarist Steve Lukather recalled, "Initially, we rocked it out as Eddie had played a good solo -- but Quincy thought it was too tough. So I had to reduce the distorted guitar sound and that is what was released. '' The song was among the last four completed for Thriller; the others were "Human Nature '', "P.Y.T. (Pretty Young Thing) '' and "The Lady in My Life ''.
Right before Van Halen 's guitar solo begins, a noise is heard that sounds like somebody knocking at a door. It is reported that the knock was a person walking into Eddie 's recording studio. Another story has claimed that the sound was simply the musician knocking on his own guitar.
The engineers were shocked during the recording of Van Halen 's solo to discover that the sound of his guitar had caused the monitor speaker in the control room to catch fire, causing one of them to exclaim, "This must be REALLY good! ''
The lyrics of "Beat It '' have been described as a "sad commentary on human nature ''. The line "do n't be a macho man '' is said to express Jackson 's dislike of violence, whilst also referring to the childhood abuse he faced at the hands of his father Joseph. The song is played in the key of E ♭ minor at a moderately fast tempo of 138 -- 139 beats per minute. In the song, Jackson 's vocal range is B3 to D5.
Drums on the song were played by Toto co-founder Jeff Porcaro.
A remix of "2 Bad '', is featured on Blood on the Dance Floor: HIStory in the Mix containing a sample of "Beat It '' as well as a rap by John Forté and guitar solo by Wyclef Jean.
"Beat It '' was released on February 14, 1983, following the successful chart performances of "The Girl Is Mine '' and "Billie Jean ''. Frank DiLeo, the vice president of Epic Records, convinced Jackson to release "Beat It '' while "Billie Jean '' was heading towards No. 1. Dileo, who would later become the singer 's manager, predicted that both singles would remain in the Top 10 at the same time. "Billie Jean '' remained atop the Billboard Hot 100 for seven weeks, before being toppled by "Come On Eileen '', which stayed at No. 1 for a single week, before Jackson reclaimed the position with "Beat It ''.
"Billie Jean '' and "Beat It '' occupied Top 5 positions at the same time, a feat matched by very few artists. The single remained at the top of the Hot 100 for a total of three weeks. The song also charted at No. 1 on the US R&B singles chart and No. 14 on the Billboard Top Tracks chart in the US. Billboard ranked it at the No. 5 song for 1983. "Beat It '' also claimed the top spot in Spain and The Netherlands, reached No. 3 in the UK and the Top 20 in Austria, Norway, Italy, Sweden and Switzerland.
In a Rolling Stone review, Christopher Connelly describes "Beat It '' as the best song on Thriller, adding that it "ai n't no disco AOR track ''. He notes of the "nifty dance song '', "Jackson 's voice soars all over the melody, Eddie Van Halen checks in with a blistering guitar solo, you could build a convention center on the backbeat ''. AllMusic 's Stephen Thomas Erlewine states that the song is both "tough '' and "scared ''. Robert Christgau claimed that the song has Eddie Van Halen "wielding his might in the service of antimacho ''. Slant Magazine observed that the song was an "uncharacteristic dalliance with the rock idiom ''. The track also won praise from Jackson biographer J. Randy Taraborrelli, who stated that the song was "rambunctious ''.
"Beat It '' has been recognized with several awards. At the 1984 Grammy Awards, the song earned Jackson two of a record - eight awards: Record of the Year and Best Male Rock Vocal Performance. The track won the Billboard Music Award for favorite dance / disco 12 '' LP in 1983. The single was certified gold by the Recording Industry Association of America (RIAA), a few months after its release, for shipments of at least one million units. In 1989, the standard format single was re-certified platinum by the RIAA, based on the revised sales level of one million units for platinum singles. The total number of digital sales in the US, as of September 2010, stands at 1,649,000.
The music video for "Beat It '' helped establish Jackson as an international pop icon. The video was Jackson 's first treatment of black youth and the streets. Both "Beat It '' and "Thriller '' are notable for their "mass choreography '' of synchronized dancers, a Jackson trademark.
The video, which cost Jackson $150,000 to create after CBS refused to finance it, was filmed on Los Angeles ' Skid Row -- mainly on locations on East 5th Street -- around March 9, 1983. To add authenticity to the production but also to foster peace between them, Jackson had the idea to cast members of rival Los Angeles street gangs Crips and Bloods. In addition to around 80 genuine gang members, the video, which is noted for opening up many job opportunities for dancers in the US, also featured 18 professional dancers and four breakdancers. Besides Jackson, Michael Peters, and Vincent Paterson, the cast included Michael DeLorenzo, Stoney Jackson, Tracii Guns, Tony Fields, Peter Tramm, Rick Stone, and Cheryl Song.
The video was written and directed by Bob Giraldi and produced by Antony Payne and Mary M. Ensign through the production company GASP. The second video released for the Thriller album, it was choreographed by Michael Peters, who also performed, alongside Vincent Paterson, as one of the two lead dancers. Despite some sources claiming otherwise, Jackson was involved in creating some parts of the choreography. Jackson asked Giraldi, at the time already an established commercial director but who had never directed a music video, to come up with a concept for the "Beat It '' video because he really liked a commercial Giraldi had directed for WLS - TV in Chicago about an elderly blind married couple who, instead of fleeing a run - down neighborhood all the other white residents had abandoned, chose to stay and throw a block party for all the young children in the area. Contrary to popular belief, the concept of the video was not based on the Broadway musical West Side Story; in reality Giraldi drew inspiration from his upbringing in Paterson, New Jersey.
The video had its world premiere on MTV during prime time on March 31, 1983, though it should be noted that neither "Beat It '' nor "Billie Jean '' was, as is often claimed, the first music video by an African - American artist to be played on MTV. Soon after its premiere the video was also running on other video programs including BET 's Video Soul, SuperStation WTBS 's Night Tracks, and NBC 's Friday Night Videos. In fact, "Beat It '' was the first video shown on the latter 's first ever telecast on July 29, 1983.
The video opens with the news of a fight circulating at a diner. This scene repeats itself at a pool hall, where gang members arrive on foot, by forklift, and out of sewers, while the video 's titular song begins to play. The camera cuts to a scene of Jackson lying on a bed, revealing that he is the one singing as he contemplates the senseless violence. The singer notices the rival gangs and leaves. Jackson dons a red leather J. Parks brand jacket and dances his way towards the fight through the diner and pool hall. A knife fight is taking place between the two gang leaders in a warehouse. They dance battle for an interlude of music until Jackson arrives; the singer breaks up the fight and launches into a dance routine. The video ends with the gang members joining him in the dance, agreeing that violence is not the solution to their problems.
The video received recognition through numerous awards. The American Music Awards named the short film their Favorite Pop / Rock Video and their Favorite Soul Video. The Black Gold Awards honored Jackson with the Best Video Performance award. The Billboard Video Awards recognized the video with seven awards: Best Overall Video Clip, Best Performance by a Male Artist, Best Use of Video to Enhance a Song, Best Use of Video to Enhance an Artist 's Image, Best Choreography, Best Overall Video and Best Dance / Disco 12 ". The short film was ranked by Rolling Stone as the No. 1 video in both their critics ' and readers ' polls. The video was later inducted into the Music Video Producers Hall of Fame.
The music video of the song appears on the video albums: Video Greatest Hits -- HIStory, HIStory on Film, Volume II, Number Ones, on the bonus DVD of Thriller 25 and Michael Jackson 's Vision.
On July 14, 1984, Jackson performed "Beat It '' live with his brothers during The Jacksons ' Victory Tour. The brothers were joined on stage by Eddie Van Halen, who played the guitar in his solo spot. The song became a signature song of Jackson; the singer performed it on all of his world tours: Bad, Dangerous and HIStory. The October 1, 1992 Dangerous Tour performance of "Beat It '' was included on the DVD of the singer 's Michael Jackson: The Ultimate Collection box set. The DVD was later repackaged as Live in Bucharest: The Dangerous Tour. Jackson also performed the song on the Michael Jackson: 30th Anniversary Special, a concert celebrating the musician 's thirtieth year as a solo performer. The performance featured Slash as the song 's guest guitarist.
A highlight of Jackson 's solo concert tour performances of the song was that he would begin it on a cherry picker (which he would also later use with "Earth Song '' during the HIStory World Tour) after performing "Thriller ''. Another live version of the song is available on the DVD Live at Wembley July 16, 1988. The song would have also been performed as part of the This Is It concerts, which were cancelled due to Jackson 's death.
Michael Jackson 's "Beat It '' has been cited as one of the most successful, recognized, awarded and celebrated songs in the history of pop music; both the song and video had a large impact on pop culture. The song is said to be a "pioneer '' in black rock music, and is considered one of the cornerstones of the Thriller album. Eddie Van Halen has been praised for adding "the greatest guitar solo '', aiding "Beat It '' into becoming one of the biggest selling singles of all time.
Shortly after its release, "Beat It '' was included in the National Highway Safety Commission 's anti-drunk driving campaign, "Drinking and Driving Can Kill a Friendship ''. The song was also included on the accompanying album. Jackson collected an award from President Ronald Reagan at the White House, in recognition for his support of the campaign. Reagan stated that Jackson was "proof of what a person can accomplish through a lifestyle free of alcohol or drug abuse. People young and old respect that. And if Americans follow his example, then we can face up to the problem of drinking and driving, and we can, in Michael 's words, ' Beat It '. ''
Frequently included in lists of the greatest songs of all time, "Beat It '' was ranked as the world 's fourth favorite song in a 2005 poll conducted by Sony Ericsson. Over 700,000 people in 60 different countries cast their votes. Voters from the UK placed "Billie Jean '' at No. 1, ahead of "Thriller '', with a further five of the top ten being solo recordings by Jackson. In 2004, Rolling Stone magazine placed "Beat It '' in the 337th spot on its list of the 500 Greatest Songs of All Time. The song was featured in the films Back to the Future Part II, Zoolander and Undercover Brother. When re-released as part of the Visionary campaign in 2006, "Beat It '' charted at No. 15 in the UK. The song has been used in TV commercials for companies like Budweiser, eBay, Burger King, Delta Air Lines, Game Boy, Coldwell Banker and the NFL. In the final scene of the season 3 City Guys episode "Face the Music '', Jamal says to Slick Billy West, played by Sherman Hemsley, "Well Gone Michael Jackson and Beat It ''. The song also appeared in the 2008 music game Guitar Hero World Tour, as the last song in the vocal career. Notably, in this game, the vocalist performs the same dance routine performed by Jackson in the video and live performances when singing the final verse. The song is featured on the dancing game Michael Jackson: The Experience.
For Thriller 25, The Black Eyed Peas singer will.i.am remixed "Beat It ''. The song, titled "Beat It 2008 '', featured additional vocals by fellow Black Eyed Peas member Fergie. Upon its release in 2008, the song reached No. 26 in Switzerland, the Top 50 in Sweden and No. 65 in Austria. This was the second remixed version of "Beat It '' to get an official release, following Moby 's Sub Mix which was released on the "Jam '' and "Who Is It '' singles in 1992, as well as the "They Do n't Care About Us '' single in 1996 (and re-released as part of the Visionary campaign.)
"Beat It 2008 '' received generally unfavorable reviews from music critics. Rob Sheffield of Rolling Stone claimed that the song was a "contender for the year 's most pointless musical moment ''. AllMusic criticized Fergie for "parroting the lyrics of "Beat It '' back to a recorded Jackson ". Blender 's Kelefa Sanneh also noted that the Black Eyed Peas singer traded lines with Jackson. "Why? '', she queried. Todd Gilchrist was thankful that the remix retained Eddie Van Halen 's "incendiary guitar solo '', but added that the song "holds the dubious honor of making Jackson seem masculine for once, and only in the context of Fergie 's tough - by - way - of - Kids Incorporated interpretation of the tune ''. Tom Ewing of Pitchfork observed that Fergie 's "nervous reverence is a waste of time ''.
American rock band Fall Out Boy covered "Beat It ''. The studio version was digitally released on March 25, 2008 by Island Records as the only single from the band 's first live album, Live in Phoenix (2008). The song features a guitar solo by John Mayer, filling the part played by Eddie Van Halen on the original recording. In the United States, the song peaked at No. 19 on the Billboard Hot 100 and reached No. 21 on the now - defunct Billboard Pop 100 chart, also charting internationally. The band has since regularly incorporated it in their set list at their shows.
In early 2008 it was announced that Fall Out Boy were to cover "Beat It '' for their Live in Phoenix album. The band had previously performed the song at venues such as Coors Amphitheatre and festivals such as the Carling Weekend in Leeds. Bassist Pete Wentz, who has claimed to have an obsession with Jackson, stated that prior to recording the song, he would only watch Moonwalker. It was also announced that John Mayer was to add the guitar solo previously played by Eddie Van Halen.
The band 's lead singer / guitarist Patrick Stump stated that the band had not planned to cover the song. "Basically, I just started playing the riff in sound - check one day, and then we all started playing it, and then we started playing it live, and then we figured we 'd record it and put it out with our live DVD. '' Bassist Pete Wentz added that the band had not originally intended for the song to be released as a single either. "' Beat It ' seemed like a song that would be cool and that we could do our own take on '', he said. Having spent time deciding on a guitarist for the song, Wentz eventually called John Mayer to add the guitar solo. "We were trying to think about who is a contemporary guitar guy who 's going to go down as a legend '', Wentz later noted.
Upon its digital release as a single in April 2008, Fall Out Boy 's cover of "Beat It '' became a mainstay on iTunes ' Top 10 chart. The song peaked at No. 8 in Canada, becoming another top 10 hit in the region. It also charted at No. 13 in Australia, No. 14 in New Zealand, No. 75 in Austria and No. 98 in the Netherlands.
The music video for Fall Out Boy 's "Beat It '' was directed by Shane Drake and was made in homage to Jackson. "I think when you 're doing a Michael Jackson cover, there 's this expectation that you 're going to do one of his videos verbatim, '' Stump said. "What we decided to do was kind of inspired by Michael Jackson and the mythology of him. There are specific images that are reference points for us, but at any given point, it 's not any of his videos. It 's kind of all of his videos, all at once, but on a Fall Out Boy budget, so it 's not quite as fancy. '' The costumes for the video were similar to the originals. "My costume is this take on one of the guys from Michael Jackson 's original ' Beat It ' video, like, the guy who plays the rival dancer, '' Wentz said during the filming of the video. The music video featured numerous cameos, including a karate class / dance session being taught by Tony Hale, Donald Faison and Joel David Moore dressed up like Michael Jackson. The short film later received an MTV Video Music Award nomination for Best Rock Video.
|
jimmy buffett boats beaches bars & ballads song list | Boats, Beaches, Bars & Ballads - wikipedia
Boats, Beaches, Bars & Ballads is a four - compact disc (or cassette) compilation box set of Jimmy Buffett and the Coral Reefer Band 's greatest hits, rarities, and previously unreleased songs. Released in 1992, the collection reached quadruple platinum.
Boats is a CD of sailor and sailing songs, true to Buffett 's heart. Previously unreleased songs include "Love and Luck. '' "Take it Back, '' written for the US America 's Cup yachting team, was previously available only as a single B - side. "Love and Luck '' would later appear on the live album, Tuesdays, Thursdays, and Saturdays. The album is 66 minutes, 22 seconds long.
Beaches is a CD of Buffett 's days in Key West, the Gulf Coast, and the Caribbean. Of the 18 beach songs, notable songs are "Margaritaville, '' also known as the Buffett national anthem, and "Cheeseburger In Paradise ''. Previously unreleased songs include "Money Back Guarantee '' and "Christmas in the Caribbean. '' "Stars on the Water '' is a cover of the Rodney Crowell song. The album is 63 minutes, 16 seconds long.
Bars is a CD of bars, songs written in bars, and plain party songs. Beginning with Fins, the album also includes "Why Do n't We Get Drunk (and Screw) '' and "Pencil Thin Mustache. '' Previously unreleased songs include "Elvis Imitators, '' an attempt at an Elvis Presley song, written by Steve Goodman and released by Buffett under the band name Freddie and the Fishsticks, and "Domino College. '' "Fins '' and "Cuban Crime Of Passion '' were cowritten with Tom Corcoran.
Ballads is a CD of love songs and ballads. The most notable songs are "Come Monday '' and "He Went to Paris ''. This collection includes a slightly different mix of "I Heard I Was in Town '' than what appears on Somewhere Over China. Previously unreleased songs include "Middle of the Night '' and "Everlasting Moon. ''
Not counting eight unreleased songs, the box set focuses primarily on his 1970s output. Buffett recorded and released 35 of the tracks in the 1970s and 26 of the tracks in the 1980s. The box set takes songs from every studio album Buffett released up to that point, barring only "Down to Earth '' (1970), "High Cumberland Jubilee '' (1971), and "Rancho Deluxe '' (a largely instrumental soundtrack Buffett released in 1975).
"Take It Back '', "Love and Luck '', "Money Back Guarantee '', "Christmas in the Caribbean '', "Elvis Imitators '', "Domino College '', "Everlasting Moon '', and "Middle of the Night '' were previously unreleased on a Buffett album.
Seven songs appear from the 1973 album, "A White Sport Coat and a Pink Crustacean ''
Four songs appear from Buffett 's 1974 album, "Living and Dying in 3 / 4 Time ''
Four songs appear from Buffett 's 1974 album, "A * 1 * A ''
Four songs appear from Buffett 's 1976 album, "Havana Daydreamin ''
Five songs appear from the 1977 album "Changes in Latitudes, Changes in Attitudes ''
Six songs appear from the 1978 album "Son of a Son of a Sailor ''
Six songs appear from the 1979 album, "Volcano ''
Six songs appear from the 1981 album "Coconut Telegraph ''
Four songs appear from the 1982 album, "Somewhere Over China ''
Six songs appear from the 1983 album, "One Particular Harbour ''
Three songs appear from the 1984 album, "Riddles in the Sand ''
Four songs appear from the 1985 album, "Last Mango In Paris ''
Two songs appear from the 1986 album, "Floridays ''
One song appears from the 1988 album, "Hot Water ''
Two songs appear from the 1989 album, "Off to See the Lizard ''
|
when was the day of the dead originally celebrated | Day of the Dead - wikipedia
The Day of the Dead (Spanish: Día de Muertos) is a Mexican holiday celebrated throughout Mexico, in particular the Central and South regions, and by people of Mexican ancestry living in other places, especially the United States. It is acknowledged internationally in many other cultures. The multi-day holiday focuses on gatherings of family and friends to pray for and remember friends and family members who have died, and help support their spiritual journey. In 2008, the tradition was inscribed in the Representative List of the Intangible Cultural Heritage of Humanity by UNESCO.
The holiday is sometimes called Día de los Muertos in Anglophone countries, a back - translation of its original name, Día de Muertos. It is particularly celebrated in Mexico where the day is a public holiday. Prior to Spanish colonization in the 16th century, the celebration took place at the beginning of summer. Gradually, it was associated with October 31, November 1, and November 2 to coincide with the Western Christianity triduum of Allhallowtide: All Saints ' Eve, All Saints ' Day, and All Souls ' Day. Traditions connected with the holiday include building private altars called ofrendas, honoring the deceased using calaveras, aztec marigolds, and the favorite foods and beverages of the departed, and visiting graves with these as gifts. Visitors also leave possessions of the deceased at the graves.
Scholars trace the origins of the modern Mexican holiday to indigenous observances dating back hundreds of years and to an Aztec festival dedicated to the goddess Mictecacihuatl. The holiday has spread throughout the world, being absorbed into other deep traditions in honor of the dead. It has become a national symbol and as such is taught (for educational purposes) in the nation 's schools. Many families celebrate a traditional "All Saints ' Day '' associated with the Catholic Church.
Originally, the Day of the Dead as such was not celebrated in northern Mexico, where it was unknown until the 20th century because its indigenous people had different traditions. The people and the church rejected it as a day related to syncretizing pagan elements with Catholic Christianity. They held the traditional ' All Saints ' Day ' in the same way as other Christians in the world. There was limited Mesoamerican influence in this region, and relatively few indigenous inhabitants from the regions of Southern Mexico, where the holiday was celebrated. In the early 21st century in northern Mexico, Día de Muertos is observed because the Mexican government made it a national holiday based on educational policies from the 1960s; it has introduced this holiday as a unifying national tradition based on indigenous traditions.
The Mexican Day of the Dead celebration is similar to other societies ' observances of a time to honor the dead. The Spanish tradition, for instance, includes festivals and parades, as well as gatherings of families at cemeteries to pray for their deceased loved ones at the end of the day.
The Day of the Dead celebrations in Mexico developed from ancient traditions among its pre-Columbian cultures. Rituals celebrating the deaths of ancestors had been observed by these civilizations perhaps for as long as 2,500 -- 3,000 years. The festival that developed into the modern Day of the Dead fell in the ninth month of the Aztec calendar, about the beginning of August, and was celebrated for an entire month. The festivities were dedicated to the goddess known as the "Lady of the Dead '', corresponding to the modern La Calavera Catrina.
By the late 20th century in most regions of Mexico, practices had developed to honor dead children and infants on November 1, and to honor deceased adults on November 2. November 1 is generally referred to as Día de los Inocentes ("Day of the Innocents '') but also as Día de los Angelitos ("Day of the Little Angels ''); November 2 is referred to as Día de los Muertos or Día de los Difuntos ("Day of the Dead '').
Frances Ann Day summarizes the three - day celebration, the Day of the Dead:
People go to cemeteries to be with the souls of the departed and build private altars containing the favorite foods and beverages, as well as photos and memorabilia, of the departed. The intent is to encourage visits by the souls, so the souls will hear the prayers and the comments of the living directed to them. Celebrations can take a humorous tone, as celebrants remember funny events and anecdotes about the departed.
Plans for the day are made throughout the year, including gathering the goods to be offered to the dead. During the three - day period families usually clean and decorate graves; most visit the cemeteries where their loved ones are buried and decorate their graves with ofrendas (altars), which often include orange Mexican marigolds (Tagetes erecta) called cempasúchil (originally named cempoaxochitl, Nāhuatl for "twenty flowers ''). In modern Mexico the marigold is sometimes called Flor de Muerto (Flower of Dead). These flowers are thought to attract souls of the dead to the offerings.
Toys are brought for dead children (los angelitos, or "the little angels ''), and bottles of tequila, mezcal or pulque or jars of atole for adults. Families will also offer trinkets or the deceased 's favorite candies on the grave. Some families have ofrendas in homes, usually with foods such as candied pumpkin, pan de muerto ("bread of dead ''), and sugar skulls; and beverages such as atole. The ofrendas are left out in the homes as a welcoming gesture for the deceased. Some people believe the spirits of the dead eat the "spiritual essence '' of the ofrendas food, so though the celebrators eat the food after the festivities, they believe it lacks nutritional value. Pillows and blankets are left out so the deceased can rest after their long journey. In some parts of Mexico, such as the towns of Mixquic, Pátzcuaro and Janitzio, people spend all night beside the graves of their relatives. In many places, people have picnics at the grave site, as well.
Some families build altars or small shrines in their homes; these sometimes feature a Christian cross, statues or pictures of the Blessed Virgin Mary, pictures of deceased relatives and other people, scores of candles, and an ofrenda. Traditionally, families spend some time around the altar, praying and telling anecdotes about the deceased. In some locations, celebrants wear shells on their clothing, so when they dance, the noise will wake up the dead; some will also dress up as the deceased.
Public schools at all levels build altars with ofrendas, usually omitting the religious symbols. Government offices usually have at least a small altar, as this holiday is seen as important to the Mexican heritage.
Those with a distinctive talent for writing sometimes create short poems, called calaveras (skulls), mocking epitaphs of friends, describing interesting habits and attitudes or funny anecdotes. This custom originated in the 18th or 19th century after a newspaper published a poem narrating a dream of a cemetery in the future, "and all of us were dead '', proceeding to read the tombstones. Newspapers dedicate calaveras to public figures, with cartoons of skeletons in the style of the famous calaveras of José Guadalupe Posada, a Mexican illustrator. Theatrical presentations of Don Juan Tenorio by José Zorrilla (1817 -- 1893) are also traditional on this day.
José Guadalupe Posada created a famous print of a figure he called La Calavera Catrina ("The Elegant Skull '') as a parody of a Mexican upper - class female. Posada 's striking image of a costumed female with a skeleton face has become associated with the Day of the Dead, and Catrina figures often are a prominent part of modern Day of the Dead observances.
A common symbol of the holiday is the skull (in Spanish calavera), which celebrants represent in masks, called calacas (colloquial term for skeleton), and foods such as sugar or chocolate skulls, which are inscribed with the name of the recipient on the forehead. Sugar skulls can be given as gifts to both the living and the dead. Other holiday foods include pan de muerto, a sweet egg bread made in various shapes from plain rounds to skulls and rabbits, often decorated with white frosting to look like twisted bones.
The traditions and activities that take place in celebration of the Day of the Dead are not universal, often varying from town to town. For example, in the town of Pátzcuaro on the Lago de Pátzcuaro in Michoacán, the tradition is very different if the deceased is a child rather than an adult. On November 1 of the year after a child 's death, the godparents set a table in the parents ' home with sweets, fruits, pan de muerto, a cross, a rosary (used to ask the Virgin Mary to pray for them) and candles. This is meant to celebrate the child 's life, in respect and appreciation for the parents. There is also dancing with colorful costumes, often with skull - shaped masks and devil masks in the plaza or garden of the town. At midnight on November 2, the people light candles and ride winged boats called mariposas (butterflies) to Janitzio, an island in the middle of the lake where there is a cemetery, to honor and celebrate the lives of the dead there.
In contrast, the town of Ocotepec, north of Cuernavaca in the State of Morelos, opens its doors to visitors in exchange for veladoras (small wax candles) to show respect for the recently deceased. In return the visitors receive tamales and atole. This is done only by the owners of the house where someone in the household has died in the previous year. Many people of the surrounding areas arrive early to eat for free and enjoy the elaborate altars set up to receive the visitors.
In some parts of the country (especially the cities, where in recent years other customs have been displaced) children in costumes roam the streets, knocking on people 's doors for a calaverita, a small gift of candies or money; they also ask passersby for it. This relatively recent custom is similar to that of Halloween 's trick - or - treating in the United States.
Some people believe possessing Day of the Dead items can bring good luck. Many people get tattoos or have dolls of the dead to carry with them. They also clean their houses and prepare the favorite dishes of their deceased loved ones to place upon their altar or ofrenda.
During Day of the Dead festivities, food is both eaten by living people and given to the spirits of their departed ancestors as ofrendas ("offerings ''). Tamales are one of the most common dishes prepared for this day for both purposes.
Pan de muerto and calaveras are associated specifically with Day of the Dead. Pan de muerto is a type of sweet roll shaped like a bun, topped with sugar, and often decorated with bone - shaped phalanges pieces. Calaveras, or sugar skulls, display colorful designs to represent the vitality and individual personality of the departed.
In addition to food, drink is also important to the tradition of Day of the Dead. Historically, the main alcoholic drink was pulque while today families will commonly drink the favorite beverage of their deceased ancestors. Other drinks associated with the holiday are atole and champurrado, warm, thick, non-alcoholic masa drinks.
Jamaican iced tea is a popular herbal tea made of the flowers and leaves of the Jamaican hibiscus plant (Hibiscus sabdariffa), known as flor de Jamaica in Mexico. It is served cold and quite sweet with a lot of ice. The ruby - red beverage is called hibiscus tea in English - speaking countries and called agua de Jamaica (water of Jamaica) in Spanish.
In Belize, Day of the Dead is practiced by people of the Yucatec Maya ethnicity. The celebration is known as Hanal Pixan which means "food for the souls '' in their language. Altars are constructed and decorated with food, drinks, candies, and candles put on them.
Día de las Ñatitas ("Day of the Skulls '') is a festival celebrated in La Paz, Bolivia, on May 5. In pre-Columbian times indigenous Andeans had a tradition of sharing a day with the bones of their ancestors on the third year after burial. Today families keep only the skulls for such rituals. Traditionally, the skulls of family members are kept at home to watch over the family and protect them during the year. On November 9, the family crowns the skulls with fresh flowers, sometimes also dressing them in various garments, and making offerings of cigarettes, coca leaves, alcohol, and various other items in thanks for the year 's protection. The skulls are also sometimes taken to the central cemetery in La Paz for a special Mass and blessing.
The Brazilian public holiday of Finados (Day of the Dead) is celebrated on November 2. Similar to other Day of the Dead celebrations, people go to cemeteries and churches with flowers and candles and offer prayers. The celebration is intended as a positive honoring of the dead. Memorializing the dead draws from indigenous, African and European Catholic origins.
Guatemalan celebrations of the Day of the Dead, on November 1, are highlighted by the construction and flying of giant kites in addition to the traditional visits to grave sites of ancestors. A big event also is the consumption of fiambre, which is made only for this day during the year.
In Ecuador the Day of the Dead is observed to some extent by all parts of society, though it is especially important to the indigenous Kichwa peoples, who make up an estimated quarter of the population. Indigena families gather together in the community cemetery with offerings of food for a day - long remembrance of their ancestors and lost loved ones. Ceremonial foods include colada morada, a spiced fruit porridge that derives its deep purple color from the Andean blackberry and purple maize. This is typically consumed with guagua de pan, a bread shaped like a swaddled infant, though variations include many pigs -- the latter being traditional to the city of Loja. The bread, which is wheat flour - based today, but was made with masa in the pre-Columbian era, can be made savory with cheese inside or sweet with a filling of guava paste. These traditions have permeated mainstream society, as well, where food establishments add both colada morada and guagua de pan to their menus for the season. Many non-indigenous Ecuadorians visit the graves of the deceased, cleaning and bringing flowers, or preparing the traditional foods, too.
Usually people visit the cemetery and bring flowers to decorate the graves of dead relatives. Sometimes people play music at the cemetery.
In many American communities with Mexican residents, Day of the Dead celebrations are very similar to those held in Mexico. In some of these communities, in states such as Texas, New Mexico, and Arizona, the celebrations tend to be mostly traditional. The All Souls Procession has been an annual Tucson, Arizona event since 1990. The event combines elements of traditional Day of the Dead celebrations with those of pagan harvest festivals. People wearing masks carry signs honoring the dead and an urn in which people can place slips of paper with prayers on them to be burned. Likewise, Old Town San Diego, California annually hosts a traditional two - day celebration culminating in a candlelight procession to the historic El Campo Santo Cemetery.
In Missoula, Montana, celebrants wearing skeleton costumes and walking on stilts, riding novelty bicycles, and traveling on skis parade through town.
The festival also is held annually at historic Forest Hills Cemetery in Boston 's Jamaica Plain neighborhood. Sponsored by Forest Hills Educational Trust and the folkloric performance group La Piñata, the Day of the Dead festivities celebrate the cycle of life and death. People bring offerings of flowers, photos, mementos, and food for their departed loved ones, which they place at an elaborately and colorfully decorated altar. A program of traditional music and dance also accompanies the community event.
The Smithsonian Institution, in collaboration with the University of Texas at El Paso and Second Life, has created a Smithsonian Latino Virtual Museum and an accompanying multimedia e-book: Día de los Muertos: Day of the Dead. The project 's website contains some of the text and images which explain the origins of some of the customary core practices related to the Day of the Dead, such as the background beliefs and the ofrenda (the special altar commemorating one 's deceased loved one). The Made For iTunes multimedia e-book version provides additional content, such as further details; additional photo galleries; pop - up profiles of influential Latino artists and cultural figures over the decades; and video clips of interviews with artists who make Día de Muertos - themed artwork, explanations and performances of Aztec and other traditional dances, an animation short that explains the customs to children, and virtual poetry readings in English and Spanish.
Santa Ana, California is said to hold the "largest event in Southern California '' honoring Día de Muertos, called the annual Noche de Altares, which began in 2002. The celebration of the Day of the Dead in Santa Ana has grown to two large events with the creation of an event held at the Santa Ana Regional Transportation Center for the first time on November 1, 2015.
In other communities, interactions between Mexican traditions and American culture are resulting in celebrations in which Mexican traditions are being extended to make artistic or sometimes political statements. For example, in Los Angeles, California, the Self Help Graphics & Art Mexican - American cultural center presents an annual Day of the Dead celebration that includes both traditional and political elements, such as altars to honor the victims of the Iraq War, highlighting the high casualty rate among Latino soldiers. An updated, intercultural version of the Day of the Dead is also evolving at Hollywood Forever Cemetery. There, in a mixture of Native Californian art, Mexican traditions and Hollywood hip, conventional altars are set up side - by - side with altars to Jayne Mansfield and Johnny Ramone. Colorful native dancers and music intermix with performance artists, while sly pranksters play on traditional themes.
Similar traditional and intercultural updating of Mexican celebrations are held in San Francisco. For example, the Galería de la Raza, SomArts Cultural Center, Mission Cultural Center, de Young Museum and altars at Garfield Square by the Marigold Project. Oakland is home to Corazon Del Pueblo in the Fruitvale district. Corazon Del Pueblo has a shop offering handcrafted Mexican gifts and a museum devoted to Day of the Dead artifacts. Also, the Fruitvale district in Oakland serves as the hub of the Dia de Muertos annual festival which occurs the last weekend of October. Here, a mix of several Mexican traditions come together with traditional Aztec dancers, regional Mexican music, and other Mexican artisans to celebrate the day.
The Catholic traditions were initiated with the arrival of the Portuguese in 1510. The Almacho Dis (Souls ' Day) is commemorated on November 2, following All Saints ' Day (November 1). On this day, the forgotten souls are remembered and spiritual charity is done through prayer for deceased family members and ancestral souls. Charity is also practiced by offering food for the poor, called the beggars ' lunch (bikareanchem jevonn), where the poor of the village are invited for lunch and treated with dignity and respect. The invited downtrodden are served as guests, and the meal usually commences with prayers for the deceased souls. Following Mass on All Souls ' Day, families visit the graves of their deceased to pray and to decorate the graves.
While ancestor veneration is an ancient part of Filipino culture, the modern observance is believed to have been imported from Mexico when the islands (as part of the Spanish East Indies) were governed from Mexico City as part of the Viceroyalty of New Spain. During the holiday (observed every November 1), Filipinos customarily visit family tombs and other graves, which they repair and clean. Entire families spend a night or two at their loved ones ' tombs, passing time with card games, eating, drinking, singing and dancing. Prayers such as the rosary are often said for the deceased, who are normally offered candles, flowers, food, and even liquor. Some Catholic Chinese Filipino families additionally offer joss sticks to the dead, and observe customs otherwise associated with the Hungry Ghost Festival.
In Christian Europe, Roman Catholic customs absorbed pagan traditions. All Saints Day and All Souls Day became the autumnal celebration of the dead. Over many centuries, rites which had occurred in cultivated fields, which the souls of the dead were thought to leave after the harvest, moved to cemeteries.
In many countries with a Roman Catholic heritage, All Saints Day and All Souls Day have evolved traditions in which people take the day off work, go to cemeteries with candles and flowers, and give presents to children, usually sweets and toys. In Portugal and Spain ofrendas ("offerings '') are made on this day. In Spain, the play Don Juan Tenorio is traditionally performed. In Belgium, France, Ireland, Italy, the Netherlands, Portugal, and Spain, people bring flowers (typically chrysanthemums in France and northern Europe) to the graves of dead relatives and say prayers over the dead.
In Eastern Europe the holiday is called Dziady and also has pagan origins. In Belarus the ancient tradition involved cooking ritual dishes for supper. Part of each dish was put on a separate plate and left overnight for the dead ancestors. Public meetings are often organized on this day to commemorate the victims of Soviet political repressions.
As part of a promotion by the Mexican embassy in Prague, Czech Republic, since the late 20th century, some local citizens join in a Mexican - style Day of the Dead. A theatre group produces events featuring masks, candles, and sugar skulls.
Mexican - style Day of the Dead celebrations occur in major cities in Australia, Fiji, and Indonesia. Additionally, prominent celebrations are held in Wellington, New Zealand, complete with altars celebrating the deceased with flowers and gifts.
Many other cultures around the world have similar traditions of a day set aside to visit the graves of deceased family members. Often included in these traditions are celebrations, food, and beverages, in addition to prayers and remembrances of the departed.
In some African cultures, visits to ancestors ' graves, the leaving of food and gifts, and the asking of protection from them serve as important parts of traditional rituals. One such ritual is held just before the start of the hunting season.
The Qingming Festival (simplified Chinese: 清明 节; traditional Chinese: 清明 節; pinyin: qīng míng jié) is a traditional Chinese festival usually occurring around April 5 of the Gregorian calendar. Along with Double Ninth Festival on the 9th day of the 9th month in the Chinese calendar, it is a time to tend to the graves of departed ones. In addition, in the Chinese tradition, the seventh month in the Chinese calendar is called the Ghost Month (鬼 月), in which ghosts and spirits come out from the underworld to visit earth.
The Bon Festival (O - bon (お盆), or only Bon (盆)), is a Japanese Buddhist holiday held in August to honor the spirits of departed ancestors. It is derived in part from the Chinese observance of the Ghost Month, and was affixed to the solar calendar along with other traditional Japanese holidays.
In Korea, Chuseok (추석, 秋 夕; also called Hangawi) is a major traditional holiday. People go where the spirits of their ancestors are enshrined, and perform ancestral worship rituals early in the morning. They visit the tombs of immediate ancestors to trim plants, clean the area around the tomb, and offer food, drink, and crops to their ancestors.
During the Nepalese holiday of Gai Jatra ("Cow Pilgrimage ''), every family who has lost a member during the previous year creates a tai out of bamboo branches, cloth, and paper decorations, in which are placed portraits of the deceased. As a cow traditionally leads the spirits of the dead into the afterlife, an actual or symbolic cow is used depending on local custom. The festival is also a time to dress up in a costume reminiscent of the western Halloween, with popular subjects including political commentary and satire.
Disneyland Resort 's annual "Halloween Time '' celebrates the art and traditions of Día de los Muertos at Frontierland.
The 1998 Grim Fandango video game features Mexican Day of the Dead iconography with Aztec and Egyptian influences.
The 2014 The Book of Life film follows a bullfighter who, on the Day of the Dead, embarks on an afterlife adventure.
In the 2015 James Bond film, Spectre, the opening sequence featured a Day of the Dead parade in Mexico City. At the time, no such parade took place in Mexico City; one year later, due to the interest in the film and the government desire to promote the pre-Hispanic Mexican culture, the federal and local authorities decided to organize an actual "Día de Muertos '' parade through Paseo de la Reforma and Centro Historico on October 29, 2016, which was attended by 250,000 people.
The 2016 Elena of Avalor season one episode "A Night to Remember '' focused on Día de los Muertos.
The 2017 Pixar film Coco features the Día de los Muertos holiday as a major element in its plot.
|
when did the telephone come into common use | Timeline of the telephone - wikipedia
This timeline of the telephone covers landline, radio, and cellular telephony technologies and provides many important dates in the history of the telephone.
|
who is attorney and solicitor general of india | Solicitor General of India - Wikipedia
The Solicitor General of India is subordinate to the Attorney General for India, who is the Indian government 's chief legal advisor and its primary lawyer in the Supreme Court of India. The Solicitor General of India is appointed for a period of three years. The Solicitor General of India is the secondary law officer of the country, assists the Attorney General, and is himself assisted by several Additional Solicitors General of India. Like the Attorney General for India, the Solicitor General and the Additional Solicitors General advise the Government and appear on behalf of the Union of India in terms of the Law Officers (Terms and Conditions) Rules, 1972. However, unlike the post of Attorney General for India, which is a Constitutional post under Article 76 of the Constitution of India, the posts of the Solicitor General and the Additional Solicitors General are merely statutory. Whereas the Attorney General for India is appointed by the President under Article 76 (1) of the Constitution, the Solicitor General of India, along with four Additional Solicitors General, is appointed by the Appointments Committee of the Cabinet to assist the Attorney General. The proposal for the appointment of the Solicitor General or an Additional Solicitor General is generally moved at the level of Joint Secretary / Law Secretary in the Department of Legal Affairs and, after obtaining the approval of the Minister of Law & Justice, is placed before the Appointments Committee of the Cabinet.
Duties of Solicitor General are laid out in Law Officers (Conditions of Service) Rules, 1987:
As law officers represent government of India, there are certain restrictions which are put on their private practice. A law officer is not allowed to:
Fee and allowances payable to the law officers (including Attorney General of India, Solicitor General of India and the Additional Solicitors General) of the Government of India are as under:
In addition to the above fee payable for cases, a retainer fee is paid to the Attorney General of India, Solicitor General of India and the Additional Solicitors General at the rate of Rs. 50,000, Rs. 40,000, and Rs. 30,000 per month, respectively. Moreover, the Attorney General of India is also paid a sumptuary allowance of rupees four thousand per month, except during the period of his leave.
The current Solicitor General of India and Additional Solicitors General as of 9 April 2015 are as follows:
The former Solicitors General for India were as follows:
The former Additional Solicitor General of India are as follows:
|
o x a l i c a c i d | Oxalic acid - Wikipedia
Oxalic acid is an organic compound with the formula C₂H₂O₄. It is a colorless crystalline solid that forms a colorless solution in water. Its condensed formula is HOOCCOOH, reflecting its classification as the simplest dicarboxylic acid. Its acid strength is much greater than that of acetic acid. Oxalic acid is a reducing agent and its conjugate base, known as oxalate (C₂O₄²⁻), is a chelating agent for metal cations. Typically, oxalic acid occurs as the dihydrate with the formula C₂H₂O₄ · 2H₂O. Excessive ingestion of oxalic acid or prolonged skin contact can be dangerous.
The preparation of salts of oxalic acid from plants had been known, at the latest, since 1745, when the Dutch botanist and physician Herman Boerhaave isolated a salt from sorrel. By 1773, François Pierre Savary of Fribourg, Switzerland had isolated oxalic acid from its salt in sorrel.
In 1776, Swedish chemists Carl Wilhelm Scheele and Torbern Olof Bergman produced oxalic acid by reacting sugar with concentrated nitric acid; Scheele called the acid that resulted socker - syra or såcker - syra (sugar acid). By 1784, Scheele had shown that "sugar acid '' and oxalic acid from natural sources were identical.
In 1824, the German chemist Friedrich Wöhler obtained oxalic acid by reacting cyanogen with ammonia in aqueous solution. This experiment may represent the first synthesis of a natural product.
Oxalic acid is mainly manufactured by the oxidation of carbohydrates or glucose using nitric acid or air in the presence of vanadium pentoxide. A variety of precursors can be used including glycolic acid and ethylene glycol. A newer method entails oxidative carbonylation of alcohols to give the diesters of oxalic acid:
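As a hedged illustration of this route (methanol is chosen here purely as an example alcohol, not necessarily the industrial feedstock), one balanced form of the oxidative carbonylation step is:

$$4\,\mathrm{CH_3OH} + 4\,\mathrm{CO} + \mathrm{O_2} \longrightarrow 2\,(\mathrm{CO_2CH_3})_2 + 2\,\mathrm{H_2O}$$

In this example the diester formed is dimethyl oxalate, which is then hydrolyzed back to oxalic acid and the alcohol, as described below.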
These diesters are subsequently hydrolyzed to oxalic acid. Approximately 120,000 tonnes are produced annually.
Historically oxalic acid was obtained exclusively by using caustics, such as sodium or potassium hydroxide, on sawdust.
Although it can be readily purchased, oxalic acid can be prepared in the laboratory by oxidizing sucrose using nitric acid in the presence of a small amount of vanadium pentoxide as a catalyst.
The hydrated solid can be dehydrated with heat or by azeotropic distillation.
Developed in the Netherlands, an electrocatalysis by a copper complex helps reduce carbon dioxide to oxalic acid; this conversion uses carbon dioxide as a feedstock to generate oxalic acid.
Anhydrous oxalic acid exists as two polymorphs; in one the hydrogen - bonding results in a chain - like structure whereas the hydrogen bonding pattern in the other form defines a sheet - like structure. Because the anhydrous material is both acidic and hydrophilic (water seeking), it is used in esterifications.
Oxalic acid is a relatively strong acid, despite being a carboxylic acid:
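As an illustrative sketch of that strength (the pKa figures given are approximate, commonly cited values rather than exact ones), the two successive ionizations can be written as:

$$\mathrm{HO_2CCO_2H} \rightleftharpoons \mathrm{HO_2CCO_2^{-}} + \mathrm{H^{+}} \qquad \mathrm{p}K_{a1} \approx 1.3$$

$$\mathrm{HO_2CCO_2^{-}} \rightleftharpoons \mathrm{{}^{-}O_2CCO_2^{-}} + \mathrm{H^{+}} \qquad \mathrm{p}K_{a2} \approx 4.3$$

For comparison, acetic acid has a pKa of about 4.76, so even the second ionization of oxalic acid is stronger, and the first is stronger by roughly three orders of magnitude.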
Oxalic acid undergoes many of the reactions characteristic of other carboxylic acids. It forms esters such as dimethyl oxalate (m.p. 52.5 to 53.5 ° C (126.5 to 128.3 ° F)). It forms an acid chloride called oxalyl chloride.
Oxalate, the conjugate base of oxalic acid, is an excellent ligand for metal ions, e.g. the drug oxaliplatin.
Oxalic acid and oxalates can be oxidized by permanganate in an autocatalytic reaction.
At least two pathways exist for the enzyme - mediated formation of oxalate. In one pathway, oxaloacetate, a component of the Krebs citric acid cycle, is hydrolyzed to oxalate and acetic acid by the enzyme oxaloacetase:
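A hedged sketch of that hydrolysis, with the carboxylic acid groups written in their ionized (physiological) forms, is:

$$\mathrm{oxaloacetate^{2-}} + \mathrm{H_2O} \longrightarrow \mathrm{oxalate^{2-}} + \mathrm{acetate^{-}} + \mathrm{H^{+}}$$

Written this way the equation balances in both atoms and charge; the net effect is cleavage of the bond between the keto carbon and the adjacent methylene carbon of oxaloacetate.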
It also arises from the dehydrogenation of glycolic acid, which is produced by the metabolism of ethylene glycol.
Calcium oxalate is the most common component of kidney stones. Early investigators isolated oxalic acid from wood - sorrel (Oxalis). Members of the spinach family and the brassicas (cabbage, broccoli, brussels sprouts) are high in oxalates, as are sorrel and umbellifers like parsley. Rhubarb leaves contain about 0.5 % oxalic acid, and jack - in - the - pulpit (Arisaema triphyllum) contains calcium oxalate crystals. Similarly, the Virginia creeper, a common decorative vine, produces oxalic acid in its berries as well as oxalate crystals in the sap, in the form of raphides. Bacteria produce oxalates from oxidation of carbohydrates.
Plants of the Fenestraria genus produce optical fibers made from crystalline oxalic acid to transmit light to subterranean photosynthetic sites.
Oxidized bitumen or bitumen exposed to gamma rays also contains oxalic acid among its degradation products. Oxalic acid may increase the leaching of radionuclides conditioned in bitumen for radioactive waste disposal.
The conjugate base of oxalic acid is the hydrogenoxalate anion, and its conjugate base (oxalate) is a competitive inhibitor of the lactate dehydrogenase (LDH) enzyme. LDH catalyses the conversion of pyruvate to lactic acid (the end product of the anaerobic fermentation process), oxidising the coenzyme NADH to NAD⁺ and H⁺ concurrently. Restoring NAD⁺ levels is essential to the continuation of anaerobic energy metabolism through glycolysis. As cancer cells preferentially use anaerobic metabolism (see Warburg effect), inhibition of LDH has been shown to inhibit tumor formation and growth, and is thus an interesting potential course of cancer treatment.
About 25 % of produced oxalic acid is used as a mordant in dyeing processes. It is used in bleaches, especially for pulpwood. It is also used in baking powder and as a third reagent in silica analysis instruments.
Oxalic acid 's main applications include cleaning or bleaching, especially for the removal of rust (iron complexing agent). Bar Keepers Friend is an example of a household cleaner containing oxalic acid. Its utility in rust removal agents is due to its forming a stable, water - soluble salt with ferric iron, ferrioxalate ion.
Oxalic acid is an important reagent in lanthanide chemistry. Hydrated lanthanide oxalates form readily in very strongly acidic solutions in a densely crystalline, easily filtered form, largely free of contamination by nonlanthanide elements. Thermal decomposition of these oxalate gives the oxides, which is the most commonly marketed form of these elements.
Vaporized oxalic acid, or a 3.2 % solution of oxalic acid in sugar syrup, is used by some beekeepers as a miticide against the parasitic varroa mite.
Oxalic acid is rubbed onto completed marble sculptures to seal the surface and introduce a shine. Oxalic acid is also used to clean iron and manganese deposits from quartz crystals.
Oxalic acid is used as a bleach for wood, removing black stains caused by dissolved iron that binds to the wood during water penetration.
Oxalic acid in concentrated form can have harmful effects through contact and if ingested; manufacturers provide details in Material Safety Data Sheets (MSDS). It is not identified as mutagenic or carcinogenic; there is a possible risk of congenital malformation in the fetus; may be harmful if inhaled, and is extremely destructive to tissue of mucous membranes and upper respiratory tract; harmful if swallowed; harmful to and destructive of tissue and causes burns if absorbed through the skin or is in contact with the eyes. Symptoms and effects include a burning sensation, cough, wheezing, laryngitis, shortness of breath, spasm, inflammation and edema of the larynx, inflammation and edema of the bronchi, pneumonitis, pulmonary edema.
In humans, ingested oxalic acid has an oral LDLo (lowest published lethal dose) of 600 mg / kg. It has been reported that the lethal oral dose is 15 to 30 grams.
The toxicity of oxalic acid is due to kidney failure caused by precipitation of solid calcium oxalate, the main component of kidney stones. Oxalic acid can also cause joint pain due to the formation of similar precipitates in the joints. Ingestion of ethylene glycol results in oxalic acid as a metabolite which can also cause acute kidney failure.
|
what effect did american independence have on the international system | United States Declaration of Independence - Wikipedia
The United States Declaration of Independence is the statement adopted by the Second Continental Congress meeting at the Pennsylvania State House (now known as Independence Hall) in Philadelphia on July 4, 1776. The Declaration announced that the thirteen American colonies at war with the Kingdom of Great Britain would now regard themselves as thirteen independent sovereign states no longer under British rule. With the Declaration, these states formed a new nation -- the United States of America.
The Declaration was passed on July 2 with no opposing votes. A committee of five had drafted it to be ready when Congress voted on independence. John Adams, a leader in pushing for independence, had persuaded the committee to select Thomas Jefferson to compose the original draft of the document, which Congress edited to produce the final version. The Declaration was a formal explanation of why Congress had voted on July 2 to declare independence from Great Britain, more than a year after the outbreak of the American Revolutionary War. Adams wrote to his wife Abigail, "The Second Day of July 1776, will be the most memorable Epocha, in the History of America '' -- although Independence Day is actually celebrated on July 4, the date that the wording of the Declaration of Independence was approved.
After ratifying the text on July 4, Congress issued the Declaration of Independence in several forms. It was initially published as the printed Dunlap broadside that was widely distributed and read to the public. The source copy used for this printing has been lost and may have been a copy in Thomas Jefferson 's hand. Jefferson 's original draft is preserved at the Library of Congress, complete with changes made by John Adams and Benjamin Franklin, as well as Jefferson 's notes of changes made by Congress. The best - known version of the Declaration is a signed copy that is displayed at the National Archives in Washington, D.C., and which is popularly regarded as the official document. This engrossed copy was ordered by Congress on July 19 and signed primarily on August 2.
The sources and interpretation of the Declaration have been the subject of much scholarly inquiry. The Declaration justified the independence of the United States by listing colonial grievances against King George III, and by asserting certain natural and legal rights, including a right of revolution. Having served its original purpose in announcing independence, references to the text of the Declaration were few in the following years. Abraham Lincoln made it the centerpiece of his policies and his rhetoric, as in the Gettysburg Address of 1863. Since then, it has become a well - known statement on human rights, particularly its second sentence:
We hold these truths to be self - evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.
This has been called "one of the best - known sentences in the English language '', containing "the most potent and consequential words in American history ''. The passage came to represent a moral standard to which the United States should strive. This view was notably promoted by Lincoln, who considered the Declaration to be the foundation of his political philosophy and argued that it is a statement of principles through which the United States Constitution should be interpreted.
The U.S. Declaration of Independence inspired many similar documents in other countries, the first being the 1789 Declaration of Flanders issued during the Brabant Revolution in the Austrian Netherlands (modern - day Belgium). It also served as the primary model for numerous declarations of independence in Europe and Latin America, as well as Africa (Liberia) and Oceania (New Zealand) during the first half of the 19th century.
Believe me, dear Sir: there is not in the British empire a man who more cordially loves a union with Great Britain than I do. But, by the God that made me, I will cease to exist before I yield to a connection on such terms as the British Parliament propose; and in this, I think I speak the sentiments of America.
By the time that the Declaration of Independence was adopted in July 1776, the Thirteen Colonies and Great Britain had been at war for more than a year. Relations had been deteriorating between the colonies and the mother country since 1763. Parliament enacted a series of measures to increase revenue from the colonies, such as the Stamp Act of 1765 and the Townshend Acts of 1767. Parliament believed that these acts were a legitimate means of having the colonies pay their fair share of the costs to keep them in the British Empire.
Many colonists, however, had developed a different conception of the empire. The colonies were not directly represented in Parliament, and colonists argued that Parliament had no right to levy taxes upon them. This tax dispute was part of a larger divergence between British and American interpretations of the British Constitution and the extent of Parliament 's authority in the colonies. The orthodox British view, dating from the Glorious Revolution of 1688, was that Parliament was the supreme authority throughout the empire, and so, by definition, anything that Parliament did was constitutional. In the colonies, however, the idea had developed that the British Constitution recognized certain fundamental rights that no government could violate, not even Parliament. After the Townshend Acts, some essayists even began to question whether Parliament had any legitimate jurisdiction in the colonies at all. Anticipating the arrangement of the British Commonwealth, by 1774 American writers such as Samuel Adams, James Wilson, and Thomas Jefferson were arguing that Parliament was the legislature of Great Britain only, and that the colonies, which had their own legislatures, were connected to the rest of the empire only through their allegiance to the Crown.
The issue of Parliament 's authority in the colonies became a crisis after Parliament passed the Coercive Acts (known as the Intolerable Acts in the colonies) in 1774 to punish the Province of Massachusetts for the Boston Tea Party of 1773. Many colonists saw the Coercive Acts as a violation of the British Constitution and thus a threat to the liberties of all of British America. In September 1774, the First Continental Congress convened in Philadelphia to coordinate a response. Congress organized a boycott of British goods and petitioned the king for repeal of the acts. These measures were unsuccessful because King George and the ministry of Prime Minister Lord North were determined not to retreat on the question of parliamentary supremacy. As the king wrote to North in November 1774, "blows must decide whether they are to be subject to this country or independent ''.
Most colonists still hoped for reconciliation with Great Britain, even after fighting began in the American Revolutionary War at Lexington and Concord in April 1775. The Second Continental Congress convened at the Pennsylvania State House in Philadelphia in May 1775, and some delegates hoped for eventual independence, but no one yet advocated declaring it. Many colonists no longer believed that Parliament had any sovereignty over them, yet they still professed loyalty to King George, who they hoped would intercede on their behalf. They were disappointed in late 1775, when the king rejected Congress 's second petition, issued a Proclamation of Rebellion, and announced before Parliament on October 26 that he was considering "friendly offers of foreign assistance '' to suppress the rebellion. A pro-American minority in Parliament warned that the government was driving the colonists toward independence.
Thomas Paine 's pamphlet Common Sense was published in January 1776, just as it became clear in the colonies that the king was not inclined to act as a conciliator. Paine had only recently arrived in the colonies from England, and he argued in favor of colonial independence, advocating republicanism as an alternative to monarchy and hereditary rule. Common Sense introduced no new ideas and probably had little direct effect on Congress 's thinking about independence; its importance was in stimulating public debate on a topic that few had previously dared to openly discuss. Public support for separation from Great Britain steadily increased after the publication of Paine 's enormously popular pamphlet.
Some colonists still held out hope for reconciliation, but developments in early 1776 further strengthened public support for independence. In February 1776, colonists learned of Parliament 's passage of the Prohibitory Act, which established a blockade of American ports and declared American ships to be enemy vessels. John Adams, a strong supporter of independence, believed that Parliament had effectively declared American independence before Congress had been able to. Adams labeled the Prohibitory Act the "Act of Independency '', calling it "a compleat Dismemberment of the British Empire ''. Support for declaring independence grew even more when it was confirmed that King George had hired German mercenaries to use against his American subjects.
Despite this growing popular support for independence, Congress lacked the clear authority to declare it. Delegates had been elected to Congress by thirteen different governments, which included extralegal conventions, ad hoc committees, and elected assemblies, and they were bound by the instructions given to them. Regardless of their personal opinions, delegates could not vote to declare independence unless their instructions permitted such an action. Several colonies, in fact, expressly prohibited their delegates from taking any steps towards separation from Great Britain, while other delegations had instructions that were ambiguous on the issue. As public sentiment grew for separation from Great Britain, advocates of independence sought to have the Congressional instructions revised. For Congress to declare independence, a majority of delegations would need authorization to vote for independence, and at least one colonial government would need to specifically instruct (or grant permission for) its delegation to propose a declaration of independence in Congress. Between April and July 1776, a "complex political war '' was waged to bring this about.
In the campaign to revise Congressional instructions, many Americans formally expressed their support for separation from Great Britain in what were effectively state and local declarations of independence. Historian Pauline Maier identifies more than ninety such declarations that were issued throughout the Thirteen Colonies from April to July 1776. These "declarations '' took a variety of forms. Some were formal written instructions for Congressional delegations, such as the Halifax Resolves of April 12, with which North Carolina became the first colony to explicitly authorize its delegates to vote for independence. Others were legislative acts that officially ended British rule in individual colonies, such as the Rhode Island legislature declaring its independence from Great Britain on May 4, the first colony to do so. Many "declarations '' were resolutions adopted at town or county meetings that offered support for independence. A few came in the form of jury instructions, such as the statement issued on April 23, 1776, by Chief Justice William Henry Drayton of South Carolina: "the law of the land authorizes me to declare... that George the Third, King of Great Britain... has no authority over us, and we owe no obedience to him. '' Most of these declarations are now obscure, having been overshadowed by the declaration approved by Congress on July 2, and signed July 4.
Some colonies held back from endorsing independence. Resistance was centered in the middle colonies of New York, New Jersey, Maryland, Pennsylvania, and Delaware. Advocates of independence saw Pennsylvania as the key; if that colony could be converted to the pro-independence cause, it was believed that the others would follow. On May 1, however, opponents of independence retained control of the Pennsylvania Assembly in a special election that had focused on the question of independence. In response, Congress passed a resolution on May 10 which had been promoted by John Adams and Richard Henry Lee, calling on colonies without a "government sufficient to the exigencies of their affairs '' to adopt new governments. The resolution passed unanimously, and was even supported by Pennsylvania 's John Dickinson, the leader of the anti-independence faction in Congress, who believed that it did not apply to his colony.
As was the custom, Congress appointed a committee to draft a preamble to explain the purpose of the resolution. John Adams wrote the preamble, which stated that because King George had rejected reconciliation and was hiring foreign mercenaries to use against the colonies, "it is necessary that the exercise of every kind of authority under the said crown should be totally suppressed ''. Adams 's preamble was meant to encourage the overthrow of the governments of Pennsylvania and Maryland, which were still under proprietary governance. Congress passed the preamble on May 15 after several days of debate, but four of the middle colonies voted against it, and the Maryland delegation walked out in protest. Adams regarded his May 15 preamble effectively as an American declaration of independence, although a formal declaration would still have to be made.
On the same day that Congress passed Adams 's radical preamble, the Virginia Convention set the stage for a formal Congressional declaration of independence. On May 15, the Convention instructed Virginia 's congressional delegation "to propose to that respectable body to declare the United Colonies free and independent States, absolved from all allegiance to, or dependence upon, the Crown or Parliament of Great Britain ''. In accordance with those instructions, Richard Henry Lee of Virginia presented a three - part resolution to Congress on June 7. The motion, seconded by John Adams, called on Congress to declare independence, form foreign alliances, and prepare a plan of colonial confederation. The part of the resolution relating to declaring independence read:
Resolved, that these United Colonies are, and of right ought to be, free and independent States, that they are absolved from all allegiance to the British Crown, and that all political connection between them and the State of Great Britain is, and ought to be, totally dissolved.
Lee 's resolution met with resistance in the ensuing debate. Opponents of the resolution conceded that reconciliation with Great Britain was unlikely, while arguing that declaring independence was premature, and that securing foreign aid should take priority. Advocates of the resolution countered that foreign governments would not intervene in an internal British struggle, and so a formal declaration of independence was needed before foreign aid was possible. All Congress needed to do, they insisted, was to "declare a fact which already exists ''. Delegates from Pennsylvania, Delaware, New Jersey, Maryland, and New York were not yet authorized to vote for independence, however, and some of them threatened to leave Congress if the resolution were adopted. Congress, therefore, voted on June 10 to postpone further discussion of Lee 's resolution for three weeks. In the meantime, Congress decided that a committee should prepare a document announcing and explaining independence in the event that Lee 's resolution was approved when it was brought up again in July.
Support for a Congressional declaration of independence was consolidated in the final weeks of June 1776. On June 14, the Connecticut Assembly instructed its delegates to propose independence and, the following day, the legislatures of New Hampshire and Delaware authorized their delegates to declare independence. In Pennsylvania, political struggles ended with the dissolution of the colonial assembly, and a new Conference of Committees under Thomas McKean authorized Pennsylvania 's delegates to declare independence on June 18. The Provincial Congress of New Jersey had been governing the province since January 1776; they resolved on June 15 that Royal Governor William Franklin was "an enemy to the liberties of this country '' and had him arrested. On June 21, they chose new delegates to Congress and empowered them to join in a declaration of independence.
Only Maryland and New York had yet to authorize independence towards the end of June. Previously, Maryland 's delegates had walked out when the Continental Congress adopted Adams 's radical May 15 preamble, and had sent to the Annapolis Convention for instructions. On May 20, the Annapolis Convention rejected Adams 's preamble, instructing its delegates to remain against independence. But Samuel Chase went to Maryland and, thanks to local resolutions in favor of independence, was able to get the Annapolis Convention to change its mind on June 28. Only the New York delegates were unable to get revised instructions. When Congress had been considering the resolution of independence on June 8, the New York Provincial Congress told the delegates to wait. But on June 30, the Provincial Congress evacuated New York as British forces approached, and would not convene again until July 10. This meant that New York 's delegates would not be authorized to declare independence until after Congress had made its decision.
Political maneuvering was setting the stage for an official declaration of independence even while a document was being written to explain the decision. On June 11, 1776, Congress appointed a "Committee of Five '' to draft a declaration, consisting of John Adams of Massachusetts, Benjamin Franklin of Pennsylvania, Thomas Jefferson of Virginia, Robert R. Livingston of New York, and Roger Sherman of Connecticut. The committee left no minutes, so there is some uncertainty about how the drafting process proceeded; contradictory accounts were written many years later by Jefferson and Adams, too many years to be regarded as entirely reliable -- although their accounts are frequently cited. What is certain is that the committee discussed the general outline which the document should follow and decided that Jefferson would write the first draft. The committee in general, and Jefferson in particular, thought that Adams should write the document, but Adams persuaded the committee to choose Jefferson and promised to consult with him personally. Considering Congress 's busy schedule, Jefferson probably had limited time for writing over the next seventeen days, and likely wrote the draft quickly. He then consulted the others and made some changes, and then produced another copy incorporating these alterations. The committee presented this copy to the Congress on June 28, 1776. The title of the document was "A Declaration by the Representatives of the United States of America, in General Congress assembled. ''
Congress ordered that the draft "lie on the table ''. For two days, Congress methodically edited Jefferson 's primary document, shortening it by a fourth, removing unnecessary wording, and improving sentence structure. They removed Jefferson 's assertion that Britain had forced slavery on the colonies in order to moderate the document and appease persons in Britain who supported the Revolution. Jefferson wrote that Congress had "mangled '' his draft version, but the Declaration that was finally produced was "the majestic document that inspired both contemporaries and posterity, '' in the words of his biographer John Ferling.
Congress tabled the draft of the declaration on Monday, July 1, and resolved itself into a committee of the whole, with Benjamin Harrison of Virginia presiding, and they resumed debate on Lee 's resolution of independence. John Dickinson made one last effort to delay the decision, arguing that Congress should not declare independence without first securing a foreign alliance and finalizing the Articles of Confederation. John Adams gave a speech in reply to Dickinson, restating the case for an immediate declaration.
A vote was taken after a long day of speeches, each colony casting a single vote, as always. The delegation for each colony numbered from two to seven members, and each delegation voted amongst themselves to determine the colony 's vote. Pennsylvania and South Carolina voted against declaring independence. The New York delegation abstained, lacking permission to vote for independence. Delaware cast no vote because the delegation was split between Thomas McKean (who voted yes) and George Read (who voted no). The remaining nine delegations voted in favor of independence, which meant that the resolution had been approved by the committee of the whole. The next step was for the resolution to be voted upon by Congress itself. Edward Rutledge of South Carolina was opposed to Lee 's resolution but desirous of unanimity, and he moved that the vote be postponed until the following day.
On July 2, South Carolina reversed its position and voted for independence. In the Pennsylvania delegation, Dickinson and Robert Morris abstained, allowing the delegation to vote three - to - two in favor of independence. The tie in the Delaware delegation was broken by the timely arrival of Caesar Rodney, who voted for independence. The New York delegation abstained once again since they were still not authorized to vote for independence, although they were allowed to do so a week later by the New York Provincial Congress. The resolution of independence had been adopted with twelve affirmative votes and one abstention. With this, the colonies had officially severed political ties with Great Britain.
John Adams predicted in a famous letter, written to his wife on the following day, that July 2 would become a great American holiday. He thought that the vote for independence would be commemorated; he did not foresee that Americans -- including himself -- would instead celebrate Independence Day on the date when the announcement of that act was finalized.
"I am apt to believe that (Independence Day) will be celebrated, by succeeding Generations, as the great anniversary Festival. It ought to be commemorated, as the Day of Deliverance by solemn Acts of Devotion to God Almighty. It ought to be solemnized with Pomp and Parade, with shews, Games, Sports, Guns, Bells, Bonfires and Illuminations from one End of this Continent to the other from this Time forward forever more. ''
After voting in favor of the resolution of independence, Congress turned its attention to the committee 's draft of the declaration. Over several days of debate, they made a few changes in wording and deleted nearly a fourth of the text and, on July 4, 1776, the wording of the Declaration of Independence was approved and sent to the printer for publication.
There is a distinct change in wording between this original broadside printing of the Declaration and the final official engrossed copy. The word "unanimous '' was inserted as a result of a Congressional resolution passed on July 19, 1776:
Resolved, That the Declaration passed on the 4th, be fairly engrossed on parchment, with the title and stile of "The unanimous declaration of the thirteen United States of America, '' and that the same, when engrossed, be signed by every member of Congress.
The declaration is not divided into formal sections; but it is often discussed as consisting of five parts: introduction, preamble, indictment of King George III, denunciation of the British people, and conclusion.
Introduction: Asserts as a matter of Natural Law the ability of a people to assume political independence; acknowledges that the grounds for such independence must be reasonable, and therefore explicable, and ought to be explained.
When in the Course of human events, it becomes necessary for one people to dissolve the political bands which have connected them with another, and to assume among the powers of the earth, the separate and equal station to which the Laws of Nature and of Nature 's God entitle them, a decent respect to the opinions of mankind requires that they should declare the causes which impel them to the separation.
Preamble: Outlines a general philosophy of government that justifies revolution when government harms natural rights.
We hold these truths to be self - evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.
That to secure these rights, Governments are instituted among Men, deriving their just powers from the consent of the governed, That whenever any Form of Government becomes destructive of these ends, it is the Right of the People to alter or to abolish it, and to institute new Government, laying its foundation on such principles and organizing its powers in such form, as to them shall seem most likely to effect their Safety and Happiness. Prudence, indeed, will dictate that Governments long established should not be changed for light and transient causes; and accordingly all experience hath shewn, that mankind are more disposed to suffer, while evils are sufferable, than to right themselves by abolishing the forms to which they are accustomed. But when a long train of abuses and usurpations, pursuing invariably the same Object evinces a design to reduce them under absolute Despotism, it is their right, it is their duty, to throw off such Government, and to provide new Guards for their future security.
Indictment: A bill of particulars documenting the king 's "repeated injuries and usurpations '' of the Americans ' rights and liberties.
Such has been the patient sufferance of these Colonies; and such is now the necessity which constrains them to alter their former Systems of Government. The history of the present King of Great Britain is a history of repeated injuries and usurpations, all having in direct object the establishment of an absolute Tyranny over these States. To prove this, let Facts be submitted to a candid world.
He has refused his Assent to Laws, the most wholesome and necessary for the public good.
He has forbidden his Governors to pass Laws of immediate and pressing importance, unless suspended in their operation till his Assent should be obtained; and when so suspended, he has utterly neglected to attend to them.
He has refused to pass other Laws for the accommodation of large districts of people, unless those people would relinquish the right of Representation in the Legislature, a right inestimable to them and formidable to tyrants only.
He has called together legislative bodies at places unusual, uncomfortable, and distant from the depository of their Public Records, for the sole purpose of fatiguing them into compliance with his measures.
He has dissolved Representative Houses repeatedly, for opposing with manly firmness his invasions on the rights of the people.
He has refused for a long time, after such dissolutions, to cause others to be elected, whereby the Legislative Powers, incapable of Annihilation, have returned to the People at large for their exercise; the State remaining in the mean time exposed to all the dangers of invasion from without, and convulsions within.
He has endeavoured to prevent the population of these States; for that purpose obstructing the Laws for Naturalization of Foreigners; refusing to pass others to encourage their migrations hither, and raising the conditions of new Appropriations of Lands.
He has obstructed the Administration of Justice by refusing his Assent to Laws for establishing Judiciary Powers.
He has made Judges dependent on his Will alone for the tenure of their offices, and the amount and payment of their salaries.
He has erected a multitude of New Offices, and sent hither swarms of Officers to harass our people and eat out their substance.
He has kept among us, in times of peace, Standing Armies without the Consent of our legislatures.
He has affected to render the Military independent of and superior to the Civil Power.
He has combined with others to subject us to a jurisdiction foreign to our constitution, and unacknowledged by our laws; giving his Assent to their Acts of pretended Legislation:
For quartering large bodies of armed troops among us:
For protecting them, by a mock Trial from punishment for any Murders which they should commit on the Inhabitants of these States:
For cutting off our Trade with all parts of the world:
For imposing Taxes on us without our Consent:
For depriving us in many cases, of the benefit of Trial by Jury:
For transporting us beyond Seas to be tried for pretended offences:
For abolishing the free System of English Laws in a neighbouring Province, establishing therein an Arbitrary government, and enlarging its Boundaries so as to render it at once an example and fit instrument for introducing the same absolute rule into these Colonies:
For taking away our Charters, abolishing our most valuable Laws and altering fundamentally the Forms of our Governments:
For suspending our own Legislatures, and declaring themselves invested with power to legislate for us in all cases whatsoever.
He has abdicated Government here, by declaring us out of his Protection and waging War against us.
He has plundered our seas, ravaged our coasts, burnt our towns, and destroyed the lives of our people.
He is at this time transporting large Armies of foreign Mercenaries to compleat the works of death, desolation, and tyranny, already begun with circumstances of Cruelty & Perfidy scarcely paralleled in the most barbarous ages, and totally unworthy the Head of a civilized nation.
He has constrained our fellow Citizens taken Captive on the high Seas to bear Arms against their Country, to become the executioners of their friends and Brethren, or to fall themselves by their Hands.
He has excited domestic insurrections amongst us, and has endeavoured to bring on the inhabitants of our frontiers, the merciless Indian Savages whose known rule of warfare, is an undistinguished destruction of all ages, sexes and conditions.
In every stage of these Oppressions We have Petitioned for Redress in the most humble terms: Our repeated Petitions have been answered only by repeated injury. A Prince, whose character is thus marked by every act which may define a Tyrant, is unfit to be the ruler of a free people.
Denunciation: This section essentially finishes the case for independence. The conditions that justified revolution have been shown.
Nor have We been wanting in attentions to our British brethren. We have warned them from time to time of attempts by their legislature to extend an unwarrantable jurisdiction over us. We have reminded them of the circumstances of our emigration and settlement here. We have appealed to their native justice and magnanimity, and we have conjured them by the ties of our common kindred to disavow these usurpations, which, would inevitably interrupt our connections and correspondence. They too have been deaf to the voice of justice and of consanguinity. We must, therefore, acquiesce in the necessity, which denounces our Separation, and hold them, as we hold the rest of mankind, Enemies in War, in Peace Friends.
Conclusion: The signers assert that there exist conditions under which people must change their government, that the British have produced such conditions and, by necessity, the colonies must throw off political ties with the British Crown and become independent states. The conclusion contains, at its core, the Lee Resolution that had been passed on July 2.
We, therefore, the Representatives of the united States of America, in General Congress, Assembled, appealing to the Supreme Judge of the world for the rectitude of our intentions, do, in the Name, and by Authority of the good People of these Colonies, solemnly publish and declare, That these united Colonies are, and of Right ought to be Free and Independent States; that they are Absolved from all Allegiance to the British Crown, and that all political connection between them and the State of Great Britain, is and ought to be totally dissolved; and that as Free and Independent States, they have full Power to levy War, conclude Peace, contract Alliances, establish Commerce, and to do all other Acts and Things which Independent States may of right do. And for the support of this Declaration, with a firm reliance on the protection of divine Providence, we mutually pledge to each other our Lives, our Fortunes and our sacred Honor.
The first and most famous signature on the engrossed copy was that of John Hancock, President of the Continental Congress. Two future presidents (Thomas Jefferson and John Adams) and a father and great - grandfather of two other presidents (Benjamin Harrison) were among the signatories. Edward Rutledge (age 26) was the youngest signer, and Benjamin Franklin (age 70) was the oldest signer. The fifty - six signers of the Declaration represented the new states as follows (from north to south):
Historians have often sought to identify the sources that most influenced the words and political philosophy of the Declaration of Independence. By Jefferson 's own admission, the Declaration contained no original ideas, but was instead a statement of sentiments widely shared by supporters of the American Revolution. As he explained in 1825:
Neither aiming at originality of principle or sentiment, nor yet copied from any particular and previous writing, it was intended to be an expression of the American mind, and to give to that expression the proper tone and spirit called for by the occasion.
Jefferson 's most immediate sources were two documents written in June 1776: his own draft of the preamble of the Constitution of Virginia, and George Mason 's draft of the Virginia Declaration of Rights. Ideas and phrases from both of these documents appear in the Declaration of Independence. They were, in turn, directly influenced by the 1689 English Declaration of Rights, which formally ended the reign of King James II. During the American Revolution, Jefferson and other Americans looked to the English Declaration of Rights as a model of how to end the reign of an unjust king. The Scottish Declaration of Arbroath (1320) and the Dutch Act of Abjuration (1581) have also been offered as models for Jefferson 's Declaration, but these models are now accepted by few scholars.
Jefferson wrote that a number of authors exerted a general influence on the words of the Declaration. English political theorist John Locke is usually cited as one of the primary influences, a man whom Jefferson called one of "the three greatest men that have ever lived ''. In 1922, historian Carl L. Becker wrote, "Most Americans had absorbed Locke 's works as a kind of political gospel; and the Declaration, in its form, in its phraseology, follows closely certain sentences in Locke 's second treatise on government. '' The extent of Locke 's influence on the American Revolution has been questioned by some subsequent scholars, however. Historian Ray Forrest Harvey argued in 1937 for the dominant influence of Swiss jurist Jean Jacques Burlamaqui, declaring that Jefferson and Locke were at "two opposite poles '' in their political philosophy, as evidenced by Jefferson 's use in the Declaration of Independence of the phrase "pursuit of happiness '' instead of "property ''. Other scholars emphasized the influence of republicanism rather than Locke 's classical liberalism. Historian Garry Wills argued that Jefferson was influenced by the Scottish Enlightenment, particularly Francis Hutcheson, rather than Locke, an interpretation that has been strongly criticized.
Legal historian John Phillip Reid has written that the emphasis on the political philosophy of the Declaration has been misplaced. The Declaration is not a philosophical tract about natural rights, argues Reid, but is instead a legal document -- an indictment against King George for violating the constitutional rights of the colonists. Historian David Armitage has argued that the Declaration was strongly influenced by de Vattel 's The Law of Nations, the dominant international law treatise of the period, and a book that Benjamin Franklin said was "continually in the hands of the members of our Congress ''. Armitage writes, "Vattel made independence fundamental to his definition of statehood ''; therefore, the primary purpose of the Declaration was "to express the international legal sovereignty of the United States ''. If the United States were to have any hope of being recognized by the European powers, the American revolutionaries first had to make it clear that they were no longer dependent on Great Britain. The Declaration of Independence does not have the force of law domestically, but nevertheless it may help to provide historical and legal clarity about the Constitution and other laws.
The Declaration became official when Congress voted for it on July 4; signatures of the delegates were not needed to make it official. The handwritten copy of the Declaration of Independence that was signed by Congress is dated July 4, 1776. The signatures of fifty - six delegates are affixed; however, the exact date when each person signed it has long been the subject of debate. Jefferson, Franklin, and Adams all wrote that the Declaration had been signed by Congress on July 4. But in 1796, signer Thomas McKean disputed that the Declaration had been signed on July 4, pointing out that some signers were not then present, including several who were not even elected to Congress until after that date.
The Declaration was transposed on paper, adopted by the Continental Congress, and signed by John Hancock, President of the Congress, on July 4, 1776, according to the 1911 record of events by the U.S. State Department under Secretary Philander C. Knox. On August 2, 1776, a parchment paper copy of the Declaration was signed by 56 persons. Many of these signers were not present when the original Declaration was adopted on July 4. Signer Matthew Thornton from New Hampshire was seated in the Continental Congress in November; he asked for and received the privilege of adding his signature at that time, and signed on November 4, 1776.
Historians have generally accepted McKean 's version of events, arguing that the famous signed version of the Declaration was created after July 19, and was not signed by Congress until August 2, 1776. In 1986, legal historian Wilfred Ritz argued that historians had misunderstood the primary documents and given too much credence to McKean, who had not been present in Congress on July 4. According to Ritz, about thirty - four delegates signed the Declaration on July 4, and the others signed on or after August 2. Historians who reject a July 4 signing maintain that most delegates signed on August 2, and that those eventual signers who were not present added their names later.
Two future U.S. presidents were among the signatories: Thomas Jefferson and John Adams. The most famous signature on the engrossed copy is that of John Hancock, who presumably signed first as President of Congress. Hancock 's large, flamboyant signature became iconic, and the term John Hancock emerged in the United States as an informal synonym for "signature ''. A commonly circulated but apocryphal account claims that, after Hancock signed, the delegate from Massachusetts commented, "The British ministry can read that name without spectacles. '' Another apocryphal report indicates that Hancock proudly declared, "There! I guess King George will be able to read that! ''
Various legends emerged years later about the signing of the Declaration, when the document had become an important national symbol. In one famous story, John Hancock supposedly said that Congress, having signed the Declaration, must now "all hang together '', and Benjamin Franklin replied: "Yes, we must indeed all hang together, or most assuredly we shall all hang separately. '' The quotation did not appear in print until more than fifty years after Franklin 's death.
The Syng inkstand used at the signing was also used at the signing of the United States Constitution in 1787.
After Congress approved the final wording of the Declaration on July 4, a handwritten copy was sent a few blocks away to the printing shop of John Dunlap. Through the night, Dunlap printed about 200 broadsides for distribution. Before long, the Declaration was read to audiences and reprinted in newspapers throughout the thirteen states. The first official public reading of the document was by John Nixon in the yard of Independence Hall on July 8; public readings also took place on that day in Trenton, New Jersey and Easton, Pennsylvania. A German translation of the Declaration was published in Philadelphia by July 9.
President of Congress John Hancock sent a broadside to General George Washington, instructing him to have it proclaimed "at the Head of the Army in the way you shall think it most proper ''. Washington had the Declaration read to his troops in New York City on July 9, with thousands of British troops on ships in the harbor. Washington and Congress hoped that the Declaration would inspire the soldiers, and encourage others to join the army. After hearing the Declaration, crowds in many cities tore down and destroyed signs or statues representing royal authority. An equestrian statue of King George in New York City was pulled down and the lead used to make musket balls.
British officials in North America sent copies of the Declaration to Great Britain. It was published in British newspapers beginning in mid-August, it had reached Florence and Warsaw by mid-September, and a German translation appeared in Switzerland by October. The first copy of the Declaration sent to France was lost, and the second copy arrived only in November 1776. It reached Portuguese America through Brazilian medical student "Vendek '' José Joaquim Maia e Barbalho, who had met with Thomas Jefferson in Nîmes.
The Spanish - American authorities banned the circulation of the Declaration, but it was widely transmitted and translated: by Venezuelan Manuel García de Sena, by Colombian Miguel de Pombo, by Ecuadorian Vicente Rocafuerte, and by New Englanders Richard Cleveland and William Shaler, who distributed the Declaration and the United States Constitution among Creoles in Chile and Indians in Mexico in 1821. The North Ministry did not give an official answer to the Declaration, but instead secretly commissioned pamphleteer John Lind to publish a response entitled Answer to the Declaration of the American Congress. British Tories denounced the signers of the Declaration for not applying the same principles of "life, liberty, and the pursuit of happiness '' to African Americans. Thomas Hutchinson, the former royal governor of Massachusetts, also published a rebuttal. These pamphlets challenged various aspects of the Declaration. Hutchinson argued that the American Revolution was the work of a few conspirators who wanted independence from the outset, and who had finally achieved it by inducing otherwise loyal colonists to rebel. Lind 's pamphlet had an anonymous attack on the concept of natural rights written by Jeremy Bentham, an argument that he repeated during the French Revolution. Both pamphlets asked how the American slaveholders in Congress could proclaim that "all men are created equal '' without freeing their own slaves.
William Whipple, a signer of the Declaration of Independence who had fought in the war, freed his slave Prince Whipple because of revolutionary ideals. In the postwar decades, other slaveholders also freed their slaves; from 1790 to 1810, the percentage of free blacks in the Upper South increased to 8.3 percent from less than one percent of the black population. All Northern states abolished slavery by 1804.
The official copy of the Declaration of Independence was the one printed on July 4, 1776, under Jefferson 's supervision. It was sent to the states and to the Army and was widely reprinted in newspapers. The slightly different "engrossed copy '' (shown at the top of this article) was made later for members to sign. The engrossed version is the one widely distributed in the 21st century. Note that the opening lines differ between the two versions.
The copy of the Declaration that was signed by Congress is known as the engrossed or parchment copy. It was probably engrossed (that is, carefully handwritten) by clerk Timothy Matlack. A facsimile made in 1823 has become the basis of most modern reproductions rather than the original because of poor conservation of the engrossed copy through the 19th century. In 1921, custody of the engrossed copy of the Declaration was transferred from the State Department to the Library of Congress, along with the United States Constitution. After the Japanese attack on Pearl Harbor in 1941, the documents were moved for safekeeping to the United States Bullion Depository at Fort Knox in Kentucky, where they were kept until 1944. In 1952, the engrossed Declaration was transferred to the National Archives and is now on permanent display at the National Archives in the "Rotunda for the Charters of Freedom ''.
The document signed by Congress and enshrined in the National Archives is usually regarded as the Declaration of Independence, but historian Julian P. Boyd argued that the Declaration, like Magna Carta, is not a single document. Boyd considered the printed broadsides ordered by Congress to be official texts, as well. The Declaration was first published as a broadside that was printed the night of July 4 by John Dunlap of Philadelphia. Dunlap printed about 200 broadsides, of which 26 are known to survive. The 26th copy was discovered in The National Archives in England in 2009.
In 1777, Congress commissioned Mary Katherine Goddard to print a new broadside that listed the signers of the Declaration, unlike the Dunlap broadside. Nine copies of the Goddard broadside are known to still exist. A variety of broadsides printed by the states are also extant.
Several early handwritten copies and drafts of the Declaration have also been preserved. Jefferson kept a four - page draft that late in life he called the "original Rough draught ''. It is not known how many drafts Jefferson wrote prior to this one, and how much of the text was contributed by other committee members. In 1947, Boyd discovered a fragment of an earlier draft in Jefferson 's handwriting. Jefferson and Adams sent copies of the rough draft to friends, with slight variations.
During the writing process, Jefferson showed the rough draft to Adams and Franklin, and perhaps to other members of the drafting committee, who made a few more changes. Franklin, for example, may have been responsible for changing Jefferson 's original phrase "We hold these truths to be sacred and undeniable '' to "We hold these truths to be self - evident ''. Jefferson incorporated these changes into a copy that was submitted to Congress in the name of the committee. The copy that was submitted to Congress on June 28 has been lost and was perhaps destroyed in the printing process, or destroyed during the debates in accordance with Congress 's secrecy rule.
On April 21, 2017, it was announced that a second engrossed copy had been discovered in an archive in Sussex, England. Named by its finders the "Sussex Declaration '', it differs from the National Archives copy (which the finders refer to as the "Matlack Declaration '') in that the signatures on it are not grouped by States. How it came to be in England is not yet known, but the finders believe that the randomness of the signatures points to an origin with signatory James Wilson, who had argued strongly that the Declaration was made not by the States but by the whole people.
The Declaration was given little attention in the years immediately following the American Revolution, having served its original purpose in announcing the independence of the United States. Early celebrations of Independence Day largely ignored the Declaration, as did early histories of the Revolution. The act of declaring independence was considered important, whereas the text announcing that act attracted little attention. The Declaration was rarely mentioned during the debates about the United States Constitution, and its language was not incorporated into that document. George Mason 's draft of the Virginia Declaration of Rights was more influential, and its language was echoed in state constitutions and state bills of rights more often than Jefferson 's words. "In none of these documents '', wrote Pauline Maier, "is there any evidence whatsoever that the Declaration of Independence lived in men 's minds as a classic statement of American political principles. ''
Many leaders of the French Revolution admired the Declaration of Independence but were also interested in the new American state constitutions. The inspiration and content of the French Declaration of the Rights of Man and Citizen (1789) emerged largely from the ideals of the American Revolution. Its key drafts were prepared by Lafayette, working closely in Paris with his friend Thomas Jefferson. It also borrowed language from George Mason 's Virginia Declaration of Rights. The declaration also influenced the Russian Empire. The document had a particular impact on the Decembrist revolt and other Russian thinkers.
According to historian David Armitage, the Declaration of Independence did prove to be internationally influential, but not as a statement of human rights. Armitage argued that the Declaration was the first in a new genre of declarations of independence that announced the creation of new states.
Leaders in other countries were also directly influenced by the text of the Declaration of Independence itself. The Manifesto of the Province of Flanders (1790) was the first foreign derivation of the Declaration; others include the Venezuelan Declaration of Independence (1811), the Liberian Declaration of Independence (1847), the declarations of secession by the Confederate States of America (1860 -- 61), and the Vietnamese Proclamation of Independence (1945). These declarations echoed the United States Declaration of Independence in announcing the independence of a new state, without necessarily endorsing the political philosophy of the original.
Other countries have used the Declaration as inspiration or have directly copied sections from it. These include the Haitian declaration of January 1, 1804, during the Haitian Revolution, the United Provinces of New Granada in 1811, the Argentine Declaration of Independence in 1816, the Chilean Declaration of Independence in 1818, Costa Rica in 1821, El Salvador in 1821, Guatemala in 1821, Honduras in 1821, Mexico in 1821, Nicaragua in 1821, Peru in 1821, Bolivian War of Independence in 1825, Uruguay in 1825, Ecuador in 1830, Colombia in 1831, Paraguay in 1842, Dominican Republic in 1844, Texas Declaration of Independence in March 1836, California Republic in November 1836, Hungarian Declaration of Independence in 1849, Declaration of the Independence of New Zealand in 1835, and the Czechoslovak declaration of independence from 1918 drafted in Washington D.C. with Gutzon Borglum among the drafters. The Rhodesian declaration of independence, ratified in November 1965, is based on the American one as well; however, it omits the phrases "all men are created equal '' and "the consent of the governed ''. The South Carolina declaration of secession from December 1860 also mentions the U.S. Declaration of Independence, though it, like the Rhodesian one, omits references to "all men are created equal '' and "consent of the governed ''.
Interest in the Declaration was revived in the 1790s with the emergence of the United States 's first political parties. Throughout the 1780s, few Americans knew or cared who wrote the Declaration. But in the next decade, Jeffersonian Republicans sought political advantage over their rival Federalists by promoting both the importance of the Declaration and Jefferson as its author. Federalists responded by casting doubt on Jefferson 's authorship or originality, and by emphasizing that independence was declared by the whole Congress, with Jefferson as just one member of the drafting committee. Federalists insisted that Congress 's act of declaring independence, in which Federalist John Adams had played a major role, was more important than the document announcing it. But this view faded away, like the Federalist Party itself, and, before long, the act of declaring independence became synonymous with the document.
A less partisan appreciation for the Declaration emerged in the years following the War of 1812, thanks to a growing American nationalism and a renewed interest in the history of the Revolution. In 1817, Congress commissioned John Trumbull 's famous painting of the signers, which was exhibited to large crowds before being installed in the Capitol. The earliest commemorative printings of the Declaration also appeared at this time, offering many Americans their first view of the signed document. Collective biographies of the signers were first published in the 1820s, giving birth to what Garry Wills called the "cult of the signers ''. In the years that followed, many stories about the writing and signing of the document were published for the first time.
When interest in the Declaration was revived, the sections that were most important in 1776 were no longer relevant: the announcement of the independence of the United States and the grievances against King George. But the second paragraph was applicable long after the war had ended, with its talk of self - evident truths and unalienable rights. The Constitution and the Bill of Rights lacked sweeping statements about rights and equality, and advocates of groups with grievances turned to the Declaration for support. Starting in the 1820s, variations of the Declaration were issued to proclaim the rights of workers, farmers, women, and others. In 1848, for example, the Seneca Falls Convention of women 's rights advocates declared that "all men and women are created equal ''.
John Trumbull 's painting Declaration of Independence has played a significant role in popular conceptions of the Declaration of Independence. The 12 - by - 18 - foot (3.7 by 5.5 m) painting was commissioned by the United States Congress in 1817; it has hung in the United States Capitol Rotunda since 1826. It is sometimes described as the signing of the Declaration of Independence, but it actually shows the Committee of Five presenting their draft of the Declaration to the Second Continental Congress on June 28, 1776, and not the signing of the document, which took place later.
Trumbull painted the figures from life whenever possible, but some had died and images could not be located; hence, the painting does not include all the signers of the Declaration. One figure had participated in the drafting but did not sign the final document; another refused to sign. In fact, the membership of the Second Continental Congress changed as time passed, and the figures in the painting were never in the same room at the same time. It is, however, an accurate depiction of the room in Independence Hall, the centerpiece of the Independence National Historical Park in Philadelphia, Pennsylvania.
Trumbull 's painting has been depicted multiple times on U.S. currency and postage stamps. Its first use was on the reverse side of the $100 National Bank Note issued in 1863. A few years later, the steel engraving used in printing the bank notes was used to produce a 24 - cent stamp, issued as part of the 1869 Pictorial Issue. An engraving of the signing scene has been featured on the reverse side of the United States two - dollar bill since 1976.
The apparent contradiction between the claim that "all men are created equal '' and the existence of American slavery attracted comment when the Declaration was first published. As mentioned above, Jefferson had included a paragraph in his initial draft that strongly indicted Great Britain 's role in the slave trade, but this was deleted from the final version. Jefferson himself was a prominent Virginia slave holder, having owned hundreds of slaves. Referring to this seeming contradiction, English abolitionist Thomas Day wrote in a 1776 letter, "If there be an object truly ridiculous in nature, it is an American patriot, signing resolutions of independency with the one hand, and with the other brandishing a whip over his affrighted slaves. ''
In the 19th century, the Declaration took on a special significance for the abolitionist movement. Historian Bertram Wyatt - Brown wrote that "abolitionists tended to interpret the Declaration of Independence as a theological as well as a political document ''. Abolitionist leaders Benjamin Lundy and William Lloyd Garrison adopted the "twin rocks '' of "the Bible and the Declaration of Independence '' as the basis for their philosophies. "As long as there remains a single copy of the Declaration of Independence, or of the Bible, in our land, '' wrote Garrison, "we will not despair. '' For radical abolitionists such as Garrison, the most important part of the Declaration was its assertion of the right of revolution. Garrison called for the destruction of the government under the Constitution, and the creation of a new state dedicated to the principles of the Declaration.
The controversial question of whether to add additional slave states to the United States coincided with the growing stature of the Declaration. The first major public debate about slavery and the Declaration took place during the Missouri controversy of 1819 to 1821. Antislavery Congressmen argued that the language of the Declaration indicated that the Founding Fathers of the United States had been opposed to slavery in principle, and so new slave states should not be added to the country. Proslavery Congressmen led by Senator Nathaniel Macon of North Carolina argued that the Declaration was not a part of the Constitution and therefore had no relevance to the question.
With the antislavery movement gaining momentum, defenders of slavery such as John Randolph and John C. Calhoun found it necessary to argue that the Declaration 's assertion that "all men are created equal '' was false, or at least that it did not apply to black people. During the debate over the Kansas -- Nebraska Act in 1853, for example, Senator John Pettit of Indiana argued that the statement "all men are created equal '' was not a "self - evident truth '' but a "self - evident lie ''. Opponents of the Kansas -- Nebraska Act, including Salmon P. Chase and Benjamin Wade, defended the Declaration and what they saw as its antislavery principles.
The Declaration 's relationship to slavery was taken up in 1854 by Abraham Lincoln, a little - known former Congressman who idolized the Founding Fathers. Lincoln thought that the Declaration of Independence expressed the highest principles of the American Revolution, and that the Founding Fathers had tolerated slavery with the expectation that it would ultimately wither away. For the United States to legitimize the expansion of slavery in the Kansas - Nebraska Act, thought Lincoln, was to repudiate the principles of the Revolution. In his October 1854 Peoria speech, Lincoln said:
Nearly eighty years ago we began by declaring that all men are created equal; but now from that beginning we have run down to the other declaration, that for some men to enslave others is a "sacred right of self - government ''.... Our republican robe is soiled and trailed in the dust.... Let us repurify it. Let us re-adopt the Declaration of Independence, and with it, the practices, and policy, which harmonize with it.... If we do this, we shall not only have saved the Union: but we shall have saved it, as to make, and keep it, forever worthy of the saving.
The meaning of the Declaration was a recurring topic in the famed debates between Lincoln and Stephen Douglas in 1858. Douglas argued that the phrase "all men are created equal '' in the Declaration referred to white men only. The purpose of the Declaration, he said, had simply been to justify the independence of the United States, and not to proclaim the equality of any "inferior or degraded race ''. Lincoln, however, thought that the language of the Declaration was deliberately universal, setting a high moral standard to which the American republic should aspire. "I had thought the Declaration contemplated the progressive improvement in the condition of all men everywhere, '' he said. During the seventh and last joint debate with Stephen Douglas at Alton, Illinois on October 15, 1858, Lincoln said about the declaration:
I think the authors of that notable instrument intended to include all men, but they did not mean to declare all men equal in all respects. They did not mean to say all men were equal in color, size, intellect, moral development, or social capacity. They defined with tolerable distinctness in what they did consider all men created equal -- equal in "certain inalienable rights, among which are life, liberty, and the pursuit of happiness. '' This they said, and this they meant. They did not mean to assert the obvious untruth that all were then actually enjoying that equality, or yet that they were about to confer it immediately upon them. In fact, they had no power to confer such a boon. They meant simply to declare the right, so that the enforcement of it might follow as fast as circumstances should permit. They meant to set up a standard maxim for free society which should be familiar to all, constantly looked to, constantly labored for, and even, though never perfectly attained, constantly approximated, and thereby constantly spreading and deepening its influence, and augmenting the happiness and value of life to all people, of all colors, everywhere.
According to Pauline Maier, Douglas 's interpretation was more historically accurate, but Lincoln 's view ultimately prevailed. "In Lincoln 's hands, '' wrote Maier, "the Declaration of Independence became first and foremost a living document '' with "a set of goals to be realized over time ''.
Like Daniel Webster, James Wilson, and Joseph Story before him, Lincoln argued that the Declaration of Independence was a founding document of the United States, and that this had important implications for interpreting the Constitution, which had been ratified more than a decade after the Declaration. The Constitution did not use the word "equality '', yet Lincoln believed that the concept that "all men are created equal '' remained a part of the nation 's founding principles. He famously expressed this belief in the opening sentence of his 1863 Gettysburg Address: "Four score and seven years ago (i.e. in 1776) our fathers brought forth on this continent, a new nation, conceived in Liberty, and dedicated to the proposition that all men are created equal. ''
Lincoln 's view of the Declaration as a moral guide to interpreting the Constitution became influential. "For most people now, '' wrote Garry Wills in 1992, "the Declaration means what Lincoln told us it means, as a way of correcting the Constitution itself without overthrowing it. '' Admirers of Lincoln such as Harry V. Jaffa praised this development. Critics of Lincoln, notably Willmoore Kendall and Mel Bradford, argued that Lincoln dangerously expanded the scope of the national government and violated states ' rights by reading the Declaration into the Constitution.
In July 1848, the first woman 's rights convention, the Seneca Falls Convention, was held in Seneca Falls, New York. The convention was organized by Elizabeth Cady Stanton, Lucretia Mott, Mary Ann McClintock, and Jane Hunt. In their "Declaration of Sentiments '', patterned on the Declaration of Independence, the convention members demanded social and political equality for women. Their motto was that "All men and women are created equal '' and the convention demanded suffrage for women. The suffrage movement was supported by William Lloyd Garrison and Frederick Douglass.
The Declaration was chosen to be the first digitized text (1971).
The Memorial to the 56 Signers of the Declaration of Independence was dedicated in 1984 in Constitution Gardens on the National Mall in Washington, D.C., where the signatures of all the original signers are carved in stone with their names, places of residence, and occupations.
The new One World Trade Center building in New York City (2014) is 1776 feet high to symbolize the year that the Declaration of Independence was signed.
The adoption of the Declaration of Independence was dramatized in the 1969 Tony Award -- winning musical 1776 and the 1972 movie version, as well as in the 2008 television miniseries John Adams. In 1970, The 5th Dimension recorded the opening of the Declaration through the phrase "for their future Security ''. The name of the song is "Declaration, '' and it was performed on the Ed Sullivan Show on December 7, 1969. At the time, it was taken as a song of protest against the Vietnam War.
|
who does he end up with in definitely maybe | Definitely, Maybe - wikipedia
Definitely, Maybe is a 2008 romantic comedy - drama film written and directed by Adam Brooks, and starring Ryan Reynolds, Isla Fisher, Rachel Weisz, Elizabeth Banks, Abigail Breslin, and Kevin Kline. Set in New York City during the 1990s, the film is about a political consultant who tries to help his eleven - year - old daughter understand his impending divorce by telling her the story of his past romantic relationships and how he ended up marrying her mother. The film grossed $55 million worldwide.
38 - year - old father Will Hayes is in the midst of a divorce. After her first sex - ed class, his 10 - year - old daughter Maya insists on hearing the story of how her parents met. Will reluctantly gives in, but decides to change the names and some of the facts relating to the various love affairs of his youth, thereby creating a love mystery; Maya is left guessing which of the women will turn out to be her mother. The story he tells Maya is depicted in flashbacks. From time to time the film switches back to the present, where Maya comments (often critically) and asks questions.
The story begins in 1992 when Will, an idealistic political operative, moves away from Wisconsin and his college sweetheart, Emily, to New York City, where he works on the Clinton campaign. Over the years, Will becomes involved with three women who enter his life, including Summer Hartley, an aspiring journalist, and April the copy girl for the campaign. Will and April have a chance meeting outside work, where Will reveals he is going to propose to Emily. When Will practices his proposal to Emily on April, she is taken aback by Will 's wholehearted words and replies, "Definitely, maybe. '' They go back to her apartment, where April has multiple copies of Jane Eyre in her collection, explaining that her father gave her a copy with an inscription in the front shortly before he died, and the book was later lost. She has spent years looking through copies of Jane Eyre at secondhand stores hoping to find the copy her father gave her, but she buys any copy she finds that has an inscription. They kiss, but Will abruptly stops and leaves.
Emily comes back to New York where she confesses, just after Will proposes, that she slept with his roommate. She did it on purpose to break up with Will, saying that she is "letting him go '' because she does not share his passionate ambitions. After Clinton is elected, Will opens a campaigning business with most of his work colleagues, which enjoys a good amount of success.
Before Will left Wisconsin, Emily asked Will to deliver a package to her former roommate, Summer Hartley, who is living in New York City. Will first meets Summer when he gives her the package, a diary that she wrote when she was a teenager (which, among other things, tells of her brief affair with Emily). He finds she is going out with a famous writer who is old enough to be her father. The writer breaks up with Summer, and Will starts a relationship with her. April quits her job and leaves to travel around the world. When she returns, she plans to tell Will that she loves him, but discovers that he is planning to propose marriage to Summer. April half - heartedly congratulates him instead. Summer writes an offensive article about one of Will 's clients. Will can not forgive this conflict of interest, and he ends his relationship with Summer. As a result of the article, Will loses his business and his dream of a political career ends, with all of his friends abandoning him.
April calls after a long absence and finds that Will has a new job, but is despondent and depressed, feelings further exacerbated when she reveals she has a new boyfriend named Kevin. She throws a birthday party for him, reuniting him with his old colleagues. Will gets drunk and confesses he loves April. When she tells him he should 've told her when he had his life together, he starts an argument with her when he implies that she is wasting her life working in a book store. Some time later, Will passes a used book store and finds the copy of Jane Eyre that April has been seeking with the note from her father. Will goes to April 's apartment to give her the book, but he decides against it when he meets Kevin, who is now living with her.
Emily moves to New York City, and she and Will rekindle their relationship after a run - in at a party of Summer 's they both were attending. Maya correctly guesses that "Emily '' is her mother. Maya states that it is unfortunate that the story has a sad ending, but Will explains that the story has a happy ending: Maya.
Will learns that April is single again, and he attempts to give her the copy of Jane Eyre. When she discovers that he has been holding onto the book for years, she grows upset and asks him to leave.
Maya is happy to have figured out the story, but she realizes that her father still loves April: he changed the name of her mother, Sarah, to Emily in the story, and the name of Natasha to Summer, but he did not change April 's name. Maya makes Will have an epiphany, realizing that he is miserable without April. On the spur of the moment they take a taxi to go meet April. April does not let them into her apartment. As they walk away, April runs out and asks about the story. Will confesses to April that he held on to the copy of Jane Eyre because it was the only thing he had left of her. April hugs Will and takes them in to hear the story. As Maya passes through the doorway, April jumps into Will 's arms and kisses him.
The film was scored by English composer Clint Mansell. Lakeshore Records released the score on March 18, 2008. All Music Guide reviewer William Ruhlmann praised the album as filled with "sweet, melodic numbers that often seem to lack only a lyric to turn them into pop songs ''. He also stated that it functioned as "light accompaniment to an equally light entertainment ''.
In its opening weekend, the film grossed $9.8 million in 2,204 theaters in the United States and Canada, ranking # 5 at the box office. As of September 28, 2008, the film has grossed $55,447,968 worldwide.
The film was released on DVD June 24, 2008, with a widescreen transfer, deleted scenes, two short featurettes, and a commentary track by Reynolds and director Brooks.
On review aggregator Rotten Tomatoes, 71 % of critics gave the film positive reviews based on 144 reviews; the average rating is 6.5 / 10. The site 's consensus reads: "With a clever script and charismatic leads, Definitely, Maybe is a refreshing entry into the romantic comedy genre. '' Metacritic reported the film had an average score of 59 out of 100 based on 30 reviews.
|
where does the last name guerrero originate from | Guerrero (surname) - wikipedia
Guerrero (Spanish pronunciation: (ɡeˈreɾo)) is a surname of Spanish origin meaning warrior.
This is a list of notable persons with the surname Guerrero. Following Spanish naming customs, only individuals whose first or paternal family name is Guerrero are included.
|
how far is old trafford cricket ground from old trafford football ground | Old Trafford cricket ground - wikipedia
Old Trafford, known for sponsorship reasons as Emirates Old Trafford, is a cricket ground in Old Trafford, Greater Manchester, England. It opened in 1857 as the home of Manchester Cricket Club and has been the home of Lancashire County Cricket Club since 1864.
Old Trafford is England 's second oldest Test venue and one of the most renowned. It was the venue for the first ever Ashes Test to be held in England in July 1884 and has hosted two Cricket World Cup semi-finals. In 1956, England bowler Jim Laker took the first 10 - wicket haul in a single Test innings at the ground, finishing with match figures of 19 wickets for 90 runs -- a bowling record which remains unmatched in Test and first - class cricket. In the 1993 Ashes Test at Old Trafford, leg - spinner Shane Warne bowled Mike Gatting with the Ball of the Century.
Extensive redevelopment of the ground to increase capacity and modernise facilities began in 2009 in an effort to safeguard international cricket at the venue. The pitch at Old Trafford has historically been the quickest in England, but will take spin later in the game.
The site was first used as a cricket ground in 1857, when the Manchester Cricket Club moved onto the meadows of the de Trafford estate. Despite the construction of a large pavilion (for the amateurs -- the professionals used a shed at the opposite end of the ground), Old Trafford 's first years were rocky: accessible only along a footpath from the railway station, the ground was situated out in the country, and games only attracted small crowds. It was not until the Roses match of 1875 that significant numbers attended a game. When W.G. Grace brought Gloucestershire in 1878, Old Trafford saw 28,000 spectators over three days, and this provoked improvements to access and facilities.
In 1884, Old Trafford became the second English ground, after The Oval, to stage Test cricket: with the first day being lost to rain, England drew with Australia. Expansion of the ground followed over the next decade, with the decision being taken to construct a new pavilion in 1894.
The ground was purchased outright from the de Traffords in 1898, for £ 24,372, as crowds increased, with over 50,000 spectators attending the 1899 Test match.
In 1902, the Australian Victor Trumper hit a hundred before lunch on the first day; Australia went on to win the Test by 3 runs -- the third closest Test result in history.
Crowds fell through the early 20th Century, and the ground was closed during the First World War; however, in the conflict 's aftermath, crowd numbers reached new heights. Investment followed throughout the inter-war period, and during this time, Lancashire experienced their most successful run to date, gaining four Championship titles in five years.
During the Second World War, Old Trafford was used as a transit camp for troops returning from Dunkirk, and as a supply depot. In December 1940, the ground was hit by bombs, damaging or destroying several stands. Despite this damage -- and the failure of an appeal to raise funds for repairs -- cricket resumed promptly after the war, with German PoWs being paid a small wage to prepare the ground. The ' Victory Test ' between England and Australia of August 1945 proved to be extremely popular, with 76,463 seeing it over three days.
Differences of opinion between the club 's committee and players led to a bad run of form in the 1950s and early 1960s; this consequently saw gate money drop, and a lack of investment. After 1964, however, the situation was reversed, and 1969 saw the first Indoor Cricket Centre opened. In 1956 Jim Laker became the first person to take all 10 wickets in a Test match innings, achieving figures of 10 for 53 in the fourth Test against Australia (the only other bowler to take all 10 wickets in an innings is Anil Kumble of India in 1999). Having also taken 9 for 37 in the first innings, Laker ended the match with record figures of 19 for 90, which remain unmatched to this day. On 1 May 1963 the first ever one day cricket match took place at Old Trafford, as the Gillette Cup was launched. Lancashire beat Leicestershire in a preliminary knock - out game, as 16th and 17th finishers in the Championship the previous year, to decide who would fill the 16th spot in the one - day competition. Following Lancashire 's reign as One Day champions in the 1970s, a programme of renovation and replacement was initiated in 1981. This changed the face of the ground to the extent that, now, only the Pavilion "is recognisable to a visitor who last watched or played a game in, say, the early 1980s ''. In 1981 Ian Botham hit 118, including six sixes (the second greatest number in an Ashes innings), which he has called "one of the three innings I would like to tell my grandchildren about ''. England went on to win the Ashes after being lampooned in the national media for such poor performances.
In 1990 Sachin Tendulkar scored his first Test hundred at the age of 17 -- becoming the second youngest centurion -- to help India draw. In 1993 Shane Warne bowled the "Ball of the Century '' to Mike Gatting at the ground. In the same game, Graham Gooch was out ' handled the ball ' for 133 -- only the sixth out of nine times this has ever happened. In 1995, Dominic Cork took a hat - trick for England against the West Indies. In 2000 both Mike Atherton and Alec Stewart played their hundredth Tests, against the West Indies. In the third Test of the 2005 Ashes series the match ended in a nail - biting draw, with 10,000 fans shut out of the ground on the final day as tickets were sold out. England went on to win the series, regaining the Ashes for the first time in over 20 years.
The cricket ground is near the Old Trafford football stadium (a five - minute walk away down Warwick Road and Sir Matt Busby Way), in the borough of Trafford in Greater Manchester, approximately two miles south west of Manchester city centre. Its capacity is 22,000 for Test matches, for which temporary stands are erected, and 15,000 for other matches. Since 1884, it has hosted 74 Tests, the third highest number in England, behind Lord 's and The Oval.
The two ends of the ground are the James Anderson End to the north and the Brian Statham End to the south, renamed in honour of the former Lancashire and England player. A section of Warwick Road to the east is also called Brian Statham Way. Immediately abutting the ground to the south - east is the Old Trafford tram stop.
Old Trafford has a reputation for unpredictable weather. Old Trafford is the only ground in England where a Test match has been abandoned without a ball being bowled -- and this has happened here twice, in 1890 and 1938, though both occasions came before five - day Test matches were introduced. Before Cardiff hosted its first Test match in July 2009, Old Trafford was reputedly the wettest Test ground in the country; Manchester is situated to the west of the Pennines and faces prevailing winds and weather fronts from the Atlantic.
These prevailing conditions have encouraged Lancashire to keep the ground as well - drained as possible, most recently through the acquisition of a Hover Cover in 2007, and the installation of new drains towards the end of the 2008 season.
In the second Test of 1938, in a desperate effort to ensure play after heavy rain, the groundstaff moved the turf from the practice pitch to the square -- a unique attempt. In 2010 -- 11 the wickets were relaid, changing their extremely unusual East - West axis to a more conventional North - South layout. The Brian Statham End to the East, and the Stretford End to the West, were replaced by the Pavilion End to the North, and the Brian Statham End to the South.
The three - tiered Victorian members ' pavilion was built in 1895 for £ 10,000. Hit by a bomb in 1940 -- which destroyed the Members ' Dining Room and groundsman 's quarters -- most of the pavilion was rebuilt. £ 1 million was spent on a new roof after it began to leak in 2003. The pavilion underwent redevelopment at the start of 2012 and was reopened for the YB40 game against Scotland.
The Point, Old Trafford 's distinctive £ 12 million conference centre, and at 1,000 seats one of the largest multi-purpose conference facilities in North West England, opened in 2010.
Old Trafford was unusual in that there were two media stands at opposite ends of the ground prior to the new Media Centre which opened in September 2012. Television and radio commentators previously operated in temporary television studios and commentary boxes at the Stretford End which were perched on hospitality boxes.
The idea of an indoor school was born in 1951, when nets were strung up in the Members ' Dining Room in the pavilion. A permanent facility was built in 1969, and replaced in 1997. The current building stands to the north - west of the pitch; it contains five 60 metre lanes on various surfaces, several conference rooms, and a large shop.
The Old Trafford Lodge opened in 1999, bringing to fruition a concept from 1981. The hotel had 68 rooms, 36 of which had unobstructed views of the playing surface. It has since been demolished as part of the redevelopment of the ground.
Following the rejection of plans, in 2003, to sell Old Trafford and move the club to a new purpose - built stadium in East Manchester, the focus switched to upgrading the current ground. Lancashire CCC, with a coalition of businesses, are in the process of making the cricket ground the centre of an anticipated 750,000 sq ft (70,000 m²) development, in a mixed - use scheme involving business, residential, retail, hotel and leisure facilities.
The first phase of redevelopment saw the laying of new drains in Autumn 2008. In 2009, the Stretford end of the ground was closed to facilitate destruction of the County Suite, Tyldesley Suite, ' K ' and ' L ' Stands and the scoreboard; The Point, overshadowing new seating to the west of the pavilion, opened in June 2010. During the 2010 / 11 winter the wickets were turned from their previous east -- west axis to a more typical north -- south alignment, which prevents the low evening sun from interfering with matches, and increased the number of available wickets by five, to sixteen. Many of Lancashire 's home games for the 2011 season were transferred to out grounds while the new wickets ' bedded in '.
The main planning process began in September 2008, but faced stiff legal opposition. Since Tesco pledged £ 21 million to the redevelopment, the stadium 's planning application included a request for a new supermarket nearby. Trafford Council gave this joint proposal permission in March 2010 -- a decision which was initially called in by the Communities Secretary for Judicial Review, before the go - ahead was given in September 2010. Derwent Holdings, a property development company denied permission to build a supermarket at the nearby White City retail park, then called for a Judicial Review. Although this was turned down by the High Court in March 2011, the case went to the Court of Appeal. Lancashire took the risky decision to begin work ahead of the matter being resolved, in order to qualify for grants from the North West Development Agency before it was wound up. However, the Court of Appeal ruled in Lancashire 's favour in July 2011, and denied leave to further appeal.
Work therefore began on this main phase in summer 2011, beginning with the installation of permanent floodlights and a new video screen. A new ' Players and Media ' facility, mimicking to some degree the design of The Point, has been built on the site of the demolished Washbrook - Statham stand, with a 2 - tiered cantilever stand being erected on either side. The Pavilion has been renovated to have its sloped roof replaced with two modern glass storeys, finished in April 2013.
The media facilities and corporate boxes on the western side of the ground have been demolished, leaving an empty space, which will be used for temporary seating or a stage when required.
The Old Trafford Lodge opened in 1999; however, it will be demolished and construction will begin on the Hilton Garden Inn Emirates Old Trafford, a 150 - bedroom hotel for Hilton Worldwide, which is expected to be completed by July 2017.
The ground is used heavily throughout the summer as the base of Lancashire County Cricket Club, with other home games being played at Stanley Park, Blackpool, Birkdale in Southport and at Aigburth in Liverpool. Until 2008, Old Trafford commonly hosted a Test match each year; none were hosted in 2009, 2011 or 2012 due to sub-standard facilities, although following redevelopment, Old Trafford hosted an Ashes Test in 2013, and further Tests in 2014 and 2016. One Day Internationals and / or International Twenty20s continue to be hosted every year.
In Tests, the highest team score posted here is 656 / 8 dec by Australian national cricket team against English national cricket team on 23 July, 1964. The leading run scorers here are Denis Compton - 818 runs, Mike Atherton - 729 runs and Alec Stewart - 704 runs. The leading wicket takers are Alec Bedser - 51 wickets, James Anderson - 28 wickets and Jim Laker - 27 wickets.
In ODIs, the highest team score posted here is 318 / 7 by Sri Lanka national cricket team against English national cricket team on 28 June, 2006. The leading run scorers here are Graham Gooch - 405 runs, Allan Lamb - 341 runs and David Gower - 309 runs. The leading wicket takers are RGD Willis - 15 wickets, James Anderson - 14 wickets and Darren Gough - 13 wickets.
The ground is occasionally used as a venue for large - scale concerts, with a maximum capacity of 50,000. Although the old stage location, in front of the Indoor Cricket School, has been built on, buildings on the western side of the ground will be cleared by 2013 to again allow space for a stage. The concert capacity will increase to 65,000 after redevelopment.
The Old Trafford Lodge, The Point, and other corporate facilities are open all year round, as are the ground 's car parks, situated to the north and west of the ground.
The ground is served by the adjacent Old Trafford tram stop on the Manchester Metrolink 's Altrincham Line.
Coordinates: 53 ° 27 ′ 22.85 '' N 2 ° 17 ′ 12.34 '' W / 53.4563472 ° N 2.2867611 ° W / 53.4563472; - 2.2867611
|
what is the function of venture capital in the united states | Venture capital - wikipedia
Venture capital (VC) is a type of private equity, a form of financing that is provided by firms or funds to small, early - stage, emerging firms that are deemed to have high growth potential, or which have demonstrated high growth (in terms of number of employees, annual revenue, or both). Venture capital firms or funds invest in these early - stage companies in exchange for equity, or an ownership stake, in the companies they invest in. Venture capitalists take on the risk of financing risky start - ups in the hopes that some of the firms they support will become successful. The start - ups are usually based on an innovative technology or business model and they are usually from the high technology industries, such as information technology (IT), clean technology or biotechnology.
The typical venture capital investment occurs after an initial "seed funding '' round. The first round of institutional venture capital to fund growth is called the Series A round. Venture capitalists provide this financing in the interest of generating a return through an eventual "exit '' event, such as the company selling shares to the public for the first time in an initial public offering (IPO) or doing a merger and acquisition (also known as a "trade sale '') of the company.
In addition to angel investing, equity crowdfunding and other seed funding options, venture capital is attractive for new companies with limited operating history that are too small to raise capital in the public markets and have not reached the point where they are able to secure a bank loan or complete a debt offering. In exchange for the high risk that venture capitalists assume by investing in smaller and early - stage companies, venture capitalists usually get significant control over company decisions, in addition to a significant portion of the companies ' ownership (and consequently value). Start - ups like Uber, Airbnb, Flipkart, Xiaomi & Didi Chuxing are highly valued startups, where venture capitalists contribute more than financing to these early - stage firms; they also often provide strategic advice to the firm 's executives on its business model and marketing strategies.
Venture capital is also a way in which the private and public sectors can construct an institution that systematically creates business networks for the new firms and industries, so that they can progress and develop. This institution helps identify promising new firms and provide them with finance, technical expertise, mentoring, marketing "know - how '', and business models. Once integrated into the business network, these firms are more likely to succeed, as they become "nodes '' in the search networks for designing and building products in their domain. However, venture capitalists ' decisions are often biased, exhibiting for instance overconfidence and illusion of control, much like entrepreneurial decisions in general.
A venture may be defined as a project prospective converted into a process with an adequate assumed risk and investment. With few exceptions, private equity in the first half of the 20th century was the domain of wealthy individuals and families. The Wallenbergs, Vanderbilts, Whitneys, Rockefellers, and Warburgs were notable investors in private companies in the first half of the century. In 1938, Laurance S. Rockefeller helped finance the creation of both Eastern Air Lines and Douglas Aircraft, and the Rockefeller family had vast holdings in a variety of companies. Eric M. Warburg founded E.M. Warburg & Co. in 1938, which would ultimately become Warburg Pincus, with investments in both leveraged buyouts and venture capital. The Wallenberg family started Investor AB in 1916 in Sweden and were early investors in several Swedish companies such as ABB, Atlas Copco, Ericsson, etc. in the first half of the 20th century.
Before World War II (1939 -- 1945), venture capital (originally known as "development capital '') remained primarily the domain of wealthy individuals and families. Only after 1945 did "true '' private equity investments begin to emerge, notably with the founding of the first two venture capital firms in 1946: American Research and Development Corporation (ARDC) and J.H. Whitney & Company.
Georges Doriot, the "father of venture capitalism '' (and former assistant dean of Harvard Business School), founded INSEAD in 1957. Along with Ralph Flanders and Karl Compton (former president of MIT), Doriot founded ARDC in 1946 to encourage private - sector investment in businesses run by soldiers returning from World War II. ARDC became the first institutional private - equity investment firm to raise capital from sources other than wealthy families, and it had several notable investment successes as well. ARDC is credited with the first major venture capital success story: its 1957 investment of $70,000 in Digital Equipment Corporation (DEC) would be valued at over $355 million after the company 's initial public offering in 1968 (representing a return of over 1200 times on its investment and an annualized rate of return of 101 %).
Former employees of ARDC went on to establish several prominent venture - capital firms including Greylock Partners (founded in 1965 by Charlie Waite and Bill Elfers) and Morgan, Holland Ventures, the predecessor of Flagship Ventures (founded in 1982 by James Morgan). ARDC continued investing until 1971, when Doriot retired. In 1972 Doriot merged ARDC with Textron after having invested in over 150 companies.
John Hay Whitney (1904 -- 1982) and his partner Benno Schmidt (1913 -- 1999) founded J.H. Whitney & Company in 1946. Whitney had been investing since the 1930s, founding Pioneer Pictures in 1933 and acquiring a 15 % interest in Technicolor Corporation with his cousin Cornelius Vanderbilt Whitney. Florida Foods Corporation proved Whitney 's most famous investment. The company developed an innovative method for delivering nutrition to American soldiers, later known as Minute Maid orange juice and was sold to The Coca - Cola Company in 1960. J.H. Whitney & Company continued to make investments in leveraged buyout transactions and raised $750 million for its sixth institutional private equity fund in 2005.
One of the first steps toward a professionally managed venture capital industry was the passage of the Small Business Investment Act of 1958. The 1958 Act officially allowed the U.S. Small Business Administration (SBA) to license private "Small Business Investment Companies '' (SBICs) to help the financing and management of the small entrepreneurial businesses in the United States.
During the 1950s, putting a venture capital deal together may have required the help of two or three other organizations to complete the transaction. It was a business that was growing very rapidly, and as the business grew, the transactions grew exponentially.
During the 1960s and 1970s, venture capital firms focused their investment activity primarily on starting and expanding companies. More often than not, these companies were exploiting breakthroughs in electronic, medical, or data - processing technology. As a result, venture capital came to be almost synonymous with technology finance. An early West Coast venture capital company was Draper and Johnson Investment Company, formed in 1962 by William Henry Draper III and Franklin P. Johnson, Jr. In 1965, Sutter Hill Ventures acquired the portfolio of Draper and Johnson as a founding action. Bill Draper and Paul Wythes were the founders, and Pitch Johnson formed Asset Management Company at that time.
It is commonly noted that the first venture - backed startup is Fairchild Semiconductor (which produced the first commercially practical integrated circuit), funded in 1959 by what would later become Venrock Associates. Venrock was founded in 1969 by Laurance S. Rockefeller, the fourth of John D. Rockefeller 's six children, as a way to allow other Rockefeller children to develop exposure to venture capital investments.
It was also in the 1960s that the common form of private equity fund, still in use today, emerged. Private equity firms organized limited partnerships to hold investments in which the investment professionals served as general partner and the investors, who were passive limited partners, put up the capital. The compensation structure, still in use today, also emerged with limited partners paying an annual management fee of 1.0 -- 2.5 % and a carried interest typically representing up to 20 % of the profits of the partnership.
The growth of the venture capital industry was fueled by the emergence of the independent investment firms on Sand Hill Road, beginning with Kleiner, Perkins, Caufield & Byers and Sequoia Capital in 1972. Located in Menlo Park, CA, Kleiner Perkins, Sequoia and later venture capital firms would have access to the many semiconductor companies based in the Santa Clara Valley as well as early computer firms using their devices and programming and service companies.
Throughout the 1970s, a group of private equity firms, focused primarily on venture capital investments, would be founded that would become the model for later leveraged buyout and venture capital investment firms. In 1973, with the number of new venture capital firms increasing, leading venture capitalists formed the National Venture Capital Association (NVCA). The NVCA was to serve as the industry trade group for the venture capital industry. Venture capital firms suffered a temporary downturn in 1974, when the stock market crashed and investors were naturally wary of this new kind of investment fund.
It was not until 1978 that venture capital experienced its first major fundraising year, as the industry raised approximately $750 million. With the passage of the Employee Retirement Income Security Act (ERISA) in 1974, corporate pension funds were prohibited from holding certain risky investments including many investments in privately held companies. In 1978, the US Labor Department relaxed certain of the ERISA restrictions, under the "prudent man rule, '' thus allowing corporate pension funds to invest in the asset class and providing a major source of capital available to venture capitalists.
The public successes of the venture capital industry in the 1970s and early 1980s (e.g., Digital Equipment Corporation, Apple Inc., Genentech) gave rise to a major proliferation of venture capital investment firms. From just a few dozen firms at the start of the decade, there were over 650 firms by the end of the 1980s, each searching for the next major "home run. '' The number of firms multiplied, and the capital managed by these firms increased from $3 billion to $31 billion over the course of the decade.
The growth of the industry was hampered by sharply declining returns, and certain venture firms began posting losses for the first time. In addition to the increased competition among firms, several other factors affected returns. The market for initial public offerings cooled in the mid-1980s before collapsing after the stock market crash in 1987, and foreign corporations, particularly from Japan and Korea, flooded early - stage companies with capital.
In response to the changing conditions, corporations that had sponsored in - house venture investment arms, including General Electric and Paine Webber either sold off or closed these venture capital units. Additionally, venture capital units within Chemical Bank and Continental Illinois National Bank, among others, began shifting their focus from funding early stage companies toward investments in more mature companies. Even industry founders J.H. Whitney & Company and Warburg Pincus began to transition toward leveraged buyouts and growth capital investments.
By the end of the 1980s, venture capital returns were relatively low, particularly in comparison with their emerging leveraged buyout cousins, due in part to the competition for hot startups, excess supply of IPOs and the inexperience of many venture capital fund managers. Growth in the venture capital industry remained limited throughout the 1980s and the first half of the 1990s, increasing from $3 billion in 1983 to just over $4 billion more than a decade later in 1994.
After a shakeout of venture capital managers, the more successful firms retrenched, focusing increasingly on improving operations at their portfolio companies rather than continuously making new investments. Results would begin to turn very attractive, successful and would ultimately generate the venture capital boom of the 1990s. Yale School of Management Professor Andrew Metrick refers to these first 15 years of the modern venture capital industry beginning in 1980 as the "pre-boom period '' in anticipation of the boom that would begin in 1995 and last through the bursting of the Internet bubble in 2000.
The late 1990s were a boom time for venture capital, as firms on Sand Hill Road in Menlo Park and Silicon Valley benefited from a huge surge of interest in the nascent Internet and other computer technologies. Initial public offerings of stock for technology and other growth companies were in abundance, and venture firms were reaping large returns.
The Nasdaq crash and technology slump that started in March 2000 shook virtually the entire venture capital industry as valuations for startup technology companies collapsed. Over the next two years, many venture firms were forced to write off large proportions of their investments, and many funds were significantly "under water '' (the values of the fund 's investments were below the amount of capital invested). Venture capital investors sought to reduce the size of commitments they had made to venture capital funds, and, in numerous instances, investors sought to unload existing commitments for cents on the dollar in the secondary market. By mid-2003, the venture capital industry had shriveled to about half its 2001 capacity. Nevertheless, PricewaterhouseCoopers ' MoneyTree Survey shows that total venture capital investments held steady at 2003 levels through the second quarter of 2005.
Although the post-boom years represent just a small fraction of the peak levels of venture investment reached in 2000, they still represent an increase over the levels of investment from 1980 through 1995. As a percentage of GDP, venture investment was 0.058 % in 1994, peaked at 1.087 % (nearly 19 times the 1994 level) in 2000 and ranged from 0.164 % to 0.182 % in 2003 and 2004. The revival of an Internet - driven environment in 2004 through 2007 helped to revive the venture capital environment. However, as a percentage of the overall private equity market, venture capital has still not reached its mid-1990s level, let alone its peak in 2000.
Venture capital funds, which were responsible for much of the fundraising volume in 2000 (the height of the dot - com bubble), raised only $25.1 billion in 2006, a 2 % decline from 2005 and a significant decline from its peak.
Obtaining venture capital is substantially different from raising debt or a loan. Lenders have a legal right to interest on a loan and repayment of the capital irrespective of the success or failure of a business. Venture capital is invested in exchange for an equity stake in the business. The return of the venture capitalist as a shareholder depends on the growth and profitability of the business. This return is generally earned when the venture capitalist "exits '' by selling its shareholdings when the business is sold to another owner.
Venture capitalists are typically very selective in deciding what to invest in; as a result, firms are looking for the extremely rare yet sought - after qualities such as innovative technology, potential for rapid growth, a well - developed business model, and an impressive management team. Of these qualities, funds are most interested in ventures with exceptionally high growth potential, as only such opportunities are likely capable of providing financial returns and a successful exit within the required time frame (typically 3 -- 7 years) that venture capitalists expect.
Because investments are illiquid and require the extended time frame to harvest, venture capitalists are expected to carry out detailed due diligence prior to investment. Venture capitalists also are expected to nurture the companies in which they invest, in order to increase the likelihood of reaching an IPO stage when valuations are favourable. Venture capitalists typically assist at four stages in the company 's development:
Because there are no public exchanges listing their securities, private companies meet venture capital firms and other private equity investors in several ways, including warm referrals from the investors ' trusted sources and other business contacts; investor conferences and symposia; and summits where companies pitch directly to investor groups in face - to - face meetings, including a variant known as "Speed Venturing '', which is akin to speed - dating for capital, where the investor decides within 10 minutes whether he wants a follow - up meeting. In addition, some new private online networks are emerging to provide additional opportunities for meeting investors.
This need for high returns makes venture funding an expensive capital source for companies, and most suitable for businesses having large up - front capital requirements, which can not be financed by cheaper alternatives such as debt. That is most commonly the case for intangible assets such as software, and other intellectual property, whose value is unproven. In turn, this explains why venture capital is most prevalent in the fast - growing technology and life sciences or biotechnology fields.
If a company does have the qualities venture capitalists seek, including a solid business plan, a good management team, investment and passion from the founders, a good potential to exit the investment before the end of their funding cycle, and target minimum returns in excess of 40 % per year, it will find it easier to raise venture capital.
There are typically six stages of venture round financing offered in venture capital, which roughly correspond to the stages of a company 's development.
Between the first round and the fourth round, venture - backed companies may also seek to take venture debt.
A venture capitalist is a person who makes venture investments, and these venture capitalists are expected to bring managerial and technical expertise as well as capital to their investments. A venture capital fund refers to a pooled investment vehicle (in the United States, often an LP or LLC) that primarily invests the financial capital of third - party investors in enterprises that are too risky for the standard capital markets or bank loans. These funds are typically managed by a venture capital firm, which often employs individuals with technology backgrounds (scientists, researchers), business training and / or deep industry experience.
A core skill within VC is the ability to identify novel or disruptive technologies that have the potential to generate high commercial returns at an early stage. By definition, VCs also take a role in managing entrepreneurial companies at an early stage, thus adding skills as well as capital, thereby differentiating VC from buy - out private equity, which typically invest in companies with proven revenue, and thereby potentially realizing much higher rates of returns. Inherent in realizing abnormally high rates of returns is the risk of losing all of one 's investment in a given startup company. As a consequence, most venture capital investments are done in a pool format, where several investors combine their investments into one large fund that invests in many different startup companies. By investing in the pool format, the investors are spreading out their risk to many different investments instead of taking the chance of putting all of their money in one start up firm.
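The risk - pooling logic described above can be made concrete with a short numerical sketch. The figures below are purely hypothetical -- the 70 % failure probability, 30x payoff on a success, and 25 - company portfolio are assumed for illustration and do not come from the article or from any actual fund.

```python
# Hypothetical, illustrative figures only (not data from any actual fund):
p_loss = 0.70        # assumed chance that a single startup returns nothing
payoff = 30.0        # assumed multiple returned on a successful startup
n = 25               # assumed number of startups in the pooled fund
capital = 1_000_000  # total capital to deploy, in dollars

# Single bet: all capital goes into one startup.
ev_single = (1 - p_loss) * payoff * capital
prob_wipeout_single = p_loss

# Pooled fund: capital split equally across n independent startups.
ev_pool = n * (1 - p_loss) * payoff * (capital / n)  # identical expected value
prob_wipeout_pool = p_loss ** n                      # every startup must fail

print(f"Expected value, single bet : ${ev_single:,.0f}")
print(f"Expected value, pooled fund: ${ev_pool:,.0f}")
print(f"P(lose everything), single bet : {prob_wipeout_single:.2%}")
print(f"P(lose everything), pooled fund: {prob_wipeout_pool:.6%}")
```

Under these assumptions the expected value is identical in both cases; what pooling changes is the chance of a total wipeout, which is exactly the point the paragraph above makes about spreading risk across many investments.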
Venture capital firms are typically structured as partnerships, the general partners of which serve as the managers of the firm and will serve as investment advisors to the venture capital funds raised. Venture capital firms in the United States may also be structured as limited liability companies, in which case the firm 's managers are known as managing members. Investors in venture capital funds are known as limited partners. This constituency comprises both high - net - worth individuals and institutions with large amounts of available capital, such as state and private pension funds, university financial endowments, foundations, insurance companies, and pooled investment vehicles, called funds of funds.
Venture capitalist firms differ in their motivations and approaches. There are multiple factors, and each firm is different.
Some of the factors that influence VC decisions include:
Within the venture capital industry, the general partners and other investment professionals of the venture capital firm are often referred to as "venture capitalists '' or "VCs ''. Typical career backgrounds vary, but, broadly speaking, venture capitalists come from either an operational or a finance background. Venture capitalists with an operational background (operating partner) tend to be former founders or executives of companies similar to those which the partnership finances or will have served as management consultants. Venture capitalists with finance backgrounds tend to have investment banking or other corporate finance experience.
Although the titles are not entirely uniform from firm to firm, other positions at venture capital firms include:
Most venture capital funds have a fixed life of 10 years, with the possibility of a few years of extensions to allow for private companies still seeking liquidity. The investing cycle for most funds is generally three to five years, after which the focus is managing and making follow - on investments in an existing portfolio. This model was pioneered by successful funds in Silicon Valley through the 1980s to invest in technological trends broadly but only during their period of ascendance, and to cut exposure to management and marketing risks of any individual firm or its product.
In such a fund, the investors have a fixed commitment to the fund that is initially unfunded and subsequently "called down '' by the venture capital fund over time as the fund makes its investments. There are substantial penalties for a limited partner (or investor) that fails to participate in a capital call.
It can take anywhere from a month or so to several years for venture capitalists to raise money from limited partners for their fund. At the time when all of the money has been raised, the fund is said to be closed, and the 10 - year lifetime begins. Some funds have partial closes when one half (or some other amount) of the fund has been raised. The vintage year generally refers to the year in which the fund was closed and may serve as a means to stratify VC funds for comparison. This shows the difference between a venture capital fund management company and the venture capital funds managed by them.
From the investors ' point of view, funds can be: (1) traditional -- where all the investors invest on equal terms; or (2) asymmetric -- where different investors have different terms. Typically the asymmetry is seen in cases where an investor has other interests, such as tax income in the case of public investors.
Venture capitalists are compensated through a combination of management fees and carried interest, often referred to as a "two and 20 '' arrangement: an annual management fee of roughly 2 % of the fund 's committed capital, plus carried interest of around 20 % of the fund 's net profits.
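As a rough illustration of the "two and 20 '' arithmetic, the sketch below assumes a $100 million fund, a 10 - year life, and a hypothetical final portfolio value; none of these numbers come from the article, and real fund terms (fee bases, step - downs, hurdle rates) are usually more complicated.

```python
def two_and_twenty(committed, final_value, years=10,
                   mgmt_fee_rate=0.02, carry_rate=0.20):
    """Rough 'two and 20' economics for a venture fund (illustrative only).

    Assumes the management fee is charged on committed capital every year
    and carried interest applies to profits above the capital originally
    committed; actual fund agreements vary considerably.
    """
    mgmt_fees = mgmt_fee_rate * committed * years
    profit = max(final_value - committed, 0)
    carry = carry_rate * profit
    to_lps = final_value - carry          # limited partners' share of proceeds
    to_gps = mgmt_fees + carry            # general partners' fees plus carry
    return mgmt_fees, carry, to_lps, to_gps


# Hypothetical example: a $100M fund whose portfolio is worth $300M at exit.
fees, carry, lps, gps = two_and_twenty(committed=100e6, final_value=300e6)
print(f"Management fees over fund life: ${fees:,.0f}")   # $20,000,000
print(f"Carried interest:               ${carry:,.0f}")  # $40,000,000
print(f"Distributed to limited partners: ${lps:,.0f}")   # $260,000,000
print(f"Earned by general partners:      ${gps:,.0f}")   # $60,000,000
```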
Because a fund may run out of capital prior to the end of its life, larger venture capital firms usually have several overlapping funds at the same time; doing so lets the larger firm keep specialists in all stages of the development of firms almost constantly engaged. Smaller firms tend to thrive or fail with their initial industry contacts; by the time the fund cashes out, an entirely new generation of technologies and people is ascending, whom the general partners may not know well, and so it is prudent to reassess and shift industries or personnel rather than attempt to simply invest more in the industry or people the partners already know.
Because of the strict requirements venture capitalists have for potential investments, many entrepreneurs seek seed funding from angel investors, who may be more willing to invest in highly speculative opportunities, or may have a prior relationship with the entrepreneur.
Furthermore, many venture capital firms will only seriously evaluate an investment in a start - up company otherwise unknown to them if the company can prove at least some of its claims about the technology and / or market potential for its product or services. To achieve this, or even just to avoid the dilutive effects of receiving funding before such claims are proven, many start - ups seek to self - finance sweat equity until they reach a point where they can credibly approach outside capital providers such as venture capitalists or angel investors. This practice is called "bootstrapping ''.
Equity crowdfunding is emerging as an alternative to traditional venture capital. Traditional crowdfunding is an approach to raising the capital required for a new project or enterprise by appealing to large numbers of ordinary people for small donations. While such an approach has long precedents in the sphere of charity, it is receiving renewed attention from entrepreneurs, now that social media and online communities make it possible to reach out to a group of potentially interested supporters at very low cost. Some equity crowdfunding models are also being applied specifically for startup funding, such as those listed at Comparison of crowd funding services. One of the reasons to look for alternatives to venture capital is the problem of the traditional VC model. The traditional VCs are shifting their focus to later - stage investments, and return on investment of many VC funds have been low or negative.
In Europe and India, Media for equity is a partial alternative to venture capital funding. Media for equity investors are able to supply start - ups with often significant advertising campaigns in return for equity. In Europe, an investment advisory firm offers young ventures the option to exchange equity for services investment; their aim is to guide ventures through the development stage to arrive at a significant funding, mergers and acquisition, or other exit strategy.
In industries where assets can be securitized effectively because they reliably generate future revenue streams or have a good potential for resale in case of foreclosure, businesses may more cheaply be able to raise debt to finance their growth. Good examples would include asset - intensive extractive industries such as mining, or manufacturing industries. Offshore funding is provided via specialist venture capital trusts, which seek to use securitization in structuring hybrid multi-market transactions via an SPV (special purpose vehicle): a corporate entity that is designed solely for the purpose of the financing.
In addition to traditional venture capital and angel networks, groups have emerged, which allow groups of small investors or entrepreneurs themselves to compete in a privatized business plan competition where the group itself serves as the investor through a democratic process.
Law firms are also increasingly acting as an intermediary between clients seeking venture capital and the firms providing it.
Other forms include venture resources that seek to provide non-monetary support to launch a new venture.
Venture capital is also associated with job creation (accounting for 2 % of US GDP), the knowledge economy, and used as a proxy measure of innovation within an economic sector or geography. Every year, there are nearly 2 million businesses created in the USA, and 600 -- 800 get venture capital funding. According to the National Venture Capital Association, 11 % of private sector jobs come from venture - backed companies and venture - backed revenue accounts for 21 % of US GDP.
Babson College 's Diana Report found that the number of women partners in VC firms decreased from 10 % in 1999 to 6 % in 2014. The report also found that 97 % of VC - funded businesses had male chief executives, and that businesses with all - male teams were more than four times as likely to receive VC funding compared to teams with at least one woman. More than 75 % of VC firms in the US did not have any female venture capitalists at the time they were surveyed. It was found that a greater fraction of VC firms had never had a woman represent them on the board of one of their portfolio companies. For comparison, a UC Davis study focusing on large public companies in California found 49.5 % with at least one female board seat. When the latter results were published, some San Jose Mercury News readers dismissed the possibility that sexism was a cause. In a follow - up Newsweek article, Nina Burleigh asked "Where were all these offended people when women like Heidi Roizen published accounts of having a venture capitalist stick her hand in his pants under a table while a deal was being discussed? ''
Venture capital, as an industry, originated in the United States, and American firms have traditionally been the largest participants in venture deals with the bulk of venture capital being deployed in American companies. However, increasingly, non-US venture investment is growing, and the number and size of non-US venture capitalists have been expanding.
Venture capital has been used as a tool for economic development in a variety of developing regions. In many of these regions, with less developed financial sectors, venture capital plays a role in facilitating access to finance for small and medium enterprises (SMEs), which in most cases would not qualify for receiving bank loans.
In 2008, while VC funding was still largely dominated by U.S. money ($28.8 billion invested in over 2,550 deals in 2008) compared to international fund investments ($13.4 billion invested elsewhere), there was an average 5 % growth in venture capital deals outside the USA, mainly in China and Europe. Geographical differences can be significant. For instance, in the UK, 4 % of British investment goes to venture capital, compared to about 33 % in the U.S.
Venture capitalists invested some $29.1 billion in 3,752 deals in the U.S. through the fourth quarter of 2011, according to a report by the National Venture Capital Association. The same numbers for all of 2010 were $23.4 billion in 3,496 deals.
According to a report by Dow Jones VentureSource, venture capital funding fell to $6.4 billion in the USA in the first quarter of 2013, an 11.8 % drop from the first quarter of 2012, and a 20.8 % decline from 2011. Venture firms have added $4.2 billion into their funds this year, down from $6.3 billion in the first quarter of 2013, but up from $2.6 billion in the fourth quarter of 2012.
The Venture Capital industry in Mexico is a fast - growing sector in the country that, with the support of institutions and private funds, is estimated to reach US $100 billion invested by 2018.
In Israel, high - tech entrepreneurship and venture capital have flourished well beyond the country 's relative size. As it has very few natural resources and, historically, has been forced to build its economy on knowledge - based industries, its VC industry has developed rapidly and nowadays has about 70 active venture capital funds, of which 14 are international VCs with Israeli offices, plus an additional 220 international funds which actively invest in Israel. In addition, as of 2010, Israel led the world in venture capital invested per capita. Israel attracted $170 per person compared to $75 in the USA. About two thirds of the funds invested were from foreign sources, and the rest domestic. In 2013, Wix.com joined 62 other Israeli firms on the Nasdaq.
Canadian technology companies have attracted interest from the global venture capital community partially as a result of generous tax incentives through the Scientific Research and Experimental Development (SR&ED) investment tax credit program. The basic incentive available to any Canadian corporation performing R&D is a refundable tax credit that is equal to 20 % of "qualifying '' R&D expenditures (labour, material, R&D contracts, and R&D equipment). An enhanced 35 % refundable tax credit is available to certain (i.e. small) Canadian - controlled private corporations (CCPCs). Because the CCPC rules require a minimum of 50 % Canadian ownership in the company performing R&D, foreign investors who would like to benefit from the larger 35 % tax credit must accept a minority position in the company, which might not be desirable. The SR&ED program does not restrict the export of any technology or intellectual property that may have been developed with the benefit of SR&ED tax incentives.
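The two credit rates mentioned above amount to a simple percentage calculation. The sketch below is a deliberately simplified illustration: it uses an assumed expenditure figure and applies only the 20 % basic and 35 % enhanced rates cited in the paragraph, ignoring the expenditure limits, provincial credits, and overhead rules of the actual SR&ED program.

```python
def sred_credit(qualifying_expenditures, is_small_ccpc):
    """Very simplified SR&ED credit estimate (illustrative only).

    Applies the 35% enhanced rate for small Canadian-controlled private
    corporations and the 20% basic rate otherwise, with no expenditure
    limit or other adjustments from the real program rules.
    """
    rate = 0.35 if is_small_ccpc else 0.20
    return rate * qualifying_expenditures


# Hypothetical example: C$1,000,000 of qualifying R&D spending.
spend = 1_000_000
print(f"Small CCPC credit:       C${sred_credit(spend, True):,.0f}")   # C$350,000
print(f"Other corporation credit: C${sred_credit(spend, False):,.0f}") # C$200,000
```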
Canada also has a fairly unusual form of venture capital generation in its Labour Sponsored Venture Capital Corporations (LSVCC). These funds, also known as Retail Venture Capital or Labour Sponsored Investment Funds (LSIF), are generally sponsored by labor unions and offer tax breaks from government to encourage retail investors to purchase the funds. Generally, these Retail Venture Capital funds only invest in companies where the majority of employees are in Canada. However, innovative structures have been developed to permit LSVCCs to direct in Canadian subsidiaries of corporations incorporated in jurisdictions outside of Canada.
Many Swiss start - ups are university spin - offs, in particular from its federal institutes of technology in Lausanne and Zurich. According to a study by the London School of Economics analysing 130 ETH Zurich spin - offs over 10 years, about 90 % of these start - ups survived the first five critical years, resulting in an average annual IRR of more than 43 %. Switzerland 's most active early - stage investors are The Zurich Cantonal Bank, investiere.ch, Swiss Founders Fund, as well as a number of angel investor clubs.
Europe has a large and growing number of active venture firms. Capital raised in the region in 2005, including buy - out funds, exceeded € 60 billion, of which € 12.6 billion was specifically allocated to venture investment. Trade association Invest Europe has a list of active member firms and industry statistics.
European venture capital investments in 2015 increased by 5 % year - on - year to € 3.8 billion, with 2,836 companies backed. The amount invested increased across all stages led by seed investments with an increase of 18 %. Most capital was concentrated in life sciences (34 %), computer & consumer electronics (20 %) and communications (19 %) sectors, according to Invest Europe 's annual data.
In 2012, in France, according to a study by AFIC (the French Association of VC firms), € 6.1 billion was invested through 1,548 deals (39 % in new companies, 61 % in new rounds) by firms such as Partech Ventures and Innovacom.
A study published in early 2013 showed that contrary to popular belief, European startups backed by venture capital do not perform worse than US counterparts. European venture - backed firms have an equal chance of listing on the stock exchange, and a slightly lower chance of a "trade sale '' (acquisition by other company).
Leading early - stage venture capital investors in Europe include Mark Tluszcz of Mangrove Capital Partners and Danny Rimer of Index Ventures, both of whom were named on Forbes Magazine 's Midas List of the world 's top dealmakers in technology venture capital in 2007.
India is fast catching up with the West in the field of venture capital and a number of venture capital funds have a presence in the country (IVCA). In 2006, the total amount of private equity and venture capital in India reached $7.5 billion across 299 deals. In the Indian context, venture capital consists of investing in equity, quasi-equity, or conditional loans in order to promote unlisted, high - risk, or high - tech firms driven by technically or professionally qualified entrepreneurs. It is also used to refer to investors "providing seed '', "start - up and first - stage financing '', or financing companies that have demonstrated extraordinary business potential. Venture capital refers to capital investment; equity and debt; both of which carry indubitable risk. The risk anticipated is very high. The venture capital industry follows the concept of "high risk, high return '', innovative entrepreneurship, knowledge - based ideas and human capital intensive enterprises have taken the front seat as venture capitalists invest in risky finance to encourage innovation.
China is also starting to develop a venture capital industry (CVCA).
Vietnam is experiencing its first foreign venture capital funds, including IDG Venture Vietnam ($100 million) and DFJ Vinacapital ($35 million).
Singapore is widely recognized and featured as one of the hottest places to both start up and invest, mainly due to its healthy ecosystem, its strategic location and connectedness to foreign markets. With 100 deals valued at US $3.5 billion, Singapore saw a record value of PE and VC investments in 2016. The number of PE and VC investments increased substantially over the last 5 years: In 2015, Singapore recorded 81 investments with an aggregate value of US $2.2 billion while in 2014 and 2013, PE and VC deal values came to US $2.4 billion and US $0.9 billion respectively. With 53 percent, tech investments account for the majority of deal volume. Moreover, Singapore is home to two of South - East Asia 's largest unicorns. Garena is reportedly the highest - valued unicorn in the region with a US $3.5 billion price tag, while Grab is the highest - funded, having raised a total of US $1.43 billion since its incorporation in 2012. Start - ups and small businesses in Singapore receive support from policy makers and the local government fosters the role VCs play to support entrepreneurship in Singapore and the region. For instance, in 2016, Singapore 's National Research Foundation (NRF) has given out grants up to around $30 million to four large local enterprises for investments in startups in the city - state. This first of its kind partnership NRF has entered into is designed to encourage these enterprises to source for new technologies and innovative business models. Currently, the rules governing VC firms are being reviewed by the Monetary Authority of Singapore (MAS) to make it easier to set up funds and increase funding opportunities for start - ups. This mainly includes simplifying and shortening the authorization process for new venture capital managers and to study whether existing incentives that have attracted traditional asset managers here will be suitable for the VC sector. A public consultation on the proposals was held in January 2017 with changes expected to be introduced by July.
The Middle East and North Africa (MENA) venture capital industry is at an early stage of development but growing. The MENA Private Equity Association Guide to Venture Capital for entrepreneurs lists VC firms in the region, and other resources available in the MENA VC ecosystem. Diaspora organization TechWadi aims to give MENA companies access to VC investors based in the US.
The Southern African venture capital industry is developing. The South African Government and Revenue Service is following the international trend of using tax efficient vehicles to propel economic growth and job creation through venture capital. Section 12 J of the Income Tax Act was updated to include venture capital. Companies are allowed to use a tax efficient structure similar to VCTs in the UK. Despite the above structure, the government needs to adjust its regulation around intellectual property, exchange control and other legislation to ensure that Venture capital succeeds. < www.savca.co.za >
Currently, there are not many venture capital funds in operation and it is a small community; however, the number of venture funds is steadily increasing, with new incentives slowly coming in from government. Funds are difficult to come by, and because of the limited funding, companies are more likely to receive funding if they can demonstrate initial sales or traction and the potential for significant growth. The majority of the venture capital in Sub-Saharan Africa is centered on South Africa and Kenya.
Unlike public companies, information regarding an entrepreneur's business is typically confidential and proprietary. As part of the due diligence process, most venture capitalists will require significant detail with respect to a company's business plan. Entrepreneurs must remain vigilant about sharing information with venture capitalists that are investors in their competitors. Most venture capitalists treat information confidentially, but as a matter of business practice they do not typically enter into non-disclosure agreements because of the potential liability issues those agreements entail. Entrepreneurs are therefore typically well advised to protect truly proprietary intellectual property.
Limited partners of venture capital firms typically have access only to limited amounts of information with respect to the individual portfolio companies in which they are invested and are typically bound by confidentiality provisions in the fund 's limited partnership agreement.
Several strict guidelines regulate those who deal in venture capital; notably, under U.S. Securities and Exchange Commission guidelines, they are not allowed to advertise or solicit business in any form.
|
which is a large kuiper belt object that was visited by new horizons in 2015 | New Horizons - Wikipedia
New Horizons is an interplanetary space probe that was launched as a part of NASA 's New Frontiers program. Engineered by the Johns Hopkins University Applied Physics Laboratory (APL) and the Southwest Research Institute (SwRI), with a team led by S. Alan Stern, the spacecraft was launched in 2006 with the primary mission to perform a flyby study of the Pluto system in 2015, and a secondary mission to fly by and study one or more other Kuiper belt objects (KBOs) in the decade to follow. It is the fifth artificial object to achieve the escape velocity needed to leave the Solar System.
On January 19, 2006, New Horizons was launched from Cape Canaveral Air Force Station by an Atlas V rocket directly into an Earth - and - solar escape trajectory with a speed of about 16.26 kilometers per second (58,536 km / h; 36,373 mph). After a brief encounter with asteroid 132524 APL, New Horizons proceeded to Jupiter, making its closest approach on February 28, 2007, at a distance of 2.3 million kilometers (1.4 million miles). The Jupiter flyby provided a gravity assist that increased New Horizons ' speed; the flyby also enabled a general test of New Horizons ' scientific capabilities, returning data about the planet 's atmosphere, moons, and magnetosphere.
Most of the post-Jupiter voyage was spent in hibernation mode to preserve on - board systems, except for brief annual checkouts. On December 6, 2014, New Horizons was brought back online for the Pluto encounter, and instrument check - out began. On January 15, 2015, the New Horizons spacecraft began its approach phase to Pluto.
On July 14, 2015, at 11: 49 UTC, it flew 12,500 km (7,800 mi) above the surface of Pluto, making it the first spacecraft to explore the dwarf planet. On October 25, 2016, at 21: 48 UTC, the last of the recorded data from the Pluto flyby was received from New Horizons. Having completed its flyby of Pluto, New Horizons has maneuvered for a flyby of Kuiper belt object (486958) 2014 MU 69, expected to take place on January 1, 2019, when it will be 43.4 AU from the Sun.
In August 1992, JPL scientist Robert Staehle called Pluto discoverer Clyde Tombaugh, requesting permission to visit his planet. "I told him he was welcome to it, '' Tombaugh later remembered, "though he 's got to go one long, cold trip. '' The call eventually led to a series of proposed Pluto missions, leading up to New Horizons.
Stamatios "Tom '' Krimigis, head of the Applied Physics Laboratory 's space division, one of many entrants in the New Frontiers Program competition, formed the New Horizons team with Alan Stern in December 2000. Appointed as the project 's principal investigator, Stern was described by Krimigis as "the personification of the Pluto mission ''. New Horizons was based largely on Stern 's work since Pluto 350 and involved most of the team from Pluto Kuiper Express. The New Horizons proposal was one of five that were officially submitted to NASA. It was later selected as one of two finalists to be subject to a three - month concept study, in June 2001. The other finalist, POSSE (Pluto and Outer Solar System Explorer), was a separate, but similar Pluto mission concept by the University of Colorado Boulder, led by principal investigator Larry W. Esposito, and supported by the JPL, Lockheed Martin and the University of California. However, the APL, in addition to being supported by Pluto Kuiper Express developers at the Goddard Space Flight Center and Stanford University, were at an advantage; they had recently developed NEAR Shoemaker for NASA, which had successfully entered orbit around 433 Eros earlier in the year, and would later land on the asteroid to scientific and engineering fanfare.
In November 2001, New Horizons was officially selected for funding as part of the New Frontiers program. However, the new NASA Administrator appointed by the Bush Administration, Sean O'Keefe, was not supportive of New Horizons, and effectively cancelled it by not including it in NASA's budget for 2003. NASA's Associate Administrator for the Science Mission Directorate, Ed Weiler, prompted Stern to lobby for the funding of New Horizons in hopes of the mission appearing in the Planetary Science Decadal Survey, a prioritized "wish list" compiled by the United States National Research Council that reflects the opinions of the scientific community. After an intense campaign to gain support for New Horizons, the Planetary Science Decadal Survey of 2003–2013 was published in the summer of 2002. New Horizons topped the list of projects considered the highest priority among the scientific community in the medium-size category, ahead of missions to the Moon and even Jupiter. Weiler stated that it was a result that "(his) administration was not going to fight". Funding for the mission was finally secured following the publication of the report, and Stern's team was able to start building the spacecraft and its instruments, with a planned launch in January 2006 and arrival at Pluto in 2015. Alice Bowman became Mission Operations Manager.
New Horizons is the first mission in NASA 's New Frontiers mission category, larger and more expensive than the Discovery missions but smaller than the Flagship Program. The cost of the mission (including spacecraft and instrument development, launch vehicle, mission operations, data analysis, and education / public outreach) is approximately $700 million over 15 years (2001 -- 2016). The spacecraft was built primarily by Southwest Research Institute (SwRI) and the Johns Hopkins Applied Physics Laboratory. The mission 's principal investigator is Alan Stern of the Southwest Research Institute (formerly NASA Associate Administrator).
After separation from the launch vehicle, overall control was taken by Mission Operations Center (MOC) at the Applied Physics Laboratory in Howard County, Maryland. The science instruments are operated at Clyde Tombaugh Science Operations Center (T - SOC) in Boulder, Colorado. Navigation is performed at various contractor facilities, whereas the navigational positional data and related celestial reference frames are provided by the Naval Observatory Flagstaff Station through Headquarters NASA and JPL; KinetX is the lead on the New Horizons navigation team and is responsible for planning trajectory adjustments as the spacecraft speeds toward the outer Solar System. Coincidentally the Naval Observatory Flagstaff Station was where the photographic plates were taken for the discovery of Pluto 's moon Charon; and the Naval Observatory is itself not far from the Lowell Observatory where Pluto was discovered.
New Horizons was originally planned as a voyage to the only unexplored planet in the Solar System. When the spacecraft was launched, Pluto was still classified as a planet, later to be reclassified as a dwarf planet by the International Astronomical Union (IAU). Some members of the New Horizons team, including Alan Stern, disagree with the IAU definition and still describe Pluto as the ninth planet. Pluto 's satellites Nix and Hydra also have a connection with the spacecraft: the first letters of their names (N and H) are the initials of New Horizons. The moons ' discoverers chose these names for this reason, plus Nix and Hydra 's relationship to the mythological Pluto.
In addition to the science equipment, there are several cultural artifacts traveling with the spacecraft. These include a collection of 434,738 names stored on a compact disc, a piece of Scaled Composites 's SpaceShipOne, a "Not Yet Explored '' USPS stamp, and a Flag of the United States, along with other mementos.
About 30 grams (1 oz) of Clyde Tombaugh 's ashes are aboard the spacecraft, to commemorate his discovery of Pluto in 1930. A Florida - state quarter coin, whose design commemorates human exploration, is included, officially as a trim weight. One of the science packages (a dust counter) is named after Venetia Burney, who, as a child, suggested the name "Pluto '' after its discovery.
The goal of the mission is to understand the formation of the Pluto system, the Kuiper belt, and the transformation of the early Solar System. The spacecraft collected data on the atmospheres, surfaces, interiors, and environments of Pluto and its moons. It will also study other objects in the Kuiper belt. "By way of comparison, New Horizons gathered 5,000 times as much data at Pluto as Mariner did at the Red Planet. ''
Some of the questions the mission attempts to answer are: What is Pluto 's atmosphere made of and how does it behave? What does its surface look like? Are there large geological structures? How do solar wind particles interact with Pluto 's atmosphere?
Specifically, the mission 's science objectives are to:
The spacecraft is comparable in size and general shape to a grand piano and has been compared to a piano glued to a cocktail bar - sized satellite dish. As a point of departure, the team took inspiration from the Ulysses spacecraft, which also carried a radioisotope thermoelectric generator (RTG) and dish on a box - in - box structure through the outer Solar System. Many subsystems and components have flight heritage from APL 's CONTOUR spacecraft, which in turn had heritage from APL 's TIMED spacecraft.
New Horizons' body forms a triangle, almost 0.76 m (2.5 ft) thick. (The Pioneers have hexagonal bodies, whereas the Voyagers, Galileo, and Cassini–Huygens have decagonal, hollow bodies.) A 7075 aluminium alloy tube forms the main structural column, between the launch vehicle adapter ring at the "rear" and the 2.1 m (6 ft 11 in) radio dish antenna affixed to the "front" flat side. The titanium fuel tank is in this tube. The RTG attaches with a 4-sided titanium mount resembling a gray pyramid or stepstool. Titanium provides strength and thermal isolation. The rest of the triangle is primarily sandwich panels of thin aluminium facesheet (less than 1⁄64 in or 0.40 mm) bonded to aluminium honeycomb core. The structure is larger than strictly necessary, with empty space inside. The structure is designed to act as shielding, reducing electronics errors caused by radiation from the RTG. Also, the mass distribution required for a spinning spacecraft demands a wider triangle.
The interior structure is painted black to equalize temperature by radiative heat transfer. Overall, the spacecraft is thoroughly blanketed to retain heat. Unlike the Pioneers and Voyagers, the radio dish is also enclosed in blankets that extend to the body. The heat from the RTG adds warmth to the spacecraft while it is in the outer Solar System. While in the inner Solar System, the spacecraft must prevent overheating, hence electronic activity is limited, power is diverted to shunts with attached radiators, and louvers are opened to radiate excess heat. While the spacecraft is cruising inactively in the cold outer Solar System, the louvers are closed, and the shunt regulator reroutes power to electric heaters.
New Horizons has both spin - stabilized (cruise) and three - axis stabilized (science) modes controlled entirely with hydrazine monopropellant. Additional post launch delta - v of over 290 m / s (1,000 km / h; 650 mph) is provided by a 77 kg (170 lb) internal tank. Helium is used as a pressurant, with an elastomeric diaphragm assisting expulsion. The spacecraft 's on - orbit mass including fuel is over 470 kg (1,040 lb) on the Jupiter flyby trajectory, but would have been only 445 kg (981 lb) for the backup direct flight option to Pluto. Significantly, had the backup option been taken, this would have meant less fuel for later Kuiper belt operations.
There are 16 thrusters on New Horizons: four 4.4 N (1.0 lbf) and twelve 0.9 N (0.2 lbf) plumbed into redundant branches. The larger thrusters are used primarily for trajectory corrections, and the small ones (previously used on Cassini and the Voyager spacecraft) are used primarily for attitude control and spinup / spindown maneuvers. Two star cameras are used to measure the spacecraft attitude. They are mounted on the face of the spacecraft and provide attitude information while in spin - stabilized or 3 - axis mode. In between the time of star camera readings, spacecraft orientation is provided by dual redundant miniature inertial measurement units. Each unit contains three solid - state gyroscopes and three accelerometers. Two Adcole Sun sensors provide attitude determination. One detects the angle to the Sun, whereas the other measures spin rate and clocking.
A cylindrical radioisotope thermoelectric generator (RTG) protrudes in the plane of the triangle from one vertex of the triangle. The RTG provided 245.7 W of power at launch and was predicted to drop approximately 5% every 4 years, decaying to 200 W by the time of its encounter with the Plutonian system in 2015; it will decay too far to power the transmitters in the 2030s. There are no onboard batteries. RTG output is relatively predictable; load transients are handled by a capacitor bank and fast circuit breakers.
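As a rough cross-check of the figures above, the quoted ~5% loss every 4 years can be compounded from the 245.7 W launch value. This is only an illustrative sketch: the ~9.5-year launch-to-encounter interval is inferred from the dates in the text, and the real decline also includes thermoelectric-converter degradation on top of plutonium-238 decay, so the simple rule of thumb lands a little above the ~200 W quoted for the 2015 encounter.

```python
# Rough sketch (not mission data): compound the quoted ~5% power loss per
# 4 years from the 245.7 W launch value and compare with the ~200 W figure
# quoted for the 2015 Pluto encounter.  The ~9.5-year launch-to-encounter
# span is an assumption based on the dates in the text.

P_LAUNCH_W = 245.7     # RTG output at launch (from the text)
LOSS_PER_4YR = 0.05    # quoted decay rate

def rtg_power(years_after_launch: float) -> float:
    """Simple compounding model of RTG electrical output."""
    return P_LAUNCH_W * (1.0 - LOSS_PER_4YR) ** (years_after_launch / 4.0)

for label, years in [("Pluto encounter (mid-2015)", 9.5), ("2030", 24.0), ("2038", 32.0)]:
    print(f"{label}: ~{rtg_power(years):.0f} W")
# Pluto encounter (mid-2015): ~218 W
# 2030: ~181 W
# 2038: ~163 W
```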
The RTG, model "GPHS - RTG, '' was originally a spare from the Cassini mission. The RTG contains 9.75 kg (21.5 lb) of plutonium - 238 oxide pellets. Each pellet is clad in iridium, then encased in a graphite shell. It was developed by the U.S. Department of Energy at the Materials and Fuels Complex, a part of the Idaho National Laboratory. The original RTG design called for 10.9 kg (24 lb) of plutonium, but a unit less powerful than the original design goal was produced because of delays at the United States Department of Energy, including security activities, that delayed plutonium production. The mission parameters and observation sequence had to be modified for the reduced wattage; still, not all instruments can operate simultaneously. The Department of Energy transferred the space battery program from Ohio to Argonne in 2002 because of security concerns.
The amount of radioactive plutonium in the RTG is about one - third the amount on board the Cassini -- Huygens probe when it launched in 1997. That Cassini launch was protested by some. The United States Department of Energy estimated the chances of a New Horizons launch accident that would release radiation into the atmosphere at 1 in 350, and monitored the launch as it always does when RTGs are involved. It was estimated that a worst - case scenario of total dispersal of on - board plutonium would spread the equivalent radiation of 80 % the average annual dosage in North America from background radiation over an area with a radius of 105 km (65 mi).
The spacecraft carries two computer systems: the Command and Data Handling system and the Guidance and Control processor. Each of the two systems is duplicated for redundancy, for a total of four computers. The processor used for its flight computers is the Mongoose - V, a 12 MHz radiation - hardened version of the MIPS R3000 CPU. Multiple redundant clocks and timing routines are implemented in hardware and software to help prevent faults and downtime. To conserve heat and mass, spacecraft and instrument electronics are housed together in IEMs (integrated electronics modules). There are two redundant IEMs. Including other functions such as instrument and radio electronics, each IEM contains 9 boards. The software of the probe runs on Nucleus RTOS operating system.
There have been two "safing" events that sent the spacecraft into safe mode:
Communication with the spacecraft is via X band. The craft had a communication rate of 38 kbit/s at Jupiter; at Pluto's distance, a rate of approximately 1 kbit/s per transmitter is expected. Besides the low data rate, Pluto's distance also causes a latency of about 4.5 hours (one-way). The 70 m (230 ft) NASA Deep Space Network (DSN) dishes are used to relay commands once it is beyond Jupiter. The spacecraft uses dual modular redundancy transmitters and receivers, and either right- or left-hand circular polarization. The downlink signal is amplified by dual redundant 12-watt traveling-wave tube amplifiers (TWTAs) mounted on the body under the dish. The receivers are new, low-power designs. The system can be controlled to power both TWTAs at the same time, and transmit a dual-polarized downlink signal to the DSN that nearly doubles the downlink rate. DSN tests early in the mission with this dual polarization combining technique were successful, and the capability is now considered operational (when the spacecraft power budget permits both TWTAs to be powered).
In addition to the high-gain antenna, there are two backup low-gain antennas and a medium-gain dish. The high-gain dish has a Cassegrain reflector layout, composite construction, and a 2.1-meter (7 ft) diameter providing over 42 dBi of gain, with a half-power beam width of about a degree. The prime-focus, medium-gain antenna, with a 0.3-meter (1 ft) aperture and 10° half-power beam width, is mounted to the back of the high-gain antenna's secondary reflector. The forward low-gain antenna is stacked atop the feed of the medium-gain antenna. The aft low-gain antenna is mounted within the launch adapter at the rear of the spacecraft. This antenna was used only for early mission phases near Earth, just after launch, and for emergencies if the spacecraft had lost attitude control.
New Horizons recorded scientific instrument data to its solid-state memory buffer at each encounter, then transmitted the data to Earth. Data storage is done on two low-power solid-state recorders (one primary, one backup) holding up to 8 gigabytes each. Because of the extreme distance from Pluto and the Kuiper belt, only one buffer load at those encounters can be saved. This is because New Horizons would require approximately 16 months after leaving the vicinity of Pluto to transmit the buffer load back to Earth. At Pluto's distance, radio signals from the space probe back to Earth took four hours and 25 minutes to traverse 4.7 billion km of space.
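The latency and downlink figures above follow from simple arithmetic. A minimal sketch, assuming the ~6.25 GB of encounter data mentioned later in the article and an idealized, continuous downlink at the 1–2 kbit/s rates quoted for Pluto's distance, shows why returning a full buffer load takes on the order of a year; the actual ~16-month figure falls between the two idealized bounds because the link rate varies and Deep Space Network time is shared.

```python
# Back-of-the-envelope check (assumptions, not mission telemetry): one-way
# light time at the quoted Pluto distance, and how long draining the
# ~6.25 GB flyby data set takes at the quoted 1-2 kbit/s downlink rates.

C_KM_S = 299_792.458          # speed of light, km/s
distance_km = 4.7e9           # Pluto-Earth distance quoted in the text
one_way_s = distance_km / C_KM_S
print(f"one-way light time: {one_way_s / 3600:.2f} h")   # ~4.36 h

data_bits = 6.25e9 * 8        # ~6.25 gigabytes collected at the flyby
for rate_bps in (1_000, 2_000):
    days = data_bits / rate_bps / 86_400
    print(f"at {rate_bps / 1000:.0f} kbit/s: ~{days:.0f} days ({days / 30:.1f} months)")
# at 1 kbit/s: ~579 days (19.3 months)
# at 2 kbit/s: ~289 days (9.6 months)
```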
Part of the reason for the delay between gathering and transmitting data is that all of the New Horizons instrumentation is body-mounted. In order for the cameras to record data, the entire probe must turn, and while it is turned away the one-degree-wide beam of the high-gain antenna is not pointing toward Earth. Previous spacecraft, such as the Voyager program probes, had a rotatable instrumentation platform (a "scan platform") that could take measurements from virtually any angle without losing radio contact with Earth. New Horizons was mechanically simplified to save weight, shorten the schedule, and improve reliability during its 15-year lifetime.
The Voyager 2 scan platform jammed at Saturn, and the demands of long exposure times at the outer planets led to a change of plans such that the entire probe was rotated to take photographs at Uranus and Neptune, similar to the way New Horizons is rotated.
New Horizons carries seven instruments: three optical instruments, two plasma instruments, a dust sensor and a radio science receiver/radiometer. The instruments are to be used to investigate the global geology, surface composition, surface temperature, atmospheric pressure, atmospheric temperature and escape rate of Pluto and its moons. The rated power is 21 watts, though not all instruments operate simultaneously. In addition, New Horizons has an Ultrastable Oscillator subsystem, which may be used to study and test the Pioneer anomaly towards the end of the spacecraft's life.
The Long - Range Reconnaissance Imager (LORRI) is a long - focal - length imager designed for high resolution and responsivity at visible wavelengths. The instrument is equipped with a 1024 × 1024 pixel by 12 - bits - per - pixel monochromatic CCD imager giving a resolution of 5 μrad (~ 1 arcsec). The CCD is chilled far below freezing by a passive radiator on the antisolar face of the spacecraft. This temperature differential requires insulation, and isolation from the rest of the structure. The 208.3 mm (8.20 in) aperture Ritchey -- Chretien mirrors and metering structure are made of silicon carbide, to boost stiffness, reduce weight, and prevent warping at low temperatures. The optical elements sit in a composite light shield, and mount with titanium and fiberglass for thermal isolation. Overall mass is 8.6 kg (19 lb), with the optical tube assembly (OTA) weighing about 5.6 kg (12 lb), for one of the largest silicon - carbide telescopes flown at the time (now surpassed by Herschel). For viewing on public web sites the 12 - bit per pixel LORRI images are converted to 8 - bit per pixel JPEG images. These public images do not contain the full dynamic range of brightness information available from the raw LORRI images files.
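For context, the quoted 5 μrad pixel scale converts directly to arcseconds and to a pixel scale on the target at a given range. The sketch below is illustrative only; the closest-approach ranges to Pluto and Charon are taken from later in the article, and the result lands in the same ballpark as the ~50 m per pixel quoted there for the sharpest planned images.

```python
import math

# Illustrative unit check: convert LORRI's quoted 5 microradian per-pixel
# angular resolution to arcseconds, and estimate the resulting pixel scale
# at the closest-approach ranges to Pluto (12,500 km) and Charon (28,800 km)
# quoted elsewhere in the article.

IFOV_RAD = 5e-6
arcsec_per_px = IFOV_RAD * (180.0 / math.pi) * 3600.0
print(f"{arcsec_per_px:.2f} arcsec per pixel")        # ~1.03, i.e. "~1 arcsec"

for body, range_km in (("Pluto", 12_500), ("Charon", 28_800)):
    gsd_m = IFOV_RAD * range_km * 1000.0
    print(f"{body} at {range_km:,} km: ~{gsd_m:.0f} m per pixel")
# Pluto at 12,500 km: ~62 m per pixel
# Charon at 28,800 km: ~144 m per pixel
```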
Solar Wind At Pluto (SWAP) is a toroidal electrostatic analyzer and retarding potential analyzer (RPA), one of the two instruments that make up New Horizons' plasma and high-energy particle spectrometer suite (PAM), the other being PEPSSI. SWAP measures particles of up to 6.5 keV and, because of the tenuous solar wind at Pluto's distance, the instrument is designed with the largest aperture of any such instrument ever flown.
Pluto Energetic Particle Spectrometer Science Investigation (PEPSSI) is a time-of-flight ion and electron sensor, the other of the two instruments making up New Horizons' plasma and high-energy particle spectrometer suite (PAM). Unlike SWAP, which measures particles of up to 6.5 keV, PEPSSI goes up to 1 MeV.
Alice is an ultraviolet imaging spectrometer that is one of two photographic instruments comprising New Horizons' Pluto Exploration Remote Sensing Investigation (PERSI), the other being the Ralph telescope. It resolves 1,024 wavelength bands in the far and extreme ultraviolet (from 50 to 180 nm), over 32 fields of view. Its goal is to determine the composition of Pluto's atmosphere. This Alice instrument is derived from another Alice aboard ESA's Rosetta spacecraft.
The Ralph telescope, 6 cm (2.4 in) in aperture, is one of two photographic instruments that make up New Horizons ' Pluto Exploration Remote Sensing Investigation (PERSI), with the other being the Alice instrument. Ralph has two separate channels: MVIC (Multispectral Visible Imaging Camera), a visible - light CCD imager with broadband and color channels; and LEISA (Linear Etalon Imaging Spectral Array), a near - infrared imaging spectrometer. LEISA is derived from a similar instrument on the Earth Observing - 1 spacecraft. Ralph was named after Alice 's husband on The Honeymooners, and was designed after Alice.
On June 23, 2017, NASA announced that it has renamed the LEISA instrument to the "Lisa Hardaway Infrared Mapping Spectrometer '' in honor of Lisa Hardaway, the Ralph program manager at Ball Aerospace, who died in January 2017 at age 50.
The Venetia Burney Student Dust Counter (VBSDC), built by students at the University of Colorado Boulder, is operating continuously to make dust measurements. It consists of a detector panel, about 460 mm × 300 mm (18 in × 12 in), mounted on the anti-solar face of the spacecraft (the ram direction), and an electronics box within the spacecraft. The detector contains fourteen polyvinylidene difluoride (PVDF) panels, twelve science and two reference, which generate voltage when impacted. The effective collecting area is 0.125 m² (1.35 sq ft). No dust counter has operated past the orbit of Uranus; models of dust in the outer Solar System, especially the Kuiper belt, are speculative. The VBSDC is always turned on, measuring the masses of the interplanetary and interstellar dust particles (in the range of nano- and picograms) as they collide with the PVDF panels mounted on the New Horizons spacecraft. The measured data is expected to greatly contribute to the understanding of the dust spectra of the Solar System. The dust spectra can then be compared with those from observations of other stars, giving new clues as to where Earth-like planets can be found in the universe. The dust counter is named for Venetia Burney, who first suggested the name "Pluto" at the age of 11. A thirteen-minute short film about the VBSDC garnered an Emmy Award for student achievement in 2006.
The Radio Science Experiment (REX) used an ultrastable crystal oscillator (essentially a calibrated crystal in a miniature oven) and some additional electronics to conduct radio science investigations using the communications channels. These are small enough to fit on a single card. Because there are two redundant communications subsystems, there are two, identical REX circuit boards.
On September 24, 2005, the spacecraft arrived at the Kennedy Space Center on board a C - 17 Globemaster III for launch preparations. The launch of New Horizons was originally scheduled for January 11, 2006, but was initially delayed until January 17, 2006, to allow for borescope inspections of the Atlas V 's kerosene tank. Further delays related to low cloud ceiling conditions downrange, and high winds and technical difficulties -- unrelated to the rocket itself -- prevented launch for a further two days.
The probe finally lifted off from Pad 41 at Cape Canaveral Air Force Station, Florida, directly south of Space Shuttle Launch Complex 39, at 19: 00 UTC on January 19, 2006. The Centaur second stage ignited at 19: 04: 43 UTC and burned for 5 minutes 25 seconds. It reignited at 19: 32 UTC and burned for 9 minutes 47 seconds. The ATK Star 48 B third stage ignited at 19: 42: 37 UTC and burned for 1 minute 28 seconds. Combined, these burns successfully sent the probe on a solar - escape trajectory at 16.26 kilometers per second (58,536 km / h; 36,373 mph). New Horizons took only nine hours to pass the Moon 's orbit. Although there were backup launch opportunities in February 2006 and February 2007, only the first twenty - three days of the 2006 window permitted the Jupiter flyby. Any launch outside that period would have forced the spacecraft to fly a slower trajectory directly to Pluto, delaying its encounter by five to six years.
The probe was launched by a Lockheed Martin Atlas V 551 rocket, with a third stage added to increase the heliocentric (escape) speed. This was the first launch of the Atlas V 551 configuration, which uses five solid rocket boosters, and the first Atlas V with a third stage. Previous flights had used zero, two, or three solid boosters, but never five. The vehicle, AV - 010, weighed 573,160 kilograms (1,263,600 lb) at lift - off, and had earlier been slightly damaged when Hurricane Wilma swept across Florida on October 24, 2005. One of the solid rocket boosters was hit by a door. The booster was replaced with an identical unit, rather than inspecting and requalifying the original.
The launch was dedicated to the memory of launch conductor Daniel Sarokon, who was described by space program officials as one of the most influential people in the history of space travel.
On January 28 and 30, 2006, mission controllers guided the probe through its first trajectory - correction maneuver (TCM), which was divided into two parts (TCM - 1A and TCM - 1B). The total velocity change of these two corrections was about 18 meters per second (65 km / h; 40 mph). TCM - 1 was accurate enough to permit the cancellation of TCM - 2, the second of three originally scheduled corrections. On March 9, 2006, controllers performed TCM - 3, the last of three scheduled course corrections. The engines burned for 76 seconds, adjusting the spacecraft 's velocity by about 1.16 m / s (4.2 km / h; 2.6 mph). Further trajectory maneuvers were not needed until September 25, 2007 (seven months after the Jupiter flyby), when the engines were fired for 15 minutes and 37 seconds, changing the spacecraft 's velocity by 2.37 m / s (8.5 km / h; 5.3 mph), followed by another TCM, almost three years later on June 30, 2010, that lasted 35.6 seconds, when New Horizons had already reached the halfway point (in time traveled) to Pluto.
During the week of February 20, 2006, controllers conducted initial in - flight tests of three onboard science instruments, the Alice ultraviolet imaging spectrometer, the PEPSSI plasma - sensor, and the LORRI long - range visible - spectrum camera. No scientific measurements or images were taken, but instrument electronics, and in the case of Alice, some electromechanical systems were shown to be functioning correctly.
On April 7, 2006, the spacecraft passed the orbit of Mars, moving at roughly 21 km / s (76,000 km / h; 47,000 mph) away from the Sun at a solar distance of 243 million kilometers.
Because of the need to conserve fuel for possible encounters with Kuiper belt objects subsequent to the Pluto flyby, intentional encounters with objects in the asteroid belt were not planned. After launch, the New Horizons team scanned the spacecraft 's trajectory to determine if any asteroids would, by chance, be close enough for observation. In May 2006 it was discovered that New Horizons would pass close to the tiny asteroid 132524 APL on June 13, 2006. Closest approach occurred at 4: 05 UTC at a distance of 101,867 km (63,297 mi). The asteroid was imaged by Ralph (use of LORRI was not possible because of proximity to the Sun), which gave the team a chance to test Ralph 's capabilities, and make observations of the asteroid 's composition as well as light and phase curves. The asteroid was estimated to be 2.5 km (1.6 mi) in diameter. The spacecraft successfully tracked the rapidly moving asteroid over June 10 -- 12, 2006.
The first images of Pluto from New Horizons were acquired September 21 -- 24, 2006, during a test of LORRI. They were released on November 28, 2006. The images, taken from a distance of approximately 4.2 billion km (2.6 billion mi; 28 AU), confirmed the spacecraft 's ability to track distant targets, critical for maneuvering toward Pluto and other Kuiper belt objects.
New Horizons used LORRI to take its first photographs of Jupiter on September 4, 2006, from a distance of 291 million kilometers (181 million miles). More detailed exploration of the system began in January 2007 with an infrared image of the moon Callisto, as well as several black - and - white images of Jupiter itself. New Horizons received a gravity assist from Jupiter, with its closest approach at 05: 43: 40 UTC on February 28, 2007, when it was 2.3 million kilometers (1.4 million miles) from Jupiter. The flyby increased New Horizons ' speed by 4 km / s (14,000 km / h; 9,000 mph), accelerating the probe to a velocity of 23 km / s (83,000 km / h; 51,000 mph) relative to the Sun and shortening its voyage to Pluto by three years.
The flyby was the center of a four - month intensive observation campaign lasting from January to June. Being an ever - changing scientific target, Jupiter has been observed intermittently since the end of the Galileo mission in September 2003. Knowledge about Jupiter benefited from the fact that New Horizons ' instruments were built using the latest technology, especially in the area of cameras, representing a significant improvement over Galileo 's cameras, which were modified versions of Voyager cameras, which, in turn, were modified Mariner cameras. The Jupiter encounter also served as a shakedown and dress rehearsal for the Pluto encounter. Because Jupiter is much closer to Earth than Pluto, the communications link can transmit multiple loadings of the memory buffer; thus the mission returned more data from the Jovian system than it was expected to transmit from Pluto.
One of the main goals during the Jupiter encounter was observing its atmospheric conditions and analyzing the structure and composition of its clouds. Heat - induced lightning strikes in the polar regions and "waves '' that indicate violent storm activity were observed and measured. The Little Red Spot, spanning up to 70 % of Earth 's diameter, was imaged from up close for the first time. Recording from different angles and illumination conditions, New Horizons took detailed images of Jupiter 's faint ring system, discovering debris left over from recent collisions within the rings or from other unexplained phenomena. The search for undiscovered moons within the rings showed no results. Travelling through Jupiter 's magnetosphere, New Horizons collected valuable particle readings. "Bubbles '' of plasma that are thought to be formed from material ejected by the moon Io, were noticed in the magnetotail.
The four largest moons of Jupiter were in poor positions for observation; the necessary path of the gravity - assist maneuver meant that New Horizons passed millions of kilometers from any of the Galilean moons. Still, its instruments were intended for small, dim targets, so they were scientifically useful on large, distant moons. Emphasis was put on Jupiter 's innermost Galilean moon, Io, whose active volcanoes shoot out tons of material into Jupiter 's magnetosphere, and further. Out of eleven observed eruptions, three were seen for the first time. That of Tvashtar reached an altitude of up to 330 km (210 mi). The event gave scientists an unprecedented look into the structure and motion of the rising plume and its subsequent fall back to the surface. Infrared signatures of a further 36 volcanoes were noticed. Callisto 's surface was analyzed with LEISA, revealing how lighting and viewing conditions affect infrared spectrum readings of its surface water ice. Minor moons such as Amalthea had their orbit solutions refined. The cameras determined their positions, acting as "reverse optical navigation ''.
After passing Jupiter, New Horizons spent most of its journey towards Pluto in hibernation mode: redundant components as well as guidance and control systems were shut down to extend their life cycle, decrease operation costs and free the Deep Space Network for other missions. During hibernation mode, the onboard computer monitored the probe 's systems and transmitted a signal back to Earth: a "green '' code if everything was functioning as expected or a "red '' code if mission control 's assistance was needed. The probe was activated for about two months a year so that the instruments could be calibrated and the systems checked. The first hibernation mode cycle started on June 28, 2007, the second cycle began on December 16, 2008, the third cycle on August 27, 2009, and the fourth cycle on August 29, 2014 after a 10 - week test.
New Horizons crossed the orbit of Saturn on June 8, 2008, and Uranus on March 18, 2011. After astronomers announced the discovery of two new moons in the Pluto system, Kerberos and Styx, mission planners started contemplating the possibility of the probe running into unseen debris and dust left over from ancient collisions between the moons. A study based on 18 months of computer simulations, Earth - based telescope observations and occultations of the Pluto system revealed that the possibility of a catastrophic collision with debris or dust was less than 0.3 % on the probe 's scheduled course. If the hazard increased, New Horizons could have used one of two possible contingency plans, the so - called SHBOTs (Safe Haven by Other Trajectories): the probe could have continued on its present trajectory with the antenna facing the incoming particles so the more vital systems would be protected, or, it could have positioned its antenna to make a course correction that would take it just 3000 km from the surface of Pluto where it was expected that the atmospheric drag would clean the surrounding space of possible debris.
While in hibernation mode in July 2012, New Horizons started gathering scientific data with SWAP, PEPSSI and VBSDC. Although it was originally planned to activate just the VBSDC, other instruments were powered on the initiative of principal investigator Alan Stern who decided they could use the opportunity to collect valuable heliospheric data. Before activating the other two instruments, ground tests were conducted to make sure that the expanded data gathering in this phase of the mission would not limit available energy, memory and fuel in the future and that all systems are functioning during the flyby. The first set of data was transmitted in January 2013 during a three - week activation from hibernation. The command and data handling software was updated to address the problem of computer resets.
Other possible targets were Neptune trojans. The probe's trajectory to Pluto passed near Neptune's trailing Lagrange point (L5), which may host hundreds of bodies in 1:1 resonance. In late 2013, New Horizons passed within 1.2 AU (180,000,000 km; 110,000,000 mi) of the high-inclination L5 Neptune trojan 2011 HM 102, which was identified shortly before by the New Horizons KBO Search survey team while searching for more distant objects for New Horizons to fly by after its 2015 Pluto encounter. At that range, 2011 HM 102 would have been bright enough to be detectable by New Horizons' LORRI instrument; however, the New Horizons team eventually decided that they would not target 2011 HM 102 for observations because the preparations for the Pluto approach took precedence.
Images from July 1 to 3, 2013 by LORRI were the first by the probe to resolve Pluto and Charon as separate objects. On July 14, 2014, mission controllers performed a sixth trajectory - correction maneuver (TCM) since its launch to enable the craft to reach Pluto. Between July 19 -- 24, 2014, New Horizons ' LORRI snapped 12 images of Charon revolving around Pluto, covering almost one full rotation at distances ranging from about 429 to 422 million kilometers (267,000,000 to 262,000,000 mi). In August 2014, astronomers made high - precision measurements of Pluto 's location and orbit around the Sun using the Atacama Large Millimeter / submillimeter Array (ALMA) to help NASA 's New Horizons spacecraft accurately home in on Pluto. On December 6, 2014, mission controllers sent a signal for the craft to "wake up '' from its final Pluto - approach hibernation and begin regular operations. The craft 's response that it was "awake '' arrived to Earth on December 7, 2014, at 02: 30 UTC.
Distant - encounter operations at Pluto began on January 4, 2015. At this date images of the targets with the onboard LORRI imager plus Ralph telescope would only be a few pixels in width. Investigators began taking Pluto and background starfield images to assist mission navigators in the design of course - correcting engine maneuvers that would precisely modify the trajectory of New Horizons to aim the approach. On January 15, 2015, NASA gave a brief update of the timeline of the approach and departure phases.
On February 12, 2015, NASA released new images of Pluto (taken from January 25 to 31) from the approaching probe. New Horizons was more than 203 million kilometers (126,000,000 mi) away from Pluto when it began taking the photos, which showed Pluto and its largest moon, Charon. The exposure time was too short to see Pluto 's smaller, much fainter, moons.
Investigators compiled a series of images of the moons Nix and Hydra taken from January 27 through February 8, 2015, beginning at a range of 201 million kilometers (125,000,000 mi). Pluto and Charon appear as a single overexposed object at the center. The right-side image has been processed to remove the background starfield. The two still smaller moons, Kerberos and Styx, were seen in photos taken on April 25. Starting May 11, a hazard search was performed by looking for unknown objects that could be a danger to the spacecraft, such as rings or more moons, which could be avoided by a course change.
With regard to the January 2015 approach phase, on August 21, 2012, the team announced that they would spend mission time attempting long-range observations of the Kuiper belt object temporarily designated VNH0004 (now designated 2011 KW 48), when the object was at a distance of 75 gigameters (0.50 AU) from New Horizons. The object would be too distant to resolve surface features or take spectroscopy, but it would be possible to make observations that cannot be made from Earth, namely a phase curve and a search for small moons. A second object was planned to be observed in June 2015, and a third in September after the flyby; the team hoped to observe a dozen such objects through 2018. On April 15, 2015, Pluto was imaged showing a possible polar cap.
On July 4, 2015, New Horizons experienced a software anomaly and went into safe mode, preventing the spacecraft from performing scientific observations until engineers could resolve the problem. On July 5, NASA announced that the problem was determined to be a timing flaw in a command sequence used to prepare the spacecraft for its flyby, and the spacecraft would resume scheduled science operations on July 7. The science observations lost because of the anomaly were judged to have no impact on the mission 's main objectives and minimal impact on other objectives.
The timing flaw consisted of performing two tasks simultaneously -- compressing previously acquired data to release space for more data, and making a second copy of the approach command sequence -- that together overloaded the spacecraft 's primary computer. After the overload was detected, the spacecraft performed as designed: it switched from the primary computer to the backup computer, entered safe mode, and sent a distress call back to Earth. The distress call was received the afternoon of July 4, which alerted engineers that they needed to contact the spacecraft to get more information and resolve the issue. The resolution was that the problem happened as part of preparations for the approach, and was not expected to happen again because no similar tasks were planned for the remainder of the encounter.
The closest approach of the New Horizons spacecraft to Pluto occurred at 11: 49 UTC on July 14, 2015 at a range of 12,472 km (7,750 mi) from the surface and 13,658 km (8,487 mi) from the center of Pluto. Telemetry data confirming a successful flyby and a healthy spacecraft were received on Earth from the vicinity of the Pluto system on July 15, 2015, 00: 52: 37 UTC, after 22 hours of planned radio silence due to the spacecraft being pointed toward the Pluto system. Mission managers estimated a one in 10,000 chance that debris could have destroyed it during the flyby, preventing it from sending data to Earth. The first details of the encounter were received the next day, but the download of the complete data set took just over 15 months, and analysis of the data will take longer.
The mission 's science objectives are grouped in three distinct priorities. The "primary objectives '' are required; the "secondary objectives '' are expected to be met but are not demanded. The "tertiary objectives '' are desired. These objectives may be attempted, though they may be skipped in favor of the above objectives. An objective to measure any magnetic field of Pluto was dropped. A magnetometer instrument could not be implemented within a reasonable mass budget and schedule, and SWAP and PEPSSI could do an indirect job detecting some magnetic field around Pluto.
New Horizons passed within 12,500 km (7,800 mi) of Pluto, with this closest approach on July 14, 2015 at 11: 50 UTC. New Horizons had a relative velocity of 13.78 km / s (49,600 km / h; 30,800 mph) at its closest approach, and came as close as 28,800 km (17,900 mi) to Charon. Starting 3.2 days before the closest approach, long - range imaging included the mapping of Pluto and Charon to 40 km (25 mi) resolution. This is half the rotation period of the Pluto -- Charon system and allowed imaging of all sides of both bodies. Close range imaging was repeated twice per day in order to search for surface changes caused by localized snow fall or surface cryovolcanism. Because of Pluto 's tilt, a portion of the northern hemisphere would be in shadow at all times. During the flyby, engineers expected LORRI to be able to obtain select images with resolution as high as 50 m per pixel (160 ft / px) if closest distance were around 12,500 km, and MVIC was expected to obtain four - color global dayside maps at 1.6 km (1 mi) resolution. LORRI and MVIC attempted to overlap their respective coverage areas to form stereo pairs. LEISA obtained hyperspectral near - infrared maps at 7 km / px (4.3 mi / px) globally and 0.6 km / px (0.37 mi / px) for selected areas.
Meanwhile, Alice characterized the atmosphere, both by emissions of atmospheric molecules (airglow), and by dimming of background stars as they pass behind Pluto (occultation). During and after closest approach, SWAP and PEPSSI sampled the high atmosphere and its effects on the solar wind. VBSDC searched for dust, inferring meteoroid collision rates and any invisible rings. REX performed active and passive radio science. The communications dish on Earth measured the disappearance and reappearance of the radio occultation signal as the probe flew by behind Pluto. The results resolved Pluto 's diameter (by their timing) and atmospheric density and composition (by their weakening and strengthening pattern). (Alice can perform similar occultations, using sunlight instead of radio beacons.) Previous missions had the spacecraft transmit through the atmosphere, to Earth ("downlink ''). Pluto 's mass and mass distribution were evaluated by the gravitational tug on the spacecraft. As the spacecraft speeds up and slows down, the radio signal exhibited a Doppler shift. The Doppler shift was measured by comparison with the ultrastable oscillator in the communications electronics.
Reflected sunlight from Charon allowed some imaging observations of the nightside. Backlighting by the Sun gave an opportunity to highlight any rings or atmospheric hazes. REX performed radiometry of the nightside.
New Horizons ' best spatial resolution of the small satellites is 330 m per pixel (1,080 ft / px) at Nix, 780 m / px (2,560 ft / px) at Hydra, and approximately 1.8 km / px (1.1 mi / px) at Kerberos and Styx. Estimates for the diameters of these bodies are: Nix at 54 × 41 × 36 km (34 × 25 × 22 mi); Hydra at 43 × 33 km (27 × 21 mi); Kerberos at 12 × 4.5 km (7.5 × 2.8 mi); and Styx at 7 × 5 km (4.3 × 3.1 mi). This translates to a resolution of 164 / 124 / 109, 55 / 42 /?, 7 / 3 /?, and 4 / 3 /? pixels in width for Nix, Hydra, Kerberos, and Styx, respectively.
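The pixel-width figures above are simply the quoted diameters divided by the best image scale for each moon; a small sketch reproducing that arithmetic, using only the numbers given in this paragraph, follows.

```python
# Reproducing the pixel-width figures quoted above: each quoted diameter (km)
# divided by the best image scale for that moon (km per pixel).  All numbers
# are taken from this paragraph.

image_scale_km_per_px = {"Nix": 0.330, "Hydra": 0.780, "Kerberos": 1.8, "Styx": 1.8}
diameters_km = {
    "Nix": (54, 41, 36),
    "Hydra": (43, 33),
    "Kerberos": (12, 4.5),
    "Styx": (7, 5),
}

for moon, dims in diameters_km.items():
    widths = [d / image_scale_km_per_px[moon] for d in dims]
    print(moon, [f"{w:.1f}" for w in widths])
# Nix ['163.6', '124.2', '109.1']   -> quoted as 164 / 124 / 109 pixels
# Hydra ['55.1', '42.3']            -> quoted as 55 / 42 pixels
# Kerberos ['6.7', '2.5']           -> quoted as 7 / 3 pixels
# Styx ['3.9', '2.8']               -> quoted as 4 / 3 pixels
```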
Initial predictions envisioned Kerberos as a relatively large and massive object whose dark surface led to it having a faint reflection. This proved to be wrong as images obtained by New Horizons on July 14 and sent back to Earth in October 2015 revealed an object just 8 km (5.0 mi) across with a highly reflective surface suggesting the presence of relatively clean water ice.
Soon after the Pluto flyby, New Horizons reported that the spacecraft was healthy, its flight path was within the margins, and science data of the Pluto–Charon system had been recorded. The spacecraft's immediate task was to begin returning the 6.25 gigabytes of information collected. The free-space path loss at its distance of 4.5 light-hours (about 4.86 billion km; 3,017,768,400 mi) is approximately 303 dB at 7 GHz. Using the high-gain antenna and transmitting at full power, New Horizons' EIRP is +83 dBm, and at this distance the signal reaching Earth is −220 dBm. The received signal level (RSL) using one, un-arrayed Deep Space Network antenna with 72 dBi of forward gain equals −148 dBm. Because of the extremely low RSL, it could only transmit data at 1 to 2 kb/s.
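The link-budget numbers above chain together in the standard way: free-space path loss grows with distance and frequency, and the received signal level is the spacecraft EIRP minus that loss plus the receiving antenna's gain. The sketch below reproduces the quoted figures; it is a simplified check that ignores atmospheric, polarization and pointing losses.

```python
import math

# Link-budget sketch using the figures quoted above (EIRP +83 dBm, 7 GHz,
# 4.5 light-hours, 72 dBi DSN antenna gain).  Free-space path loss:
# FSPL(dB) = 20 * log10(4 * pi * d * f / c).

C = 299_792_458.0                      # speed of light, m/s
f_hz = 7e9                             # X-band downlink frequency (quoted)
d_m = 4.5 * 3600 * C                   # 4.5 light-hours in metres

fspl_db = 20 * math.log10(4 * math.pi * d_m * f_hz / C)
eirp_dbm = 83.0                        # quoted spacecraft EIRP
dsn_gain_dbi = 72.0                    # quoted DSN antenna gain

flux_dbm = eirp_dbm - fspl_db          # isotropic received level at Earth
rsl_dbm = flux_dbm + dsn_gain_dbi      # received signal level after antenna gain

print(f"FSPL ~{fspl_db:.0f} dB, isotropic level ~{flux_dbm:.0f} dBm, RSL ~{rsl_dbm:.0f} dBm")
# FSPL ~303 dB, isotropic level ~-220 dBm, RSL ~-148 dBm  (matches the quoted figures)
```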
By March 30, 2016, New Horizons had reached the halfway point of transmitting this data. The transfer was completed on October 25, 2016 at 21: 48 UTC, when the last piece of data -- part of a Pluto -- Charon observation sequence by the Ralph / LEISA imager -- was received by the Johns Hopkins University Applied Physics Laboratory.
At a distance of 41 AU (6.13 billion km; 3.81 billion mi) from the Sun and 2.35 AU (352 million km; 218 million mi) from 2014 MU 69 as of March 2018, New Horizons is heading in the direction of the constellation Sagittarius at 14.18 km/s (8.81 mi/s; 2.99 AU/a) relative to the Sun. The brightness of the Sun from the spacecraft is magnitude −18.6.
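The quoted solar brightness is consistent with the inverse-square law; a one-line check follows, using the Sun's standard apparent magnitude of about −26.74 at 1 AU, which is an assumed constant rather than a figure from the article.

```python
import math

# Cross-check of the quoted solar brightness: apparent magnitude falls off
# with the inverse square of distance, i.e. m(d) = m_1AU + 5 * log10(d / 1 AU).
# The Sun's apparent magnitude at 1 AU (~-26.74) is an assumed standard value.

M_SUN_1AU = -26.74

def sun_magnitude(distance_au: float) -> float:
    return M_SUN_1AU + 5 * math.log10(distance_au)

print(f"{sun_magnitude(41):.1f}")   # ~-18.7, close to the -18.6 quoted above
```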
The New Horizons team requested, and received, a mission extension through 2021 to explore additional Kuiper belt objects (KBOs). During this Kuiper Belt Extended Mission (KEM), the spacecraft will perform a close fly - by of (486958) 2014 MU 69 and conduct more distant observations on an additional two dozen objects.
Mission planners searched for one or more additional Kuiper belt objects (KBOs) of the order of 50–100 km (31–62 mi) in diameter as targets for flybys similar to the spacecraft's Plutonian encounter, but, despite the large population of KBOs, many factors limit the number of possible targets. Because the flight path is determined by the Pluto flyby, and the probe only has 33 kilograms of hydrazine remaining, the object to be visited needs to be within a cone, extending from Pluto, of less than a degree's width. This ruled out any possibility of a flyby of Eris, a trans-Neptunian object comparable in size to Pluto. The target will also need to be within 55 AU, because beyond 55 AU the communications link will become too weak and the RTG power output will have decayed significantly enough to hinder observations. Desirable KBOs would be well over 50 km (30 mi) in diameter, neutral in color (to compare with the reddish Pluto), and, if possible, have a moon that imparts a wobble. After a search along New Horizons' flight path using the Hubble Space Telescope, only three KBOs were found in range, and one of those objects was later dropped from consideration.
In 2011, a dedicated search for suitable KBOs using ground telescopes was started by mission scientists. Large ground telescopes with wide-field cameras, notably the twin 6.5-meter Magellan Telescopes in Chile, the 8.2-meter Subaru Observatory in Hawaii, and the Canada–France–Hawaii Telescope, were used to search for potential targets. Through the Ice Hunters citizen-science project, the public helped to scan telescopic images for possible suitable mission candidates. The ground-based search resulted in the discovery of about 143 KBOs of potential interest, but none of these were close enough to the flight path of New Horizons. Only the Hubble Space Telescope was deemed likely to find a suitable target in time for a successful KBO mission. On June 16, 2014, time on Hubble was granted. Hubble has a much greater ability to find suitable KBOs than ground telescopes. The probability that a target for New Horizons would be found was estimated beforehand at about 95%.
On October 15, 2014, it was revealed that Hubble 's search had uncovered three potential targets, temporarily designated PT1 ("potential target 1 ''), PT2 and PT3 by the New Horizons team. All are objects with estimated diameters in the 30 -- 55 km (19 -- 34 mi) range, too small to be seen by ground telescopes, at distances from the Sun of 43 -- 44 AU, which would put the encounters in the 2018 -- 2019 period. The initial estimated probabilities that these objects are reachable within New Horizons ' fuel budget are 100 %, 7 %, and 97 %, respectively. All are members of the "cold '' (low - inclination, low - eccentricity) classical Kuiper belt, and thus very different from Pluto. PT1 (given the temporary designation "1110113Y '' on the HST web site), the most favorably situated object, is magnitude 26.8, 30 -- 45 km (19 -- 28 mi) in diameter, and will be encountered around January 2019. A course change to reach it required about 35 % of New Horizons ' available trajectory - adjustment fuel supply. A mission to PT3 was in some ways preferable, in that it is brighter and therefore probably larger than PT1, but the greater fuel requirements to reach it would have left less for maneuvering and unforeseen events. Once sufficient orbital information was provided, the Minor Planet Center gave provisional designations to the three target KBOs: 2014 MU 69 (PT1), 2014 OS 393 (PT2), and 2014 PN 70 (PT3). By the fall of 2014, a possible fourth target, 2014 MT 69, had been eliminated by follow - up observations. PT2 was out of the running before the Pluto flyby. The spacecraft will also study almost 20 KBOs from afar.
On August 28, 2015, (486958) 2014 MU 69 (PT1) was chosen as the flyby target. The necessary course adjustment was performed with four engine firings between October 22 and November 4, 2015. The flyby is scheduled for January 1, 2019. Funding was secured on July 1, 2016.
Aside from its flyby of (486958) 2014 MU 69, the extended mission for New Horizons calls for the spacecraft to conduct observations of, and look for ring systems around, between 25 and 35 different KBOs. In addition, it will continue to study the gas, dust and plasma composition of the Kuiper belt before the mission extension ends in 2021.
On November 2, 2015, New Horizons imaged KBO 15810 Arawn with the LORRI instrument from 280 million km away (170 million mi; 1.9 AU), showing the shape of the object and one or two details. This KBO was again imaged by the LORRI instrument on April 7 -- 8, 2016, from a distance of 111 million km (69 million mi; 0.74 AU). The new images allowed the science team to further refine the location of 15810 Arawn to within 1,000 km (620 mi) and to determine its rotational period of 5.47 hours.
In July 2016, the LORRI camera captured some distant images of Quaoar from 2.1 billion km away (1.3 billion mi; 14 AU); the oblique view will complement Earth - based observations to study the object 's light - scattering properties.
On December 5, 2017, when New Horizons was 40.9 AU from Earth, a calibration image of the Wishing Well cluster marked the most distant image ever taken by a spacecraft (breaking the 27-year record set by Voyager 1's famous Pale Blue Dot). Two hours later, New Horizons surpassed its own record, imaging the Kuiper belt objects 2012 HZ 84 and 2012 HE 85 from distances of 0.50 and 0.34 AU, respectively. These are the closest images of Kuiper belt objects other than Pluto as of February 2018.
The science objectives of the flyby include characterizing the geology and morphology of 2014 MU 69 and mapping the surface composition (searching for ammonia, carbon monoxide, methane, and water ice). Searches will be conducted for orbiting moonlets, a coma, rings, and the surrounding environment. Additional objectives include:
2014 MU 69 is the first object targeted for a flyby that was discovered after the spacecraft was launched. New Horizons is planned to come within 3,500 km (2,200 mi) of 2014 MU 69, three times closer than the spacecraft's earlier encounter with Pluto. Images with a resolution of up to 30 m (98 ft) are expected.
The new mission began on October 22, 2015, when New Horizons carried out the first in a series of four initial targeting maneuvers designed to send it toward 2014 MU 69. The maneuver, which started at approximately 19:50 UTC and used two of the spacecraft's small hydrazine-fueled thrusters, lasted approximately 16 minutes and changed the spacecraft's trajectory by about 10 meters per second (33 ft/s). The remaining three targeting maneuvers took place on October 25, October 28, and November 4, 2015.
The craft will be brought out of its current hibernation period on June 4, 2018, to prepare for the approach phase. After verifying its health status, the spacecraft will transition from a spin - stabilized mode to a 3 - axis - stabilized mode on August 13, 2018. The official approach phase will begin on August 16, 2018, and continue through December 24, 2018. The first images from New Horizons are scheduled to be taken in early September 2018. If obstacles are detected, mission planners may opt to divert the spacecraft 's trajectory as late as mid-December 2018.
The Core phase begins a week before the encounter, and continues for two days after the encounter. The majority of the science data will be collected within 48 hours of the closest approach in a phase called the Inner Core. Closest approach will occur January 1, 2019, at 05:33 UTC, at which point it will be 43.4 AU from the Sun. At this distance, the one - way transit time for radio signals between Earth and New Horizons will be 6 hours.
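The quoted signal delay follows directly from the flyby distance and the speed of light. The snippet below is a minimal arithmetic check, assuming only the 43.4 AU figure given above and standard physical constants.

```python
# Minimal check of the one-way light time quoted above.
# Assumes the 43.4 AU flyby distance from the text; constants are standard values.
AU_M = 1.495978707e11      # metres per astronomical unit
C_M_S = 2.99792458e8       # speed of light, m/s

distance_au = 43.4
one_way_seconds = distance_au * AU_M / C_M_S
print(f"one-way light time: {one_way_seconds / 3600:.2f} hours")  # about 6.0 hours
```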
After the encounter, preliminary high - priority data will be sent to Earth on January 1 and 2, 2019. On January 9, New Horizons will return to a spin - stabilized mode, to prepare to send the remainder of its data back to Earth. This downlink is expected to continue through most of 2019.
New Horizons has been called "the fastest spacecraft ever launched '' because it left Earth at 16.26 kilometers per second (58,536 km / h; 36,373 mph), faster than any other spacecraft to date. It is also the first spacecraft launched directly into a solar escape trajectory, which requires an approximate speed while near Earth of 16.5 km / s (59,000 km / h; 37,000 mph), plus additional delta - v to cover air and gravity drag, all to be provided by the launch vehicle.
However, it is not the fastest spacecraft to leave the Solar System. As of January 2018, this record is held by Voyager 1, traveling at 16.985 km / s (61,146 km / h; 37,994 mph) relative to the Sun. Voyager 1 attained greater hyperbolic excess velocity from gravitational slingshots by Jupiter and Saturn than New Horizons. When New Horizons reaches the distance of 100 AU, it will be travelling at about 13 km / s (47,000 km / h; 29,000 mph), around 4 km / s (14,000 km / h; 8,900 mph) slower than Voyager 1 at that distance. Other spacecraft, such as the Helios probes, can also be measured as the fastest objects, because of their orbital speed relative to the Sun at perihelion: 68.7 km / s (247,000 km / h; 154,000 mph) for Helios - B. Because they remain in solar orbit, their specific orbital energy relative to the Sun is lower than that of New Horizons and other artificial objects escaping the Solar System.
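The distinction drawn here between peak speed and specific orbital energy can be illustrated with the standard formula ε = v² / 2 − μ / r. The sketch below is only a rough check under stated assumptions: the Helios - B perihelion distance (about 0.29 AU) is not given in the text above and is used here as an approximate value.

```python
# Rough comparison of specific orbital energy relative to the Sun,
# epsilon = v^2 / 2 - mu / r (positive means an escape trajectory).
# The Helios-B perihelion distance (~0.29 AU) is an assumed, approximate value.
MU_SUN = 1.32712440018e20   # Sun's gravitational parameter, m^3/s^2
AU_M = 1.495978707e11       # metres per astronomical unit

def specific_orbital_energy(speed_m_s: float, radius_m: float) -> float:
    """Orbital energy per unit mass relative to the Sun, in J/kg."""
    return speed_m_s ** 2 / 2 - MU_SUN / radius_m

new_horizons = specific_orbital_energy(13_000, 100 * AU_M)   # ~13 km/s at 100 AU
helios_b = specific_orbital_energy(68_700, 0.29 * AU_M)      # ~68.7 km/s at perihelion

print(f"New Horizons: {new_horizons:+.2e} J/kg")  # positive -> leaves the Solar System
print(f"Helios-B:     {helios_b:+.2e} J/kg")      # negative -> remains bound to the Sun
```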
New Horizons ' Star 48 B third stage is also on a hyperbolic escape trajectory from the Solar System, and reached Jupiter before the New Horizons spacecraft. The Star 48B was expected to cross Pluto 's orbit on October 15, 2015. Because it is not in controlled flight, it did not receive the correct gravity assist, and passed within 200 million km (120 million mi) of Pluto. The Centaur second stage did not achieve solar escape velocity, and is in heliocentric orbit.
|
who's face is on the two dollar bill | United States dollar - Wikipedia
United States East Timor Ecuador El Salvador Marshall Islands Federated States of Micronesia Palau Panama Zimbabwe
The United States dollar (sign: $; code: USD; also abbreviated US $ and referred to as the dollar, U.S. dollar, or American dollar) is the official currency of the United States and its insular territories per the United States Constitution. For most practical purposes, it is divided into 100 smaller cent (¢) units, but officially it can be divided into 1000 mills (₥). The circulating paper money consists of Federal Reserve Notes that are denominated in United States dollars (12 U.S.C. § 418).
One might argue that the U.S. dollar is de jure commodity money of silver as enacted by the Coinage Act of 1792 which determined the dollar to be 371 4⁄16 grain (24.1 g) pure or 416 grain (27.0 g) standard silver; however, since the suspension in 1971 of convertibility of paper U.S. currency into any precious metal, the U.S. dollar is, de facto, fiat money. Since the currency is the most used in international transactions, it is the world 's primary reserve currency. Several countries use it as their official currency, and in many others it is the de facto currency. Besides the United States, it is also used as the sole currency in two British Overseas Territories in the Caribbean: the British Virgin Islands and Turks and Caicos Islands. A few countries use the Federal Reserve Notes for paper money, while still minting their own coins, or also accept U.S. dollar coins (such as the Susan B. Anthony dollar). As of July 12, 2017, there was approximately $1.56 trillion in circulation, of which $1.52 trillion was in Federal Reserve notes (the remaining $40 billion is in the form of coins).
Article I, Section 8 of the U.S. Constitution provides that the Congress has the power "To coin money ''. Laws implementing this power are currently codified at 31 U.S.C. § 5112. Section 5112 prescribes the forms in which the United States dollars should be issued. These coins are both designated in Section 5112 as "legal tender '' in payment of debts. The Sacagawea dollar is one example of the copper alloy dollar. The pure silver dollar is known as the American Silver Eagle. Section 5112 also provides for the minting and issuance of other coins, which have values ranging from one cent to 100 dollars. These other coins are more fully described in Coins of the United States dollar.
The Constitution provides that "a regular Statement and Account of the Receipts and Expenditures of all public Money shall be published from time to time ''. That provision of the Constitution is made specific by Section 331 of Title 31 of the United States Code. The sums of money reported in the "Statements '' are currently being expressed in U.S. dollars (for example, see the 2009 Financial Report of the United States Government). The U.S. dollar may therefore be described as the unit of account of the United States.
The word "dollar '' is one of the words in the first paragraph of Section 9 of Article I of the Constitution. There, "dollars '' is a reference to the Spanish milled dollar, a coin that had a monetary value of 8 Spanish units of currency, or reales. In 1792 the U.S. Congress passed a Coinage Act. Section 9 of that act authorized the production of various coins, including "DOLLARS OR UNITS -- each to be of the value of a Spanish milled dollar as the same is now current, and to contain three hundred and seventy - one grains and four sixteenth parts of a grain of pure, or four hundred and sixteen grains of standard silver ''. Section 20 of the act provided, "That the money of account of the United States shall be expressed in dollars, or units... and that all accounts in the public offices and all proceedings in the courts of the United States shall be kept and had in conformity to this regulation ''. In other words, this act designated the United States dollar as the unit of currency of the United States.
Unlike the Spanish milled dollar, the U.S. dollar is based upon a decimal system of values. In addition to the dollar, the coinage act officially established monetary units of mill or one - thousandth of a dollar (symbol ₥), cent or one - hundredth of a dollar (symbol ¢), dime or one - tenth of a dollar, and eagle or ten dollars, with prescribed weights and composition of gold, silver, or copper for each. It was proposed in the mid-1800s that one hundred dollars be known as a union, but no union coins were ever struck and only patterns for the $50 half union exist. However, only cents are in everyday use as divisions of the dollar; "dime '' is used solely as the name of the coin with the value of 10 ¢, while "eagle '' and "mill '' are largely unknown to the general public, though mills are sometimes used in matters of tax levies, and gasoline prices are usually in the form of $X.XX9 per gallon, e.g., $3.599, more commonly written as $3.59 9⁄10. When currently issued in circulating form, denominations equal to or less than a dollar are emitted as U.S. coins while denominations equal to or greater than a dollar are emitted as Federal Reserve notes (with the exception of gold, silver and platinum coins valued up to $100 as legal tender, but worth far more as bullion). Both one - dollar coins and notes are produced today, although the note form is significantly more common. In the past, "paper money '' was occasionally issued in denominations less than a dollar (fractional currency) and gold coins were issued for circulation up to the value of $20 (known as the "double eagle '', discontinued in the 1930s). The term eagle was used in the Coinage Act of 1792 for the denomination of ten dollars, and subsequently was used in naming gold coins. Paper currency less than one dollar in denomination, known as "fractional currency '', was also sometimes pejoratively referred to as "shinplasters ''. In 1854, James Guthrie, then Secretary of the Treasury, proposed creating $100, $50 and $25 gold coins, which were referred to as a "Union '', "Half Union '', and "Quarter Union '', thus implying a denomination of 1 Union = $100.
Today, USD notes are made from cotton fiber paper, unlike most common paper, which is made of wood fiber. U.S. coins are produced by the United States Mint. U.S. dollar banknotes are printed by the Bureau of Engraving and Printing and, since 1914, have been issued by the Federal Reserve. The "large - sized notes '' issued before 1928 measured 7.42 by 3.125 inches (188.5 by 79.4 mm); small - sized notes, introduced that year, measure 6.14 by 2.61 by 0.0043 inches (155.96 by 66.29 by 0.11 mm). When the current, smaller sized U.S. currency was introduced it was referred to as Philippine - sized currency because the Philippines had previously adopted the same size for its legal currency.
In the 16th century, Count Hieronymus Schlick of Bohemia began minting coins known as Joachimstalers (from German thal, or nowadays usually Tal, "valley '', cognate with "dale '' in English), named for Joachimstal, the valley where the silver was mined (St. Joachim 's Valley, now Jáchymov; then part of the Kingdom of Bohemia, now part of the Czech Republic). Joachimstaler was later shortened to the German Taler, a word that eventually found its way into Danish and Swedish as daler, Norwegian as dalar and daler, Dutch as daler or daalder, Ethiopian as ታላሪ (talari), Hungarian as tallér, Italian as tallero, and English as dollar. Alternatively, thaler is said to come from the German coin Guldengroschen ("great guilder '', being of silver but equal in value to a gold guilder), minted from the silver from Joachimsthal.
The coins minted at Joachimsthal soon lent their name to other coins of similar size and weight from other places. One such example, was a Dutch coin depicting a lion, hence its Dutch name leeuwendaler (in English: lion daler).
The leeuwendaler was authorized to contain 427.16 grains of .75 fine silver and passed locally for between 36 and 42 stuivers. It was lighter than the large - denomination coins then in circulation; thus it was more advantageous for a Dutch merchant to pay a foreign debt in leeuwendalers and it became the coin of choice for foreign trade.
The leeuwendaler was popular in the Dutch East Indies and in the Dutch New Netherland Colony (New York), and circulated throughout the Thirteen Colonies during the 17th and early 18th centuries. It was also popular throughout Eastern Europe, where it led to the current Romanian and Moldovan currency being called leu (literally "lion '').
Among the English - speaking community, the coin came to be popularly known as lion dollar -- and is the origin of the name dollar. The modern American - English pronunciation of dollar is still remarkably close to the 17th - century Dutch pronunciation of daler.
By analogy with this lion dollar, Spanish pesos -- with the same weight and shape as the lion dollar -- came to be known as Spanish dollars. By the mid-18th century, the lion dollar had been replaced by the Spanish dollar, the famous "piece of eight '', which was distributed widely in the Spanish colonies in the New World and in the Philippines. Eventually, dollar became the name of the first official American currency.
The colloquialism "buck '' (s) (much like the British word "quid '' (s, pl) for the pound sterling) is often used to refer to dollars of various nations, including the U.S. dollar. This term, dating to the 18th century, may have originated with the colonial leather trade. It may also have originated from a poker term. "Greenback '' is another nickname originally applied specifically to the 19th century Demand Note dollars created by Abraham Lincoln to finance the costs of the Civil War for the North. The original note was printed in black and green on the back side. It is still used to refer to the U.S. dollar (but not to the dollars of other countries). Other well - known names of the dollar as a whole in denominations include "greenmail '', "green '' and "dead presidents '' (the last because deceased presidents are pictured on most bills).
A "grand '', sometimes shortened to simply "G '', is a common term for the amount of $1,000. The suffix "K '' or "k '' (from "kilo - '') is also commonly used to denote this amount (such as "$10 k '' to mean $10,000). However, the $1,000 note is no longer in general use. A "large '' or a "stack '' is usually a reference to a multiple of $1,000 (such as "fifty large '' meaning $50,000). The $100 note is nicknamed "Benjamin '', "Benji '', "Ben '', or "Franklin '' (after Benjamin Franklin), "C - note '' (C being the Roman numeral for 100), "Century note '' or "bill '' (e.g. "two bills '' being $200). The $50 note is occasionally called a "yardstick '' or a "grant '' (after President Ulysses S. Grant, pictured on the obverse). The $20 note is referred to as a "double sawbuck '', "Jackson '' (after Andrew Jackson), or "double eagle ''. The $10 note is referred to as a "sawbuck '', "ten - spot '' or "Hamilton '' (after Alexander Hamilton). The $5 note is known as a "Lincoln '', "fin '', "fiver '' or "five - spot ''. The infrequently - used $2 note is sometimes called "deuce '', "Tom '', or "Jefferson '' (after Thomas Jefferson). The $1 note is known as a "single '' or "buck ''. The dollar has also been referred to as a "bone '' and "bones '' in plural (e.g. "twenty bones '' is equal to $20). The newer designs, with portraits displayed in the main body of the obverse (rather than in cameo insets), upon paper color - coded by denomination, are sometimes referred to as "bigface '' notes or "Monopoly money ''.
"Piastre '' was the original French word for the U.S. dollar, used for example in the French text of the Louisiana Purchase. Calling the dollar a piastre is still common among the speakers of Cajun French and New England French. Modern French uses dollar for this unit of currency as well. The term is still used as slang for U.S. dollars in the French - speaking Caribbean islands, most notably Haiti.
The symbol $, usually written before the numerical amount, is used for the U.S. dollar (as well as for many other currencies). The sign was the result of a late 18th - century evolution of the scribal abbreviation "ps '' for the peso, the common name for the Spanish dollars that were in wide circulation in the New World from the 16th to the 19th centuries. These Spanish pesos or dollars were minted in Spanish America, namely in Mexico City; Potosí, Bolivia; and Lima, Peru. The p and the s eventually came to be written over each other giving rise to $.
Another popular explanation is that it is derived from the Pillars of Hercules on the Spanish coat of arms of the Spanish dollar. These Pillars of Hercules on the silver Spanish dollar coins take the form of two vertical bars (||) and a swinging cloth band in the shape of an "S ''.
Yet another explanation suggests that the dollar sign was formed from the capital letters U and S written or printed one on top of the other. This theory, popularized by novelist Ayn Rand in Atlas Shrugged, does not consider the fact that the symbol was already in use before the formation of the United States.
The American dollar coin was initially based on the value and look of the Spanish dollar, used widely in Spanish America from the 16th to the 19th centuries. The first dollar coins issued by the United States Mint (founded 1792) were similar in size and composition to the Spanish dollar, minted in Mexico and Peru. The Spanish, U.S. silver dollars, and later, Mexican silver pesos circulated side by side in the United States, and the Spanish dollar and Mexican peso remained legal tender until the Coinage Act of 1857. The coinage of various English colonies also circulated. The lion dollar was popular in the Dutch New Netherland Colony (New York), but the lion dollar also circulated throughout the English colonies during the 17th century and early 18th century. Examples circulating in the colonies were usually worn so that the design was not fully distinguishable, thus they were sometimes referred to as "dog dollars ''.
The U.S. dollar was first defined by the Coinage Act of 1792, which specified a "dollar '' to be based in the Spanish milled dollar and of 371 grains and 4 sixteenths part of a grain of pure or 416 grains (27.0 g) of standard silver and an "eagle '' to be 247 and 4 eighths of a grain or 270 grains (17 g) of gold (again depending on purity). The choice of the value 371 grains arose from Alexander Hamilton 's decision to base the new American unit on the average weight of a selection of worn Spanish dollars. Hamilton got the treasury to weigh a sample of Spanish dollars and the average weight came out to be 371 grains. A new Spanish dollar was usually about 377 grains in weight, and so the new U.S. dollar was at a slight discount in relation to the Spanish dollar.
The same coinage act also set the value of an eagle at 10 dollars, and the dollar at 1⁄10 eagle. It called for 90 % silver alloy coins in denominations of 1, 1⁄2, 1⁄4, 1⁄10, and 1⁄20 dollars; it called for 90 % gold alloy coins in denominations of 1, 1⁄2, and 1⁄4 eagles. The value of gold or silver contained in the dollar was then converted into relative value in the economy for the buying and selling of goods. This allowed the value of things to remain fairly constant over time, except for the influx and outflux of gold and silver in the nation 's economy.
The early currency of the United States did not exhibit faces of presidents, as is the custom now; although today, by law, only the portrait of a deceased individual may appear on United States currency. In fact, the newly formed government was against having portraits of leaders on the currency, a practice compared to the policies of European monarchs. The currency as we know it today did not get the faces it currently has until after the early 20th century; before that, the "heads '' side of coinage used profile faces and striding, seated, and standing figures from Greek and Roman mythology and composite Native Americans. The last coins to be converted to profiles of historic Americans were the dime (1946) and the dollar (1971).
For articles on the currencies of the colonies and states, see Connecticut pound, Delaware pound, Georgia pound, Maryland pound, Massachusetts pound, New Hampshire pound, New Jersey pound, New York pound, North Carolina pound, Pennsylvania pound, Rhode Island pound, South Carolina pound and Virginia pound.
During the American Revolution the thirteen colonies became independent states. Freed from British monetary regulations, they each issued £ sd paper money to pay for military expenses. The Continental Congress also began issuing "Continental Currency '' denominated in Spanish dollars. The dollar was valued relative to the states ' currencies at the following rates:
Continental currency depreciated badly during the war, giving rise to the famous phrase "not worth a continental ''. A primary problem was that monetary policy was not coordinated between Congress and the states, which continued to issue bills of credit. Additionally, neither Congress nor the governments of the several states had the will or the means to retire the bills from circulation through taxation or the sale of bonds. The currency was ultimately replaced by the silver dollar at the rate of 1 silver dollar to 1000 continental dollars.
From 1792, when the Mint Act was passed, the dollar was defined as 371.25 grains (24.056 g) of silver. Many historians erroneously assume gold was standardized at a fixed rate in parity with silver; however, there is no evidence of Congress making this law. This has to do with Alexander Hamilton 's suggestion to Congress of a fixed 15:1 ratio of silver to gold. The gold coins that were minted, however, were not given any denomination whatsoever and traded for a market value relative to the Congressional standard of the silver dollar. 1834 saw a shift in the gold standard to 23.2 grains (1.50 g), followed by a slight adjustment to 23.22 grains (1.505 g) in 1837 (16:1 ratio).
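The 15:1 and roughly 16:1 ratios follow directly from the grain weights quoted above, together with the 247.5 - grain, 10 - dollar gold eagle mentioned earlier. The snippet below is a minimal check of that arithmetic.

```python
# Minimal check of the silver-to-gold ratios implied by the grain weights above.
silver_grains_per_dollar = 371.25         # pure silver in one dollar (1792 Mint Act)
gold_grains_per_dollar_1792 = 247.5 / 10  # 247.5-grain pure-gold eagle, worth 10 dollars
gold_grains_per_dollar_1837 = 23.22       # after the 1837 adjustment

print(silver_grains_per_dollar / gold_grains_per_dollar_1792)  # 15.0  -> the 15:1 ratio
print(silver_grains_per_dollar / gold_grains_per_dollar_1837)  # ~16.0 -> the 16:1 ratio
```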
In 1862, paper money was issued without the backing of precious metals, due to the Civil War. Silver and gold coins continued to be issued and in 1878 the link between paper money and coins was reinstated. This disconnection from gold and silver backing also occurred during the War of 1812. The use of paper money not backed by precious metals had also occurred under the Articles of Confederation from 1777 to 1788. With no solid backing and being easily counterfeited, the continentals quickly lost their value, giving rise to the phrase "not worth a continental ''. This was a primary reason for the "No state shall... make any thing but gold and silver coin a tender in payment of debts '' clause in article 1, section 10 of the United States Constitution.
In order to finance the War of 1812, Congress authorized the issuance of Treasury Notes, interest - bearing short - term debt that could be used to pay public dues. While they were intended to serve as debt, they did function "to a limited extent '' as money. Treasury Notes were again printed to help resolve the reduction in public revenues resulting from the Panic of 1837 and the Panic of 1857, as well as to help finance the Mexican -- American War and the Civil War.
In addition to Treasury Notes, in 1861, Congress authorized the Treasury to borrow $50 million in the form of Demand Notes, which did not bear interest but could be redeemed on demand for precious metals. However, by December 1861, the Union government 's supply of specie was outstripped by demand for redemption and it was forced to suspend redemption temporarily. The following February, Congress passed the Legal Tender Act of 1862, issuing United States Notes, which were not redeemable on demand and bore no interest, but were legal tender, meaning that creditors had to accept them at face value for any payment except for public debts and import tariffs. However, silver and gold coins continued to be issued, resulting in the depreciation of the newly printed notes through Gresham 's Law. In 1869, the Supreme Court ruled in Hepburn v. Griswold that Congress could not require creditors to accept United States Notes, but overturned that ruling the next year in the Legal Tender Cases. In 1875, Congress passed the Specie Payment Resumption Act, requiring the Treasury to allow US Notes to be redeemed for gold after January 1, 1879. The Treasury ceased to issue United States Notes in 1971.
The Gold Standard Act of 1900 abandoned the bimetallic standard and defined the dollar as 23.22 grains (1.505 g) of gold, equivalent to setting the price of 1 troy ounce of gold at $20.67. Silver coins continued to be issued for circulation until 1964, when all silver was removed from dimes and quarters, and the half dollar was reduced to 40 % silver. Silver half dollars were last issued for circulation in 1970. Gold coins were confiscated by Executive Order 6102 issued in 1933 by Franklin Roosevelt. The gold standard was changed to 13.71 grains (0.888 g), equivalent to setting the price of 1 troy ounce of gold at $35. This standard persisted until 1968.
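The official gold prices quoted above follow from those grain definitions, since a troy ounce contains 480 grains. A minimal check of the arithmetic:

```python
# The official gold prices above follow from the grain definitions,
# given 480 grains to the troy ounce.
GRAINS_PER_TROY_OUNCE = 480

print(GRAINS_PER_TROY_OUNCE / 23.22)  # ~20.67 dollars per troy ounce (Gold Standard Act of 1900)
print(GRAINS_PER_TROY_OUNCE / 13.71)  # ~35.01 dollars per troy ounce (post-1933 revaluation)
```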
Between 1968 and 1975, a variety of pegs to gold were put in place, eventually culminating in a sudden end, on August 15, 1971, to the convertibility of dollars to gold later dubbed the Nixon Shock. The last peg was $42.22 per ounce before the U.S. dollar was let to freely float on currency markets.
According to the Bureau of Engraving and Printing, the largest note it ever printed was the $100,000 Gold Certificate, Series 1934. These notes were printed from December 18, 1934, through January 9, 1935, and were issued by the Treasurer of the United States to Federal Reserve Banks only against an equal amount of gold bullion held by the Treasury. These notes were used for transactions between Federal Reserve Banks and were not circulated among the general public.
Official United States coins have been produced every year from 1792 to the present.
Discontinued coin denominations include:
Collector coins for which everyday transactions are non-existent.
Technically, all these coins are still legal tender at face value, though some are far more valuable today for their numismatic value, and for gold and silver coins, their precious metal value. From 1965 to 1970 the Kennedy half dollar was the only circulating coin with any silver content, which was removed in 1971 and replaced with cupronickel. However, since 1992, the U.S. Mint has produced special Silver Proof Sets in addition to the regular yearly proof sets with silver dimes, quarters, and half dollars in place of the standard copper - nickel versions. In addition, an experimental $4.00 (Stella) coin was also minted in 1879, but never placed into circulation, and is properly considered to be a pattern rather than an actual coin denomination.
The $50 coin mentioned was only produced in 1915 for the Panama - Pacific International Exposition (1915) celebrating the opening of the Panama Canal. Only 1,128 were made, 645 of which were octagonal; this remains the only U.S. coin that was not round as well as the largest and heaviest U.S. coin ever produced.
A $100 gold coin was produced in high relief during 2015, although it was primarily produced for collectors, not for general circulation.
From 1934 to present, the only denominations produced for circulation have been the familiar penny, nickel, dime, quarter, half dollar and dollar. The nickel is the only coin still in use today that is essentially unchanged (except in its design) from its original version. Every year since 1866, the nickel has been 75 % copper and 25 % nickel, except for 4 years during World War II when nickel was needed for the war.
Due to the penny 's low value, some efforts have been made to eliminate the penny as circulating coinage.
The United States Mint produces Proof Sets specifically for collectors and speculators. Silver Proofs tend to be the standard designs but with the dime, quarter, and half dollar containing 90 % silver. Starting in 1983 and ending in 1997, the Mint also produced proof sets containing the year 's commemorative coins alongside the regular coins. Another type of proof set is the Presidential Dollar Proof Set where four special $1 coins are minted each year featuring a president. Because of budget constraints and increasing stockpiles of these relatively unpopular coins, the production of new Presidential dollar coins for circulation was suspended on December 13, 2011, by U.S. Treasury Secretary Timothy F. Geithner. Future minting of such coins will be made solely for collectors.
The first United States dollar was minted in 1794. Known as the Flowing Hair Dollar, it contained 416 grains of "standard silver '' (89.25 % silver and 10.75 % copper), as specified by Section 13 of the Coinage Act of 1792. It was designated by Section 9 of that Act as having "the value of a Spanish milled dollar ''.
Dollar coins have not been very popular in the United States. Silver dollars were minted intermittently from 1794 through 1935; a copper - nickel dollar of the same large size, featuring President Dwight D. Eisenhower, was minted from 1971 through 1978. Gold dollars were also minted in the 19th century. The Susan B. Anthony dollar coin was introduced in 1979; these proved to be unpopular because they were often mistaken for quarters, due to their nearly equal size, their milled edge, and their similar color. Minting of these dollars for circulation was suspended in 1980 (collectors ' pieces were struck in 1981), but, as with all past U.S. coins, they remain legal tender. As the number of Anthony dollars held by the Federal Reserve and dispensed primarily to make change in postal and transit vending machines had been virtually exhausted, additional Anthony dollars were struck in 1999. In 2000, a new $1 coin, featuring Sacagawea, (the Sacagawea dollar) was introduced, which corrected some of the problems of the Anthony dollar by having a smooth edge and a gold color, without requiring changes to vending machines that accept the Anthony dollar. However, this new coin has failed to achieve the popularity of the still - existing $1 bill and is rarely used in daily transactions. The failure to simultaneously withdraw the dollar bill and weak publicity efforts have been cited by coin proponents as primary reasons for the failure of the dollar coin to gain popular support.
In February 2007, the U.S. Mint, under the Presidential $1 Coin Act of 2005, introduced a new $1 U.S. Presidential dollar coin. Based on the success of the "50 State Quarters '' series, the new coin features a sequence of presidents in order of their inaugurations, starting with George Washington, on the obverse side. The reverse side features the Statue of Liberty. To allow for larger, more detailed portraits, the traditional inscriptions of "E Pluribus Unum '', "In God We Trust '', the year of minting or issuance, and the mint mark will be inscribed on the edge of the coin instead of the face. This feature, similar to the edge inscriptions seen on the British £ 1 coin, is not usually associated with U.S. coin designs. The inscription "Liberty '' has been eliminated, with the Statue of Liberty serving as a sufficient replacement. In addition, due to the nature of U.S. coins, this will be the first time there will be circulating U.S. coins of different denominations with the same president featured on the obverse (heads) side (Lincoln / penny, Jefferson / nickel, Franklin D. Roosevelt / dime, Washington / quarter, Kennedy / half dollar, and Eisenhower / dollar). Another unusual fact about the new $1 coin is that Grover Cleveland will have two coins with two different portraits issued, because he was the only U.S. President to be elected to two non-consecutive terms.
Early releases of the Washington coin included error coins shipped primarily from the Philadelphia mint to Florida and Tennessee banks. Highly sought after by collectors, and trading for as much as $850 each within a week of discovery, the error coins were identified by the absence of the edge impressions "E PLURIBUS UNUM IN GOD WE TRUST 2007 P ''. The mint of origin is generally accepted to be mostly Philadelphia, although identifying the source mint is impossible without opening a mint pack also containing marked units. Edge lettering is minted in both orientations with respect to "heads '', some amateur collectors were initially duped into buying "upside down lettering error '' coins. Some cynics also erroneously point out that the Federal Reserve makes more profit from dollar bills than dollar coins because they wear out in a few years, whereas coins are more permanent. The fallacy of this argument arises because new notes printed to replace worn out notes, which have been withdrawn from circulation, bring in no net revenue to the government to offset the costs of printing new notes and destroying the old ones. As most vending machines are incapable of making change in banknotes, they commonly accept only $1 bills, though a few will give change in dollar coins.
The U.S. Constitution provides that Congress shall have the power to "borrow money on the credit of the United States ''. Congress has exercised that power by authorizing Federal Reserve Banks to issue Federal Reserve Notes. Those notes are "obligations of the United States '' and "shall be redeemed in lawful money on demand at the Treasury Department of the United States, in the city of Washington, District of Columbia, or at any Federal Reserve bank ''. Federal Reserve Notes are designated by law as "legal tender '' for the payment of debts. Congress has also authorized the issuance of more than 10 other types of banknotes, including the United States Note and the Federal Reserve Bank Note. The Federal Reserve Note is the only type that remains in circulation since the 1970s.
Currently printed denominations are $1, $2, $5, $10, $20, $50, and $100. Notes above the $100 denomination stopped being printed in 1946 and were officially withdrawn from circulation in 1969. These notes were used primarily in inter-bank transactions or by organized crime; it was the latter usage that prompted President Richard Nixon to issue an executive order in 1969 halting their use. With the advent of electronic banking, they became less necessary. Notes in denominations of $500, $1,000, $5,000, $10,000, and $100,000 were all produced at one time; see large denomination bills in U.S. currency for details. With the exception of the $100,000 bill (which was only issued as a Series 1934 Gold Certificate and was never publicly circulated; thus it is illegal to own), these notes are now collectors ' items and are worth more than their face value to collectors.
Though still predominantly green, post-2004 series incorporate other colors to better distinguish different denominations. As a result of a 2008 decision in an accessibility lawsuit filed by the American Council of the Blind, the Bureau of Engraving and Printing is planning to implement a raised tactile feature in the next redesign of each note, except the $1 and the current version of the $100 bill. It also plans larger, higher - contrast numerals, more color differences, and distribution of currency readers to assist the visually impaired during the transition period.
The monetary base consists of coins and Federal Reserve Notes in circulation outside the Federal Reserve Banks and the U.S. Treasury, plus deposits held by depository institutions at Federal Reserve Banks. The adjusted monetary base has increased from approximately 400 billion dollars in 1994, to 800 billion in 2005, and over 3000 billion in 2013. The amount of cash in circulation is increased (or decreased) by the actions of the Federal Reserve System. Eight times a year, the 12 - person Federal Open Market Committee meets to determine U.S. monetary policy. Every business day, the Federal Reserve System engages in Open market operations to carry out that monetary policy. If the Federal Reserve desires to increase the money supply, it will buy securities (such as U.S. Treasury Bonds) anonymously from banks in exchange for dollars. Conversely, it will sell securities to the banks in exchange for dollars, to take dollars out of circulation.
When the Federal Reserve makes a purchase, it credits the seller 's reserve account (with the Federal Reserve). This money is not transferred from any existing funds -- it is at this point that the Federal Reserve has created new high - powered money. Commercial banks can freely withdraw in cash any excess reserves from their reserve account at the Federal Reserve. To fulfill those requests, the Federal Reserve places an order for printed money from the U.S. Treasury Department. The Treasury Department in turn sends these requests to the Bureau of Engraving and Printing (to print new dollar bills) and the Bureau of the Mint (to stamp the coins).
Usually, the short - term goal of open market operations is to achieve a specific short - term interest rate target. In other instances, monetary policy might instead entail the targeting of a specific exchange rate relative to some foreign currency or else relative to gold. For example, in the case of the United States the Federal Reserve targets the federal funds rate, the rate at which member banks lend to one another overnight. The other primary means of conducting monetary policy include: (i) Discount window lending (as lender of last resort); (ii) Fractional deposit lending (changes in the reserve requirement); (iii) Moral suasion (cajoling certain market players to achieve specified outcomes); (iv) "Open mouth operations '' (talking monetary policy with the market).
The 6th paragraph of Section 8 of Article 1 of the U.S. Constitution provides that the U.S. Congress shall have the power to "coin money '' and to "regulate the value '' of domestic and foreign coins. Congress exercised those powers when it enacted the Coinage Act of 1792. That Act provided for the minting of the first U.S. dollar and it declared that the U.S. dollar shall have "the value of a Spanish milled dollar as the same is now current ''.
Comparing the equivalent amount of goods that, in a particular year, could be purchased with $1 shows that from 1774 through 2012 the U.S. dollar lost about 97.0 % of its buying power.
The decline in the value of the U.S. dollar corresponds to price inflation, which is a rise in the general level of prices of goods and services in an economy over a period of time. A consumer price index (CPI) is a measure estimating the average price of consumer goods and services purchased by households. The United States Consumer Price Index, published by the Bureau of Labor Statistics, is a measure estimating the average price of consumer goods and services in the United States. It reflects inflation as experienced by consumers in their day - to - day living expenses, and is conventionally reported relative to a 1982 -- 1984 base period.
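The connection between a rising price index and the roughly 97 % loss of buying power cited above can be made explicit with a short calculation. The index values in the sketch below are illustrative stand - ins, not actual BLS data; the point is only that a roughly 97 % loss corresponds to prices rising by a factor of about 33.

```python
# Relation between a price index and lost buying power.
# The index values here are illustrative, not actual BLS data.
def buying_power_loss(index_start: float, index_end: float) -> float:
    """Fraction of purchasing power lost as the price index rises."""
    return 1 - index_start / index_end

# A ~33-fold rise in the price level corresponds to roughly a 97 % loss.
print(buying_power_loss(index_start=1.0, index_end=33.3))  # ~0.97
```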
The value of the U.S. dollar declined significantly during wartime, especially during the American Civil War, World War I, and World War II. The Federal Reserve, which was established in 1913, was designed to furnish an "elastic '' currency subject to "substantial changes of quantity over short periods '', which differed significantly from previous forms of high - powered money such as gold, national bank notes, and silver coins. Over the very long run, the prior gold standard kept prices stable -- for instance, the price level and the value of the U.S. dollar in 1914 was not very different from the price level in the 1880s. The Federal Reserve initially succeeded in maintaining the value of the U.S. dollar and price stability, reversing the inflation caused by the First World War and stabilizing the value of the dollar during the 1920s, before presiding over a 30 % deflation in U.S. prices in the 1930s.
Under the Bretton Woods system established after World War II, the value of gold was fixed to $35 per ounce, and the value of the U.S. dollar was thus anchored to the value of gold. Rising government spending in the 1960s, however, led to doubts about the ability of the United States to maintain this convertibility, gold stocks dwindled as banks and international investors began to convert dollars to gold, and as a result the value of the dollar began to decline. Facing an emerging currency crisis and the imminent danger that the United States would no longer be able to redeem dollars for gold, gold convertibility was finally terminated in 1971 by President Nixon, resulting in the "Nixon shock ''.
The value of the U.S. dollar was therefore no longer anchored to gold, and it fell upon the Federal Reserve to maintain the value of the U.S. currency. The Federal Reserve, however, continued to increase the money supply, resulting in stagflation and a rapidly declining value of the U.S. dollar in the 1970s. This was largely due to the prevailing economic view at the time that inflation and real economic growth were linked (the Phillips curve), and so inflation was regarded as relatively benign. Between 1965 and 1981, the U.S. dollar lost two thirds of its value.
In 1979, President Carter appointed Paul Volcker Chairman of the Federal Reserve. The Federal Reserve tightened the money supply and inflation was substantially lower in the 1980s, and hence the value of the U.S. dollar stabilized.
Over the thirty - year period from 1981 to 2009, the U.S. dollar lost over half its value. This is because the Federal Reserve has targeted not zero inflation, but a low, stable rate of inflation -- between 1987 and 1997, the rate of inflation was approximately 3.5 %, and between 1997 and 2007 it was approximately 2 %. The so - called "Great Moderation '' of economic conditions since the 1970s is credited to monetary policy targeting price stability.
There is ongoing debate about whether central banks should target zero inflation (which would mean a constant value for the U.S. dollar over time) or low, stable inflation (which would mean a continuously but slowly declining value of the dollar over time, as is the case now). Although some economists are in favor of a zero inflation policy and therefore a constant value for the U.S. dollar, others contend that such a policy limits the ability of the central bank to control interest rates and stimulate the economy when needed.
|
the time of their lives abbot and costello | The time of their Lives - wikipedia
The Time of Their Lives is a 1946 American fantasy - comedy film starring the comedic duo Abbott and Costello.
The time is 1780, and Horatio Prim (Lou Costello) is a master tinker. He travels to the Kings Point estate of Tom Danbury (Jess Barker) with a letter of commendation from Gen. George Washington. He plans to present this letter to Danbury, hoping it will persuade the wealthy man to let Horatio marry Nora O'Leary (Anne Gillis), Danbury 's housemaid. Unfortunately, Horatio has a romantic rival in Cuthbert Greenway (Bud Abbott), Danbury 's butler, who is very fond of Nora and intends to prevent Horatio from presenting his letter, which Nora has taken for safekeeping.
Nora happens to overhear Danbury discussing his part in Benedict Arnold 's plot; Danbury captures her, and hides the commendation letter in a secret compartment of the mantel clock. Danbury 's fiancée, Melody Allen (Marjorie Reynolds), witnesses the situation and sets off on horseback to warn Washington 's army. She enlists Horatio 's help, but the two of them are mistakenly shot by American troops who are arriving at the estate. Their bodies are thrown down a well, and the soldiers ransack the house and burn it to the ground. The souls of the two unfortunates are condemned to remain on the estate until the "crack of doom '' unless evidence of their innocence can be proved to the world.
For the next 166 years the ghosts of Horatio and Melody roam the grounds of the estate. Then, in the 1940s, the estate is restored by Sheldon Gage (John Shelton). When the restoration is finished, complete with the "original '' furniture (which was removed before the estate 's fateful burning), Sheldon invites some friends to spend the night there. Accompanying him are his psychiatrist, Dr. Ralph Greenway (Bud Abbott), a descendant of Cuthbert, as well as Sheldon 's fiancée, June Prescott (Lynn Baggett) and her Aunt Millie (Binnie Barnes).
Upon arriving they are greeted by Emily (Gale Sondergaard), the maid who strongly believes that the estate is haunted. Ghosts Horatio and Melody have some fun with this idea and try to scare the guests (playing the harpsichord, turning on the radio full volume), especially Greenway whom Horatio mistakes for Cuthbert and hits with a candlestick. The newcomers hold a séance (during which Dr. Greenway is struck by Horatio for asking if he and Melody are "the two traitors '' buried in the well) and learn the identities of the two ghosts, and of the letter which can free them (the spirit of Tom, channeling through Emily, relays the secret combination to open the clock and reveal the letter).
They search for the letter but soon learn that not all of the furniture is original, as the clock that holds the letter sits in a New York museum. Greenway, as a way of atoning for the cruelty of his predecessor, travels to the museum to retrieve the letter. However, unexpected events force him to steal it. He arrives back at the estate with the state police in pursuit. Horatio uses the curse to his and Melody 's advantage by riding in the police car that is supposed to take Greenway to jail; the car is thus prevented from leaving the estate, until the clock is opened. Finally the letter is found, and Melody and Horatio leave the estate to enter heaven, each called by a loved one (Melody by Tom & Horatio by Nora). Unfortunately for Horatio, who is met at heaven 's gate by Nora, he must wait one more day, as Nora points to a sign that says "Closed for Washington 's Birthday ''.
The Time of Their Lives was filmed at Universal Studios from March 6 through May 15, 1946.
As in the duo 's preceding film, Little Giant, Abbott and Costello do not play friends or any kind of partners, but are simply individual members of the cast. Also as in the previous movie, Costello 's character is largely the hero, while Abbott plays a somewhat unsympathetic dual role. The two comics ' trademark vaudeville routines are absent, and in fact they speak directly to each other only during one scene at the beginning of the film. (This loosening of the comics ' onscreen partnership was reportedly due to personal problems and tensions that briefly broke up their act in 1945.) The film is also noted for having a somewhat darker and more serious tone than most films featuring the duo.
Abbott learned to drive a car for this film. According to his son, Bud Abbott, Jr., this was the only time in his life he ever drove an automobile.
A few weeks into filming, Costello wanted to switch roles with Abbott. He refused to work, but director Charles Barton waited him out. Costello eventually returned to work and said nothing more about it.
The film was re-released in 1951, along with Little Giant.
The film has been released three times on VHS: 1989, 1991, and 2000. It has also been released twice on DVD: The Best of Abbott and Costello Volume Two on May 4, 2004; and on October 28, 2008 as part of Abbott and Costello: The Complete Universal Pictures Collection.
|
where did copper come from for bronze age | Tin sources and trade in ancient times - wikipedia
Tin is an essential metal in the creation of tin bronzes, and its acquisition was an important part of ancient cultures from the Bronze Age onward. Its use began in the Middle East and the Balkans around 3000 BC. Tin is a relatively rare element in the Earth 's crust, with about 2 parts per million (ppm), compared to iron with 50,000 ppm, copper with 70 ppm, lead with 16 ppm, arsenic with 5 ppm, silver with 0.1 ppm, and gold with 0.005 ppm (Valera & Valera 2003, p. 10). Ancient sources of tin were therefore rare, and the metal usually had to be traded over very long distances to meet demand in areas which lacked tin deposits.
Known sources of tin in ancient times include the southeastern tin belt that runs from Yunnan in China to the Malay Peninsula; Devon and Cornwall in England; Brittany in France; the border between Germany and the Czech Republic; Spain; Portugal; Italy; and central and South Africa (Wertime 1979, p. 1; Muhly 1979). Syria and Egypt have been suggested as minor sources of tin, but the archaeological evidence is inconclusive.
Tin extraction and use can be dated to the beginning of the Bronze Age around 3000 BC, during which copper objects formed from polymetallic ores had different physical properties (Cierny & Weisgerber 2003, p. 23). The earliest bronze objects had tin or arsenic content of less than 2 % and are therefore believed to be the result of unintentional alloying due to trace metal content in copper ores such as tennantite, which contains arsenic (Penhallurick 1986, p. 4). The addition of a second metal to copper increases its hardness, lowers the melting temperature, and improves the casting process by producing a more fluid melt that cools to a denser, less spongy metal (Penhallurick 1986, pp. 4 -- 5). This was an important innovation that allowed for the much more complex shapes cast in closed molds of the Bronze Age. Arsenical bronze objects appear first in the Middle East where arsenic is commonly found in association with copper ore, but the health risks were quickly realized and the quest for sources of the much less hazardous tin ores began early in the Bronze Age (Charles 1979, p. 30). This created the demand for rare tin metal and formed a trade network that linked the distant sources of tin to the markets of Bronze Age cultures.
Cassiterite (SnO2), oxidized tin, most likely was the original source of tin in ancient times. Other forms of tin ores are less abundant sulfides such as stannite that require a more involved smelting process. Cassiterite often accumulates in alluvial channels as placer deposits due to the fact that it is harder, heavier, and more chemically resistant than the granite in which it typically forms (Penhallurick 1986). These deposits can be easily seen in river banks, because cassiterite is usually black or purple or otherwise dark, a feature exploited by early Bronze Age prospectors. It is likely that the earliest deposits were alluvial and perhaps exploited by the same methods used for panning gold in placer deposits.
The importance of tin to the success of Bronze Age cultures and the scarcity of the resource offers a glimpse into that time period 's trade and cultural interactions, and has therefore been the focus of intense archaeological studies. However, a number of problems have plagued the study of ancient tin such as the limited archaeological remains of placer mining, the destruction of ancient mines by modern mining operations, and the poor preservation of pure tin objects due to tin disease or tin pest. These problems are compounded by the difficulty in provenancing tin objects and ores to their geological deposits using isotopic or trace element analyses. Current archaeological debate is concerned with the origins of tin in the earliest Bronze Age cultures of the Near East (Penhallurick 1986; Cierny & Weisgerber 2003; Dayton 1971; Giumlia - Mair 2003; Muhly 1979; Muhly 1985).
Europe has very few sources of tin. Therefore, throughout ancient times it was imported long distances from the known tin mining districts of antiquity. These were Erzgebirge along the modern border between Germany and Czech Republic, the Iberian Peninsula, Brittany in modern France, and Devon and Cornwall in southwestern Britain (Benvenuti et al. 2003, p. 56; Valera & Valera 2003, p. 11). There are several smaller sources of tin in the Balkans (Mason et al. 2016, p. 110) and another minor source of tin is known to exist at Monte Valerio in Tuscany, Italy. The Tuscan source was exploited by Etruscan miners around 800 BC, but it was not a significant source of tin for the rest of the Mediterranean (Benvenuti et al. 2003). The Etruscans themselves found the need to import tin from the northwest of the Iberian Peninsula at that time and later from Cornwall (Penhallurick 1986, p. 80).
It has been claimed that tin was first mined in Europe around 2500 BC in Erzgebirge, and knowledge of tin bronze and tin extraction techniques spread from there to Brittany and Cornwall around 2000 BC and from northwestern Europe to northwestern Spain and Portugal around the same time (Penhallurick 1986, p. 93). However, the only Bronze Age object from Central Europe whose tin has been scientifically provenanced is the Nebra sky disk, and its tin (and gold, though not its copper), is shown by tin isotopes to have come from Cornwall (Haustein, Gillis & Pernicka 2010). In addition, a rare find of a pure tin ingot in Scandinavia was provenanced to Cornwall (Ling et al. 2014). Available evidence, though very limited, thus points to Cornwall as the sole early source of tin in Central and Northern Europe.
Brittany -- adjacent to Cornwall on the Celtic Sea -- has significant sources of tin which show evidence of being extensively exploited after the Roman conquest of Gaul during the first century BC and onwards (Penhallurick 1986, pp. 86 -- 91). Brittany remained a significant source of tin throughout the medieval period.
A group of 52 bronze artifacts from the late Bronze Age Balkans has been shown to have tin of multiple origins, based on the correlation of tin isotope differences with the different find locations of the artifacts. While the locations of these separate tin sources are uncertain, the larger Serbian group of artifacts is inferred to be derived from tin sources in western Serbia (e.g. Mount Cer), while the smaller group, largely from western Romania, is inferred to have western Romanian origins (Mason et al. 2016, p. 116).
Iberian tin was widely traded across the Mediterranean during the Bronze Age, and extensively exploited during Roman times. But Iberian tin deposits were largely forgotten throughout the medieval period, were not rediscovered till the 18th century, and only re-gained importance during the mid-19th century (Penhallurick 1986, pp. 100 -- 101).
Cornwall and Devon were important sources of tin for Europe and the Mediterranean throughout ancient times and may have been the earliest sources of tin in Western Europe. But within the historical period, they only dominated the European market from late Roman times in the 3rd century AD, with the exhaustion of many Spanish tin mines (Gerrard 2000, p. 21). Cornwall maintained its importance as a source of tin throughout medieval times and into the modern period (Gerrard 2000).
Western Asia has very little tin ore; the few sources that have recently been found are too insignificant to have played a major role during most of ancient history (Cierny & Weisgerber 2003, p. 23). However, it is possible that they were exploited at the onset of the Bronze Age and are responsible for the development of early bronze manufacturing technology (Muhly 1973; Muhly 1979). Kestel, in Southern Turkey, is the site of an ancient cassiterite mine that was used from 3250 to 1800 BC. It contains miles of tunnels, some only large enough for a child. A grave containing children, who were probably workers, has been found. It was abandoned, with crucibles and other tools left at the site. The next evidence of the production of pure tin in the Middle East is an ingot from the 1300 BC Uluburun shipwreck off the coast of Turkey (Hauptmann, Maddin & Prange 2002).
While there are a few sources of cassiterite in Central Asia, namely in Uzbekistan, Tajikistan, and Afghanistan that show signs of having been exploited starting around 2000 BC (Cierny & Weisgerber 2003, p. 28), archaeologists disagree about whether they were significant sources of tin for the earliest Bronze Age cultures of the Middle East (Dayton 2003; Muhly 1973; Maddin 1998; Stech & Pigott 1986).
In Northern Asia the only tin deposits considered exploitable by ancient peoples occur in the far eastern region of Siberia (Dayton 2003, p. 165). This source of tin appears to have been exploited by the Eurasian Steppe people known as the Turbino culture of the Middle Bronze Age (1000 BC) as well as northern Chinese cultures around the same time (Penhallurick 1986, p. 35).
Eastern Asia has a number of small cassiterite deposits along the Yellow River which were exploited by the earliest Chinese Bronze Age culture of Erlitou and the Shang Dynasty (2500 to 1800 BC). However, the richest deposits for the region, and indeed the world, lie in Southeastern Asia, stretching from Yunnan in China to the Malay Peninsula. The deposits in Yunnan were not mined until around 700 BC, but by the Han Dynasty had become the main source of tin in China according to historical texts of the Han, Jin, Tang, and Song dynasties (Murowchick 1991, pp. 76 -- 77). Other cultures of Southeast Asia exploited the abundant cassiterite resources sometime between second and third millennia BC, but due to the lack of archaeological work in the region little else is known about tin exploitation during ancient times in that part of the world.
Tin was used in the Indian subcontinent starting between 1500 and 1000 BC (Hedge 1979, p. 39; Chakrabarti & Lahiri 1996). While India does have some small scattered deposits of tin, they were not a major source of tin for Indian Bronze Age cultures as shown by their dependence on imported tin.
While rich veins of tin are known to exist in Central and South Africa, whether these were exploited during ancient times is still debated (Dayton 2003, p. 165). However, the Bantu culture of Zimbabwe is known to have actively mined, smelted and traded tin between the 11th and 15th centuries AD (Penhallurick 1986, p. 11).
Tin deposits exist in many parts of South America, with minor deposits in southern Peru, Colombia, Brazil, and northwestern Argentina, and major deposits of exploitable cassiterite in northern Bolivia. These deposits were exploited as early as 1000 AD in the manufacture of tin bronze by Andean cultures, including the later Inca Empire, which considered tin bronze the "imperial alloy ''. In North America, the only known exploitable source of tin during ancient times is located in the Zacatecas tin province of north central Mexico which supplied west Mexican cultures with enough tin for bronze production (Lechtman 1996, p. 478).
The tin belt of Southeast Asia extends all the way down to Tasmania, but metals were not exploited in Australia until the arrival of Europeans in the 18th century.
Due to the scattered nature of tin deposits around the world and its essential nature for the creation of tin bronze, tin trade played an important role in the development of cultures throughout ancient times. Archaeologists have reconstructed parts of the extensive trade networks of ancient cultures from the Bronze Age to modern times using historical texts, archaeological excavations, and trace element and lead isotope analysis to determine the origins of tin objects around the world (Valera & Valera 2003; Rovia & Montero 2003; Maddin 1998).
The earliest sources of tin in the Early Bronze Age in the Near East are still unknown and the subject of much debate in archaeology (Dayton 1971; Dayton 2003; Maddin 1998; Muhly 1973; Penhallurick 1986; Stech & Pigott 1986; Kalyanaraman 2010). Possibilities include minor now - depleted sources in the Near East, trade from Central Asia (Muhly 1979), Sub-Saharan Africa (Dayton 2003), Europe, or elsewhere.
It is possible that as early as 2500 BC, Erzgebirge had begun exporting tin, using the well established Baltic amber trade route to supply Scandinavia as well as the Mediterranean with tin (Penhallurick 1986, pp. 75 -- 77). By 2000 BC, the extraction of tin in Britain, France, Spain, and Portugal had begun and tin was traded to the Mediterranean sporadically from all these sources. Evidence of tin trade in the Mediterranean can be seen in a number of Bronze Age shipwrecks containing tin ingots such as the Uluburun off the coast of Turkey dated 1300 BC which carried over 300 copper bars weighing 10 tons, and approximately 40 tin bars weighing 1 ton (Pulak 2001). While Sardinia does not appear to have much in terms of significant sources of tin, it does have rich copper and other mineral wealth and served as a centre for metals trade during the Bronze Age and likely actively imported tin from the Iberian Peninsula for export to the rest of the Mediterranean (Lo Schiavo 2003).
By classical Greek times, the tin sources were well established. Greece and the Western Mediterranean appear to have traded their tin from European sources, while the Middle East acquired their tin from Central Asian sources through the Silk Road (Muhly 1979, p. 45). For example, Iron Age Greece had access to tin from Iberia by way of the Phoenicians who traded extensively there, from Erzgebirge by way of the Baltic Amber Road overland route, or from Brittany and Cornwall through overland routes from their colony at Massalia (modern day Marseilles) established in the 6th century BC (Penhallurick 1986). In 450 BC, Herodotus described tin as coming from Northern European islands named the Cassiterides along the extreme borders of the world, suggesting very long distance trade, likely from Britain, northwestern Iberia, or Brittany, supplying tin to Greece and other Mediterranean cultures (Valera & Valera 2003, p. 11). It should be noted that the idea that the Phoenicians went to Cornwall for its tin and supplied it to the whole of the Mediterranean has no archaeological basis and is largely considered a myth (Penhallurick 1986, p. 123).
The early Roman world was mainly supplied with tin from its Iberian provinces of Gallaecia and Lusitania and to a lesser extent Tuscany. Pliny mentions that in 80 BC, a senatorial decree halted all mining on the Italian Peninsula, stopping any tin mining activity in Tuscany and increasing Roman dependence on tin from Brittany, Iberia, and Cornwall. After the Roman conquest of Gaul, Brittany 's tin deposits saw intensified exploitation after the first century BC (Penhallurick 1986, pp. 86 -- 91). With the exhaustion of the Iberian tin mines, Cornwall became a major supplier of tin for the Romans after the 3rd century AD (Gerrard 2000).
Throughout the medieval period, demand for tin increased as pewter gained popularity. Brittany and Cornwall remained the major producers and exporters of tin throughout the Mediterranean through to modern times (Gerrard 2000).
Near Eastern development of bronze technology spread across Central Asia by way of the Eurasian Steppes, and with it came the knowledge and technology for tin prospection and extraction. By 2000 to 1500 BC Uzbekistan, Afghanistan, and Tajikistan appear to have exploited their sources of tin, carrying the resources east and west along the Silk Road crossing Central Asia (Cierny & Weisgerber 2003, p. 28). This trade link likely followed an existing trade route of lapis lazuli, a highly prized semi-precious blue gemstone, and chlorite vessels decorated with turquoise from Central Asia that have been found as far west as Egypt and that date to the same period (Giumlia - Mair 2003, p. 93).
In China, early tin was extracted along the Yellow River in Erlitou and Shang times between 2500 and 1800 BC. By Han and later times, China imported its tin from what is today Yunnan province. This has remained China 's main source of tin throughout history and into modern times (Murowchick 1991).
It is unlikely that Southeast Asian tin from Indochina was widely traded around the world in ancient times as the area was only opened up to Indian, Muslim, and European traders around 800 AD (Penhallurick 1986, p. 51).
Indo -- Roman trade relations are well known from historical texts such as Pliny 's Natural History (book VI, 26), and tin is mentioned as one of the resources being exported from Rome to South Arabia, Somaliland, and India (Penhallurick 1986, p. 53; Dayton 2003, p. 165).
|
does the system of all subsets of a finite set under the operation subset of | Power set - wikipedia
In mathematics, the power set (or powerset) of any set S is the set of all subsets of S, including the empty set and S itself, variously denoted as P (S), ℘ (S) (using the "Weierstrass p ''), or, identifying the power set of S with the set of all functions from S to a given set of two elements, 2^S. In axiomatic set theory (as developed, for example, in the ZFC axioms), the existence of the power set of any set is postulated by the axiom of power set.
Any subset of P (S) is called a family of sets over S.
If S is the set {x, y, z}, then the subsets of S are {} (the empty set), {x}, {y}, {z}, {x, y}, {x, z}, {y, z}, and {x, y, z},
and hence the power set of S is P (S) = { {}, {x}, {y}, {z}, {x, y}, {x, z}, {y, z}, {x, y, z} }.
If S is a finite set with |S| = n elements, then the number of subsets of S is |P (S)| = 2^n. This fact, which is the motivation for the notation 2^S, may be demonstrated simply as follows: each of the n elements of S is either included in or excluded from a given subset, and combining these n independent two-way choices yields 2 × 2 × ⋯ × 2 = 2^n possible subsets.
Cantor 's diagonal argument shows that the power set of a set (whether infinite or not) always has strictly higher cardinality than the set itself (informally the power set must be larger than the original set). In particular, Cantor 's theorem shows that the power set of a countably infinite set is uncountably infinite. The power set of the set of natural numbers can be put in a one - to - one correspondence with the set of real numbers (see Cardinality of the continuum).
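As an illustrative sketch of the diagonal argument (a standard outline, included here only for clarity): given any function f from S to P (S), consider the set

\[
D = \{\, x \in S : x \notin f(x) \,\}.
\]

If D were equal to f (y) for some y in S, then y would belong to D exactly when y does not belong to f (y) = D, a contradiction; hence no function from S onto P (S) exists, and P (S) has strictly greater cardinality than S.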
The power set of a set S, together with the operations of union, intersection and complement can be viewed as the prototypical example of a Boolean algebra. In fact, one can show that any finite Boolean algebra is isomorphic to the Boolean algebra of the power set of a finite set. For infinite Boolean algebras this is no longer true, but every infinite Boolean algebra can be represented as a subalgebra of a power set Boolean algebra (see Stone 's representation theorem).
The power set of a set S forms an abelian group when considered with the operation of symmetric difference (with the empty set as the identity element and each set being its own inverse) and a commutative monoid when considered with the operation of intersection. It can hence be shown (by proving the distributive laws) that the power set considered together with both of these operations forms a Boolean ring.
In set theory, X^Y is the set of all functions from Y to X. As "2 '' can be defined as {0, 1} (see natural number), 2^S (i.e., {0, 1}^S) is the set of all functions from S to {0, 1}. By identifying a function in 2^S with the corresponding preimage of 1, we see that there is a bijection between 2^S and P (S), where each function is the characteristic function of the subset in P (S) with which it is identified. Hence 2^S and P (S) could be considered identical set - theoretically. (Thus there are two distinct notational motivations for denoting the power set by 2^S: the fact that this function - representation of subsets makes it a special case of the X^Y notation and the property, mentioned above, that |2^S| = 2^|S|.)
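For illustration, this identification between subsets and characteristic functions can be sketched in a few lines of Python (the set S and the helper names char_fn and subset_of are invented purely for this example):

```python
S = ["x", "y", "z"]

def char_fn(subset):
    # The characteristic function of `subset`: an element of 2^S,
    # i.e. a function from S to {0, 1}.
    return {e: (1 if e in subset else 0) for e in S}

def subset_of(f):
    # Recover the subset as the preimage of 1 under f.
    return {e for e in S if f[e] == 1}

f = char_fn({"x", "z"})              # {'x': 1, 'y': 0, 'z': 1}
assert subset_of(f) == {"x", "z"}    # the two representations determine each other
```

Each subset determines exactly one such function and vice versa, which is the bijection between 2^S and P (S) described above.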
This notion can be applied to the example above in which S = {x, y, z} to see the isomorphism with the binary numbers from 0 to 2^n − 1, with n being the number of elements in the set. In S, a "1 '' in the position corresponding to the location in the set indicates the presence of the element. So {x, y} = 110.
For the whole power set of S we get: { } = 000, {z} = 001, {y} = 010, {y, z} = 011, {x} = 100, {x, z} = 101, {x, y} = 110, and {x, y, z} = 111.
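This correspondence with binary numbers can also be sketched directly. The following minimal Python example (with S and the bit ordering chosen only to match the convention above, x being the leading digit) simply counts from 0 to 2^n − 1:

```python
S = ["x", "y", "z"]
n = len(S)

# Each binary number from 0 to 2^n - 1 encodes one subset: a "1" in a given
# position means the corresponding element of S is present, so 110 -> {x, y}.
for code in range(2 ** n):
    bits = format(code, "0{}b".format(n))
    subset = {S[i] for i in range(n) if bits[i] == "1"}
    print(bits, subset)
```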
The power set is closely related to the binomial theorem. The number of subsets with k elements in the power set of a set with n elements is given by the number of combinations, C (n, k), also called binomial coefficients.
For example, the power set of a set with three elements has: C (3, 0) = 1 subset with 0 elements (the empty subset), C (3, 1) = 3 subsets with 1 element (the singleton subsets), C (3, 2) = 3 subsets with 2 elements, and C (3, 3) = 1 subset with 3 elements (the set itself).
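For the three-element case this can be checked directly, and the general identity follows from the binomial theorem with x = y = 1:

\[
\binom{3}{0} + \binom{3}{1} + \binom{3}{2} + \binom{3}{3} = 1 + 3 + 3 + 1 = 8 = 2^3,
\qquad
\sum_{k=0}^{n} \binom{n}{k} = 2^n.
\]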
If S is a finite set, there is a recursive algorithm to calculate P (S).
Define the operation F (e, T) = { X ∪ {e} : X ∈ T }.
In English, return the set with the element e added to each set X in T.
If S is the empty set { }, then P (S) = { { } } is returned; otherwise, let e be any element of S, let T = S ∖ {e}, and return P (T) together with F (e, P (T)). In other words, the power set of the empty set is the set containing only the empty set, and the power set of any other set is all the subsets containing some specific element together with all the subsets not containing that specific element.
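A minimal Python sketch of this recursion (the function name power_set is chosen for the example; a list of sets is used because Python sets cannot contain sets):

```python
def power_set(s):
    # P({}) = { {} }; otherwise pick any element e, recurse on s \ {e},
    # and return those subsets together with F(e, .) applied to them.
    s = set(s)
    if not s:
        return [set()]
    e = next(iter(s))
    t = power_set(s - {e})                 # all subsets not containing e
    return t + [x | {e} for x in t]        # F(e, t): add e to each subset

print(len(power_set({"x", "y", "z"})))     # 8, matching |P(S)| = 2^3
```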
The set of subsets of S of cardinality less than κ is denoted by P_{<κ} (S) or P_κ (S). Similarly, the set of non-empty subsets of S might be denoted by P_{≥1} (S) or P^+ (S).
A set can be regarded as an algebra having no nontrivial operations or defining equations. From this perspective the idea of the power set of X as the set of subsets of X generalizes naturally to the subalgebras of an algebraic structure or algebra.
Now the power set of a set, when ordered by inclusion, is always a complete atomic Boolean algebra, and every complete atomic Boolean algebra arises as the lattice of all subsets of some set. The generalization to arbitrary algebras is that the set of subalgebras of an algebra, again ordered by inclusion, is always an algebraic lattice, and every algebraic lattice arises as the lattice of subalgebras of some algebra. So in that regard subalgebras behave analogously to subsets.
However, there are two important properties of subsets that do not carry over to subalgebras in general. First, although the subsets of a set form a set (as well as a lattice), in some classes it may not be possible to organize the subalgebras of an algebra as itself an algebra in that class, although they can always be organized as a lattice. Secondly, whereas the subsets of a set are in bijection with the functions from that set to the set {0, 1} = 2, there is no guarantee that a class of algebras contains an algebra that can play the role of 2 in this way.
Certain classes of algebras enjoy both of these properties. The first property is more common, the case of having both is relatively rare. One class that does have both is that of multigraphs. Given two multigraphs G and H, a homomorphism h: G → H consists of two functions, one mapping vertices to vertices and the other mapping edges to edges. The set H^G of homomorphisms from G to H can then be organized as the graph whose vertices and edges are respectively the vertex and edge functions appearing in that set. Furthermore, the subgraphs of a multigraph G are in bijection with the graph homomorphisms from G to the multigraph Ω definable as the complete directed graph on two vertices (hence four edges, namely two self - loops and two more edges forming a cycle) augmented with a fifth edge, namely a second self - loop at one of the vertices. We can therefore organize the subgraphs of G as the multigraph Ω^G, called the power object of G.
What is special about a multigraph as an algebra is that its operations are unary. A multigraph has two sorts of elements forming a set V of vertices and E of edges, and has two unary operations s, t: E → V giving the source (start) and target (end) vertices of each edge. An algebra all of whose operations are unary is called a presheaf. Every class of presheaves contains a presheaf Ω that plays the role for subalgebras that 2 plays for subsets. Such a class is a special case of the more general notion of elementary topos as a category that is closed (and moreover cartesian closed) and has an object Ω, called a subobject classifier. Although the term "power object '' is sometimes used synonymously with exponential object Y^X, in topos theory Y is required to be Ω.
In category theory and the theory of elementary topoi, the universal quantifier can be understood as the right adjoint of a functor between power sets, the inverse image functor of a function between sets; likewise, the existential quantifier is the left adjoint.
|
who narrates the prologue in beauty and the beast | Beauty and the Beast (2017 film) - wikipedia
Beauty and the Beast is a 2017 American musical romantic fantasy film directed by Bill Condon from a screenplay written by Stephen Chbosky and Evan Spiliotopoulos, and co-produced by Walt Disney Pictures and Mandeville Films. The film is based on Disney 's 1991 animated film of the same name, itself an adaptation of Jeanne - Marie Leprince de Beaumont 's eighteenth - century fairy tale. The film features an ensemble cast that includes Emma Watson and Dan Stevens as the titular characters with Luke Evans, Kevin Kline, Josh Gad, Ewan McGregor, Stanley Tucci, Audra McDonald, Gugu Mbatha - Raw, Ian McKellen, and Emma Thompson in supporting roles.
Principal photography began at Shepperton Studios in Surrey, United Kingdom on May 18, 2015, and ended on August 21. Beauty and the Beast premiered on February 23, 2017, at Spencer House in London, and was released in the United States on March 17, 2017, in standard, Disney Digital 3 - D, RealD 3D, IMAX and IMAX 3D formats, along with Dolby Cinema. The film received generally positive reviews from critics, with many praising Watson and Stevens ' performances as well as the ensemble cast, faithfulness to the original animated film alongside elements from the Broadway musical, visual style, production design, and musical score, though it received criticism for some of the character designs and its excessive similarity to the original. The film grossed over $1.2 billion worldwide, becoming the highest - grossing live - action musical film, and making it the highest - grossing film of 2017 and the 10th - highest - grossing film of all time.
In Rococo - era France, an enchantress disguised as a beggar arrives at a ball and offers the host, a coldhearted prince, a rose for shelter. When he refuses, she transforms him into a beast and his servants into household objects, and erases the castle from the memories of their loved ones. She casts a spell on the rose and warns the prince that the curse will never lift unless he learns to love another, and earn their love in return, before the last petal falls.
Years later, in the village of Villeneuve, Belle dreams of adventure and brushes off advances from Gaston, an arrogant former soldier. Lost in the forest, Belle 's father Maurice seeks refuge in the Beast 's castle, but the Beast imprisons him for stealing a rose from his garden as a birthday gift to Belle. Belle ventures out in search of him and finds him locked in the castle dungeon. The Beast agrees to let her take Maurice 's place.
Belle befriends the castle 's servants, who treat her to a spectacular dinner. When she wanders into the forbidden west wing and finds the rose, the Beast, enraged, scares her into the woods. She is cornered by wolves, but the Beast rescues her and is injured in the process. As Belle nurses his wounds, a friendship develops between them. The Beast shows Belle a gift from the enchantress, a book that transports readers wherever they want. Belle uses it to visit her childhood home in Paris, where she discovers a plague doctor mask and realises that she and her father were forced to leave her mother 's deathbed when her mother succumbed to the plague.
In Villeneuve, Gaston sees rescuing Belle as an opportunity to win her hand in marriage and agrees to help Maurice. When Maurice learns of his ulterior motive and rejects him, Gaston abandons him to the wolves. Maurice is rescued by the herb - wife Agathe, but when he tells the townsfolk of Gaston 's crime and is unable to provide solid evidence, Gaston convinces them to send Maurice to an insane asylum.
After sharing a romantic dance with the Beast, Belle discovers her father 's predicament using a magic mirror. The Beast releases her to save Maurice, giving her the mirror to remember him with. At Villeneuve, Belle proves Maurice 's sanity by revealing the Beast in the mirror to the townsfolk. Realizing that Belle loves the Beast, Gaston has her thrown into the asylum carriage with her father and rallies the villagers to follow him to the castle to slay the Beast. Maurice and Belle escape, and Belle rushes back to the castle.
During the battle, Gaston abandons his companion LeFou, who then sides with the servants to fend off the villagers. Gaston attacks the Beast in his tower; the Beast is too depressed to fight back, but regains his will upon seeing Belle return. He overpowers Gaston but spares his life before reuniting with Belle. Gaston then fatally shoots the Beast from a bridge, which collapses as the castle crumbles, and he falls to his death. The Beast dies as the last petal falls and the servants become inanimate. When Belle tearfully professes her love to him, Agathe reveals herself as the enchantress and undoes the curse, repairing the crumbling castle and restoring the Beast 's and servants ' human forms and the villagers ' memories. The Prince and Belle host a ball for the kingdom, where they dance happily.
^ In the initial theatrical release, Mitchell was miscredited as Rudi Gooman in the cast, but listed under his real name in the soundtrack credits.
^ In the initial theatrical release, Turner is miscredited as Henry Garrett in the cast.
Stephen Merchant also appeared in the film as Monsieur Toilette, a servant who was turned into a toilet. This character was cut from the film, but is featured in the deleted scenes.
Previously, Disney had begun work on a film adaptation of the 1994 Broadway musical. However, in a 2011 interview, composer Alan Menken stated the planned film version of the Beauty and the Beast stage musical "was canned ''.
By April 2014, Walt Disney Pictures had already begun developing a new live - action version and remake of Beauty and the Beast after making other live - action fantasy films such as Alice in Wonderland, Maleficent, Cinderella and The Jungle Book. In June 2014, Bill Condon was signed to direct the film from a script by Evan Spiliotopoulos. Later in September of that same year, Stephen Chbosky (who had previously directed Watson in The Perks of Being a Wallflower) was hired to re-write the script.
Before Condon was hired to direct the film, Disney approached him with a proposal to remake the film in a more radical way as Universal Studios had remade Snow White and the Huntsman (2012). Condon later explained that "after Frozen opened, the studio saw that there was this big international audience for an old - school - musical approach. But initially, they said, ' We 're interested in a musical to a degree, but only half full of songs. ' My interest was taking that film and doing it in this new medium -- live - action -- as a full - on musical movie. So I backed out for a minute, and they came back and said, ' No, no, no, we get it, let 's pursue it that way. ' '' Walt Disney Pictures president of production Sean Bailey credited Walt Disney Studios chairman Alan F. Horn with the decision to make the film as a musical: "We worked on this for five or six years, and for 18 months to two years, Beauty was a serious dramatic project, and the scripts were written to reflect that. It was n't a musical at that time. But we just could n't get it to click and it was Alan Horn who championed the idea of owning the Disney of it all. We realized there was a competitive advantage in the songs. What is wrong with making adults feel like kids again? ''
In January 2015, Emma Watson announced that she would be starring as Belle, the female lead. Watson was the first choice of Walt Disney Studios chairman Alan F. Horn, as he had previously overseen Warner Bros. which released the eight Harry Potter films that co-starred Watson as Hermione Granger. Two months later, Luke Evans and Dan Stevens were revealed to be in talks to play Gaston and the Beast respectively, and Watson confirmed their casting the following day through tweets. The rest of the principal cast, including Josh Gad, Emma Thompson, Kevin Kline, Audra McDonald, Ian McKellen, Gugu Mbatha - Raw, Ewan McGregor and Stanley Tucci were announced between March and April to play LeFou, Mrs. Potts, Maurice, Madame de Garderobe, Cogsworth, Plumette, Lumière and Cadenza, respectively.
Susan Egan, who originated the role of Belle on Broadway, commented on the casting of Watson as "perfect ''. Paige O'Hara, who voiced Belle in the original animated film and its sequels, offered to help Watson with her singing lessons.
According to The Hollywood Reporter, Emma Watson was reportedly paid $3 million upfront, together with an agreement that her final take - home pay could rise as high as $15 million if the film generated gross box office income similar to Maleficent 's $759 million worldwide gross.
Principal photography on the film began at Shepperton Studios in Surrey, United Kingdom, on May 18, 2015. Filming with the principal actors concluded on August 21. Six days later, co-producer Jack Morrissey confirmed that the film had officially wrapped production.
The Beast was portrayed with a "more traditional motion capture puppeteering for the body and the physical orientation '', where actor Dan Stevens was "in a forty - pound gray suit on stilts for much of the film ''. The facial capture for the Beast was done separately in order to "communicate the subtleties of the human face '' and "(capture the) thought that occurs to him '' which gets "through (to) the eyes, which are the last human element in the Beast. '' The castle servants who are transformed into household objects were created with CGI animation.
Before the release of the film, Bill Condon refilmed a certain sequence in the "Days of the Sun '' number, due to confusion among test audiences caused by actress Harriet Jones, who looked similar to Hattie Morahan, who portrayed Agathe. In the original version of the scene, it was Jones 's character, the Prince 's mother, who sings the first verse of the song, with Rudi Goodman playing the young Prince and Henry Garrett playing his father; but in the reshot version of the scene, the singing part is given to the Prince (now played by Adam Mitchell). The King was also recast to Tom Turner, although Harriet Jones was still the Queen, albeit with dark hair. Both Goodman 's and Garrett 's names were mistakenly featured in the original theatrical release 's credits, but were later corrected in home releases.
When released in 1991, Beauty and the Beast marked a turning point for Walt Disney Pictures by appealing to millions of fans with its Oscar - winning musical score by lyricist Howard Ashman and composer Alan Menken. In Bill Condon 's opinion, that original score was the key reason he agreed to direct a live - action version of the movie. "That score had more to reveal '', he says, "You look at the songs and there 's not a clunker in the group. In fact, Frank Rich described it as the best Broadway musical of 1991. The animated version was already darker and more modern than the previous Disney fairytales. Take that vision, put it into a new medium, make it a radical reinvention, something not just for the stage because it 's not just being literal, now other elements come into play. It 's not just having real actors do it ''.
Condon initially prepared on only drawing inspiration from the original film, but he also planned to include most of the songs composed by Alan Menken, Howard Ashman and Tim Rice from the Broadway musical, with the intention of making the film as a "straight - forward, live - action, large - budget movie musical ''. Menken returned to score the film 's music, which features songs from the original film by him and Howard Ashman, plus new material written by Menken and Tim Rice. Menken said the film will not include songs that were written for the Broadway musical and instead, created four new songs. However, an instrumental version of the song "Home '', which was written for the musical, is used during the scene where Belle first enters her room in the castle.
On January 19, 2017, it was confirmed by both Disney and Céline Dion -- singer of the original 1991 Beauty and the Beast duet song, with singer Peabo Bryson -- that Dion would be performing one of the new original songs "How Does a Moment Last Forever '' to play over the end titles. She originally had doubts about whether or not to record the song due to the recent death of her husband and manager René Angélil, who had previously helped her secure the 1991 pop duet. While ultimately accepting the opportunity, she said: "(The) first Beauty and the Beast decision was made with my husband. Now I 'm making decisions on my own. It 's a little bit harder. I could n't say yes right away, because I felt like I was kind of cheating in a way ''. She eventually felt compelled to record the song because of the impact Beauty and the Beast has had on her career. According to Dion, "I was at the beginning of my career, it put me on the map, it put me where I am today ''. Also, Josh Groban was announced to be performing the new original song "Evermore '' on January 26, 2017.
The 2017 film features a remake of the 1991 original song Beauty and the Beast recorded as a duet by Ariana Grande and John Legend. Grande and Legend 's updated version of the Beauty and the Beast title song is faithful to the original, Grammy - winning duet, performed by Céline Dion and Peabo Bryson for the 1991 Disney film.
Emma Thompson also performed a rendition of "Beauty and the Beast '', which was performed by Angela Lansbury in the original 1991 animated film release.
Disney debuted the music video for Ariana Grande and John Legend 's interpretation of the title song "Beauty and the Beast '' on Freeform television network on March 5, 2017, and it has since attained over 100 million video views on the Vevo video - hosting service.
On March 16, 2015, Disney announced the film would be released in 3D on March 17, 2017. The first official presentation of the film took place at Disney 's three - day D23 Expo in August 2015.
The world premiere of Beauty and the Beast took place on February 23, 2017, at Spencer House in London, United Kingdom; and the film later premiered at the El Capitan Theatre in Hollywood, California, on March 2, 2017. The stream was broadcast onto YouTube.
A sing - along version of the film was released in over 1,200 US theaters nationwide on April 7, 2017. The United Kingdom received the same version on April 21, 2017.
Disney spent around $140 million for marketing the film worldwide. Following an announcement on May 22, 2016, Disney premiered the first official teaser trailer on Good Morning America the next day. In its first 24 hours, the teaser trailer reached 91.8 million views, which topped the number of views seen in that amount of time in history, including for the teasers for other films distributed by Disney such as Avengers: Age of Ultron, Star Wars: The Force Awakens and Captain America: Civil War. This record has since been broken by Thor: Ragnarok and It. The first official teaser poster was released on July 7, 2016. On November 2, 2016, Entertainment Weekly debuted the first official image on the cover of their magazine for the week along with nine new photos as well. One week later, Emma Watson and Disney debuted a new poster for the film. On November 14, 2016, the first theatrical trailer was released again on Good Morning America. The trailer reached 127.6 million views in its first 24 hours, setting a new record as the trailer with the most views in one day, beating out Fifty Shades Darker. This record has since been broken again by The Fate of the Furious. A TV spot with Watson singing was shown during the 74th Golden Globe Awards. Disney released the final trailer on January 30, 2017.
Beauty and the Beast was released on Blu - ray, DVD and Digital HD on June 6, 2017. The film debuted at No. 1 on the NPD VideoScan overall disc sales chart, with all other titles in the top 20, collectively, selling only 40 % as many units as Beauty and the Beast. The movie regained the top spot on the national home video sales charts during its third week of release. The movie became available on Netflix on September 19, 2017.
Beauty and the Beast grossed $504 million in the United States and Canada and $759.4 million in other territories for a worldwide gross of $1.263 billion. With a production budget of $160 million, it is the second-most expensive musical ever made; only Hello, Dolly! (1969) with a budget of $25 million ($165 million in 2016 dollars) cost more. In just ten days, it became the highest - grossing live - action musical of all time, beating the nine - year - old record held by Mamma Mia!. It is currently the second - biggest musical ever overall, behind Disney 's Frozen (2013). The film proved to be a global phenomenon, earning a total of $357 million over its four - day opening weekend from 56 markets. Critics said the film was playing like a superhero movie among female audiences. It was the second biggest March global opening, behind only Batman v Superman: Dawn of Justice, the thirteenth - biggest worldwide opening ever and the seventh - biggest for Disney. This includes $21 million from IMAX plays on 1,026 screens, a new record for an IMAX PG title. It surpassed the entire lifetime total of the original film in just six days.
Beauty and the Beast is the 300th digitally remastered release in IMAX 's history, which began with the re-release of Apollo 13 in 2002. Its robust global debut helped push the company past $6 billion for the first time, and led analysts to believe that the film had a shot at passing $1 billion worldwide from theatrical earnings. On April 12, it passed the $1 billion threshold, becoming the first film of 2017, the fourteenth Disney film, and the twenty - ninth film overall to pass the mark. It became the first film since Rogue One (also a Disney property) in December 2016 to make over a billion dollars, and did so on its twenty - ninth day of release. It is currently the highest - grossing film of 2017, the highest - grossing March release, the highest - grossing remake of all - time, and the fifth - biggest Disney film. Even after adjusting for inflation, it is still ahead of the $425 million gross ($760 million in 2017 dollars) of the original film.
In the United States and Canada, Beauty and the Beast topped Fandango 's pre-sales and became the fastest - selling family film in the company 's history, topping the studio 's own animated film Finding Dory released the previous year. Early tracking had the film grossing around $100 million in its opening weekend, with some publications predicting it could reach $130 million. By the time the film 's release was 10 days away, analysts raised projections to as high as $150 million. It earned $16.3 million from Thursday previews night, marking the biggest of 2017 (breaking Logan 's record), the biggest ever for a Disney live - action film (breaking Maleficent 's record), the second biggest ever for both a G or PG - rated film (behind the sixth Harry Potter film Harry Potter and the Half - Blood Prince which also starred Watson), and the third biggest ever in the month of March (behind Batman v Superman: Dawn of Justice and The Hunger Games). An estimated 41 % of the gross came from IMAX, 3D and premium large format screenings which began at 6 pm, while the rest -- 59 % -- came from regular 2D shows which began at 7 p.m. The numbers were considered more impressive given that the film played during a school week.
On its opening day, the film made $63.8 million from 4,210 theaters across 9,200 screens, marking the third biggest in the month of March, trailing behind Batman v Superman ($81.5 million) and The Hunger Games ($67 million). It was also the biggest opening day ever for a film that was n't PG - 13, displacing the $58 million opening Wednesday of Harry Potter and the Half - Blood Prince. Its opening day alone (which includes Thursday 's previews) almost matched the entire opening weekend of previous Disney live - action films, Maleficent ($69.4 million) and Cinderella ($67.9 million). Unlike the previous four Disney live - action films, which all saw an increase on their second day (Saturday), Beauty and the Beast fell 2 %; nevertheless, the dip was paltry, and its grosses were far larger than those of the other titles. Earning a total of $174.8 million on its opening weekend, it defied all expectations and went on to set numerous notable records. These include the biggest opening of the year as well as the biggest for the month of March and for a pre-summer / spring opening, beating Batman v Superman, the biggest start ever for a PG title (also for a family film), surpassing Finding Dory, the biggest debut of all time for a female - fueled film, ahead of The Hunger Games: Catching Fire, the biggest for a Disney live - action adaptation, ahead of Alice in Wonderland, and the biggest musical debut ever, supplanting Pitch Perfect 2. Furthermore, it is also Watson 's highest opening, beating Harry Potter and the Deathly Hallows -- Part 2 (the same being true for Emma Thompson), director Bill Condon 's biggest debut ever, ahead of The Twilight Saga: Breaking Dawn -- Part 2, and the biggest opening outside of summer save for Star Wars: The Force Awakens, not accounting for inflation.
It became the forty - third film to debut with over $100 million and the fifteenth film to open above $150 million. Its three - day opening alone surpassed the entire original North American run of the first film ($146 million; before the 3D re-release), instantly becoming the second - biggest film of the year, behind Logan ($184 million), and also the second - highest - grossing musical, behind Grease 's $188 million cumulative gross in 1978. Seventy percent of the total ticket sales came from 2D showings, indicating that infrequent moviegoers came out in large numbers to watch the film. About 26 % of the remaining tickets were for 3D. IMAX accounted for 7 % ($12.5 million) of the total weekend 's gross, setting a new record for a PG title, ahead of Alice in Wonderland ($12.1 million), while premium large format screens represented 11 % of the box office. Seventy percent of the film 's opening day demographic was female, dropping to 60 % through the weekend. According to polling service PostTrak, about 84 percent of American parents who saw the film on its opening day said they would "definitely '' recommend it for families. The film 's opening was credited to positive word of mouth from audiences, good reviews from critics, effective marketing which sold the title not just as a family film but also as a romantic drama, the cast 's star power (especially Emma Watson), lack of competition, being the first family film since The Lego Batman Movie a month earlier, nostalgia, and the success and ubiquity of the first film and Disney 's brand.
On Monday, its fourth day of release, the film fell precipitously by 72 %, earning $13.5 million. The steep fall was due to a limited marketplace where, per ComScore, only 11 % of K - 12 schools and 15 % of colleges were off. Nevertheless, it is the second - biggest March Monday, behind Batman v Superman ($15 million). This was followed by the biggest March and pre-summer Tuesday with $17.8 million, a 32 % increase from its previous day. The same day, the film passed $200 million in ticket sales. It earned $228.6 million in the first week of release, the sixth - biggest seven - day gross of all time. In its second weekend, the film continued to maintain the top positioning and fell gradually by 48 %, earning another $90.4 million to register the fourth - biggest second weekend of all time, and the third - biggest for Disney. In terms of percentage drop, its 48 % decline is the third - smallest drop for any film opening above $125 million (behind Finding Dory and The Force Awakens). The hold was notable considering how the film was able to fend off three new wide releases: Power Rangers, Life, and CHiPs. As a result, it passed the $300 million threshold, becoming the first film of 2017 to pass said mark. The film grossed $45.4 million in its third weekend, finally being overtaken for the top spot by newcomer The Boss Baby ($50.2 million). On April 4, 2017, its nineteenth day of release, it passed the $400 million threshold, becoming the first film of 2017 to do so. By its fourth weekend, the film was playing in 3,969 cinemas, a fall of 241 theaters from its previous weekend. Of those, approximately 1,200 cinemas were sing - along versions. It earned $26.3 million (- 48 %) and retained second place. By comparison, previous Disney films Moana (− 8 %) and Frozen (− 2 %) both witnessed mild percentage declines the weekend their sing - along versions were released. Its seventh weekend of release coincided with the release of another new film starring Emma Watson, The Circle. That weekend, The Circle was number four, while Beauty and the Beast was at number six. By May 28, the film had earned over $500 million in ticket sales, becoming the first (and currently only) film of 2017, the third female - fueled film (after The Force Awakens and Rogue One: A Star Wars Story, followed by Wonder Woman) and the eighth overall film in cinematic history to pass the mark.
It has already become the biggest March release, dethroning The Hunger Games (2012), the biggest musical film (both animated and live - action), as well as the biggest film of 2017.
Internationally, the film began playing on Thursday, March 16, 2017. Through Sunday, March 19, it had a total international opening of $182.3 million from 55 markets, 44 of which were major territories, far exceeding initial estimates of $100 million, and opened at No. 1 in virtually all markets except Vietnam, Turkey, and India. Its launch is the second - biggest for the month of March, behind Batman v Superman ($256.5 million). In IMAX, it recorded the biggest debut for a PG title (although it carried varying certificates amongst different markets) with $8.5 million from 649 screens, the second - biggest for a PG title behind The Jungle Book. In its second weekend, it fell by just 35 %, earning another $120.6 million and maintaining its hold on first position. It added major markets like France and Australia. It topped the international box office for three consecutive weekends before finally being dethroned by Ghost in the Shell and The Boss Baby in its fourth weekend. Despite the fall, the film helped Disney push past the $1 billion threshold internationally for the first time in 2017.
It scored the biggest opening day of the year in Hong Kong and the Philippines, the biggest March Thursday in Italy ($1 million, also the biggest Disney Thursday debut), the biggest March opening day in Austria, and the second - biggest in Germany ($1.1 million), Disney 's biggest March in Denmark, the biggest Disney live - action debut in China ($12.6 million), the UK ($6.2 million), Mexico ($2.4 million) and Brazil ($1.8 million) and the third - biggest in South Korea with $1.2 million, behind only Pirates of the Caribbean: At World 's End and Pirates of the Caribbean: On Stranger Tides. In terms of opening weekend, the largest debut came from China ($44.8 million), followed by the UK ($24.3 million), Korea ($11.8 million), Mexico ($11.8 million), Australia ($11.1 million), Brazil ($11 million), Germany ($10.7 million), France ($8.4 million), Italy ($7.6 million), Philippines ($6.3 million), Russia ($6 million) and Spain ($5.8 million).
In the United Kingdom and Ireland, the film recorded the biggest opening ever for a PG - rated film, the biggest Disney live - action opening of all time, the biggest March opening weekend, the biggest opening for a musical (ahead of 2012 's Les Misérables), the number one opening of 2017 to date and the fifth - biggest - ever overall with £ 19.7 million ($24.5 million) from 639 theatres, almost twice that of The Jungle Book (£ 9.9 million). This included the second - biggest Saturday ever (£ 7.9 million), only behind Star Wars: The Force Awakens. It witnessed a decline in its second weekend, earning £ 12.33 million ($15.4 million). Though the film was falling at a faster rate than The Jungle Book, it had already surpassed that film and its second weekend is the third - biggest ever (behind the two James Bond films Skyfall (2012) and Spectre). In India, despite facing heavy competition from four new Hindi releases, two Tamil films, a Malayalam release and a Punjabi release, the film managed to take an occupancy of 15 % on its opening day, an impressive feat given the competition. It earned around ₹ 1.5 crore (US $230,000) nett on its opening day from an estimated 600 screens, which was more than the three Hindi releases -- Machine, Trapped, and Aa Gaya Hero -- combined. Disney reported a total of ₹ 9.26 crore (US $1.4 million) gross for its opening weekend there. It was ahead of all new releases and second overall behind Bollywood film Badrinath Ki Dulhania. In Russia, despite receiving a restrictive 16 rating, the film managed to deliver a very successful opening with $6 million.
In China, expectations were high for the film. The release date was announced on January 24, giving Disney and local distributor China Film Group Corporation ample time -- around two months -- to market the film nationwide. The release date was strategically chosen to coincide with White Day. Preliminary reports suggested that it could open to $40 -- 60 million in its opening weekend. Largely driven by young women, its opening day pre-sales outpaced those of The Jungle Book. The original film was, however, never widely popular in the country. Although China has occasionally blocked gay - themed content from streaming video services, in this case, Chinese censors decided to leave the gay scene intact. According to local box office tracker Ent Group, the film grossed an estimated $12.1 million on its opening day (Friday), representing 70 % of the total receipts. Including previews, it made a total of $14.5 million from 100,000 screenings, which is 43 % of all screenings in the country. It climbed to $18.5 million on Saturday (102,700 showings) for a three - day total of $42.6 million, securing 60 % of the total marketplace. Disney, on the other hand, reported a different figure of $44.8 million. Either way, it recorded the second - biggest opening for a Disney live - action film, with $3.4 million coming from 386 IMAX screens. Japan -- a huge Disney market -- served as the film 's final market, where it opened on April 21. It debuted with a better - than - expected $12.5 million on its opening weekend, helping the film push past the $1.1 billion threshold. An estimated $1.1 million came from IMAX screenings, the fourth - biggest ever in the country. The two - day gross was $9.7 million, outstripping Frozen 's previous record of $9.5 million. Due to positive reviews, good word of mouth and the benefit of Golden Week, the film saw a 9 % increase on its second weekend. The hold was strong enough to prevent newcomer The Fate of the Furious from securing the top spot. The total there is now over $98 million after seven weekends, making it the biggest film release of the year and, overall, the eleventh - biggest of all time. It topped the box office there for eight consecutive weekends.
The only markets where the film did not top the weekend charts were Vietnam (behind Kong: Skull Island), Turkey (with two local movies and Logan ahead) and India (where Badrinath Ki Dulhania retained No. 1). It topped the box office for four straight weekends in Germany, Korea, Austria, Finland, Poland, Portugal, Brazil, Venezuela, Bolivia, Switzerland and the UK (exclusive of previews). In the Philippines, it emerged as the most successful commercial film of all time -- both local and foreign -- with over $13.5 million. In just five weeks, the film became one of the top 10 highest - grossing films of all time in the United Kingdom and Ireland, ahead of all but one Harry Potter film (Deathly Hallows -- Part 2) and all three The Lord of the Rings movies (which also starred Ian McKellen). It is currently the eighth - biggest grosser with £ 70.1 million ($90 million), overtaking Mamma Mia! to become the biggest musical production ever there. The biggest international earning markets following the UK are Japan ($108 million), China ($85.8 million), Brazil ($41.5 million), Korea ($37.5 million), and Australia ($35 million). In Europe alone, the cumulative total is $267 million, making it the second - highest - grossing film there in the past year (behind Rogue One: A Star Wars Story).
Beauty and the Beast received generally positive reviews, with praise for the faithfulness to the original film with a few elements of the Broadway musical version, cast performances, visuals, Jacqueline Durran 's costume designs, production design, Alan Menken 's musical score and songs, though the designs of the Beast and the servants ' household object forms received mixed reviews. On review aggregator Rotten Tomatoes, the film has an approval rating of 71 % based on 294 reviews, with an average rating of 6.6 / 10. The site 's critical consensus reads, "With an enchanting cast, beautifully crafted songs, and a painterly eye for detail, Beauty and the Beast offers a faithful yet fresh retelling that honors its beloved source material. '' On Metacritic, the film has a score of 65 out of 100, based on 47 critics, indicating "generally favorable reviews ''. In CinemaScore polls, audiences gave the film an average grade of "A '' on an A+ to F scale.
Leslie Felperin of The Hollywood Reporter wrote: "It 's a Michelin - triple - starred master class in patisserie skills that transforms the cinematic equivalent of a sugar rush into a kind of crystal - meth - like narcotic high that lasts about two hours. '' Felperin also praised the performances of Watson and Kline as well as the special effects, costume designs and sets, while commending the inclusion of Gad 's character LeFou as the first LGBT character in a Disney film. Owen Gleiberman of Variety, in his positive review of the film, wrote: "It 's a lovingly crafted movie, and in many ways a good one, but before that it 's an enraptured piece of old - is - new nostalgia. '' Gleiberman compared Stevens 's Beast to a royal version of the titular character in The Elephant Man and to the 1946 version of the Beast in Jean Cocteau 's original adaptation. A.O. Scott of The New York Times praised the performances of both Watson and Stevens, and wrote: "It looks good, moves gracefully and leaves a clean and invigorating aftertaste. I almost did n't recognize the flavor: I think the name for it is joy. '' Likewise, The Washington Post 's Ann Hornaday complimented Watson 's performance, describing it as "alert and solemn '' while calling her singing ability "serviceable enough to get the job done ''. Richard Roeper of the Chicago Sun - Times awarded the film three and a half stars, lauding the performances of Watson and Thompson, which he compared to Paige O'Hara 's and Angela Lansbury 's performances in the 1991 animated version, appreciating the performances of the rest of the cast, and pointing to its combination of motion capture and CGI technology as a big advantage, stating: "Almost overwhelmingly lavish, beautifully staged and performed with exquisite timing and grace by the outstanding cast ''. Mike Ryan of Uproxx praised the cast, production design and the new songs while noting the film does n't try anything different, saying: "There 's certainly nothing that new about this version of Beauty and the Beast (well, except it is n't a cartoon anymore), but it 's a good recreation of a classic animated film that should leave most die - hards satisfied. '' In her A - review, Nancy Churnin of The Dallas Morning News praised the film 's emotional and thematic depth, remarking: "There 's an emotional authenticity in director Bill Condon 's live - action Beauty and the Beast film that helps you rediscover Disney 's beloved 1991 animated film and 1994 stage show in fresh, stirring ways. '' James Berardinelli of ReelViews described the 2017 version as "enthralling ''.
Brian Truitt of USA Today commended the performances of Evans, Gad, McGregor and Thompson, alongside Condon 's affinity with musicals, the production design, and the visual effects featured in some of the song numbers, including the new songs by composers Alan Menken and Tim Rice, particularly "Evermore '', which he described as having potential for an Academy Award for Best Original Song. Peter Travers of Rolling Stone rated the film three out of four stars, deeming it an "exhilarating gift '' and remarking that "Beauty and the Beast does justice to Disney 's animated classic, even if some of the magic is M.I.A (Missing in Action). '' Stephanie Zacharek of Time magazine gave a positive review, describing the film as "Wild, Vivid and Crazy - Beautiful ''; she wrote that "Nearly everything about Beauty and the Beast is larger than life, to the point that watching it can be a little overwhelming '' and added that "it 's loaded with feeling, almost like a brash interpretive dance expressing the passion and elation little girls (and some boys, too) must have felt upon seeing the earlier version. '' The San Francisco Chronicle 's Mick LaSalle struck an affirmative tone, calling it one of the joys of 2017 and stating that "Beauty and the Beast creates an air of enchantment from its first moments, one that lingers and builds and takes on qualities of warmth and generosity as it goes along ''; he referred to the film as "beautiful '' and also praised its emotional and psychological tone as well as Stevens 's motion capture performance. Tim Robey of The Daily Telegraph gave the film four stars out of five, writing that "It dazzles on this chocolate box of a picture that feels almost greedy yet to make this film work, down to a sugar - rush finale to grasp the nettle and make an out - and - out, bells - and - whistles musical '', and praised the performances of Watson, McKellen, Thompson, McGregor, Evans and Gad. Mark Hughes of Forbes similarly praised the film, writing that "it could revive the story in a faithful but entirely new and unique way elevating the material beyond expectations, establishing itself as a cinematic equal to the original ''; he also complimented the importance of undertaking a renowned yet problematic masterpiece as well as the changes to elements of the story, while acknowledging the film 's effectiveness in resonating with audiences.
Several critics regarded the film as inferior to its 1991 animated predecessor. David Sims of The Atlantic wrote that the 2017 film "feels particularly egregious, in part, because it 's so slavishly devoted to the original; every time it falls short of its predecessor (which is quite often), it 's hard not to notice ''. Michael Phillips of the Chicago Tribune said that the 2017 film "takes our knowledge and our interest in the material for granted. It zips from one number to another, throwing a ton of frenetically edited eye candy at the screen, charmlessly. '' Phillips wrote that the film featured some "less conspicuously talented '' performers who are "stuck doing karaoke, or motion - capture work of middling quality '', though he praised Kline 's performance as the "best, sweetest thing in the movie; he brings a sense of calm, droll authority ''. Peter Bradshaw of The Guardian praised Watson 's performance and wrote that the film was "lit in that fascinatingly artificial honey - glow light, and it runs smoothly on rails -- the kind of rails that bring in and out the stage sets for the lucrative Broadway touring version. '' In the same newspaper, Wendy Ide criticized the film as "ornate to the point of desperation '' in its attempt to emulate the animated film.
Chris Nashawaty of Entertainment Weekly gave the film a B -, writing that while the film looks "exceptionally great '', he sensed that the new songs were "not transporting ''. He felt the film needed more life and depth, but praised Watson 's and Stevens 's performances as the "film 's stronger elements ''. Dana Schwartz of The New York Observer felt that some of the characters, such as Gaston and the Beast, had been watered down from the 1991 film, and that the additional backstory elements failed to "advance the plot or theme in any meaningful way '' while adding considerable bloat. Schwartz considered the singing of the cast to be adequate but felt that their voices should have been dubbed over, especially for the complex songs.
Controversy erupted after director Bill Condon said there was a "gay moment '' in the film, when LeFou briefly waltzes with Stanley, one of Gaston 's friends. Afterwards in an interview with Vulture.com, Condon stated, "Can I just say, I 'm sort of sick of this. Because you 've seen the movie -- it 's such a tiny thing, and it 's been overblown. '' Condon also added that Beauty and the Beast features much more diversity than just the highly talked - about LeFou: "That was so important. We have interracial couples -- this is a celebration of everybody 's individuality, and that 's what 's exciting about it. '' GLAAD president and CEO Sarah Kate Ellis praised the move stating, "It is a small moment in the film, but it is a huge leap forward for the film industry. ''
In Russia, Vitaly Milonov urged the culture minister to ban the film, but it was instead given a 16 + rating (children under the age of 16 can only be admitted to see it in theaters with accompanying adults). Additionally, a theater in Henagar, Alabama did not screen the film because of the subplot. In Malaysia, the Film Censorship Board insisted the "gay moment '' scene be cut, prompting an indefinite postponement of its release by Disney, followed by their decision to withdraw it completely if it could not be released uncensored. The studio moved the release date to March 30, to allow more time for Malaysia 's censor board to make a decision on whether or not to release the film without changes. The distributors and producers then submitted an appeal to the Film Appeal Committee of Malaysia, which allowed the film to be released without any cuts and with a P13 rating on the grounds that the "gay element '' was minor and did not affect the positive elements featured in the film. In Kuwait, the movie was withdrawn from cinemas by the National Cinema Company, which owns most of the cinemas in the country. A board member of the company stated that the Ministry of Information 's censorship department had requested that it stop screening the film and edit out content deemed offensive.
There were also a number of boycotts against the film. A call to boycott on LifePetitions received over 129,000 signatures, while the American Family Association featured a petition to boycott with the film, asking the public to help crowdfund a CGI version of Pilgrim 's Progress instead.
Disney has sought to portray Belle as an empowered young woman, but a debate questioning whether it is possible to fall in love with someone who is holding you prisoner, and whether this is a problematic theme, has resulted. As was the case with the original animated film, one argument is that Belle suffers from Stockholm syndrome (a condition that causes hostages to develop a psychological alliance with their captors as a survival strategy during captivity). Emma Watson studied whether Belle is trapped in an abusive relationship with the Beast before signing on and concluded that she does not think the criticism fits this version of the folk tale. Watson described Stockholm Syndrome as "where a prisoner will take on the characteristics of and fall in love with the captor. Belle actively argues and disagrees with (Beast) constantly. She has none of the characteristics of someone with Stockholm Syndrome because she keeps her independence, she keeps that freedom of thought '', also adding that Belle defiantly "gives as good as she gets '' before forming a friendship and romance with the Beast.
Psychiatrist Frank Ochberg, who coined the term "Stockholm syndrome '', said he does not think Belle exhibits the trauma symptoms of prisoners suffering from the syndrome because she does not go through a period of feeling that she is going to die. Some therapists, while acknowledging that the pairing 's relationship does not meet the clinical definition of Stockholm syndrome, argue that the relationship depicted is dysfunctional and abusive and does not model healthy romantic relationships for young viewers. Constance Grady of Vox writes that Jeanne - Marie Leprince de Beaumont 's Beauty and the Beast was a fairy tale originally written to prepare young girls in 18th - century France for arranged marriages, and that the power disparity is amplified in the Disney version. Anna Menta of Elite Daily argued that the Beast does not apologize to Belle for imprisoning, hurting, or manipulating her, and his treatment of Belle is not painted as wrong.
|
describe two advantages that radio telescopes have over light telescopes | Radio telescope - wikipedia
A radio telescope is a specialized antenna and radio receiver used to receive radio waves from astronomical radio sources in the sky in radio astronomy. Radio telescopes are the main observing instrument used in radio astronomy, which studies the radio frequency portion of the electromagnetic spectrum emitted by astronomical objects, just as optical telescopes are the main observing instrument used in traditional optical astronomy which studies the light wave portion of the spectrum coming from astronomical objects. Radio telescopes are typically large parabolic ("dish '') antennas similar to those employed in tracking and communicating with satellites and space probes. They may be used singly or linked together electronically in an array. Unlike optical telescopes, radio telescopes can be used in the daytime as well as at night. Since astronomical radio sources such as planets, stars, nebulas and galaxies are very far away, the radio waves coming from them are extremely weak, so radio telescopes require very large antennas to collect enough radio energy to study them, and extremely sensitive receiving equipment. Radio observatories are preferentially located far from major centers of population to avoid electromagnetic interference (EMI) from radio, television, radar, motor vehicles, and other manmade electronic devices.
Radio waves from space were first detected by engineer Karl Guthe Jansky in 1932 at Bell Telephone Laboratories in Holmdel, New Jersey using an antenna built to study noise in radio receivers. The first purpose - built radio telescope was a 9 - meter parabolic dish constructed by radio amateur Grote Reber in his back yard in Wheaton, Illinois in 1937. The sky survey he did with it is often considered the beginning of the field of radio astronomy.
The first radio antenna used to identify an astronomical radio source was one built by Karl Guthe Jansky, an engineer with Bell Telephone Laboratories, in 1932. Jansky was assigned the job of identifying sources of static that might interfere with radio telephone service. Jansky 's antenna was an array of dipoles and reflectors designed to receive short wave radio signals at a frequency of 20.5 MHz (wavelength about 14.6 meters). It was mounted on a turntable that allowed it to rotate in any direction, earning it the name "Jansky 's merry - go - round ''. It had a diameter of approximately 100 ft (30 m) and stood 20 ft (6 m) tall. By rotating the antenna, the direction of the received interfering radio source (static) could be pinpointed. A small shed to the side of the antenna housed an analog pen - and - paper recording system. After recording signals from all directions for several months, Jansky eventually categorized them into three types of static: nearby thunderstorms, distant thunderstorms, and a faint steady hiss of unknown origin. Jansky finally determined that the "faint hiss '' repeated on a cycle of 23 hours and 56 minutes. This period is the length of an astronomical sidereal day, the time it takes any "fixed '' object located on the celestial sphere to come back to the same location in the sky. Thus Jansky suspected that the hiss originated outside of the Solar System, and by comparing his observations with optical astronomical maps, Jansky concluded that the radiation was coming from the Milky Way Galaxy and was strongest in the direction of the center of the galaxy, in the constellation of Sagittarius.
An amateur radio operator, Grote Reber, was one of the pioneers of what became known as radio astronomy. He built the first parabolic "dish '' radio telescope, one 9 metres (30 ft) in diameter, in his back yard in Wheaton, Illinois in 1937. He repeated Jansky 's pioneering work, identifying the Milky Way as the first off - world radio source, and he went on to conduct the first sky survey at very high radio frequencies, discovering other radio sources. The rapid development of radar during World War II created technology which was applied to radio astronomy after the war, and radio astronomy became a branch of astronomy, with universities and research institutes constructing large radio telescopes.
The range of frequencies in the electromagnetic spectrum that makes up the radio spectrum is very large. This means that the types of antennas that are used as radio telescopes vary widely in design, size, and configuration. At wavelengths of 30 meters to 3 meters (10 MHz - 100 MHz), they are generally either directional antenna arrays similar to "TV antennas '' or large stationary reflectors with moveable focal points. Since the wavelengths being observed with these types of antennas are so long, the "reflector '' surfaces can be constructed from coarse wire mesh such as chicken wire. At shorter wavelengths parabolic "dish '' antennas predominate. The angular resolution of a dish antenna is determined by the ratio of the diameter of the dish to the wavelength of the radio waves being observed. This dictates the dish size a radio telescope needs for a useful resolution. Radio telescopes that operate at wavelengths of 3 meters to 30 cm (100 MHz to 1 GHz) are usually well over 100 meters in diameter. Telescopes working at wavelengths shorter than 30 cm (above 1 GHz) range in size from 3 to 90 meters in diameter.
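As a rough illustration of that scaling (the numbers below are illustrative assumptions, not figures from this article), the diffraction - limited beamwidth of a single dish can be estimated as
\theta \approx 1.22\,\frac{\lambda}{D} \approx 1.22 \times \frac{0.21\ \mathrm{m}}{100\ \mathrm{m}} \approx 2.6\times10^{-3}\ \mathrm{rad} \approx 9\ \text{arc minutes}
so even a 100 - meter dish observing the 21 cm hydrogen line resolves only arcminute - scale detail, which is why dish size -- or the interferometry described below -- matters so much at radio wavelengths.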
The increasing use of radio frequencies for communication makes astronomical observations more and more difficult (see Open spectrum). Negotiations to defend the frequency allocation for parts of the spectrum most useful for observing the universe are coordinated in the Scientific Committee on Frequency Allocations for Radio Astronomy and Space Science.
Some of the more notable frequency bands used by radio telescopes include:
The world 's largest filled - aperture (i.e. full dish) radio telescope is the Five hundred meter Aperture Spherical Telescope (FAST) completed in 2016 by China. The 500 - meter - diameter (1,600 ft) dish with an area as large as 30 football fields is built into a natural Karst depression in the landscape in Guizhou province and can not move; the feed antenna is in a cabin suspended above the dish on cables. The active dish is composed of 4450 moveable panels controlled by a computer. By changing the shape of the dish and moving the feed cabin on its cables, the telescope can be steered to point to any region of the sky up to 40 ° from the zenith. Although the dish is 500 meters in diameter, only a 300 - meter circular area on the dish is illuminated by the feed antenna at any given time, so the actual effective aperture is 300 meters. Construction was begun in 2007 and completed July 2016 and the telescope became operational September 25, 2016.
The world 's second largest filled - aperture telescope is the Arecibo radio telescope located in Arecibo, Puerto Rico. Another stationary dish telescope like FAST, whose 305 m (1,001 ft) dish is built into a natural depression in the landscape, the antenna is steerable within an angle of about 20 ° of the zenith by moving the suspended feed antenna. The largest individual radio telescope of any kind is the RATAN - 600 located near Nizhny Arkhyz, Russia, which consists of a 576 - meter circle of rectangular radio reflectors, each of which can be pointed towards a central conical receiver.
The above stationary dishes are not fully "steerable ''; they can only be aimed at points in an area of the sky near the zenith, and can not receive from sources near the horizon. The largest fully steerable dish radio telescope is the 100 meter Green Bank Telescope in West Virginia, United States, constructed in 2000. The largest fully steerable radio telescope in Europe is the Effelsberg 100 - m Radio Telescope near Bonn, Germany, operated by the Max Planck Institute for Radio Astronomy, which also was the world 's largest fully steerable telescope for 30 years until the Green Bank antenna was constructed. The third - largest fully steerable radio telescope is the 76 - meter Lovell Telescope at Jodrell Bank Observatory in Cheshire, England, completed in 1957. The fourth - largest fully steerable radio telescopes are six 70 - meter dishes: three Russian RT - 70, and three in the NASA Deep Space Network. As of 2016, the planned Qitai Radio Telescope will be the world 's largest fully steerable single - dish radio telescope with a diameter of 110 m (360 ft).
A typical size of the single antenna of a radio telescope is 25 meters. Dozens of radio telescopes with comparable sizes are operated in radio observatories all over the world.
The 500 meter Five hundred meter Aperture Spherical Telescope (FAST), Guizhou, China (completed 2016)
The 305 meter Arecibo Observatory, Puerto Rico (1963)
The 100 meter Green Bank Telescope, Green Bank, West Virginia, US, the largest fully steerable radio telescope dish (2002)
The 100 meter Effelsberg, in Bad Münstereifel, Germany (1971)
The 76 meter Lovell, Jodrell Bank Observatory, England (1957)
The 70 meter DSS 14 "Mars '' antenna at Goldstone Deep Space Communications Complex, Mojave Desert, California, US (1958)
The 70 meter Yevpatoria RT - 70, Crimea, first of three RT - 70 in the former Soviet Union, (1978)
The 70 meter Galenki RT - 70, Galenki, Russia, second of three RT - 70 in the former Soviet Union, (1984)
The 70 meter DSS - 43 antenna at Canberra Deep Space Communication Complex, Canberra, Australia (1987)
The 70 meter DSS - 63 antenna at the Madrid Deep Space Communications Complex, near Madrid, Spain (late 1980s)
Since 1965, humans have launched three space - based radio telescopes. In 1965, the Soviet Union sent the first one called Zond 3. In 1997, Japan sent the second, HALCA. The last one was sent by Russia in 2011 called Spektr - R.
One of the most notable developments came in 1946 with the introduction of the technique called astronomical interferometry, which means combining the signals from multiple antennas so that they simulate a larger antenna, in order to achieve greater resolution. Astronomical radio interferometers usually consist either of arrays of parabolic dishes (e.g., the One - Mile Telescope), arrays of one - dimensional antennas (e.g., the Molonglo Observatory Synthesis Telescope) or two - dimensional arrays of omnidirectional dipoles (e.g., Tony Hewish 's Pulsar Array). All of the telescopes in the array are widely separated and are usually connected using coaxial cable, waveguide, optical fiber, or other type of transmission line. Recent advances in the stability of electronic oscillators also now permit interferometry to be carried out by independent recording of the signals at the various antennas, and then later correlating the recordings at some central processing facility. This process is known as Very Long Baseline Interferometry (VLBI). Interferometry does increase the total signal collected, but its primary purpose is to vastly increase the resolution through a process called Aperture synthesis. This technique works by superposing (interfering) the signal waves from the different telescopes on the principle that waves that coincide with the same phase will add to each other while two waves that have opposite phases will cancel each other out. This creates a combined telescope that is equivalent in resolution (though not in sensitivity) to a single antenna whose diameter is equal to the spacing of the antennas furthest apart in the array.
A high - quality image requires a large number of different separations between telescopes. Projected separation between any two telescopes, as seen from the radio source, is called a baseline. For example, the Very Large Array (VLA) near Socorro, New Mexico has 27 telescopes with 351 independent baselines at once, which achieves a resolution of 0.2 arc seconds at 3 cm wavelengths. Martin Ryle 's group in Cambridge obtained a Nobel Prize for interferometry and aperture synthesis. The Lloyd 's mirror interferometer was also developed independently in 1946 by Joseph Pawsey 's group at the University of Sydney. In the early 1950s, the Cambridge Interferometer mapped the radio sky to produce the famous 2C and 3C surveys of radio sources. An example of a large physically connected radio telescope array is the Giant Metrewave Radio Telescope, located in Pune, India. The largest array, the Low - Frequency Array (LOFAR), is currently being constructed in western Europe, consisting of about 20,000 small antennas in 48 stations distributed over an area several hundreds of kilometers in diameter, and operates between 1.25 and 30 m wavelengths. VLBI systems using post-observation processing have been constructed with antennas thousands of miles apart. Radio interferometers have also been used to obtain detailed images of the anisotropies and the polarization of the Cosmic Microwave Background, like the CBI interferometer in 2004.
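Those VLA figures can be checked with a back - of - the - envelope estimate (assuming a maximum baseline of roughly 36 km in the array 's widest configuration, a value not stated in this article):
\text{baselines} = \frac{N(N-1)}{2} = \frac{27 \times 26}{2} = 351, \qquad \theta \approx \frac{\lambda}{B_{\max}} \approx \frac{0.03\ \mathrm{m}}{3.6\times10^{4}\ \mathrm{m}} \approx 8\times10^{-7}\ \mathrm{rad} \approx 0.17\ \text{arc seconds}
which is consistent with the quoted resolution of 0.2 arc seconds at 3 cm wavelengths.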
The world 's largest physically connected telescopes, the Square Kilometre Array (SKA), are planned to start operation in 2024.
Many astronomical objects are not only observable in visible light but also emit radiation at radio wavelengths. Besides observing energetic objects such as pulsars and quasars, radio telescopes are able to "image '' most astronomical objects such as galaxies, nebulae, and even radio emissions from planets.
|
who was tara dating on the walking dead | Tara Chambler - wikipedia
Tara Chambler is a fictional character from the horror drama television series The Walking Dead, which airs on AMC in the United States and is based on the comic book series of the same name. The character is based on Tara Chalmers from The Walking Dead: Rise of the Governor, a novel based on the comic book series and the past of the Governor. She is portrayed by Alanna Masterson. She is the first character identified as LGBT to be introduced in the series.
Tara 's family encounters the Governor, under the alias "Brian Heriot '', and they invite him into their apartment complex. Later, Tara 's sister, Lilly, forms a relationship with him, although Tara remains fiercely protective of her sister and her niece. Eventually, after her father dies, the Governor and Tara 's family leave the apartment complex and find Martinez 's camp. As the Governor coerces the people of the camp to attack the safe haven prison, Tara discovers his true brutality and the vendetta he maintains, and is traumatized when he murders Hershel. After the downfall of the prison, Glenn comes to Tara for help in finding his wife, Maggie, who escaped during the gunfire. Tara reluctantly follows him on his quest and, eventually, becomes a member of Rick Grimes 's group after he forgives her for being part of the Governor 's militia. She remains an active part of the group when they reach the Alexandria Safe - Zone where she sparks up romance with Dr. Denise Cloyd, and becomes one of Alexandria 's primary supply runners.
Tara Chalmers lives in Atlanta with her sister April and her father David in an apartment building that they have secured. April saves a group of people, including the man later known as the Governor, from a large herd of undead. The elderly David later dies and turns into a walker without having been bitten. After Philip kills David, tension grows between him and Tara. Philip sexually assaults April. The morning afterwards, April is nowhere to be found, and Tara forces the group, at gunpoint, to leave the building.
Tara Chambler is Lilly 's sister, David 's daughter, and Meghan 's aunt.
Tara Chambler is introduced in the episode "Live Bait ''. After welcoming the Governor (who addresses himself as "Brian Heriot '') into their apartment, Tara seems to trust him sooner than her sister does and quickly comes to see him as a friend. She is briefly disoriented by his bloody smashing of David 's skull after David reanimates and tries to bite her; however, she later accepts and agrees with the decision, and realizes that all people who die turn, whether they have been bitten or not. She and the others leave the apartment after David 's burial, in search of shelter elsewhere. While on the road, a group of walkers forces them to flee. They are stopped once again when Brian and Meghan fall into a pit full of walkers, all of which Brian successfully kills before any harm can come to himself or Meghan. In the episode "Dead Weight '', Tara begins a romantic relationship with Alisha, at Martinez 's camp. In the mid-season finale "Too Far Gone '', Tara joins the Governor in attacking the prison, using Hershel and Michonne as leverage, and believing the prison occupants to be bad people, as Brian tells them. However, when Rick Grimes tries to reason with the Governor for the sake of his people, Tara begins to question Brian 's plan, especially when Brian holds a sword to Hershel 's neck, despite Rick offering to welcome them in. Rick, seeing that Tara does n't want to be there, tries to personally reason with Tara but she is too conflicted to answer. When the Governor decapitates Hershel, Tara tries to retreat from the battle in shock. Alisha tries to get her to fight back but Tara is disgusted with the Governor and ultimately walks off, traumatized.
In the mid-season premiere "Inmates '', in the aftermath of the prison attack, Glenn finds Tara hiding within the gates and tells her that he needs her help to escape the prison and find his wife, Maggie, Hershel 's daughter. Tara is too disgusted with herself for trusting the Governor and reveals that she saw Lilly die and questions why Glenn wants her help; he states that he does n't want it, but needs it. After fighting off walkers, Glenn collapses with fatigue and Tara is left to attack the walker who tried to bite him. She is encountered by Abraham Ford, Eugene Porter and Rosita Espinosa, who are impressed by her skills and ask her to accompany them. In the episode "Claimed '', she travels alongside the trio until Glenn forces them to stop the truck as Abraham explains that their mission is to get Eugene to Washington D.C. to cure the outbreak. However, Glenn still insists on finding Maggie; Tara and the others accompany him after Eugene accidentally rips the truck 's fuel line, and she begins to bond with Abraham. In the episode "Us '', Tara and Glenn enter a tunnel on the road to Terminus, where Glenn believes Maggie might be, and they find evidence of a fresh cave - in, as well as many trapped walkers. Glenn insists he needs to see the faces of the walkers to ensure none of them are Maggie, and Tara helps. When the two set a diversion with a flashlight to sneak around the swarm of walkers, Tara slips and gets her ankle stuck in the debris. She tells Glenn to leave her, but Glenn, tired of losing people, fires on the walkers until he runs out of bullets and tries to commit suicide by telling them to get him. They are then unexpectedly saved by Abraham, Eugene and Rosita, who have found Maggie. Glenn introduces Tara to Maggie but says that he met Tara on the road, avoiding any mention of the Governor. Tara later agrees to join Abraham in his mission to Washington D.C., and they reach Terminus. In the season finale "A '', it is shown that they were forced into a train car with the rest of the group. Rick recognizes Tara from the prison but says nothing on the subject.
In the episode "No Sanctuary '', Tara crafts a makeshift weapon to use in the escape attempt, but it fails and she is left inside. She encourages the group and is confident that they will be able to survive their break - out. When Rick opens the boxcar for everyone to escape, Tara helps kill walkers on their way out and aids in protecting the group. In the episode "Strangers '', Tara speaks to Rick about her involvement with the Governor; he tells her that he was aware of her hesitation to be there and that is why he tried to talk to her. After they resolve their differences he accepts her as part of his family. Later, the group follows Gabriel Stokes to his church. She goes on a supply run with Glenn and Maggie, and forms a close bond with Maggie. Later, Tara reveals the truth to Maggie that she was with the Governor during Herschel 's murder at his hands. After Tara explains herself, Maggie forgives her and they hug. In the episode "Four Walls and a Roof '', upon hearing of Gareth 's return, Abraham demands that the group leave for Washington right away. As part of a bargain to make him stay and fight, Tara promises to go with him tomorrow regardless of what happens. She joins Rick 's posse to help trap Gareth 's group inside the church, and then watches as Rick, Michonne, Sasha and Abraham brutally slaughter the Terminus cannibals. The next day she is with the others bidding farewell to Bob before he dies from infection, and then following Abraham in the church bus to Washington. In the episode "Self Help '', she helps keep Eugene safe when the bus crashes, and promises to keep his secret about sabotaging the bus. She is not happy when Eugene reveals he lied about knowing a cure, but still defends him when an enraged Abraham nearly punches him to death.
In the episode "Remember '', when the survivors arrive in Alexandria, Tara is assigned the job of a supply runner. In the episode "Spend, Tara is tasked with looting the warehouse for parts needed to restore power to Alexandria, along with Nicholas, Aiden, Glenn, Noah, and Eugene. Inside the warehouse, Aiden accidentally shoots a grenade on a walker, leaving Tara knocked unconscious by the blast. Eugene looks after Tara and, when the walkers begin to close in, he summons up the courage to carry her out to safety inside the van and Tara is taken back to Alexandria. In the season finale "Conquer '', after days of being unconscious, Tara wakes up in Alexandria with Rosita by her side.
In the season premiere "First Time Again '', Tara is still recovering in bed while Maggie and Rosita check up on her. She is soon able to walk around and helps with building a wall barrier, to help guard the walkers that are stuck in the quarry. She and Maggie discuss how Nicholas caused Noah 's death and Maggie reveals to her that Nicholas tried to kill Glenn. Maggie then reminds her she too used to be on the enemy 's side when The Governor attacked, and the two hug. In the episode "JSS '', Tara is first seen in the infirmary with Eugene. Tara meets Denise and asks why has n't she met her yet. Tara asks Denise if she can help her with a headache. As the wolves attack Alexandria, Tara, Eugene and Denise stay in the infirmary. An injured Holly is brought in who has been stabbed. Tara notices Denise being reluctant to help a dying Holly and pressures her to help. Despite Denise trying her best to save Holly, she passes away due to blood loss. Before Tara leaves the infirmary, she quietly reminds Denise to destroy Holly 's brain so she will not reanimate. In the episode "Now '', Tara encourages Denise not to give up hope on Scott. Denise tells her later that he will make it and kisses her. In the episode "Heads Up '', Tara saves an Alexandrian, Spencer (Austin Nichols) by shooting at walkers after Spencer falls into a herd for trying to use a zip - line to crawl across. Despite saving his life, Rick is angry at her for wasting bullets and she flips him off. Rick apologizes but says that she did n't need to save him. In the mid-season finale "Start to Finish '', Tara is first seen helping drag Tobin to safety when the walls fall down and the herd enters Alexandria. She and Rosita then rescue Eugene and take refuge in a nearby garage, trapped in there by the walkers. Rosita is beginning to give up hope but Tara encourages her to keep going and the trio start working to escape the garage. Later on, they escape and stumble into the same room The Wolf is holding Denise captive, with Carol and Morgan unconscious on the floor. He forces them to surrender their weapons and Tara watches helplessly as he takes Denise with him as a hostage. In the mid-season premiere "No Way Out '', Tara joins Rick and the rest of the town in wiping out the walkers. In the episode "The Next World '', two months later, it is shown that Tara and Denise (who managed to survive the event) are now living together as a couple. In the episode "Not Tomorrow Yet '', Tara accompanies the group to the Saviors ' compound to infiltrate and kill them. Both Jesus and Father Gabriel comfort Tara, who is feeling guilty for lying to Denise. She later kills two members of the Saviors. Heath and Tara then leave to go on a two - week supply run.
Tara returns in the episode "Swear ''. She gets separated from Heath, ending up on a beach, unconscious. A girl named Cyndie (Sydney Park) gives her water and leaves. Soon after, Tara wakes up and follows her, only to find a community named Oceanside full of armed women that kill any stranger they see on sight in order to protect themselves. She is discovered and tries to flee as the women try to gun her down. She is later captured. At dinner, she learns they were attacked by the Saviors and all of their men were killed. Tara is asked to stay by the leader of the community, Natania, but she convinces them to let her go, as she says she needs to get back to her girlfriend (unaware Denise is dead). Later, Tara realizes she is being led out to be killed, so she escapes with the help of Cyndie, who asks her to swear not to tell anybody else about the community. Tara returns home only to find out about the deaths. Rosita asks if there is a place, no matter how dangerous, to find food. Tara lies, saying she did not see anything on her supply run, thus keeping her promise to Cyndie. In the mid-season finale, "Hearts Still Beating '', Tara arrives outside Rick 's house to give Olivia (Ann Mahoney) Denise 's lemonade at Negan 's request. Tara comforts Olivia for having to face Negan. Later, Tara, Rosita, Carl, and other Alexandrians watch the exchange between Negan and Spencer over a game of pool as Spencer tries to convince Negan to kill Rick and put him in charge. As Negan brutally murders Spencer for dislike of his weak abilities, Tara watches in shock with everyone else. Soon after, Rosita pulls out her gun and tries to shoot Negan, only to miss and hit Lucille, his beloved baseball bat, causing him to rage and threaten Rosita to which Tara is visually distressed about. Tara then watches Olivia get shot in the face by Arat, one of Negan 's soldiers after Rosita lies continuously about who made the bullet she shot Lucille with, resulting in Negan ordering Arat to kill someone of her choice, furthering Tara 's distress. After this, Negan continues to demand to know who made the bullet. Tara lies and says it was her briefly before Eugene admits it was him and is taken away.
In the episode "New Best Friends '', Tara is part of the group who meets the Scavengers while Rick negotiates a deal with them to fight the Saviors. She is dismayed to see Rosita becoming more restless and eager to fight while Tara advises patience. In the episode "Say Yes '', Tara is shown to be conflicted as she knows that Oceanside has the numbers and weapons to make a difference in the fight, but does not want to break her promise to Cyndie. She also knows that if Rick and the others go to Oceanside, it will most likely lead to a fight. She ultimately comes to Rick at the end of the episode and states she there is something she needs to tell him. In the episode "Something They Need '', it is shown that Tara told Rick and the others about Oceanside and their considerable firepower, causing them to form a plan to ambush the community and take the weapons. Tara infiltrates the community and attempts to convince Natania and Cyndie to join them and fight rather than hide. When they refuse, Tara is forced to allow Rick 's plan to take the community hostage. Natania manages to disarm Tara and holds her at gunpoint, demanding they all leave. Rick refuses and states they are taking the guns one way or another. The tensions are halted when a herd of walkers converges on them, forcing the groups to work together. When Oceanside still refuses to aid them, the group leaves with their weapons (although Tara promises to return them once the fighting is done). After thanking Cydnie once more for her help, Rick approaches Tara and reminds her that she does not have to feel guilty. Tara responds that she knows that and does not anymore. Tara returns to Alexandria with Rick and the others, to find Rosita waiting for them. She explains that Dwight is in their cell.
In the season finale, "The First Day of the Rest of Your Life '', Tara encourages Daryl to kill Dwight as revenge for Denise 's death, but Daryl resists. Later, Tara is shown to be disappointed in Rick and Daryl 's decision to trust Dwight. At the battle with Negan the following day, when Rosita is shot, Tara helps her to safety. Tara is later seen at Rosita 's bedside while she heals from her injuries.
Masterson was promoted to the main cast for the renewed fifth season. As of the second episode of the seventh season, her name appears in the opening credits.
For the episode "Crossed '' in the fifth season, Zack Handlen, writing for The A.V. Club commented positively on the character, saying, "Tara is pretty great. ''
For the episode "Swear '', the character of Tara was mostly well received. Matt Fowler commented that, "Tara still needs a bit of work from a character standpoint, but at least her conviction that all the murders her crew committed were justified more or less fits with her as someone who was part of the Governor 's assault on Rick 's prison ''. He was also skeptical about her decision to lie about the community. Zack Handlen, writing for The AV Club was more complimentary on the ending scene, calling it "a rare example of a character actually making a difficult but responsible moral choice! ''. He praised Tara in the hour saying, "As maybe the closest thing to self - aware comic relief the show has left, Tara remains likable enough '' and praised the "moment of selflessness and faith '' in not speaking of Oceanside which "generates one of the few moments of legitimate tension in the whole hour. '' Despite this, he was critical of her decision to lie to members of Oceanside saying, "I like Tara, but it 's harder to root for her when she tells such hilariously stupid lies. Her decision to talk about Rick and company murdering a bunch of Negan 's men also seems like a bad call. ("You should totally trust my group! We 're good at killing! '') ''
Masterson 's performance received a mixed response from critics. Jacob Stolworthy for The Independent was complimentary of Masterson 's portrayal of Tara saying, "Granted, if fans were told they 'd be getting an episode dedicated to Tara upon her introduction in season four, eyebrows would have been raised. But it 's through this character - played with a refreshing charm by Masterson (whose pregnancy is to account for her lack of presence) - that we meet yet another new community ''. Conversely, Shane Ryan for Paste Magazine was extremely critical of Masterson 's performance. He went further to say, "I went from thinking this was an episode about a couple of badass tropical killers to realizing the mysterious body washed up on shore belonged to Tara... that was the worst kind of gut punch. '' In contrast to Stolworthy, Ryan disliked the humor in the episode saying, "Every single "funny '' bit of dialogue Tara uttered under stress was painfully unfunny. Hire a comedy writer, Walking Dead. Your shit needs a punch - up. ''
Ron Hogan for Den of Geek was complimentary of Tara 's humor, saying, "Alanna Masterson has some good comedic sensibilities (...) However, I just do n't feel like Tara 's a strong enough character to carry an entire episode, and neither she nor Heath have been developed enough to disappear for two months ' worth of episodes stretched across two seasons. '' Conversely, Jeff Stone for IndieWire liked the decision to focus on Tara for an episode saying, "Tara has always been a sentimental favorite of mine, with her humorous streak and unwillingness to be a full - blown Ricketeer stormtrooper. It 's nice to have her back. ''
Some critics felt the characterization of the core group of survivors, including Tara, was off in "Something They Need ''. Ron Hagan for Den of Geek! said, "Rick and Tara finally discuss the presence of the Seaside Motel group, and that means he 's ready to go wage a full - fledged assault on a group of women and children, blowing up dynamite outside their walls, drawing the attention of zombies in the area, and then taking all their guns away to fight his own battle. And yes, that 's the hero of the story. '' He was relieved that the cliffhanger involving Sasha in the previous week was not stretched out to the finale. Zack Handlen for The A.V. Club had a similar perspective on raiding Oceanside. He said, "The fact that Tara not only signed off on this plan, but also seems to be one hundred percent behind it, is at odds with everything we know about her. However much she 's supposed to believe in Rick now (and clearly, she 's supposed to believe in him a lot), for her to willingly go in on such an openly aggressive scheme is bizarre. This is n't "we 're going to talk, and see what happens next. ''
|
where would the most high energy photons be found in the sun | Solar core - wikipedia
The core of the Sun is considered to extend from the center to about 0.2 to 0.25 of solar radius. It is the hottest part of the Sun and of the Solar System. It has a density of 150 g / cm3 (150 times the density of liquid water) at the center, and a temperature of 27 million degrees Fahrenheit (15 million degrees Celsius, 15 million Kelvin). The core is made of hot, dense plasma (ions and electrons), at a pressure estimated at 265 billion bar (3.84 trillion psi or 26.5 peta pascals (PPa)) at the center. Due to fusion, the composition of the solar plasma drops from 68 - 70 % hydrogen by mass at the outer core, to 33 % hydrogen at the core / Sun center.
The core inside 0.20 of the solar radius contains 34 % of the Sun 's mass, but only 0.8 % of the Sun 's volume. Inside 0.24 solar radius, the core generates 99 % of the fusion power of the Sun. There are two distinct reactions in which four hydrogen nuclei may eventually result in one helium nucleus: the proton - proton chain reaction -- which is responsible for most of the Sun 's released energy -- and the CNO cycle.
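The 0.8 % volume figure follows directly from cubing the fractional radius (a simple geometric check rather than an independent measurement):
\frac{V_{\text{core}}}{V_{\odot}} = \left(0.20\right)^{3} = 0.008 = 0.8\,\%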
The Sun at the photosphere is about 73 - 74 % by mass hydrogen, which is the same composition as the atmosphere of Jupiter, and the primordial composition of hydrogen and helium at the earliest star formation after the Big Bang. However, as depth into the Sun increases, fusion decreases the fraction of hydrogen. Traveling inward, hydrogen mass fraction starts to decrease rapidly after the core radius has been reached (it is still about 70 % at a radius 25 % of the Sun 's radius) and inside this, the hydrogen fraction drops rapidly as the core is traversed, until it reaches a low of about 33 % hydrogen, at the Sun 's center (radius zero). All but 2 % of the remaining plasma mass (i.e., 65 %) is helium, at the center of the Sun.
Approximately 3.6 × 10^38 protons (hydrogen nuclei), or roughly 600 million metric tons of hydrogen, are converted into helium nuclei every second, releasing energy at a rate of 3.86 × 10^26 joules per second.
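A rough consistency check on that energy output can be made with E = mc^2 (an order - of - magnitude sketch, not a figure from this article):
\frac{dm}{dt} \approx \frac{3.86\times10^{26}\ \mathrm{J/s}}{(3.0\times10^{8}\ \mathrm{m/s})^{2}} \approx 4.3\times10^{9}\ \mathrm{kg/s}
that is, about 4.3 million metric tons of mass are converted to energy every second -- a small fraction of the total mass of hydrogen being fused in that time.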
The core produces almost all of the Sun 's heat via fusion: the rest of the star is heated by the outward transfer of heat from the core. The energy produced by fusion in the core, except a small part carried out by neutrinos, must travel through many successive layers to the solar photosphere before it escapes into space as sunlight, kinetic or thermal energy of particles. The energy conversion per unit time (power) of fusion in the core varies with distance from the solar center. At the center of the Sun, fusion power is estimated by models to be about 276.5 watts / m3. Despite its intense temperature, the peak power generating density of the core overall is similar to an active compost heap, and is lower than the power density produced by the metabolism of an adult human. The Sun is much hotter than a compost heap due to the Sun 's enormous volume.
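To make that comparison concrete (the ~100 W resting output and ~0.07 m3 body volume are assumed typical values, not figures from this article):
\frac{P_{\text{human}}}{V_{\text{human}}} \approx \frac{100\ \mathrm{W}}{0.07\ \mathrm{m^{3}}} \approx 1400\ \mathrm{W/m^{3}} \gg 276.5\ \mathrm{W/m^{3}}
so a resting adult generates several times more power per unit volume than the center of the Sun; the Sun 's enormous total output comes from the sheer volume over which even this modest power density applies.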
The low power outputs occurring inside the fusion core of the Sun may also be surprising, considering the large power which might be predicted by a simple application of the Stefan -- Boltzmann law for temperatures of 10 to 15 million kelvin. However, layers of the Sun are radiating to outer layers only slightly lower in temperature, and it is this difference in radiation powers between layers which determines net power generation and transfer in the solar core.
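A simplified picture of why so little escapes (treating adjacent layers as blackbodies at temperatures T_1 and T_2, an idealization not taken from this article):
q_{\text{net}} = \sigma\left(T_{1}^{4} - T_{2}^{4}\right) \approx 4\,\sigma\,T^{3}\,\Delta T \quad \text{for } \Delta T \ll T
Although \sigma T^{4} is astronomically large at 10 to 15 million kelvin, the temperature difference \Delta T between neighbouring layers is tiny, so the net outward flux -- the difference of two nearly equal terms -- is small.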
At 19 % of the solar radius, near the edge of the core, temperatures are about 10 million kelvin and the fusion power density is 6.9 W / m3, which is about 2.5 % of the maximum value at the solar center. The density here is about 40 g / cm3, or about 27 % of that at the center. Some 91 % of the solar energy is produced within this radius. Within 24 % of the radius (the outer "core '' by some definitions), 99 % of the Sun 's power is produced. Beyond 30 % of the solar radius, where the temperature is 7 million K and the density has fallen to 10 g / cm3, the rate of fusion is almost nil. There are two distinct reaction sequences by which four hydrogen nuclei may eventually result in one helium nucleus: the "proton - proton chain reaction '' and the "CNO cycle '' (see below).
The first reaction sequence, in which four hydrogen nuclei may eventually result in one helium nucleus, is known as the proton - proton chain:
\begin{aligned}
{}^{1}\mathrm{H} + {}^{1}\mathrm{H} &\rightarrow {}^{2}\mathrm{D} + e^{+} + \nu \\
\text{then}\quad {}^{2}\mathrm{D} + {}^{1}\mathrm{H} &\rightarrow {}^{3}\mathrm{He} + \gamma \\
\text{then}\quad {}^{3}\mathrm{He} + {}^{3}\mathrm{He} &\rightarrow {}^{4}\mathrm{He} + {}^{1}\mathrm{H} + {}^{1}\mathrm{H}
\end{aligned}
This reaction sequence is thought to be the most important one in the solar core. The characteristic time for the first reaction is about one billion years even at the high densities and temperatures of the core, due to the necessity for the weak force to cause beta decay before the nucleons can adhere (which rarely happens in the time they tunnel toward each other, to be close enough to do so). The time that deuterium and helium - 3 in the next reactions last, by contrast, are only about 4 seconds and 400 years. These later reactions proceed via the nuclear force and are thus much faster. The total energy released by these reactions in turning 4 hydrogen atoms into 1 helium atom is 26.7 MeV.
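The 26.7 MeV figure can be recovered from the mass deficit of the overall reaction (a standard textbook calculation using atomic masses, not shown in this article):
\Delta m = 4\,m({}^{1}\mathrm{H}) - m({}^{4}\mathrm{He}) \approx 4(1.00783\ \mathrm{u}) - 4.00260\ \mathrm{u} \approx 0.0287\ \mathrm{u}, \qquad E = \Delta m\,c^{2} \approx 0.0287 \times 931.5\ \mathrm{MeV} \approx 26.7\ \mathrm{MeV}
a small part of which is carried away by the neutrinos rather than deposited as heat.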
The second reaction sequence, in which four hydrogen nuclei may eventually result in one helium nucleus, is called the carbon - nitrogen - oxygen cycle -- or "CNO cycle '' for short -- and generates less than 10 % of the total solar energy. It involves carbon atoms which are not consumed in the overall process. The details of this "carbon cycle '' are as follows:
\begin{aligned}
{}^{12}\mathrm{C} + {}^{1}\mathrm{H} &\rightarrow {}^{13}\mathrm{N} + \gamma \\
\text{then}\quad {}^{13}\mathrm{N} &\rightarrow {}^{13}\mathrm{C} + e^{+} + \nu \\
\text{then}\quad {}^{13}\mathrm{C} + {}^{1}\mathrm{H} &\rightarrow {}^{14}\mathrm{N} + \gamma \\
\text{then}\quad {}^{14}\mathrm{N} + {}^{1}\mathrm{H} &\rightarrow {}^{15}\mathrm{O} + \gamma \\
\text{then}\quad {}^{15}\mathrm{O} &\rightarrow {}^{15}\mathrm{N} + e^{+} + \nu \\
\text{then}\quad {}^{15}\mathrm{N} + {}^{1}\mathrm{H} &\rightarrow {}^{12}\mathrm{C} + {}^{4}\mathrm{He} + \gamma
\end{aligned}
The cycle then repeats, with the carbon - 12 nucleus recovered at the end of each pass and reused as a catalyst.
The rate of nuclear fusion depends strongly on density. Therefore, the fusion rate in the core is in a self - correcting equilibrium: a slightly higher rate of fusion would cause the core to heat up more and expand slightly against the weight of the outer layers. This would reduce the fusion rate and correct the perturbation; and a slightly lower rate would cause the core to cool and shrink slightly, increasing the fusion rate and again reverting it to its present level.
However the Sun gradually becomes hotter during its time on the main sequence, because the helium atoms in the core are denser than the hydrogen atoms they were fused from. This increases the gravitational pressure on the core which is resisted by a gradual increase in the rate at which fusion occurs. This process speeds up over time as the core gradually becomes denser. It is estimated that the sun has become 30 % brighter in the last four and a half billion years and will continue to increase in brightness by 1 % every 100 million years.
The high - energy photons (gamma rays) released in fusion reactions take indirect paths to the Sun 's surface. According to current models, random scattering from free electrons in the solar radiative zone (the zone within 75 % of the solar radius, where heat transfer is by radiation) sets the photon diffusion time scale (or "photon travel time '') from the core to the outer edge of the radiative zone at about 170,000 years. From there they cross into the convective zone (the remaining 25 % of distance from the Sun 's center), where the dominant transfer process changes to convection, and the speed at which heat moves outward becomes considerably faster.
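The 170,000 - year figure is broadly consistent with a simple random - walk estimate (an order - of - magnitude sketch; the photon mean free path \ell of a fraction of a millimetre is an assumed value, not one given in this article):
t \sim \frac{R^{2}}{\ell\,c} \approx \frac{(5\times10^{8}\ \mathrm{m})^{2}}{(2\times10^{-4}\ \mathrm{m})(3\times10^{8}\ \mathrm{m/s})} \approx 4\times10^{12}\ \mathrm{s} \approx 10^{5}\ \text{years}
where R is roughly the 0.75 R_\odot extent of the radiative zone; a photon travelling the same distance in a straight line would take under two seconds.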
In the process of heat transfer from core to photosphere, each gamma ray in the Sun 's core is converted during scattering into several million visible light photons before escaping into space. Neutrinos are also released by the fusion reactions in the core, but unlike photons they very rarely interact with matter, so almost all are able to escape the Sun immediately. For many years measurements of the number of neutrinos produced in the Sun were much lower than theories predicted, a problem which was recently resolved through a better understanding of neutrino oscillation.
|
who was the most decorated american soldier of the vietnam war | Joe Hooper (Medal of Honor) - wikipedia
Joe Ronnie Hooper (August 8, 1938 -- May 6, 1979) was an American who served in both the United States Navy and the United States Army, where he finished his career as a captain. He was awarded the Medal of Honor for his actions as an army staff sergeant on February 21, 1968, during the Vietnam War. He was one of the most decorated U.S. soldiers of the war and was wounded in action eight times.
Hooper was born on August 8, 1938 in Piedmont, South Carolina. His family moved when he was a child to Moses Lake, Washington where he attended Moses Lake High School.
Hooper enlisted in the United States Navy in December 1956. After graduation from boot camp at San Diego, California he served as an Airman aboard USS Wasp (CV - 18) and USS Hancock (CV - 19). He was honorably discharged in July 1959, shortly after being advanced to petty officer third class.
Hooper enlisted in the United States Army in May 1960 as a private first class, and attended Basic Training at Fort Ord, California. After graduation, he volunteered for Airborne School at Fort Benning, Georgia and then was assigned to Company C, 1st Airborne Battle Group, 325th Infantry, 82nd Airborne Division at Fort Bragg, North Carolina and was promoted to corporal during his assignment. He then served a tour of duty in South Korea with the 20th Infantry in October 1961 and shortly after arriving he was promoted to sergeant and was made a squad leader. He left Korea in November 1963 and was assigned to the 2nd Armored Division at Fort Hood, Texas for a year as a squad leader and then became a squad leader with Company D, 2nd Battalion (Airborne), 502nd Infantry, 101st Airborne Division at Fort Campbell, Kentucky. He was promoted to staff sergeant in September 1966 and volunteered for service in South Vietnam. Instead he was assigned as a platoon sergeant in Panama with the 3rd Battalion (Airborne), 508th Infantry, first with HQ Company and later with Company B.
Hooper could n't stay out of trouble and suffered several Article 15 hearings, being reduced to the rank of corporal in July 1967. He was promoted once again to sergeant in October 1967, and was assigned to Company D, 2nd Battalion (Airborne), 501st Infantry, 101st Airborne Division at Fort Campbell and deployed with the division to Vietnam in December as a squad leader. During his tour of duty with Delta Company (Delta Raiders), 2nd Battalion (Airborne), 501st Airborne Infantry, 101st Airborne Division, he was recommended for the Medal of Honor for his heroic actions on February 21, 1968 outside of Hue.
He returned from Vietnam and was discharged in June 1968. He reenlisted in the Army the following September, and served as a public relations specialist. On March 7, 1969, he was presented the Medal of Honor by President Richard Nixon during a ceremony in the White House. From July 1969 to August 1970, he served as a platoon sergeant with the 3rd Battalion, 5th Infantry in Panama. He managed to finagle a second tour in Vietnam; from April to June 1970, he served as a pathfinder with the 101st Aviation Group, 101st Airborne Division (Airmobile), and from June to December 1970, he served as a platoon sergeant with Company A, 2nd Battalion, 327th Infantry, 101st Airborne Division (Airmobile). In December 1970, he received a direct commission to second lieutenant and served as a platoon leader with Company A, 2nd Battalion, 501st Infantry, 101st Airborne Division (Airmobile) until April 1971.
Upon his return to the United States, he attended the Infantry Officer Basic Course at Fort Benning and was then assigned as an instructor at Fort Polk, Louisiana. Despite wanting to serve twenty years in the Army, Hooper was made to retire in February 1974 as a first lieutenant, mainly because he had only completed a handful of college courses beyond his GED. As soon as he was released from active duty, he joined a unit of the Army Reserve 's 12th Special Forces Group (Airborne) in Washington as a Company Executive Officer. In February 1976, he transferred to the 104th Division (Training), also based in Washington. He was promoted to captain in March 1977. He attended drills only intermittently and was separated from the service in September 1978.
For his service in Vietnam, the U.S. Army also awarded Hooper two Silver Stars, six Bronze Stars, eight Purple Hearts, the Presidential Unit Citation, the Vietnam Service Medal with six campaign stars, and the Combat Infantryman Badge. He is credited with 115 enemy killed in ground combat, 22 of which occurred on February 21, 1968. He became one of the most decorated soldiers in the Vietnam War, and was one of three soldiers who were wounded in action eight times in the war.
Rumors persist that he became distressed by the anti-war politics of the time and took to excessive drinking which contributed to his death. He died of a cerebral hemorrhage in Louisville, Kentucky on May 6, 1979, at the age of 40.
Hooper is buried at Arlington National Cemetery in Section 46, adjacent to the Memorial Amphitheater.
Hooper 's military decorations and awards include:
Army Presidential Unit Citation
Vietnam Presidential Unit Citation
Republic of Vietnam Gallantry Cross Unit Citation
Republic of Vietnam Civil Actions Unit Citation
Vietnam Parachutist Badge
Rank and organization: Staff Sergeant, U.S. Army, Company D, 2d Battalion (Airborne), 501st Infantry, 101st Airborne Division. Place and date: Near Hue, Republic of Vietnam, February 21, 1968. Entered service at: Los Angeles, Calif. Born: August 8, 1938, Piedmont, S.C.
|
who was the first to host the world cup twice | FIFA World Cup hosts - wikipedia
Seventeen countries have been FIFA World Cup hosts in the competition 's twenty tournaments since the inaugural World Cup in 1930. The organization at first awarded hosting to countries at meetings of FIFA 's congress. The choice of location was controversial in the earliest tournaments, given the three - week boat journey between South America and Europe, the two centers of strength in football at the time.
The decision to hold the first cup in Uruguay, for example, led to only four European nations competing. The next two World Cups were both held in Europe. The decision to hold the second of these, the 1938 FIFA World Cup, in France was controversial, as the South American countries had been led to understand that the World Cup would rotate between the two continents.
Both Argentina and Uruguay thus boycotted the tournament. The first tournament following World War II, held in Brazil in 1950, had three teams withdraw for either financial problems or disagreements with the organization.
To avoid any future boycotts or controversy, FIFA began a pattern of alternation between the Americas and Europe, which continued until the 2002 FIFA World Cup in Asia. The system evolved so that the host country is now chosen in a vote by FIFA 's Congress. This is done under an exhaustive ballot system. The decision is currently made roughly seven years in advance of the tournament, though the hosts for the 2022 tournament were chosen at the same time as those for the 2018 tournament.
Only Mexico, Italy, France, Germany (West Germany until shortly after the 1990 World Cup) and Brazil have hosted the event on two occasions. Mexico City 's Estadio Azteca and Rio de Janeiro 's Maracanã are the only venues ever to have hosted two FIFA World Cup finals. Only the 2002 FIFA World Cup had more than one host, being split between Japan and South Korea.
Upon the selection of Canada -- Mexico -- United States bid for the 2026 FIFA World Cup, the tournament will be the first to be hosted by more than two countries. Mexico becomes the first country to host three men 's World Cups and its Estadio Azteca, should it be selected, will become the first stadium to stage three World Cup tournaments.
Except in 1934, host nations have been granted an automatic spot in the World Cup group stages. The first host ever to fail to advance past the first round was South Africa in 2010.
Bids:
Before the FIFA Congress could vote on the first - ever World Cup host, a series of withdrawals led to the election of Uruguay. The Netherlands and Hungary withdrew, followed by Sweden withdrawing in favour of Italy. Then both Italy and Spain withdrew, in favour of the only remaining candidate, Uruguay. The FIFA Congress met in Barcelona, Spain on 18 May 1929 to ratify the decision, and Uruguay was chosen without a vote.
Results:
Notice that the celebration of the first World Cup coincided with the centennial anniversary of the first Constitution of Uruguay. For that reason, the main stadium built in Montevideo for the World Cup was named Estadio Centenario.
Bids:
Sweden decided to withdraw before the vote, allowing the only remaining candidate Italy to take the hosting job for the 1934 World Cup. The decision was ratified by the FIFA Congress in Stockholm, Sweden and Zürich, Switzerland on 14 May 1932. The Italian Football Federation accepted the hosting duties on 9 October 1932.
Results:
Bids:
Without any nations withdrawing their bids, the FIFA Congress convened in Berlin, Germany on 13 August 1936 to decide the next host. Electing France took only one ballot, as France had more than half of the votes in the first round.
Results:
Bids for 1942:
The FIFA election of a host was cancelled following the outbreak of the Second World War in September 1939.
Bids for 1946:
Bid:
Brazil, Argentina, and Germany had officially bid for the 1942 World Cup, but the Cup was cancelled after the outbreak of World War II. The 1950 World Cup was originally scheduled for 1949, but the day after Brazil was selected by the FIFA Congress on 26 July 1946 in Luxembourg City, Luxembourg, the World Cup was rescheduled for 1950.
Result:
Bid:
The 1954 World Cup hosting duty was decided on 26 July 1946, the same day that Brazil was selected for the 1949 World Cup, in Luxembourg City. On 27 July, the FIFA Congress pushed back the 5th World Cup finals by three years, deciding it should take place in 1954.
Result:
Bid:
Argentina, Chile, Mexico, and Sweden expressed interest in hosting the tournament. Swedish delegates lobbied other countries at the FIFA Congress held in Rio de Janeiro around the opening of the 1950 World Cup finals. Sweden was awarded the 1958 tournament unopposed on 23 June 1950.
Result:
Bids:
West Germany withdrew before the vote, which took place in Lisbon, Portugal on 10 June 1956, leaving two remaining bids. In one round of voting, Chile won over Argentina.
Results:
Bids:
Spain withdrew from the bidding prior to voting by the FIFA Congress, held in Rome, Italy on 22 August 1960. Again, there was only one round of voting, with England defeating West Germany.
Results:
Bids:
The FIFA Congress convened in Tokyo, Japan on 8 October 1964. One round of voting saw Mexico win the hosting duties over Argentina.
Results:
Three hosts for the 1974, 1978, and 1982 World Cups were chosen in London, England on 6 July 1966 by the FIFA Congress. Spain and West Germany, both facing each other in the running for hosting duties for the 1974 and 1982 World Cups, agreed to give one another a hosting job. Germany withdrew from the 1982 bidding process while Spain withdrew from the 1974 bidding process, essentially guaranteeing each a hosting spot. Mexico, who had won the 1970 hosting bid over Argentina just two years prior, agreed to withdraw and let Argentina take the 1978 hosting position.
Bid:
Host voting, handled by the then - FIFA Executive Committee (or Exco), met in Stockholm on 9 June 1974 and ratified the unopposed Colombian bid.
Result:
However, Colombia withdrew after being selected to host the World Cup for financial problems on 5 November 1982, less than four years before the event was to start. A call for bids was sent out again, and FIFA received intent from three nations:
In Zürich on 20 May 1983, Mexico won the bidding unanimously as voted by the Executive Committee, for the first time in FIFA World Cup bidding history (except those nations who bid unopposed).
Results:
Bids:
Except Italy and the Soviet Union, all nations withdrew before the vote, which was to be conducted by Exco in Zürich on 19 May 1984. Once again, only one round of voting was required, as Italy won more votes than the Soviet Union.
Results:
Bids:
Despite having three nations bidding, voting only took one round. The vote was held in Zürich (for the third straight time) on 4 July 1988. The United States gained a majority of votes of the Exco members.
Results:
Bids:
This vote was held in Zürich for the fourth straight time on 1 July 1992. Only one round of voting was required to have France assume the hosting job over Morocco.
Result:
Bids:
On 31 May 1996, the hosting selection meeting was held in Zürich for the fifth straight time. A joint bid was formed between Japan and South Korea, and the bid was "voted by acclamation '', an oral vote without ballots. The first joint bid of the World Cup was approved, edging out Mexico.
Results:
The 2002 FIFA World Cup was co-hosted in Asia for the first time by South Korea and Japan (the final was held in Japan). Initially, the two Asian countries were competitors in the bidding process. But just before the vote, they agreed with FIFA to co-host the event. However, the rivalry and distance between them led to organizational and logistical problems. FIFA has said that co-hosting is not likely to happen again, and in 2004 officially stated that its statutes did not allow co-hosting bids.
Bids:
On 6 July 2000, the host selection meeting was held for the sixth straight time in Zürich. Brazil withdrew its bid three days before the vote, and the field was narrowed to four. This was the first selection in which more than one round of voting was required; three rounds were eventually needed. Germany was at least tied for first in each of the three rounds, and ended up defeating South Africa by only one vote after an abstention (see below).
The controversy over the decision to award the 2006 FIFA World Cup to Germany led to a further change in practice. The final tally was 12 votes to 11 in favour of Germany over the contenders South Africa, who had been favorites to win. New Zealand FIFA member Charlie Dempsey, who was instructed to vote for South Africa by the Oceania Football Confederation, abstained from voting at the last minute. If he had voted for the South African bid, the tally would have been 12 -- 12, giving the decision to FIFA President Sepp Blatter, who, it was widely believed, would then have voted for South Africa.
Dempsey was among eight members of the Executive Committee to receive a fax from the editors of the German satirical magazine Titanic on Wednesday, the night before the vote, promising a cuckoo clock and Black Forest ham in exchange for voting for Germany. He argued that the pressure from all sides, including "an attempt to bribe '' him, had become too much for him.
Consequently, on 4 August 2000, FIFA decided to rotate the hosting of the final tournaments between its constituent confederations. This lasted until October 2007, during the selection of the host for the 2014 FIFA World Cup, when FIFA announced that it would no longer continue with its continental rotation policy (see below).
Bids:
The first World Cup bidding process under continental rotation (the process of rotating hosting of the World Cup to each confederation in turn) was for the 2010 FIFA World Cup, the first World Cup to be held in Africa. On 7 July 2001, during the FIFA Congress in Buenos Aires, the decision was ratified that the rotation would begin in Africa. On 23 September 2002, FIFA 's Executive Committee confirmed that only African member associations would be invited to submit bids to host the 2010 FIFA World Cup.
In January 2003, Nigeria entered the bidding process, but withdrew its bid in September. In March 2003, Sepp Blatter initially said Nigeria 's plan to co-host the 2010 FIFA World Cup with four African countries would not work. Nigeria had originally hoped to bid jointly with West African neighbours Benin, Ghana, and Togo.
After it was confirmed by FIFA that joint bidding would not be allowed in the future, Libya and Tunisia withdrew both of their bids on 8 May 2004. On 15 May 2004 in Zürich (the seventh consecutive time that a host selection has been made there), South Africa, after a narrow loss in the 2006 bidding, defeated perennial candidate Morocco to host, 14 votes to 10. Egypt received no votes.
On 28 May 2015, media covering the 2015 FIFA corruption case reported that high - ranking officials from the South African bid committee had secured the right to host the World Cup by paying US $10 million in bribes to then - FIFA Vice President Jack Warner and to other FIFA Executive Committee members.
On 4 June 2015, FIFA executive Chuck Blazer, having co-operated with the FBI and the Swiss authorities, confirmed that he and other members of FIFA 's executive committee were bribed in order to promote the South African bids for the 1998 and 2010 World Cups. Blazer stated, "I and others on the FIFA executive committee agreed to accept bribes in conjunction with the selection of South Africa as the host nation for the 2010 World Cup. ''
On 6 June 2015, The Daily Telegraph reported that Morocco had received the most votes, but South Africa was awarded the tournament instead.
Bids:
FIFA continued its continental rotation procedure by earmarking the 2014 World Cup for South America. FIFA initially indicated that it might back out of the rotation concept, but later decided to continue it through the 2014 host decision, after which it was dropped.
Colombia had expressed interest in hosting the 2014 World Cup, but withdrew, instead undertaking the 2011 FIFA U-20 World Cup. Brazil also expressed interest in hosting the World Cup. CONMEBOL, the South American Football Confederation, indicated its preference for Brazil as host. Brazil was the only nation to submit a formal bid when the official bidding procedure for CONMEBOL member associations was opened in December 2006, as by that time Colombia, Chile and Argentina had already withdrawn, and Venezuela was not allowed to bid.
Brazil made the first unopposed bid since the initial selection of the 1986 FIFA World Cup (when Colombia was selected as host, but later withdrew due to financial problems). The FIFA Executive Committee confirmed it as the host country on 30 October 2007 by a unanimous decision.
Result:
2018 Bids:
2022 Bids:
FIFA announced on 29 October 2007 that it would no longer continue with its continental rotation policy, implemented after the 2006 World Cup host selection. The new host selection policy is that any country may bid for a World Cup, provided that its continental confederation has not hosted either of the past two World Cups. For the 2018 World Cup bidding process, this meant that bids from Africa and South America were not allowed.
For the 2022 World Cup bidding process, this meant that bids from South America and Europe were not allowed. Also, FIFA formally allowed joint bids once more (after they were banned in 2002), because there was only one organizing committee per joint bid, unlike Korea -- Japan, which had two different organizing committees. Countries that announced their interest included Australia, England, Indonesia, Japan, Qatar, Russia, South Korea, United States, the joint bid of Spain and Portugal and the joint bid of Belgium and Netherlands.
The hosts for both World Cups were announced by the FIFA Executive Committee on 2 December 2010. Russia was selected to host the 2018 FIFA World Cup, marking the first time that the World Cup would be hosted in Eastern Europe and making Russia the biggest country geographically to host the World Cup. Qatar was selected to host the 2022 FIFA World Cup, marking the first time a World Cup would be held in the Arab World and only the second time in Asia, after the 2002 tournament in South Korea and Japan. The decision also made Qatar the smallest country geographically to host the World Cup.
Bids:
Under FIFA rules as of 2016, the 2026 Cup could not be in either Europe (UEFA) or Asia (AFC), leaving an African (CAF) bid, a North American (CONCACAF) bid, a South American (CONMEBOL) bid or an Oceanian (OFC) bid as other possible options. In March 2017, FIFA 's president Gianni Infantino confirmed that "Europe (UEFA) and Asia (AFC) had been excluded from the bidding following the selection of Russia and Qatar in 2018 and 2022 respectively. ''
The bidding process was originally scheduled to start in 2015, with the appointment of hosts scheduled for the FIFA Congress on 10 May 2017 in Kuala Lumpur, Malaysia. On 10 June 2015, FIFA announced that the bid process for the 2026 FIFA World Cup was postponed. However, following the FIFA Council meeting on 10 May 2016, a new bid schedule was announced for May 2020 as the last in a four - phase process.
On 14 October 2016, FIFA said it would accept a tournament - sharing bid by CONCACAF members Canada, Mexico and the United States.
On 10 April 2017, Canada, the United States, and Mexico announced their intention to submit a joint bid to co-host, with three - quarters of the games to be played in the U.S., including the final.
On 11 August 2017, Morocco officially announced a bid to host.
Therefore, the official 2026 FIFA World Cup bids came from two football confederations. The first was from CONCACAF, a joint bid by Canada, the United States and Mexico, and the second was from CAF, with a bid by Morocco.
The host was announced on 13 June 2018 at the 68th FIFA Congress in Moscow, Russia. The United Bid from Canada, Mexico and the United States was selected over the Morocco bid by 134 votes to 65, with 1 selecting neither and 3 abstentions. This will be the first World Cup to be hosted by more than two countries. Mexico becomes the first country to host three men 's World Cups, and its Estadio Azteca, if selected, will become the first stadium to stage three World Cup tournaments. On the other hand, Canada becomes the fifth country to host both the men 's and women 's World Cups, after Sweden (Men 's: 1958 / Women 's: 1995), United States (Men 's: 1994 / Women 's: 1999, 2003), Germany (Men 's: 1974, 2006 / Women 's: 2011), and France (Men 's: 1938, 1998 / Women 's: 2019). The United States becomes the first country to host both the men 's and women 's World Cups twice each.
The bidding process for the World Cup is yet to start. Two early bids for the 2030 FIFA World Cup - the centennial World Cup - have, however, been proposed. The first is a proposed joint bid from Uruguay and Argentina, put forward by the Argentine Football Association and the Uruguayan Football Association. The second is a proposed bid by the Football Association of England. Under FIFA rules as of 2017, the 2030 World Cup can not be held in Asia (AFC) because the Asian Football Confederation is excluded from the bidding following the selection of Qatar in 2022. Also in June 2017, UEFA 's president Aleksander Čeferin stated that Europe (UEFA) would definitely fight for its right to host the 2030 World Cup.
The announcement of the Uruguay -- Argentina proposed bid did not itself coincide with the centennial anniversary of the first FIFA World Cup final or the bicentennial of the first Constitution of Uruguay, but if selected the tournament dates would coincide with both. The joint bid was officially announced by the Argentine Football Association and the Uruguayan Football Association on 29 July 2017. Before Uruguay and Argentina played out a goalless draw in Montevideo, FC Barcelona players Luis Suárez and Lionel Messi promoted the bid with commemorative shirts. On 31 August 2017, it was suggested Paraguay would join as a third host. CONMEBOL, the South American confederation (which can not bid under current FIFA rules), confirmed the joint three - way bid in September 2017.
English FA vice chairman David Gill has proposed that his country could bid for 2030, provided the bidding process is made more transparent. "England is one of few countries that could stage even a 48 - nation event in its entirety, while Football Association chief executive Martin Glenn made it clear earlier this year bidding for 2030 was an option. '' In June 2017, UEFA stated that "it would support a pan-British bid for 2030 or even a single bid from England. '' Moreover, a possible English bid for 2030 was also backed by the German Football Association. On 15 July 2018, Deputy Leader of the UK Labour Party, Tom Watson, said in an interview that he and his party backed a 2030 World Cup bid for the UK saying that "I hope it 's one of the first things a Labour government does, which is work with our FA to try and put a World Cup bid together. '' On 16 July 2018, British Prime Minister Theresa May expressed her support of the bid and her openness about discussions with football authorities. Although there had been no prior discussion with the Football Association, the Scottish FA also expressed its interest about joining a Home Nations bid. Former Scottish First Minister Henry McLeish has called the Scottish government and the Scottish Football Association to bid for the 2030 FIFA World Cup with the other British nations.
On 17 June 2018, the Royal Moroccan Football Federation announced its intention to co-bid for the 2030 World Cup. There are two possible joint bids: one with Tunisia and Algeria, and the other with two UEFA members, Spain and Portugal.
On 17 June 2018, the English Football Association announced that it was in talks with the other home nations over a UK - wide bid to host the 2030 World Cup. On 1 August 2018, it was reported that the FA was preparing a bid for England to host the World Cup in 2030. A decision is expected to be made in 2019, after the FA conducts feasibility work on a potential bid.
On 10 July 2018, Egypt 's Sports Minister expressed interest in bidding to host.
Confirmed plan to bid:
Potential bid:
Expressed interest in bidding:
The first bid for the 2034 FIFA World Cup has been proposed as a collective bid by the members of the Association of Southeast Asian Nations. The idea of a combined ASEAN bid had been mooted as early as January 2011, when the Football Association of Singapore 's then - President, Zainudin Nordin, said in a statement that the proposal had been made at an ASEAN Foreign Ministers meeting. In 2013, Nordin and Special Olympics Malaysia President, Datuk Mohamed Feisol Hassan, recalled the idea for ASEAN to jointly host a World Cup. Under FIFA rules as of 2017, the 2030 World Cup can not be held in Asia (AFC) as Asian Football Confederation members are excluded from the bidding following the selection of Qatar in 2022. Therefore, the earliest bid by an AFC member could be made for 2034.
Malaysia later withdrew from involvement, but Singapore and other ASEAN countries continued the campaign to submit a joint bid for the World Cup in 2034. In February 2017, ASEAN held talks on launching a joint bid during a visit by FIFA chief Gianni Infantino to Yangon, Myanmar.
In July 2017, Joko Driyono, the Vice-President of Indonesia 's Football Association, said during a meeting in Vietnam that Indonesia and Thailand were set to lead a consortium of South - East Asian nations in the bid. Driyono added that, given geographic and infrastructure considerations and the expanded format (48 teams), a combination of at least two or three ASEAN countries would be needed to host the matches.
In September 2017, the Football Association of Thailand 's Deputy CEO, Benjamin Tan, at the ASEAN Football Federation (AFF) Council meeting, confirmed that his Association has "put in their interest to bid and co-host '' the 2034 World Cup with Indonesia. On the same occasion, the General Secretary of the AFF, Dato Sri Azzuddin Ahmad, confirmed that Indonesia and Thailand will submit a joint bid.
Indonesia is the only Southeast Asian country to have participated in the World Cup; its national team was the first from Asia to do so, and it became one of the few World Cup teams to qualify without playing a match because of opponents ' withdrawals. The 1938 World Cup is the Indonesians ' only appearance to date, when they were known as the Dutch East Indies. A loss to Hungary made Indonesia the only national team in World Cup history to have played just a single match. No other Southeast Asian country has ever played at the tournament, let alone hosted it. As of June 2018, Indonesia is ranked by FIFA at 164th place.
China has announced its interest in bidding several times and reports stated "that China is likely to host the World Cup for the first time in 2034. ''
In July 2018 Egypt 's sports minister Ashraf Sobhy expressed interest in bidding to host.
Expressed interest in bidding:
World Cup - winning bids are bolded. Planned but not - yet - official bids for 2030 and beyond are not included.
It is widely considered that home advantage is common in the World Cup, with the host team usually performing above average. Of the eight countries that have won the tournament, only Brazil and Spain have not been champions while hosting, and England won its only title as host.
Further, Sweden reached its only final as host of the 1958 tournament, and only the Swedes and Brazilians finished as runners - up on home soil. The host country reached the semi-finals thirteen times in twenty tournaments, and both Chile and South Korea had their only top - four finishes at home. Only South Africa in 2010 failed to progress past the group stage.
|
where did hear no evil see no evil speak no evil | Three wise monkeys - wikipedia
The three wise monkeys (Japanese: 三猿, Hepburn: san'en or sanzaru, alternatively 三 匹 の 猿 sanbiki no saru, literally "three monkeys ''), sometimes called the three mystic apes, are a pictorial maxim. Together they embody the proverbial principle "see no evil, hear no evil, speak no evil ''. The three monkeys are Mizaru, covering his eyes, who sees no evil; Kikazaru, covering his ears, who hears no evil; and Iwazaru, covering his mouth, who speaks no evil.
There are various meanings ascribed to the monkeys and the proverb including associations with being of good mind, speech and action. In the Western world the phrase is often used to refer to those who deal with impropriety by turning a blind eye.
Outside Japan the monkeys ' names are sometimes given as Mizaru, Mikazaru, and Mazaru, as the last two names were corrupted from the Japanese originals. The monkeys are Japanese macaques, a common species in Japan.
The source that popularized this pictorial maxim is a 17th - century carving over a door of the famous Tōshō - gū shrine in Nikkō, Japan. The carvings at Toshogu Shrine were carved by Hidari Jingoro, and believed to have incorporated Confucius 's Code of Conduct, using the monkey as a way to depict man 's life cycle. There are a total of eight panels, and the iconic three wise monkeys picture comes from panel 2. The philosophy, however, probably originally came to Japan with a Tendai - Buddhist legend, from China in the 8th century (Nara Period). It has been suggested that the figures represent the three dogmas of the so - called middle school of the sect.
In Chinese, a similar phrase exists in the Analects of Confucius from 2nd to 4th century B.C.: "Look not at what is contrary to propriety; listen not to what is contrary to propriety; speak not what is contrary to propriety; make no movement which is contrary to propriety '' (非禮 勿 視, 非禮 勿 聽, 非禮 勿 言, 非禮 勿 動). It may be that this phrase was shortened and simplified after it was brought into Japan.
It is through the Kōshin rite of folk religion that the most significant examples are presented. The Kōshin belief or practice is a Japanese folk religion with Chinese Taoism origins and ancient Shinto influence. It was founded by Tendai Buddhist monks in the late 10th century. A considerable number of stone monuments can be found all over the eastern part of Japan around Tokyo. During the later part of the Muromachi period, it was customary to display stone pillars depicting the three monkeys during the observance of Kōshin.
Though the teaching had nothing to do with monkeys, the concept of the three monkeys originated from a simple play on words. The saying in Japanese is mizaru, kikazaru, iwazaru (見 ざる, 聞か ざる, 言わ ざる) "see not, hear not, speak not '', where the - zaru is a negative conjugation on the three verbs, matching zaru, the modified form of saru (猿) "monkey '' used in compounds. Thus the saying (which does not include any specific reference to "evil '') can also be interpreted as referring to three monkeys.
The shrine at Nikko is a Shinto shrine, and the monkey is an extremely important being in the Shinto religion. The monkey is believed to be the messenger of the Hie Shinto shrines, which also have connections with Tendai Buddhism. There are even important festivals that are celebrated during the year of the Monkey (occurring every twelve years) and a special festival is celebrated every sixteenth year of the Kōshin.
"The Three Mystic Apes '' (Sambiki Saru) were described as "the attendants of Saruta Hito no Mikoto or Kōshin, the God of the Roads ''. The Kōshin festival was held on the 60th day of the calendar. It has been suggested that during the Kōshin festival, according to old beliefs, one 's bad deeds might be reported to heaven "unless avoidance actions were taken... ''. It has been theorized that the three Mystic Apes, Not Seeing, Hearing, or Speaking, may have been the "things that one has done wrong in the last 59 days ''.
According to other accounts, the monkeys caused the Sanshi and Ten - Tei not to see, say or hear the bad deeds of a person. The Sanshi (三 尸) are the Three Corpses living in everyone 's body. The Sanshi keep track of the good deeds and particularly the bad deeds of the person they inhabit. Every 60 days, on the night called Kōshin - Machi (庚申待), if the person sleeps, the Sanshi will leave the body and go to Ten - Tei (天帝), the Heavenly God, to report about the deeds of that person. Ten - Tei will then decide to punish bad people, making them ill, shortening their time alive, and in extreme cases putting an end to their lives. Those believers of Kōshin who have reason to fear will try to stay awake during Kōshin nights. This is the only way to prevent the Sanshi from leaving their body and reporting to Ten - Tei.
An ancient representation of "no see, no hear, no say, no do '' can be found in four golden figurines in the Zelnik Istvan Southeast Asian Gold Museum. These golden statues date from the 6th to 8th century. The figures resemble tribal people, with rather imprecise body carvings and strong phallic symbols. This set suggests that the philosophy has very ancient roots.
It is not clear how or when the saying travelled; in Ethiopia the Ge'ez language has the saying "Let the eye fast, let the mouth fast, let the ears fast. ''
Just as there is disagreement about the origin of the phrase, there are differing explanations of the meaning of "see no evil, hear no evil, speak no evil ''.
Sometimes there is a fourth monkey depicted, Shizaru, who symbolizes the principle of "do no evil ''. He may be shown crossing his arms or covering his genitals. In another variation, a fourth monkey is depicted with a sulking posture and the caption "have no fun ''. Yet another variation has the fourth monkey hold its nose to avoid a stench and has been dubbed "smell no evil '' accordingly.
According to Osho Rajneesh, the monkey symbolism originated in ancient Hindu tradition, and Buddhist monks spread this symbolism across Asia. The original Hindu and Buddhist version contains four monkeys, and the fourth monkey covers his genitals. The Buddhist version interprets this as "Do n't do anything evil ''.
In the original Hindu version, the meaning of the fourth monkey is totally different from the popular Buddhist version. It means, "Hide your pleasures. Hide your enjoyment, do n't show it to anybody. ''
Osho Rajneesh gave his own meaning regarding this. The first monkey denotes ' Do n't listen to the truth because it will disturb all your consoling lies '. The second monkey denotes ' Do n't look at the truth; otherwise your God will be dead and your heaven and hell will disappear '. The third monkey denotes ' Do n't speak the truth, otherwise you will be condemned, crucified, poisoned, tortured by the whole crowd, the unconscious people. You will be condemned, do n't speak the truth! ' The fourth monkey denotes "Keep your pleasures, your joys, hidden. Do n't let anybody know that you are a cheerful man, a blissful man, an ecstatic man, because that will destroy your very life. It is dangerous ''.
The three wise monkeys, and the associated proverb, are known throughout Asia and in the Western world. They have been a motif in pictures, such as the ukiyo - e (Japanese woodblock printings) by Keisai Eisen, and are frequently represented in modern culture.
Mahatma Gandhi 's one notable exception to his lifestyle of non-possession was a small statue of the three monkeys. Today, a larger representation of the three monkeys is prominently displayed at the Sabarmati Ashram in Ahmedabad, Gujarat, where Gandhi lived from 1915 to 1930 and from where he departed on his famous salt march. Gandhi 's statue also inspired a 2008 artwork by Subodh Gupta, Gandhi 's Three Monkeys.
The maxim inspired an award - winning 2008 Turkish film by director Nuri Bilge Ceylan called Three Monkeys (Üç Maymun).
Unicode provides emoji representations of the monkeys in the Emoticons block as follows:
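The three monkeys are encoded as U+1F648 🙈 SEE - NO - EVIL MONKEY, U+1F649 🙉 HEAR - NO - EVIL MONKEY, and U+1F64A 🙊 SPEAK - NO - EVIL MONKEY.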
|
where did the restaurant arby's get its name | Arby 's - Wikipedia
Arby 's is an American quick - service fast - food sandwich restaurant chain with more than 3,300 restaurants system wide, and is the third largest such chain in terms of revenue. In October 2017, Food & Wine called Arby 's "America 's second largest sandwich chain (after Subway) ''.
Arby 's is owned by Inspire Brands, the renamed Arby 's Restaurant Group, Inc. (ARG). ARG was renamed as the company took over ownership of Buffalo Wild Wings on February 5, 2018.
Roark Capital Group acquired Arby 's Restaurant Group in July 2011 and owns 81.5 % of the company, with The Wendy 's Company owning the other 18.5 %. In addition to its classic Roast Beef and Beef ' n Cheddar sandwiches, Arby 's products also include deli - style Market Fresh line of sandwiches, Curly Fries and Jamocha Shakes. Its headquarters are in Sandy Springs, Georgia, a suburb of Atlanta which uses Atlanta mailing addresses.
As of December 31, 2015, there are 3,342 restaurants. There are international locations in five countries outside the United States: Canada, Turkey, Qatar, Kuwait, and United Arab Emirates.
Arby 's was founded in Boardman, Ohio, on July 23, 1964, by Forrest (1922 -- 2008) and Leroy Raffel, owners of a restaurant equipment business who thought there was a market opportunity for a fast food franchise based on a food other than hamburgers. The brothers wanted to call their restaurants "Big Tex '', but that name was already used by an Akron business. Instead, they chose the name "Arby 's, '' based on R.B., the initials of Raffel Brothers.
The Raffel brothers opened the first new restaurant in Boardman, Ohio, just outside Youngstown, on July 23, 1964. They initially served only roast beef sandwiches, potato chips, and soft drinks. Hoping to attract a more upscale clientele, Arby 's interior design was purposely more luxurious in appearance than the typical fast food sandwich stand of the day. Arby 's offered their roast beef sandwiches for $. 69 at a time when hamburger stands were charging $. 15 for a hamburger. A year later, the first Arby 's licensee opened a restaurant in Akron, Ohio. The famous Arby 's "hat '' was designed by the original sign makers, Peskin Sign Co.
During the 1970s, the expansion of Arby 's took place at a rate of 50 stores per year. During this time it created several menu items, including the Beef ' n Cheddar, Jamocha Shakes, chicken sandwiches, Curly Fries, and two signature sauces: Arby 's Sauce and Horsey Sauce. Baked potatoes were added to the menu in 1968. Curly Fries were initially introduced as Curly - Q Fries in 1988. It became the first restaurant in the fast food industry to offer a complete "lite '' menu in 1991 with several sandwiches and salads under 300 calories and 94 percent fat - free.
The family - owned business tried converting into a public company in 1970 by offering the sale of stock, but the IPO never went through when the stock market subsequently fell. In 1976, the family sold the company to Royal Crown Cola Company for $18 million and Leroy Raffel remained as CEO until his retirement three years later.
In 1984, Victor Posner obtained Arby 's via a hostile takeover of its then parent Royal Crown through his DWG Corporation. Nine years later, after DWG Corporation had gained a new owner and a new name, Triarc Companies, Inc., a former PepsiCo executive, Don Pierce, was brought in to "resurrect '' Arby 's. With $100 million in additional funding, Pierce moved to a new "Roast Town '' concept, similar in format to Boston Market, in 1996. The Roast Town concept received poor marks in market tests and was quickly discontinued. Pierce and his team left the company, and it sold all of its 354 company - owned locations to RTM Restaurant Group, an existing Arby 's franchisee, for $71 million. Another marketing concept that was tried was a dual - brand venture started in 1995 with ZuZu 's Handmade Mexican Grill. The venture was a failure, resulting in lawsuits being filed by each company against the other.
In 1992, Les Franchises P.R.A.G. Inc. opened the first Arby 's franchise in the Canadian province of Quebec. It was also the 100th location to open in Canada and joins other locations that were then operating in the provinces of Ontario, Alberta, New Brunswick, Nova Scotia, Manitoba, British Columbia, and Saskatchewan. The Quebec location also sold the uniquely French - Canadian dish called poutine.
In 2002, Arby 's returned to operating restaurants by purchasing the second largest Arby 's franchisee, Sybra Inc., with its 293 locations, out of bankruptcy, outbidding RTM so as to prevent RTM from becoming too large. RTM was purchased by Arby 's on July 25, 2005.
In 2008, Triarc purchased Wendy 's, and changed its name to Wendy 's / Arby 's Group, to reflect their core businesses. In January 2011, it was announced that Wendy 's / Arby 's Group were looking into selling the Arby 's side of the business to focus on the Wendy 's brand. It was officially announced the companies would split on January 21, 2011. In 2009, the Wendy 's / Arby 's Group signed a franchise deal with the Al Jammaz Group of Saudi Arabia to open dual - branded Wendy 's / Arby 's through the Middle East with the first location opening in Dubai in the United Arab Emirates in May 2010. The Wendy 's / Arby 's Group also signed a similar franchise deal in June 2010 with Tab Gida Sanayi ve Ticaret to open dual - branded restaurants in Turkey.
In 2010, Arby 's opened a restaurant at Ft. Bliss, their first location at an American military base under a deal that the Wendy 's / Arby 's Group had signed with the Army and Air Force Exchange Service to open restaurants at bases both in the United States and overseas. Only American military personnel and some of their guests can patronize the Arby 's locations situated on the military bases and operated by the Post Exchange.
There were two different attempts to operate franchises in the United Kingdom. GSR Restaurant Group opened their first Arby 's franchise location in London in 1992 followed by a second location the following year in Glasgow. These were also the first locations to open in Europe, but both were forced to close by 1994. In 2001, Barown Restaurants opened two Arby 's franchise locations in Southampton, Hampshire, and Sutton, Surrey, but both were forced to close after operating for a few months.
On June 13, 2011, Wendy 's / Arby 's Group Inc. announced that it would sell the majority of its Arby 's chain to Roark Capital Group while maintaining an 18.5 % stake in the company. At the time of the sale, Arby 's was experiencing an operating loss for the year of $35 million with 350 Arby 's franchisees more than 60 days late in royalty payments and 74 low performing franchised units and 96 company - owned units forced to close. Despite its cash flow problem, Arby 's also reported that it had six months of sales growth at established stores in the United States which it had attributed to its new turnaround plan that it had recently launched. The new owners turned the company around by closing more underperforming locations, changing the company 's marketing strategy, and by introducing new products on a regular basis. After four years, Arby 's was able to issue $300 million in dividends, which resulted in Wendy 's receiving $54.9 million for its minority stake with the remainder paid to Roark.
In September 2017, Arby 's returned to Kuwait for the first time in two decades by the opening of an Arby 's franchise in Jabriya by Kharafi Global.
In November 2017, Arby 's announced it had negotiated a purchase for the restaurant chain Buffalo Wild Wings for $2.4 billion in cash.
In addition to roast beef, deli style sandwiches, called "Market Fresh Sandwiches, '' are sold at Arby 's. The original lineup of sandwiches included Roast Beef and Swiss, Roast Turkey and Swiss, Roast Ham and Swiss, and Roast Chicken Caesar. With the exception of the Chicken Caesar, all Market Fresh Sandwiches came with the standard toppings of spicy brown honey mustard, mayonnaise, red onion rings, green leaf lettuce, tomato slices and sliced Swiss cheese. Additions to the Market Fresh lineup included Roast Turkey Ranch and Bacon and the Ultimate BLT. Market Fresh Five - Star Club, served on Harvest White Bread, was introduced in 2003 for a limited time.
The Ultimate BLT was released for a limited time in 2002 and later in 2012.
In 2003, the line was again expanded to include other styles of specialty sandwiches that were served on baguettes that included the Italian Beef ' n Provolone, French Dip ' n Swiss, Philly Beef Supreme, and Pot Roast sandwiches.
Corned beef and turkey Reuben sandwiches were added to the menu in 2005.
In early 2006, Arby 's Restaurant Group signed a contract with Pepsi, making Pepsi the chain 's exclusive soft drink provider. As franchisees ' existing contracts with Coca - Cola expired, they switched to Pepsi - Cola, with the only exception being the Arby 's located at Youngstown State University, because the University has its own separate contract with Coca - Cola for other university purposes, particularly the athletic department. This Arby 's closed in mid-2012 when construction began to convert the location into a Wendy 's. It was announced in August 2017 that Coca - Cola had won a contract to serve Coke products at all Arby 's restaurants, ending an almost 11 - year association with Pepsi. The transition began in early 2018, and all Arby 's locations were serving Coca - Cola beverages by June 19, 2018. Arby 's promoted the announcement by breaking two Guinness World Records. The first record, "world 's smallest advertisement '', measured 38.3 microns by 19.2 microns on a sesame seed and was printed at Georgia Tech. The second record, "largest advertising poster '', took up approximately 5 acres of land and was placed in Monowi, Nebraska, America 's smallest town.
Toasted Subs, sandwiches served on a toasted ciabatta roll, were first introduced in September 2007. The initial line - up included the French Dip & Swiss Toasted Sub, Philly Beef Toasted Sub, Classic Italian Toasted Sub, and Turkey Bacon Club. Three months later, the Toasted Subs product line was extended to include the Meatball Toasted Sub and the Chicken Parmesan Toasted Sub.
In October 2013, Arby 's introduced a Smokehouse Brisket sandwich.
In September 2014, Arby 's introduced gyros to its menu for a limited time. Gyros were previously offered in 2006. They became a permanent fixture on the menu in April 2016.
On an almost annual basis, Arby 's had offered some sort of a flatbread melt sandwich for a limited time. In 2007 and again in 2008, it was the Philly Cheesesteak and the Fajita Beef. The Beef Fajita returned with the new Chicken Fajita in 2009. After a six - year hiatus, Steak Fajita Flatbreads were offered for a limited time in 2015. The following year, Steak Fajita returned in 2016 with Chicken Fajita along with a choice between a hot and mild sauce.
After a nine - year hiatus, a pork - beef mixture meatball sandwich was reintroduced in July 2016.
In August 2015, Arby 's introduced a series of five small sandwiches called Sliders, starting at $1.29, with prices varying by location. These new menu items led to an increase in sales at many locations. This was not the first time Arby 's had tried to market miniature sandwiches; two years earlier, Arby 's sold a similar product called Mighty Minis, which were sold in pairs. During the first month of national sales, the firm was able to sell 1 million or a ton of sliders. To encourage additional sales outside normal lunch and dinner hours, Arby 's began to offer sliders and small - size drinks and sides at the reduced price of $1 between the hours of 2 and 5 p.m. starting in October 2015. Due to the great increase in sales created by the introduction of this new product line, Nation 's Restaurant News awarded Arby 's its MenuMasters Award for 2016. The Turkey ' n Cheese slider was initially offered as a limited - time menu item in December 2016 but proved popular enough to be retained on the regular menu. A Pizza Slider was introduced as a limited - time menu item in May 2017.
In late August 2016, Arby 's introduced four chicken sandwiches that used a buttermilk - based breaded breast filet.
Arby 's debuted a new sandwich known as Smokehouse Pork Belly Sandwich in October 2016.
In October 2016, word leaked through social media that Arby 's was about to test a venison sandwich, which Arby 's confirmed, selecting 17 stores in Georgia, Michigan, Minnesota, Pennsylvania, Tennessee, and Wisconsin (all major deer hunting states) to offer it during a four - day test during those states ' respective hunting seasons. Prior to the start of the promotion, USA Today published the locations of all 17 participating restaurants. Both due to curiosity and heavy demand from hunters, the sandwiches sold out in all markets on the first day of the test. Another USA Today article reported that the farm - raised venison was imported from New Zealand. In the following year, Arby 's announced that the venison sandwich would return nationwide on October 21, 2017, also available in limited quantities.
In September 2017, Arby 's introduced the Chicken Pepperoni Parm sandwich, their version of a chicken parmigiana sandwich which also contains pepperoni slices.
In October 2017, Arby 's announced that three locations in Colorado, Montana, and Wyoming will offer an elk sandwich for a limited time. The farm - raised elk meat that was used to make the sandwiches was obtained from the same farm in New Zealand that provides the venison to Arby 's since many states, such as Montana, prohibit the raising of game animals on commercial farms.
Some locations also serve breakfast, including at truck stop locations operated by Pilot Flying J and Love 's Travel Stops and Country Stores that are typically operating 24 hours.
In December 2017, Arby 's introduced Buffalo Chicken Tenders, Oreo bites and Nashville Hot fish sandwiches (seasoned with a dried cayenne pepper seasoning powder).
In May 2018, Arby 's introduced Market Fresh sandwiches, a Bacon Beef and Cheddar sandwich, and a Coke Float (available for a limited time, until September 3, 2018).
Ving Rhames is the narrator of Arby 's commercials. In the commercials, food items (old and new) are showcased by a chef, who is only seen from the neck down. Radio ads, which air only rarely, use the same audio as the television commercials (except for the man at the end saying "At participating locations for a limited time '').
Current Arby 's locations:
Former Arby 's countries:
In November 2002, Access Now filed a lawsuit against RTM, then a franchisee of TriArc, alleging that some 800 of their stores did not comply with the Americans with Disabilities Act of 1990 (ADA). The lawsuit sought no liability damages except for lawyer fees. In August 2006, the court accepted the settlement between RTM and Access Now. The result was that every year, 100 of the RTM stores would be retrofitted to comply with the ADA. Accordingly, it was estimated that about $1.2 million would be spent to retrofit those stores each year.
In February 2017, Arby 's reported that they were investigating a malware attack on its payment card system that had targeted thousands of customers ' credit and debit cards. The malware was placed on point - of - sale systems inside Arby 's corporate - owned restaurants, not its locations owned by franchisees, and was active between October 25, 2016 and January 19, 2017. Eight credit unions and banks from Alabama, Arkansas, Indiana, Louisiana, Michigan, Pennsylvania and Montana have filed suit since early February against Arby 's concerning the data breach.
Inspire Brands, Inc., formerly Arby 's Restaurant Group, Inc., is the owner of the Arby 's restaurant chain, Buffalo Wild Wings and R Taco. The company is based in Atlanta with a support center in Minneapolis. Inspire Brands is owned by Roark Capital Group, majority shareholder, and The Wendy 's Company, minority.
Arby 's Restaurant Group, Inc. (ARG) was renamed Inspire Brands, Inc. as the company took ownership of Buffalo Wild Wings on February 5, 2018. Buffalo Wild Wings also owned the R Taco chain. Arby 's CEO Paul Brown was selected to continue as Inspire Brand CEO. Brown expected that Inspire would acquire additional chains in different segments. He plans to structure the company similar to Hilton Hotels & Resorts.
On August 16, 2018, The Wendy 's Company announced that it had sold its 12.3 % stake in Inspire Brands back to Inspire Brands for $450 million, which included a 38 % premium over the stake 's most recent valuation.
|
who plays the flash in the new justice league movie | Justice League (film) - wikipedia
Justice League is an upcoming American superhero film based on the DC Comics superhero team of the same name, distributed by Warner Bros. Pictures. It is intended to be the fifth installment in the DC Extended Universe (DCEU). The film is directed by Zack Snyder and written by Chris Terrio and Joss Whedon, from a story by Snyder and Terrio, and features an ensemble cast that includes Ben Affleck, Henry Cavill, Gal Gadot, Jason Momoa, Ezra Miller, Ray Fisher, Ciarán Hinds, Amy Adams, Willem Dafoe, Jesse Eisenberg, Jeremy Irons, Diane Lane, Connie Nielsen, and J.K. Simmons. In Justice League, Batman and Wonder Woman assemble a team consisting of Flash, Aquaman, and Cyborg to face the catastrophic threat of Steppenwolf and his army of Parademons.
The film was announced in October 2014 with Snyder on board to direct and Terrio attached to write the script. Principal photography commenced in April 2016 and ended in October 2016. Snyder left the project in May 2017, following the death of his daughter, with Joss Whedon supervising post-production, as well as working on additional scenes and reshoots. Justice League is scheduled to be released on November 17, 2017, in 2D, 3D, and IMAX.
Months after the events of Batman v Superman: Dawn of Justice and inspired by Superman 's apparent sacrifice for humanity, Bruce Wayne and Diana Prince assemble a team of metahumans consisting of Barry Allen, Arthur Curry, and Victor Stone to face the catastrophic threat of Steppenwolf and his army of Parademons, who are on the hunt for three Mother Boxes on Earth.
Joe Morton and Robin Wright reprise their roles as Dr. Silas Stone, a scientist at S.T.A.R. Labs and Victor Stone 's father, and as General Antiope, Hippolyta 's sister and Diana 's aunt / mentor, from Batman v Superman: Dawn of Justice and Wonder Woman, respectively. Amber Heard, Billy Crudup, and Kiersey Clemons will portray Mera, Dr. Henry Allen, and Iris West, respectively. Julian Lewis Jones and Michael McElhatton have been cast in undisclosed roles.
In February 2007, it was announced that Warner Bros. hired husband and wife duo Michele and Kieran Mulroney to write a script for a Justice League film. The news came around the same time that Joss Whedon 's long - developed Wonder Woman film had been cancelled, as well as The Flash, written and directed by David S. Goyer. Reportedly titled Justice League: Mortal, Michele and Kiernan Mulroney submitted their script to Warner Bros. in June 2007, receiving positive feedback, which prompted the studio to immediately fast track production in the hopes of filming to begin before the 2007 - 2008 Writers Guild of America strike. Warner Bros. was less willing to proceed with development with a sequel to Superman Returns, having been disappointed with the box office return. Brandon Routh was not approached to reprise the role of Superman in Justice League: Mortal, nor was Christian Bale from Batman Begins. Warner Bros. intended for Justice League: Mortal to be the start of a new film franchise, and to branch out into separate sequels and spin - offs. Shortly after filming finished with The Dark Knight, Bale stated in an interview that "It 'd be better if it does n't tread on the toes of what our Batman series is doing, '' though he personally felt it would make more sense for Warner Bros. to release the film after The Dark Knight Rises. Jason Reitman was the original choice to direct Justice League, but he turned it down, as he considers himself an independent filmmaker and prefers to stay out of big budget superhero films. George Miller signed to direct in September 2007, with Barrie Osbourne producing on a projected $220 million budget.
The following month roughly 40 actors and actresses were auditioning for the ensemble superhero roles, among them were Joseph Cross, Michael Angarano, Max Thieriot, Minka Kelly, Adrianne Palicki and Scott Porter. Miller intended to cast younger actors as he wanted them to "grow '' into their roles over the course of several films. D.J. Cotrona was cast as Superman, along with Armie Hammer as Batman. Jessica Biel reportedly declined the Wonder Woman role after being in negotiations. The character was also linked to actresses Teresa Palmer and Shannyn Sossamon, along with Mary Elizabeth Winstead, who confirmed that she had auditioned. Ultimately Megan Gale was cast as Wonder Woman, while Palmer was cast as Talia al Ghul, whom Miller had in mind to act with a Russian accent. The script for Justice League: Mortal would have featured John Stewart as Green Lantern, a role originally offered to Columbus Short. Hip hop recording artist and rapper Common was cast, with Adam Brody as Barry Allen / Flash, and Jay Baruchel as the lead villain, Maxwell Lord. Longtime Miller collaborator Hugh Keays - Byrne had been cast in an unnamed role, rumored to be Martian Manhunter. Aquaman had yet to be cast. Marit Allen was hired as the original costume designer before her untimely death in November 2007, and the responsibilities were assumed by Weta Workshop.
However, the writers strike began that same month and placed the film on hold. Warner Bros. had to let the options lapse for the cast, but development was fast tracked once more in February 2008 when the strike ended. Warner Bros. and Miller wanted to start filming immediately, but production was pushed back three months. Originally, the majority of Justice League: Mortal would be shot at Fox Studios Australia in Sydney, with other locations scouted nearby at local colleges, and Sydney Heads doubling for Happy Harbor. The Australian Film Commission had a say with casting choices, giving way for George Miller to cast Gale, Palmer and Keays - Bryne, all Australian natives. The production crew was composed entirely of Australians, but the Australian government denied Warner Bros. a 40 percent tax rebate as they felt they had not hired enough Australian actors. Miller was frustrated, stating that "A once - in - a-lifetime opportunity for the Australian film industry is being frittered away because of very lazy thinking. They 're throwing away hundreds of millions of dollars of investment that the rest of the world is competing for and, much more significantly, highly skilled creative jobs. '' Production offices were then moved to Vancouver Film Studios in Canada. Filming was pushed back to July 2008, while Warner Bros was still confident they could release the film for a summer 2009 release.
With production delays continuing, and the success of The Dark Knight in July 2008, Warner Bros. decided to focus on the development of individual films featuring the main heroes, allowing director Christopher Nolan to separately complete his Batman trilogy with The Dark Knight Rises in 2012. Gregory Noveck, senior vice president of creative affairs for DC Entertainment stated "we 're going to make a Justice League movie, whether it 's now or 10 years from now. But we 're not going to do it and Warners is not going to do it until we know it 's right. '' Actor Adam Brody joked "They (Warner Brothers) just did n't want to cross their streams with a whole bunch of Batmans in the universe. '' Warner Bros. relaunched development for the solo Green Lantern film, released in 2011 as a critical and financial disappointment. Meanwhile, film adaptations for The Flash and Wonder Woman continued to languish in development while filming for a Superman reboot was commencing in 2011 with Man of Steel, produced by Nolan and written by Batman screenwriter David S. Goyer. Shortly after filming had finished for Man of Steel, Warner Bros hired Will Beall to write the script for a new Justice League film. Warner Bros. president Jeff Robinov explained that Man of Steel would be "setting the tone for what the movies are going to be like going forward. In that, it 's definitely a first step. '' The film included references to the existence of other superheroes in the DC Universe, and setting the tone for a shared fictional universe of DC Comics characters on film. Goyer stated that should Green Lantern appear in a future installment, it would be a rebooted version of the character and not connected to the 2011 film.
With the release of Man of Steel in June 2013, Goyer was hired to write a sequel, as well as a new Justice League, with the Beall draft being scrapped. The sequel was later revealed to be Batman v Superman: Dawn of Justice, a team up film featuring Ben Affleck as Batman, Gal Gadot as Wonder Woman, and Ray Fisher as Victor Stone / Cyborg in a minor role that will become more significant in leading up to the proposed Justice League film. The universe is separate from Nolan and Goyer 's work on The Dark Knight trilogy, although Nolan is still involved as an executive producer for Batman v Superman. In April 2014, it was announced that Zack Snyder would also be directing Goyer 's Justice League script. Warner Bros. was reportedly courting Chris Terrio to rewrite Justice League the following July, after having been impressed with his rewrite of Batman v Superman: Dawn of Justice. On October 15, 2014, Warner Bros. announced the film would be released in two parts, with Part One releasing on November 17, 2017, and Part Two on June 14, 2019. Snyder will direct both films. In early July 2015, EW revealed that the script for Justice League Part One had been completed by Terrio. Zack Snyder stated that the film will be inspired by the New Gods comic series by Jack Kirby. Although Justice League was initially announced as a two - part film with the second part releasing two years after the first, Snyder announced in June 2016 that they would be two distinct, separate films and not one film split into two parts, both being stand - alone stories.
In April 2014, Ray Fisher was cast as Victor Stone / Cyborg, and was set to cameo in Batman v Superman: Dawn of Justice followed by a larger role in Justice League. Henry Cavill, Ben Affleck, Gal Gadot, Diane Lane and Amy Adams are also expected to reprise their roles from Batman v Superman: Dawn of Justice. In October 2014, Jason Momoa was cast as Arthur Curry / Aquaman and debuted as the character in Dawn of Justice. On October 20, 2014, Momoa told ComicBook.com that the Justice League film would be coming first and that is what they were preparing for, and he did not know if the solo Aquaman film would be prior to Justice League or post. He thought it might be the origin of where Aquaman came from. On January 13, 2016, The Hollywood Reporter announced that Amber Heard was in negotiations to appear in the film as Aquaman 's love interest Mera. In March 2016, producer Charles Roven said that Green Lantern would not appear in any film before Justice League Part Two, and stated that they "could put Green Lantern in some introduction in Justice League 2, or barring that, a movie after. '' Also in March, The Hollywood Reporter announced that J.K. Simmons was cast as Commissioner James Gordon, and Heard was confirmed to join the cast as Mera. Adams also confirmed that she would reprise her role as Lois Lane in both Justice League films. The following month, Simmons confirmed that he would play Gordon. By April 2016, Willem Dafoe was cast in an undisclosed role, later revealed to be Nuidis Vulko. Cavill confirmed that he would return for both Justice League films. In May 2016, Jeremy Irons confirmed he will appear as Alfred Pennyworth. That same month, Jesse Eisenberg stated that he would reprise his role as Lex Luthor, and in June 2016, he confirmed in an interview with Shortlist magazine of his return. In July 2016, Julian Lewis Jones was cast in an undisclosed role. Laurence Fishburne, who portrays Perry White in the DCEU, said he declined to reprise his role in this film due to scheduling conflicts. In April 2017, Michael McElhatton revealed that he has a role in the film.
In July 2015, it was revealed that filming would begin in spring 2016 after Wonder Woman ended principal photography. Principal photography commenced on April 11, 2016, with shooting taking place at Warner Bros. Studios, Leavesden, as well as various locations around London, Scotland, Los Angeles and in Djúpavík in the Westfjords of Iceland. Snyder 's longtime cinematographer Larry Fong was replaced by Fabian Wagner due to scheduling conflicts. Affleck was also revealed to be serving as executive producer. In May 2016, it was revealed that Geoff Johns and Jon Berg will be producing the Justice League films and they will also be in charge of the DC Extended Universe after the largely negative critical reception of Batman v Superman: Dawn of Justice. Geoff Johns confirmed on June 3, 2016, that the title of the film is Justice League. That same month, Irons stated that the Justice League storyline will be more linear and simple, in comparison to the theatrical version of Batman v Superman: Dawn of Justice. Johns later stated that the film would be "hopeful and optimistic '' in comparison to the previous DCEU films. Filming wrapped in October 2016.
In May 2017, Snyder stepped down during post-production of the film to properly deal with the death of his daughter. Joss Whedon, who Snyder had previously brought on to rewrite some additional scenes, took over to handle post-production duties in Snyder 's place. In July 2017, it was announced the film was undergoing two months of reshoots in London and Los Angeles, with Warner Bros. putting about $25 million into them (more than the typical $6 -- 10 million additional filming costs). The reshoots coincided with Cavill 's shooting schedule for Mission: Impossible 6, for which he had grown a mustache which he was contracted to keep while filming, so Justice League 's VFX team had to resort to using special effects to digitally remove the mustache in post. Warner Bros. later announced that Whedon would receive a screenwriting credit on the film in recognition of the alterations he made.
In March 2016, Hans Zimmer, who composed the score for Man of Steel and Batman v Superman: Dawn of Justice, stated that he is officially retired from the "superhero business ''. By June 2016 he was replaced by Junkie XL, who wrote and composed the soundtrack of Batman v Superman: Dawn of Justice with Zimmer. In June 2017, Danny Elfman was announced to have replaced Junkie XL. Elfman had previously composed Batman, Batman Returns, and the theme song for Batman: The Animated Series.
Justice League is scheduled to be released on November 17, 2017. The film will get an IMAX release.
A sequel was scheduled to be released in June 2019, but has since been delayed to accommodate the release of a standalone Batman film. In March 2017, producer Charles Roven announced that Zack Snyder would return as director.
|
the fighting temptations the church is in mourning | The Fighting Temptations - wikipedia
The Fighting Temptations is a 2003 American musical comedy - drama film directed by Jonathan Lynn, written by Elizabeth Hunter and Saladin K. Patterson, and distributed by Paramount Pictures and MTV Films. The main plot revolves around Darrin Hill (Cuba Gooding Jr.) who travels to his hometown of Monte Carlo, Georgia as he attempts to revive a church choir in order to enter a gospel competition with the help of a beautiful lounge singer, Lilly (Beyoncé), with whom he falls in love. Through the choir 's music, Darrin brings the church community back together all the while seeking a relationship with Lilly.
The film is notable for its soundtrack and ensemble cast, and it received mixed reviews upon release.
In 1980, a young Darrin Hill (Cuba Gooding Jr.) and his mother, MaryAnn (Faith Evans), are run out of their hometown of Monte Carlo, Georgia, after it is discovered that MaryAnn, who sings in the church choir, also performs secular music. After being confronted about this by the self-righteous Paulina Pritchett (LaTanya Richardson), MaryAnn is forced to choose between singing professionally and remaining in the choir. Aunt Sally Walker (Ann Nesby), Darrin's great-aunt, attempts to defend MaryAnn but fails, and the church's pastor, Reverend Paul Lewis (Wendell Pierce), is too afraid of Paulina (who is his sister and uses that fact to her advantage) and the other church members to let her stay. MaryAnn and Darrin are last seen on a bus saying goodbye to Aunt Sally, sadly waving to each other.
In 2003, Darrin has grown up to become a successful advertising executive in New York City with a bad habit of lying. His only true friend and secretary, Rosa Lopez (Lourdes Benedicto), does a good job of keeping his credit problems under control. However, Darrin has achieved his success under false pretenses, having faked his college degree and high school diploma and lied about being the son of a congressman. His lies eventually catch up with him, getting him in trouble with his paranoid boss (Dakin Matthews) and costing him his job. After being tracked down by a private investigator, Darrin learns that Aunt Sally has died.
Darrin returns to his hometown of Monte Carlo, Georgia, and on the way looks back on both comical and heartwarming memories of MaryAnn (who is later revealed to have died in a car accident when Darrin was a teenager) and the experiences they shared. When Darrin arrives, he finds a new friendship in Lucious (Mike Epps), the town's happy-go-lucky, womanizing cab driver (who also has a fear of the Lord). At Aunt Sally's funeral, Shirley Caesar makes a cameo appearance as an old friend of Sally's who sings at the service. After the funeral, Darrin learns from the Reverend that Aunt Sally stated in her will that he must direct the church choir, enter the annual "Gospel Explosion" competition, and win its $10,000 prize; in doing so, he will inherit Aunt Sally's stock in the company that produces the show, currently worth $150,000.
Upon taking charge of the once-powerful choir, Darrin discovers that it has fallen into decline over the years, with only a handful of members remaining. He also faces opposition from Paulina, now the church's high-strung and wicked treasurer, who expected to become the new choir director after Sally's death. Paulina holds a grudge against Darrin because of his mother and for "stealing" her spot as choir director, as she was next in line and had waited years for Sally to pass away.
After several setbacks, Darrin eventually recruits many new members, most of them by promising half of the competition's prize money (though he has no intention of paying anyone). He also reconnects with his childhood friend and crush Lilly (Beyoncé), who has faced ostracism from the townspeople similar to what MaryAnn endured, because she is an R&B nightclub singer and has a son, Dean, out of wedlock. Lilly at first refuses to join the choir, both because she is disgusted by Darrin's romantic advances and because she doesn't want to put up with the townspeople's criticism of her, but with some assurance from Darrin she ultimately relents and becomes the choir's new lead singer, prompting Paulina to quit in retaliation.
Several weeks later, Paulina reveals that Darrin forgot to enter the choir into the auditions on time. Luckily, the audition judge, Luther Washington (Faizon Love), who is also the town's prison warden, lets them perform in a show for his prisoners when the booked act cancels. Thanks to Lilly's beautiful looks and voice, the choir performs well and Washington lets them into the competition. Washington also lets Darrin borrow three convicts who can sing: Bee-Z Briggs, Lightfoot, and Mr. Johnson (T-Bone, Chris Cole, and Montell Jordan). After weeks of success, the choir has become more popular, and more people have joined both it and the church. Lilly starts to trust Darrin and develops romantic feelings for him as well. However, Paulina takes a message for Darrin in a phone call from Rosa, learns of his past troubles, and intends to expose him at the first chance she gets. The next afternoon, at a church barbecue, Paulina deliberately reveals Darrin's secrets with a polite demeanor in order to make herself look innocent. Lilly, furious and heartbroken, tells Darrin that she doesn't care what he does; she was only using him because he was using her. The people to whom he promised money begin to panic.
Since Lilly wants nothing to do with him, Darrin decides to quit and returns to New York, where he has been offered his job back. However, when Darrin goes back and gets a new condo and a promotion, he comes to realize that none of these things mean anything without Lilly and the choir. Darrin quits his job and returns to Monte Carlo to reconcile with Lilly. The two then recruit Lucious and the Reverend and all of them rush down to the Gospel Explosion to join the choir for the performance.
When Darrin and Lilly arrive, Paulina tries to keep them out, claiming that Darrin forfeited his inheritance when he left Monte Carlo. However, Reverend Lewis finally stands up to Paulina and calls her out for being a selfish, conniving, hypocritical individual. He then reveals to the choir that her husband, who she had previously claimed was deceased, is alive and remarried to a better woman. Lilly scolds Paulina for insulting Sally's will and wishes, which gave Darrin the choir. They manage to convince the others to vote Paulina out of the choir, giving Darrin his position as director back. Before their performance begins, Darrin tells Lilly that she inspired him to name the choir The Fighting Temptations.
After an outstanding performance, the choir wins the competition. Before ending his acceptance speech, Darrin begins a proper relationship with Lilly by surprising her with a marriage proposal, which she accepts. Eighteen months later, the two are shown to be happily married with a baby of their own. In addition, the church is about to undergo an expansion, and a reformed Paulina rejoins the choir, having become more open to those who join.
In a mid-credits scene, Lucious pesters the baby, jokingly claiming to be its father. Lilly's grandfather hears Lucious and scares him off with a cold stare.
The film crew used several locations throughout Georgia. The final scene was filmed at the RiverCenter for the Performing Arts in Columbus, Georgia, and several of the extras were local Columbus residents.
The music of the film received universal acclaim, most notably Beyoncé's cover of "Fever".
However, the film itself received generally mixed reviews upon its release. It was criticized for its allegedly poor dialogue, its rehashed premise, and the romantic chemistry between its lead actors (Cuba Gooding Jr. and Beyoncé), as well as their noticeable age difference. Despite the mixed reviews, the film opened at #2 at the box office. Notably, Ebert & Roeper reviewed the film: Roger Ebert gave it thumbs up, while Richard Roeper gave it thumbs down. It holds a 42% rating on Rotten Tomatoes.
The movie was a minor success at the box office, with a worldwide gross of US$32,445,215, but it failed to produce enough revenue for a second installment to go into production (though the primary actors had signed on for one).
However, the cast themselves were very pleased with the film, citing its uplifting story and soundtrack. Gooding has said in several interviews following the film's release that he enjoyed filming his kissing scenes with Beyoncé. In a 2011 interview on The Late Late Show, Gooding made a humorous reference to this, joking in a mock quotation of the host: "Didn't he make love with Beyoncé on the set of something?"
A soundtrack accompanied the film and was released by Sony on September 9, 2003, prior to the film itself. The soundtrack received generally positive reviews and was more successful than the film. Only one song from the album, "Summertime", is not included in the movie. The late R&B legend Luther Vandross also recorded a song for the film called "Shine", but it failed to make the final cut of the film as well as the soundtrack; the song's connection to the film is most obvious in that Vandross frequently says the film's title in its lyrics. The song "Come Back Home" appears in the film but was not included on the soundtrack album. Several other songs performed during the movie, including "Church Is in Mourning (Aunt Sally's Funeral Tribute)" by Shirley Caesar, "Won't Ever Change" by Mary Mary, "Waiting" by Ramiyah, and "Soldier" by The Blind Boys of Alabama, were also not included on the soundtrack.
The film was released on VHS and DVD on February 3, 2004. While there is a high-definition digital release, it has yet to be released on Blu-ray Disc as of 2018.
In a 2003 interview for HollywoodJesus.com, the late Mickey Jones, who had a supporting role in the film, stated that he hoped the film would perform well because all of the principal actors had signed on for a sequel. As a result of the film's underperformance at the box office, a continuation never materialized.
|
who got promoted from league 1 last season | EFL League One - wikipedia
The English Football League One (often referred to as League One for short, or Sky Bet League One for sponsorship reasons) is the second-highest division of the English Football League and the third tier in the English football league system.
League One was introduced for the 2004–05 season. It was previously known as the Football League Second Division and, prior to the advent of the Premier League, the Football League Third Division.
At present (the 2017–18 season), Oldham Athletic hold the longest tenure in League One, having last been out of the division in the 1996–97 season, when they were relegated from the First Division. There are currently seven former Premier League clubs competing in League One, namely Blackpool, Bradford City, Charlton Athletic, Oldham Athletic, Portsmouth, Wigan Athletic, and former Premier League champions Blackburn Rovers.
There are 24 clubs in League One. Each club plays every other club twice (once at home and once away). Three points are awarded for a win, one for a draw and zero for a loss. At the end of the season a table of the final league standings is determined, based on the following criteria in this order: points obtained, goal difference, goals scored, an aggregate of the results between two or more clubs (ranked using the previous three criteria) and, finally, a series of one or more play-off matches.
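The ordering described above amounts to a multi-key sort. As a rough, hedged sketch (not taken from the source, and ignoring the head-to-head and play-off tie-breakers, which require match-level data), a final table could be ranked like this; the club names and records below are hypothetical:

```python
# Minimal sketch: rank a league table by points, then goal difference,
# then goals scored. Remaining ties would, in practice, be settled by
# head-to-head results and ultimately play-off matches.

def rank_table(clubs):
    """Return the clubs sorted into league order, best first."""
    return sorted(
        clubs,
        key=lambda c: (c["points"], c["goal_difference"], c["goals_scored"]),
        reverse=True,
    )

if __name__ == "__main__":
    # Hypothetical records, not real standings.
    table = [
        {"name": "Club A", "points": 86, "goal_difference": 30, "goals_scored": 75},
        {"name": "Club B", "points": 86, "goal_difference": 30, "goals_scored": 70},
        {"name": "Club C", "points": 90, "goal_difference": 25, "goals_scored": 68},
    ]
    for position, club in enumerate(rank_table(table), start=1):
        print(position, club["name"])
```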
At the end of each season the top two clubs, together with the winner of the play-offs between the clubs that finished in 3rd–6th position, are promoted to the EFL Championship and are replaced by the three clubs that finished at the bottom of that division.
Similarly, the four clubs that finished at the bottom of EFL League One are relegated to EFL League Two and are replaced by the top three clubs and the club that won the 4th–7th place play-offs in that division.
Sky Sports currently show live League One matches, with highlights shown on Channel 5's programme Football League Tonight, which also broadcasts highlights of EFL Championship and EFL League Two matches. Highlights of all games in the Football League are also available to view separately on the Sky Sports website. In Sweden, TV4 Sport holds the broadcasting rights for the league; a handful of league matches from the 2009–10 season, including the play-off matches and the play-off final for promotion to the Championship, were shown. In Australia, Setanta Sports Australia broadcasts live Championship matches. In the USA and surrounding countries, including Cuba, some EFL Championship, EFL League One and EFL League Two games are shown on beIN Sports.
The following 24 clubs are competing in League One during the 2017–18 season.
For past winners at this level before 2004, see List of winners of English Football League One and predecessors.
Bradford City 1–0 Fleetwood Town
Fleetwood Town 0–0 Bradford City
Starting from the 2012–13 season, a Financial Fair Play arrangement has been in place in all three divisions of the Football League, the intention being eventually to produce a league of financially self-sustaining clubs. In League One, this takes the form of a Salary Cost Management Protocol in which a maximum of 60% of a club's turnover may be spent on players' wages, with sanctions applied in the form of transfer embargoes.
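As a hedged illustration of the wage-cap arithmetic described above (the function and the figures below are hypothetical and not part of any official implementation of the protocol):

```python
# Sketch of the 60% Salary Cost Management Protocol check: player wages
# may not exceed 60% of a club's turnover, on pain of a transfer embargo.

def exceeds_salary_cap(turnover: float, player_wages: float, cap: float = 0.60) -> bool:
    """Return True if wage spending breaches the cap."""
    return player_wages > cap * turnover

# Hypothetical example: £10m turnover with £6.5m spent on wages.
print(exceeds_salary_cap(10_000_000, 6_500_000))  # True, i.e. over the 60% limit
```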
|
where does the name pepto bismol come from | Bismuth subsalicylate - wikipedia
Bismuth subsalicylate, sold under the brand name Pepto-Bismol, is an antacid medication used to treat temporary discomforts of the stomach and gastrointestinal tract, such as diarrhea, indigestion, heartburn and nausea. It is also commonly known as pink bismuth.
Bismuth subsalicylate has the empirical chemical formula C₇H₅BiO₄, and it is a colloidal substance obtained by hydrolysis of bismuth salicylate (Bi(C₆H₄(OH)CO₂)₃). The actual structure is unknown and the formulation is only approximate.
As a derivative of salicylic acid, bismuth subsalicylate displays anti-inflammatory and bactericidal action. It also acts as an antacid.
There are some adverse effects. It can cause a black tongue and black stools in some users of the drug, when it combines with trace amounts of sulfur in saliva and the colon to form bismuth sulfide. Bismuth sulfide is a highly insoluble black salt, and the discoloration seen is temporary and harmless.
Long-term use (greater than 6 weeks) may lead to accumulation and toxicity. Some of the risks of salicylism can apply to the use of bismuth subsalicylate.
Children should not take medication with bismuth subsalicylate while recovering from influenza or chicken pox, as epidemiologic evidence points to an association between the use of salicylate-containing medications during certain viral infections and the onset of Reye's syndrome. For the same reason, it is typically recommended that nursing mothers not use medication containing bismuth subsalicylate, because small amounts of the medication are excreted in human breast milk and these pose a theoretical risk of Reye's syndrome to nursing children.
Salicylates are very toxic to cats, and thus bismuth subsalicylate should not be administered to cats.
Bismuth subsalicylate is used as an antacid and antidiarrheal, and to treat some other gastrointestinal symptoms, such as nausea. The means by which this occurs is still not well documented; it is thought to involve some combination of the mechanisms described below.
In vitro and in vivo data have shown that bismuth subsalicylate hydrolyzes in the gut to bismuth oxychloride and salicylic acid, and less commonly to bismuth hydroxide. In the stomach, this is likely an acid-catalyzed hydrolysis. The salicylic acid is absorbed, and therapeutic concentrations of salicylic acid can be found in blood after bismuth subsalicylate administration. Bismuth oxychloride and bismuth hydroxide are both believed to have bactericidal effects, as is salicylic acid for enterotoxigenic E. coli, a common cause of "traveler's diarrhea".
Organobismuth compounds have historically been used in growth media for selective isolation of microorganisms. Such salts have been shown to inhibit proliferation of Helicobacter pylori, other enteric bacteria, and some fungi.
Bismuth subsalicylate is the only active ingredient in an over-the-counter drug that can leave a shiny metal oxide slag behind after being completely burnt with a blow torch.
While bismuth salts were in use in Europe by the late 1700s, the combination of bismuth subsalicylate and zinc salts for astringency with salol (phenyl salicylate) appears to have begun in the US in the early 1900s as a remedy for life-threatening diarrhea in infants with cholera. At first sold directly to physicians, it was first marketed as Bismosal in 1918.
Pepto-Bismol was first sold in 1900 or 1901 by a doctor in New York. It was originally sold as a remedy for infant diarrhea by the Norwich Pharmacal Company under the name "Bismosal: Mixture Cholera Infantum". It was renamed Pepto-Bismol in 1919. Norwich Eaton Pharmaceuticals was acquired by Procter and Gamble in 1982.
As of 1946, Canadian advertisements placed by Norwich showed the product as "Pepto-Besmol", both in graphics and text.
Pepto-Bismol is an over-the-counter drug currently produced by the Procter & Gamble company in the United States, Canada and the United Kingdom. It is made in chewable tablets and swallowable caplets, but it is best known for its original formula, a thick liquid. This original formula is medium pink in color, with a strong wintergreen or cherry flavor.
|