# James Madison
James Madison (March 16, 1751 – June 28, 1836) was an American statesman, diplomat, and Founding Father who served as the fourth president of the United States from 1809 to 1817. Madison is hailed as the "Father of the Constitution" for his pivotal role in drafting and promoting the Constitution of the United States and the Bill of Rights.

Madison was born into a prominent slave-owning planter family in Virginia. He served as a member of the Virginia House of Delegates and the Continental Congress during and after the American Revolutionary War. Dissatisfied with the weak national government established by the Articles of Confederation, he helped organize the Constitutional Convention, which produced a new constitution designed to strengthen republican government against democratic assembly. Madison's Virginia Plan was the basis for the convention's deliberations, and he was an influential voice at the convention. He became one of the leaders in the movement to ratify the Constitution and joined Alexander Hamilton and John Jay in writing The Federalist Papers, a series of pro-ratification essays that remains prominent among works of political science in American history.

Madison emerged as an important leader in the House of Representatives and was a close adviser to President George Washington. During the early 1790s, Madison opposed the economic program and the accompanying centralization of power favored by Secretary of the Treasury Hamilton. Alongside Thomas Jefferson, he organized the Democratic–Republican Party in opposition to Hamilton's Federalist Party. After Jefferson was elected president in 1800, Madison served as his Secretary of State from 1801 to 1809 and was the named defendant in Marbury v. Madison. While Madison was Secretary of State, Jefferson made the Louisiana Purchase, and later, as president, Madison oversaw related disputes in the Northwest Territories.

Madison was elected president in 1808. Motivated by a desire to acquire land held by Britain, Spain, and Native Americans, and after diplomatic protests and a trade embargo failed to end British seizures of American shipped goods, Madison led the United States into the War of 1812. Although the war ended inconclusively, many Americans viewed its outcome as a successful "second war of independence" against Britain. Madison was re-elected in 1812, albeit by a smaller margin. The war convinced Madison of the necessity of a stronger federal government. He presided over the creation of the Second Bank of the United States and the enactment of the protective Tariff of 1816. By treaty or through war, Native American tribes ceded 26,000,000 acres (11,000,000 ha) of land to the United States during Madison's presidency.

Retiring from public office at the end of his presidency in 1817, Madison returned to his plantation, Montpelier, and died there in 1836. During his lifetime, Madison was a slave owner. In 1783, to prevent a slave rebellion at Montpelier, Madison freed one of his slaves. He did not free any slaves in his will. Among historians, Madison is considered one of the most important Founding Fathers of the United States. Leading historians have generally ranked him as an above-average president, although they are critical of his endorsement of slavery and his leadership during the War of 1812.
Madison's name is commemorated in many landmarks across the nation, both publicly and privately, with prominent examples including Madison Square Garden, James Madison University, the James Madison Memorial Building, and the USS James Madison.

## Early life and education

James Madison Jr. was born on March 16, 1751 (March 5, 1750, Old Style), at Belle Grove Plantation near Port Conway in the Colony of Virginia, to James Madison Sr. and Eleanor Rose Conway Madison. His family had lived in Virginia since the mid-17th century. Madison's maternal grandfather, Francis Conway, was a prominent planter and tobacco merchant. His father was a tobacco planter who grew up on a plantation, then called Mount Pleasant, which he inherited upon reaching adulthood. With an estimated 100 slaves and a 5,000-acre (2,000 ha) plantation, Madison's father was among the largest landowners in Virginia's Piedmont.

In the early 1760s, the Madison family moved into a newly built house that they named Montpelier. Madison grew up as the oldest of twelve children, with seven brothers and four sisters, though only six lived to adulthood. Of the surviving three brothers (Francis, Ambrose, and William) and three sisters (Nelly, Sarah, and Frances), it was Ambrose who would eventually help to manage Montpelier for both his father and older brother until his own death in 1793. Madison was a second cousin of President Zachary Taylor, a descendant of Elder William Brewster, a Pilgrim leader of the Plymouth Colony, a Mayflower immigrant, and a signer of the Mayflower Compact, and of Isaac Allerton Jr., a colonial merchant, colonel, and son of Mayflower Pilgrim Isaac Allerton and Fear Brewster.

From age 11 to 16, Madison studied under Donald Robertson, a Scottish instructor who served as a tutor for several prominent planter families in the South. Madison learned mathematics, geography, and modern and classical languages, becoming exceptionally proficient in Latin. At age 16, Madison returned to Montpelier, where he studied under the Reverend Thomas Martin to prepare for college. Unlike most college-bound Virginians of his day, Madison did not attend the College of William and Mary, where the lowland Williamsburg climate—thought to be more likely to harbor infectious disease—might have strained his delicate health. Instead, in 1769, he enrolled at the College of New Jersey (later renamed Princeton University).

His college studies included Latin, Greek, theology, and the works of the Enlightenment. Emphasis was placed on both speech and debate; Madison was a leading member of the American Whig Society, which competed on campus with its political counterpart, the Cliosophic Society. During his time at Princeton, Madison's closest friend was future Attorney General William Bradford. Along with fellow classmate Aaron Burr, Madison undertook an intense program of study and completed the college's three-year Bachelor of Arts degree in two years, graduating in 1771. Madison had contemplated either entering the clergy or practicing law after graduation but instead remained at Princeton to study Hebrew and political philosophy under the college's president, John Witherspoon. He returned home to Montpelier in early 1772. Madison's ideas on philosophy and morality were strongly shaped by Witherspoon, who converted him to the philosophy, values, and modes of thinking of the Age of Enlightenment.
Biographer Terence Ball wrote that at Princeton, Madison "was immersed in the liberalism of the Enlightenment, and converted to eighteenth-century political radicalism. From then on James Madison's theories would advance the rights of happiness of man, and his most active efforts would serve devotedly the cause of civil and political liberty."

After returning to Montpelier, without a chosen career, Madison served as a tutor to his younger siblings. Madison began to study law books in 1773, asking his friend Bradford, a law apprentice, to send him a written plan of study. He had acquired an understanding of legal publications by 1783. Madison saw himself as a law student but not a lawyer; he did not apprentice himself to a lawyer and never joined the bar. Following the Revolutionary War, Madison spent time at Montpelier in Virginia studying ancient democracies of the world in preparation for the Constitutional Convention. Madison suffered from episodes of mental exhaustion and illness with associated nervousness, which often caused short-term incapacity after periods of stress. However, he enjoyed good physical health until his final years.

## American Revolution and Articles of Confederation

During the 1760s and 1770s, American colonists protested tightened British tax, monetary, and military laws forced on them by Parliament. In 1765, the British Parliament passed the Stamp Act, which provoked strong opposition from the colonists and began a conflict that would culminate in the American Revolution. The American Revolutionary War broke out on April 19, 1775, and was ended by the Treaty of Paris signed on September 3, 1783. The colonists formed three prominent factions: Loyalists, who continued to back King George III of Great Britain; a significant neutral faction without firm commitments to either Loyalists or Patriots; and the Patriots, whom Madison joined, under the leadership of the Continental Congress. Madison believed that Parliament had overstepped its bounds by attempting to tax the American colonies, and he sympathized with those who resisted British rule.

Debate about the consecration of bishops for America had long been ongoing, and the British Parliament eventually passed legislation (subsequently called the Consecration of Bishops Abroad Act 1786) to allow bishops to be consecrated for an American church outside of allegiance to the British Crown. Both in the United States and in Canada, the new Anglican churches began incorporating more active forms of polity in their own self-government, collective decision-making, and self-supported financing; these measures would be consistent with separation of religious and secular identities. Madison believed these measures to be insufficient and also favored disestablishing the Anglican Church in Virginia; he held that tolerance of an established religion was detrimental not only to freedom of religion but also because it encouraged excessive deference to any authority that might be asserted by an established church.

After returning to Montpelier in 1774, Madison took a seat on the local Committee of Safety, a pro-revolution group that oversaw the local Patriot militia. In October 1775, he was commissioned as the colonel of the Orange County militia, serving as his father's second-in-command until he was elected as a delegate to the Fifth Virginia Convention, which was charged with producing Virginia's first constitution.
Although Madison never fought in the Revolutionary War, he did rise to prominence in Virginia politics as a wartime leader. At the Virginia constitutional convention, he convinced delegates to alter the Virginia Declaration of Rights, originally drafted on May 20, 1776, to provide for "equal entitlement", rather than mere "tolerance", in the exercise of religion. With the enactment of the Virginia constitution, Madison became part of the Virginia House of Delegates, and he was subsequently elected to the Virginia governor's Council of State, where he became a close ally of Governor Thomas Jefferson. On July 4, 1776, the United States Declaration of Independence was formally adopted, declaring the 13 American states an independent nation.

Madison participated in the debates concerning the Articles of Confederation in November 1777, contributing to the discussion of religious freedom that affected the drafting of the Articles, though his signature was not required for their adoption. Madison had proposed liberalizing the article on religious freedom, but the larger Virginia Convention stripped the proposed constitution of its more radical language, reducing the "free expression" of faith to the less controversial mention of "tolerance" within religion. Other amendments by the committee and the entire Convention included the addition of a section on the right to a uniform government. Madison again served on the Council of State, from 1777 to 1779, when he was elected to the Second Continental Congress, the governing body of the United States.

During Madison's term in Congress from 1780 to 1783, the U.S. faced a difficult war against Great Britain, as well as runaway inflation, financial troubles, and a lack of cooperation between the different levels of government. According to historian J. C. A. Stagg, Madison worked to become an expert on financial issues, becoming a legislative workhorse and a master of parliamentary coalition building. Frustrated by the failure of the states to supply needed requisitions, Madison proposed to amend the Articles of Confederation to grant Congress the power to independently raise revenue through tariffs on imports. Though General George Washington, Congressman Alexander Hamilton, and other leaders also favored the tariff amendment, it was defeated because it failed to win the ratification of all thirteen states. While a member of Congress, Madison was an ardent supporter of a close alliance between the United States and France. As an advocate of westward expansion, he insisted that the new nation had to ensure its right to navigation on the Mississippi River and control of all lands east of it in the Treaty of Paris, which ended the Revolutionary War. Following his term in Congress, Madison won election to the Virginia House of Delegates in 1784.

## Ratification of the Constitution

As a member of the Virginia House of Delegates, Madison continued to advocate for religious freedom, and, along with Jefferson, drafted the Virginia Statute for Religious Freedom. The statute, which guaranteed freedom of religion and disestablished the Church of England in Virginia, was passed in 1786. Madison also became a land speculator, purchasing land along the Mohawk River in partnership with another Jefferson protégé, James Monroe. Throughout the 1780s, Madison became increasingly worried about the disunity of the states and the weakness of the central government after the end of the Revolutionary War.
He believed that direct democracy caused social decay and that a republican government would be effective against partisanship and factionalism. He was particularly troubled by laws that legalized paper money and denied diplomatic immunity to ambassadors from other countries. Madison was also concerned about Congress's inability to capably conduct foreign policy, protect American trade, and foster the settlement of the lands between the Appalachian Mountains and the Mississippi River. As Madison wrote, "a crisis had arrived which was to decide whether the American experiment was to be a blessing to the world, or to blast for ever the hopes which the republican cause had inspired."

Madison committed to an intense study of law and political theory and also was influenced by Enlightenment texts sent by Jefferson from France. He especially sought out works on international law and the constitutions of "ancient and modern confederacies" such as the Dutch Republic, the Swiss Confederation, and the Achaean League. He came to believe that the United States could improve upon past republican experiments because of its size: with the thirteen states encompassing so many distinct interests competing against each other, Madison hoped that the abuses of majority rule could be minimized. Additionally, navigation rights to the major trade routes accessed by the Mississippi River highly concerned Madison. He opposed the proposal by John Jay that the United States concede claims to the river for 25 years, and, according to historian Ralph Ketcham, Madison's desire to fight the proposal was a major motivation for his return to Congress in 1787.

Leading up to the 1787 Constitutional Convention in Philadelphia, Madison worked with other members of the Virginia delegation, especially Edmund Randolph and George Mason, to create and present the Virginia Plan, an outline for a new federal constitution. It called for three branches of government (legislative, executive, and judicial), a bicameral Congress (consisting of the Senate and the House of Representatives) apportioned by population, and a federal Council of Revision that would have the right to veto laws passed by Congress. The Virginia Plan did not explicitly lay out the structure of the executive branch, but Madison himself favored a strong single executive. Many delegates were surprised to learn that the plan called for the abrogation of the Articles and the creation of a new constitution, to be ratified by special conventions in each state, rather than by the state legislatures. With the assent of prominent attendees such as Washington and Benjamin Franklin, the delegates agreed in a secret session that the abrogation of the Articles and the creation of a new constitution was a plausible option and began scheduling the process of debating its ratification in the individual states. As a compromise between large and small states, representation in the House was apportioned by population, while each state received equal representation in the Senate.

After the Philadelphia Convention ended in September 1787, Madison convinced his fellow congressmen to remain neutral in the ratification debate and allow each state to vote on the Constitution. Those who supported the Constitution, including Madison, were called Federalists. Throughout the United States, opponents of the Constitution, known as Anti-Federalists, began a public campaign against ratification.
In response, starting in October 1787, Hamilton and John Jay, both Federalists, began publishing a series of pro-ratification newspaper articles in New York. After Jay dropped out of the project, Hamilton approached Madison, who was in New York on congressional business, to write some of the essays. The essays were published under the pseudonym of Publius. The trio produced 85 essays known as The Federalist Papers, divided into two parts: 36 essays arguing against the Articles of Confederation and 49 favoring the new Constitution. The articles were also published in book form and used by the supporters of the Constitution in the ratifying conventions.

Federalist No. 10, Madison's first contribution to The Federalist Papers, became highly regarded in the 20th century for its advocacy of representative democracy. In it, Madison describes the dangers posed by majority factions and argues that their effects can be limited through the formation of a large republic. He theorizes that in large republics the large number of factions that emerge will check one another's influence because no single faction can become a majority. In Federalist No. 51, he goes on to explain how the separation of powers between three branches of the federal government, as well as between state governments and the federal government, establishes a system of checks and balances that ensures that no one institution becomes too powerful.

As the Virginia ratification convention began, Madison focused his efforts on winning the support of the relatively small number of undecided delegates. His long correspondence with Randolph paid off at the convention, as Randolph announced that he would support unconditional ratification of the Constitution, with amendments to be proposed after ratification. Though former Virginia governor Patrick Henry gave several persuasive speeches arguing against ratification, Madison's long-developed expertise on the subject allowed him to respond with rational arguments to Henry's Anti-Federalist appeals. Madison was also a defender of federal veto rights and, according to historian Ron Chernow, "pleaded at the Constitutional Convention that the federal government should possess a veto over state laws". In his final speech to the ratifying convention, Madison implored his fellow delegates to ratify the Constitution as it had been written, arguing that failure to do so would lead to the collapse of the entire ratification effort, as each state would seek favorable amendments. On June 25, 1788, the convention voted 89–79 in favor of ratification. The vote came a week after New Hampshire became the ninth state to ratify, thereby securing the Constitution's adoption and, with that, a new form of government. The following January, Washington was elected the nation's first president.

## Congressman and party leader (1789–1801)

### Election to Congress

After Virginia ratified the Constitution, Madison returned to New York and resumed his duties in the Congress of the Confederation. After Madison was defeated in his bid for the Senate, and with concerns for both his political career and the possibility that Patrick Henry and his allies would arrange for a second constitutional convention, Madison ran for the House of Representatives. Henry and the Anti-Federalists were in firm control of the General Assembly in the autumn of 1788. At Henry's behest, the Virginia legislature drew congressional districts designed to deny Madison a seat.
Henry and his supporters ensured that Orange County was placed in a district where Anti-Federalists outnumbered Madison's supporters roughly three to one, an early example of the practice later called gerrymandering. Henry also recruited James Monroe, a strong challenger to Madison. Locked in a difficult race against Monroe, Madison promised to support a series of constitutional amendments to protect individual liberties. In an open letter, Madison wrote that, while he had opposed requiring alterations to the Constitution before ratification, he now believed that "amendments, if pursued with a proper moderation and in a proper mode ... may serve the double purpose of satisfying the minds of well-meaning opponents, and of providing additional guards in favor of liberty." Madison's promise paid off: in the election for Virginia's 5th district, he won a seat in Congress with 57 percent of the vote.

Madison became a key adviser to Washington, who valued Madison's understanding of the Constitution. Madison helped Washington write his first inaugural address and also prepared the official House response to Washington's speech. He played a significant role in establishing and staffing the three Cabinet departments, and his influence helped Thomas Jefferson become the first Secretary of State. At the start of the first Congress, he introduced a tariff bill similar to the one he had advocated for under the Articles of Confederation, and Congress established a federal tariff on imports by enacting the Tariff of 1789. The following year, Secretary of the Treasury Hamilton introduced an ambitious economic program that called for the federal assumption of state debts and the funding of that debt through the issuance of federal securities. Hamilton's plan favored Northern speculators and was disadvantageous to states, such as Virginia, that had already paid off most of their debt; Madison emerged as one of the principal Congressional opponents of the plan. After prolonged legislative deadlock, Madison, Jefferson, and Hamilton agreed to the Compromise of 1790, which provided for the enactment of Hamilton's assumption plan as part of the Funding Act of 1790. In return, Congress passed the Residence Act, which established the federal capital district of Washington, D.C., on the Potomac River.

### Bill of Rights

During the first Congress, Madison took the lead in advocating for the constitutional amendments that would become the Bill of Rights. His primary goals were to fulfill his 1789 campaign pledge and to prevent the calling of a second constitutional convention, but he also hoped to safeguard the rights and liberties of the people against broad actions of Congress and individual states. He believed that the enumeration of specific rights would fix those rights in the public mind and encourage judges to protect them. After studying more than two hundred amendments that had been proposed at the state ratifying conventions, Madison introduced the Bill of Rights on June 8, 1789. His amendments contained numerous restrictions on the federal government and would protect, among other things, freedom of religion, freedom of speech, and the right to peaceful assembly. While most of his proposed amendments were drawn from the ratifying conventions, Madison was largely responsible for proposals to guarantee freedom of the press, protect property from government seizure, and ensure jury trials.
He also proposed an amendment to prevent states from abridging "equal rights of conscience, or freedom of the press, or the trial by jury in criminal cases". Madison's Bill of Rights faced little opposition; he had largely co-opted the Anti-Federalist goal of amending the Constitution but had avoided proposing amendments that would alienate supporters of the Constitution. His amendments were mostly adopted by the House of Representatives as proposed, but the Senate made several changes. Madison's proposal to apply parts of the Bill of Rights to the states was eliminated, as was his proposed change to the Constitution's preamble, which he thought would be enhanced by a prefatory paragraph indicating that governmental power is vested by the people. He was disappointed that the Bill of Rights did not include protections against actions by state governments, but the passage of the document mollified some critics of the original constitution and shored up his support in Virginia. Ten amendments were finally ratified on December 15, 1791, becoming known in their final form as the Bill of Rights.

### Founding the Democratic–Republican Party

After 1790, the Washington administration became polarized into two main factions. One faction, led by Jefferson and Madison, broadly represented Southern interests and sought close relations with France; it became the Democratic–Republican Party, organized in opposition to Secretary of the Treasury Hamilton. The other faction, led by Hamilton and the Federalists, broadly represented Northern financial interests and favored close relations with Britain. In 1791, Hamilton introduced a plan that called for the establishment of a national bank to provide loans to emerging industries and oversee the money supply. Madison and the Democratic–Republicans saw the bank as an attempt to expand the power of the federal government; they opposed Hamilton's plan, and Madison argued that under the Constitution, Congress did not have the power to create a federally empowered national bank. Despite Madison's opposition, Congress passed a bill to create the First Bank of the United States, which Washington signed into law in February 1791.

As Hamilton implemented his economic program and Washington continued to enjoy immense prestige as president, Madison became increasingly concerned that Hamilton would seek to abolish the federal republic in favor of a centralized monarchy. When Hamilton submitted his Report on Manufactures, which called for federal action to stimulate the development of a diversified economy, Madison once again challenged Hamilton's proposal. Along with Jefferson, Madison helped Philip Freneau establish the National Gazette, a Philadelphia newspaper that attacked Hamilton's proposals. In an essay published in the newspaper in September 1792, Madison wrote that the country had divided into two factions: his faction, which believed "that mankind are capable of governing themselves", and Hamilton's faction, which allegedly sought the establishment of an aristocratic monarchy and was biased in favor of the wealthy. Those opposed to Hamilton's economic policies, including many former Anti-Federalists, continued to strengthen the ranks of the Democratic–Republican Party, while those who backed the administration's policies supported Hamilton's Federalist Party.
In the 1792 presidential election, both major parties supported Washington for re-election, but the Democratic–Republicans sought to unseat Vice President John Adams. Because the Constitution's rules essentially precluded Jefferson from challenging Adams, the party backed New York Governor George Clinton for the vice presidency, but Adams won nonetheless. With Jefferson out of office after 1793, Madison became the de facto leader of the Democratic–Republican Party.

When Britain and France went to war in 1793, the U.S. needed to determine which side to support. While the differences between the Democratic–Republicans and the Federalists had previously centered on economic matters, foreign policy became an increasingly important issue, as Madison and Jefferson favored France and Hamilton favored Britain. War with Britain loomed in 1794 after the British seized hundreds of American ships that were trading with French colonies. Madison believed that a trade war with Britain would probably succeed, and would allow Americans to assert their independence fully. The British West Indies, Madison maintained, could not live without American foodstuffs, but Americans could easily do without British manufactures. Washington then secured friendly trade relations with Britain through the Jay Treaty of 1794. Madison and his Democratic–Republican allies were outraged by the treaty; the Democratic–Republican Robert R. Livingston wrote to Madison that the treaty "sacrifices every essential interest and prostrates the honor of our country". Madison's strong opposition to the treaty led to a permanent break with Washington, ending their friendship.

### Marriage and family

On September 15, 1794, Madison married Dolley Payne Todd, the 26-year-old widow of John Todd, a Quaker lawyer who died during a yellow fever epidemic in Philadelphia. Earlier that year, Madison and Dolley Todd had been formally introduced at Madison's request by Aaron Burr, who had become friends with her while staying at the same Philadelphia boardinghouse. After the arranged meeting in early 1794, the two quickly became romantically engaged and prepared for a wedding that summer, but Todd suffered recurring illnesses because of her exposure to yellow fever in Philadelphia. They eventually traveled to Harewood in Virginia for their wedding. Only a few close family members attended, and Winchester reverend Alexander Balmain presided. Dolley became a renowned figure in Washington, D.C., and excelled at hosting dinners and other important political occasions. She subsequently helped to establish the modern image of the first lady of the United States as an individual who has a leading role in the social affairs of the nation.

Throughout his life, Madison maintained a close relationship with his father, James Sr. Eventually, at age 50, Madison inherited the large plantation of Montpelier and other possessions, including his father's numerous slaves. While Madison never had children with Dolley, he adopted her one surviving son, John Payne Todd (known as Payne), after the couple's marriage. Some of his colleagues, such as Monroe and Burr, believed Madison's lack of offspring weighed on his thoughts, though he never spoke of any distress. Meanwhile, oral history has suggested Madison may have fathered a child with his enslaved half-sister, a cook named Coreen, but researchers were unable to gather the DNA evidence needed to determine the validity of the accusation.
### Adams presidency

Washington chose to retire after serving two terms and, in advance of the 1796 presidential election, Madison helped convince Jefferson to run for the presidency. Despite Madison's efforts, Federalist candidate John Adams defeated Jefferson, taking a narrow majority of the electoral vote. Under the rules of the Electoral College then in place, Jefferson became vice president because he finished with the second-most electoral votes. Madison, meanwhile, had declined to seek re-election to the House, and he returned to Montpelier. On Jefferson's advice, Adams considered appointing Madison to an American delegation charged with ending French attacks on American shipping, but Adams's cabinet members strongly opposed the idea. Though he was out of office, Madison remained a prominent Democratic–Republican leader in opposition to the Adams administration.

Madison and Jefferson believed that the Federalists were using the Quasi-War with France to justify the violation of constitutional rights by passing the Alien and Sedition Acts, and they increasingly came to view Adams as a monarchist. Both Madison and Jefferson, as leaders of the Democratic–Republican Party, expressed the belief that natural rights were non-negotiable even during a time of war. Madison believed that the Alien and Sedition Acts formed a dangerous precedent by giving the government the power to look past the natural rights of its people in the name of national security. In response to the Alien and Sedition Acts, Jefferson argued that the states had the power to nullify federal law on the grounds that the Constitution was a compact among the states. Madison rejected this view of nullification and urged that states respond to unjust federal laws through interposition, a process by which a state legislature declared a law to be unconstitutional but did not take steps to actively prevent its enforcement. Jefferson's doctrine of nullification was widely rejected, and the incident damaged the Democratic–Republican Party as attention was shifted from the Alien and Sedition Acts to the unpopular nullification doctrine.

In 1799, Madison was elected to the Virginia legislature. At the same time, Madison planned for Jefferson's campaign in the 1800 presidential election. Madison issued the Report of 1800, which attacked the Alien and Sedition Acts as unconstitutional. That report held that Congress was limited to legislating on its enumerated powers and that punishment for sedition violated freedom of speech and freedom of the press. Jefferson embraced the report, and it became the unofficial Democratic–Republican platform for the 1800 election. With the Federalists divided between supporters of Hamilton and Adams, and with news of the end of the Quasi-War not reaching the United States until after the election, Jefferson and his running mate, Aaron Burr, defeated Adams, and Jefferson prevailed as president.

## Secretary of State (1801–1809)

Madison was one of two major influences in Jefferson's Cabinet, the other being Secretary of the Treasury Albert Gallatin. Madison was appointed secretary of state despite lacking foreign policy experience. An introspective individual, Madison relied deeply on his wife for assistance in dealing with the social pressures of being a public figure, both during his tenure as secretary of state and afterward.
As the ascent of Napoleon in France had dulled Democratic–Republican enthusiasm for the French cause, Madison sought a neutral position in the ongoing Coalition Wars between France and Britain. Domestically, the Jefferson administration and the Democratic–Republican Congress rolled back many Federalist policies; Congress quickly repealed the Alien and Sedition Acts, abolished internal taxes, and reduced the size of the army and navy. Gallatin, however, did convince Jefferson to retain the First Bank of the United States. Though Federalist political power was rapidly fading at the national level, Chief Justice John Marshall ensured that Federalist ideology retained an important presence in the judiciary. In the case of Marbury v. Madison, Marshall ruled that Madison had unjustly refused to deliver federal commissions to individuals who had been appointed by the previous administration, but also that the Supreme Court did not have jurisdiction over the case. Most importantly, Marshall's opinion established the principle of judicial review. Madison maintained contact with his father, James Sr., until the latter's death in 1801, which left Madison to inherit the large plantation of Montpelier.

Jefferson took office sympathetic to the westward expansion of Americans who had settled as far west as the Mississippi River; his sympathy for expansion was reinforced by his concern that the far west was sparsely populated compared to the eastern states, being inhabited almost exclusively by Native Americans. Jefferson promoted such western expansion and hoped to acquire the Spanish territory of Louisiana, west of the Mississippi River, for expansionist purposes. Early in Jefferson's presidency, the administration learned that Spain planned to retrocede the Louisiana territory to France, raising fears of French encroachment on U.S. territory. In 1802, Jefferson and Madison sent Monroe, a sympathetic fellow Virginian, to France to negotiate the purchase of New Orleans, which controlled access to the Mississippi River and thus was immensely important to the farmers of the American frontier. Rather than merely selling New Orleans, Napoleon's government, having already given up on plans to establish a new French empire in the Americas, offered to sell the entire territory of Louisiana. Despite lacking explicit authorization from Jefferson, Monroe, along with Livingston, whom Jefferson had appointed as America's minister to France, negotiated the Louisiana Purchase, in which France sold 827,987 square miles (2,144,480 square kilometers) of land in exchange for $15 million.

Despite the time-sensitive nature of negotiations with the French, Jefferson was concerned about the constitutionality of the Louisiana Purchase, and he privately favored introducing a constitutional amendment explicitly authorizing Congress to acquire new territories. Madison convinced Jefferson to refrain from proposing the amendment, and the administration ultimately submitted the Louisiana Purchase Treaty for approval by the Senate, without an accompanying constitutional amendment. Unlike Jefferson, Madison was not seriously concerned with the constitutionality of the purchase. He believed that the circumstances did not warrant a strict interpretation of the Constitution, because the expansion was in the country's best interest.
The Senate quickly ratified the treaty, and the House, with equal alacrity, passed enabling legislation. Early in his tenure, Jefferson was able to maintain cordial relations with both France and Britain, but relations with Britain deteriorated after 1805. The British ended their policy of tolerance towards American shipping and began seizing American goods headed for French ports. They also impressed American sailors, some of whom had originally defected from the British navy, but others of whom had never been British subjects. In response to the attacks, Congress passed the Non-importation Act, which restricted many, but not all, British imports. Tensions with Britain were heightened by the Chesapeake–Leopard affair, a June 1807 confrontation between American and British naval forces, while the French also began attacking American shipping. Madison believed that economic pressure could force the British to end their seizure of American shipped goods, and he and Jefferson convinced Congress to pass the Embargo Act of 1807, which banned all exports to foreign nations. The embargo proved ineffective, unpopular, and difficult to enforce, especially in New England. In March 1809, Congress replaced the embargo with the Non-Intercourse Act, which allowed trade with nations other than Britain and France.

### 1808 presidential election

Speculation regarding Madison's potential succession to Jefferson commenced early in Jefferson's first term. Madison's status in the party was damaged by his association with the embargo, which was unpopular throughout the country and especially in the Northeast. With the Federalists collapsing as a national party after 1800, the chief opposition to Madison's candidacy came from other members of the Democratic–Republican Party. Madison became the target of attacks from Congressman John Randolph, a leader of a faction of the party known as the tertium quids. Randolph recruited Monroe, who had felt betrayed by the administration's rejection of the proposed Monroe–Pinkney Treaty with Britain, to challenge Madison for leadership of the party. Many Northerners, meanwhile, hoped that Vice President Clinton could unseat Madison as Jefferson's successor. Despite this opposition, Madison won his party's presidential nomination at the January 1808 congressional nominating caucus. The Federalist Party mustered little strength outside New England, and Madison easily defeated Federalist candidate Charles Cotesworth Pinckney in the general election.

## Presidency (1809–1817)

### Inauguration and cabinet

Madison's inauguration took place on March 4, 1809, in the House chamber of the U.S. Capitol. Chief Justice Marshall administered the presidential oath of office to Madison while outgoing President Jefferson watched from a seat close by. Vice President George Clinton was sworn in for a second term, making him the first U.S. vice president to serve under two presidents. Unlike Jefferson, who enjoyed relatively unified support, Madison faced political opposition from previous political allies such as Monroe and Clinton. Additionally, the Federalist Party was resurgent owing to opposition to the embargo. Aside from his planned nomination of Gallatin for secretary of state, the remaining members of Madison's Cabinet were chosen merely to further political harmony and, according to historians Ketcham and Rutland, were largely unremarkable or incompetent.
Due to the opposition of Monroe and Clinton, Madison immediately faced resistance to his planned nomination of Secretary of the Treasury Gallatin as secretary of state. Madison eventually chose not to nominate Gallatin, keeping him in the Treasury Department, and settled instead on Robert Smith, the brother of Maryland Senator Samuel Smith, to be the secretary of state. However, for the next two years, Madison performed most of the job of the secretary of state due to Smith's incompetence. After bitter intra-party contention, Madison finally replaced Smith with Monroe in April 1811. With a Cabinet full of those he distrusted, Madison rarely called Cabinet meetings and instead frequently consulted with Gallatin alone. Early in his presidency, Madison sought to continue Jefferson's policies of low taxes and a reduction of the national debt. In 1811, Congress allowed the charter of the First Bank of the United States to lapse after Madison declined to take a strong stance on the issue.

### War of 1812

#### Prelude to war

Congress had repealed the Embargo Act of 1807 shortly before Madison became president, but troubles with the British and French continued. Madison settled on a new strategy designed to pit the British and French against each other, offering to trade with whichever country would end its attacks against American shipping. The gambit almost succeeded, but negotiations with the British collapsed in mid-1809. Seeking to drive a wedge between the Americans and the British, Napoleon offered to end French attacks on American shipping so long as the United States punished any countries that did not similarly end restrictions on trade. Madison accepted Napoleon's proposal in the hope that it would convince the British to finally end their policy of commercial warfare. Nevertheless, the British refused to change their policies, and the French reneged on their promise and continued to attack American shipping.

With sanctions and other policies having failed, Madison determined that war with Britain was the only remaining option. Many Americans called for a "second war of independence" to restore honor and stature to their new nation, and an angry public elected a "war hawk" Congress, led by Henry Clay and John C. Calhoun. With Britain already engaged in the Napoleonic Wars, many Americans, including Madison, believed that the United States could easily capture Canada, using it as a bargaining chip for other disputes or simply retaining control of it. On June 1, 1812, Madison asked Congress for a declaration of war, stating that the United States could no longer tolerate Britain's "state of war against the United States". The declaration of war was passed along sectional and party lines, with opposition coming from Federalists and from some Democratic–Republicans in the Northeast. In the years prior to the war, Jefferson and Madison had reduced the size of the military, leaving the country with a military force consisting mostly of poorly trained militia members. Madison asked Congress to quickly put the country "into an armor and an attitude demanded by the crisis", specifically recommending expansion of the army and navy.

#### Military actions

Given the circumstances involving Napoleon in Europe, Madison initially believed the war would result in a quick American victory.
Madison ordered three separate invasion thrusts into Canada, starting from Fort Detroit, designed to loosen British control around American-held Fort Niagara and destroy the British supply lines from Montreal. These actions would gain leverage for concessions to protect American shipping in the Atlantic. Lacking a sizable standing army, Madison counted on state militias to rally to the flag and invade Canada, but governors in the Northeast failed to cooperate. The British army was more organized, used professional soldiers, and fostered an alliance with Native American tribes led by Tecumseh. On August 16, during the British siege of Detroit, Major General William Hull panicked after the British fired on the fort, killing two American officers. Terrified of an Indian attack, Hull ordered a white tablecloth hung out a window and unconditionally surrendered Fort Detroit and his entire army to British Major-General Sir Isaac Brock. Hull was court-martialed for cowardice, but Madison intervened and saved him from being shot. On October 13, a separate American force was defeated at Queenston Heights, although Brock was killed. Commanding General Henry Dearborn, hampered by mutinous New England infantry, retreated to winter quarters near Albany, failing to destroy Montreal's vulnerable British supply lines. Lacking adequate revenue to fund the war, the Madison administration was forced to rely on high-interest loans furnished by bankers based in New York City and Philadelphia.

In the 1812 presidential election, held during the early stages of the war, Madison was re-nominated without opposition. A dissident group of New York Democratic–Republicans nominated DeWitt Clinton, the lieutenant governor of New York and a nephew of recently deceased Vice President George Clinton, to oppose Madison in the 1812 election. This faction hoped to unseat the president by forging a coalition among Democratic–Republicans opposed to the coming war, party faithful angry with Madison for not moving more decisively toward war, northerners weary of the Virginia dynasty and southern control of the White House, and New Englanders who wanted Madison replaced. Dismayed about their prospects of beating Madison, a group of top Federalists met with Clinton's supporters to discuss a unification strategy. Difficult as it was for them to join forces, they nominated Clinton for president and Jared Ingersoll, a Philadelphia lawyer, for vice president. Hoping to shore up his support in the Northeast, where the War of 1812 was unpopular, Madison selected Governor Elbridge Gerry of Massachusetts as his running mate, though the elderly Gerry would survive only two years after the election. Despite the maneuverings of Clinton and the Federalists, Madison won re-election, though by the narrowest margin of any presidential election since 1800. He received 128 electoral votes to 89 for Clinton. Clinton won most of the Northeast, but Madison carried Pennsylvania and swept the South and the West, ensuring his victory.

After the disastrous start to the war, Madison accepted Russia's invitation to arbitrate and sent a delegation led by Gallatin and John Quincy Adams (son of former President John Adams) to Europe to negotiate a peace treaty.
While Madison worked to end the war, the United States experienced some impressive naval successes, by the USS Constitution and other warships, that boosted American morale. Victorious in the Battle of Lake Erie, the U.S. crippled the supply and reinforcement of British military forces in the western theater of the war. In the aftermath of the Battle of Lake Erie, General William Henry Harrison defeated the forces of the British and of Tecumseh's confederacy at the Battle of the Thames. The death of Tecumseh in that battle marked the permanent end of armed Native American resistance in the Old Northwest and any hope of a united Indian nation. In March 1814, Major General Andrew Jackson broke the resistance of the British-allied Muscogee Creek in the Old Southwest with his victory at the Battle of Horseshoe Bend. Despite those successes, the British continued to repel American attempts to invade Canada, and a British force captured Fort Niagara and burned the American city of Buffalo in late 1813.

On August 24, 1814, the British landed a large force on the shores of Chesapeake Bay and routed General William Winder's army at the Battle of Bladensburg. Madison, who had earlier inspected Winder's army, escaped British capture by fleeing to Virginia on a fresh horse, though the British captured Washington and burned many of its buildings, including the White House. Dolley had also escaped capture, abandoning the capital and fleeing to Virginia, but only after securing the portrait of George Washington. The charred remains of the capital signified a humiliating defeat for James Madison and America. On August 27, Madison returned to Washington to view the carnage of the city. Dolley returned the next day, and on September 8, the Madisons moved into the Octagon House. The British army next advanced on Baltimore, but the U.S. repelled the British attack in the Battle of Baltimore, and the British army departed from the Chesapeake region in September. That same month, U.S. forces repelled a British invasion from Canada with a victory at the Battle of Plattsburgh. The British public began to turn against the war in North America, and British leaders began to look for a quick exit from the conflict.

In January 1815, Jackson's troops defeated the British at the Battle of New Orleans. Just more than a month later, Madison learned that his negotiators, led by John Quincy Adams, had concluded the Treaty of Ghent on December 24, 1814, which ended the war. Madison quickly sent the treaty to the Senate, which ratified it on February 16, 1815. Although the war ended in a standoff, the quick succession of events at its end, including the burning of the capital, the Battle of New Orleans, and the Treaty of Ghent, made it appear as though American valor at New Orleans had forced the British to surrender. This view, while inaccurate, strongly contributed to bolstering Madison's reputation as president. Native Americans lost the most, including their land and independence. Napoleon's defeat at the June 1815 Battle of Waterloo brought a final close to the Napoleonic Wars and thus an end to the hostile seizure of American shipping by British and French forces.

### Postwar period and decline of the Federalist opposition

The postwar period of Madison's second term saw the transition into the "Era of Good Feelings" between mid-1815 and 1817, with the Federalists experiencing a further decline in influence.
During the war, delegates from the New England states held the Hartford Convention, where they asked for several amendments to the Constitution. Though the Hartford Convention did not explicitly call for the secession of New England, it became a political millstone around the Federalist Party's neck, as general American sentiment had moved toward celebrating unity among the states in what many saw as a successful "second war of independence" from Britain. Madison hastened the decline of the Federalists by adopting several programs he had previously opposed. Recognizing the difficulties of financing the war and the necessity of an institution to regulate American currency, Madison proposed the re-establishment of a national bank. He also called for increased spending on the army and the navy, a tariff designed to protect American goods from foreign competition, and a constitutional amendment authorizing the federal government to fund the construction of internal improvements such as roads and canals. Madison's new willingness to act on behalf of a national bank appeared to reverse his earlier opposition to Hamilton and was opposed by strict constructionists such as John Randolph, who stated that Madison's proposals now "out-Hamiltons Alexander Hamilton".

Responding to Madison's proposals, the 14th Congress compiled one of the most productive legislative records up to that point in history. Congress granted the Second Bank of the United States a twenty-five-year charter and passed the Tariff of 1816, which set high import duties for all goods that were produced outside the United States. Madison approved federal spending on the Cumberland Road, which provided a link to the country's western lands; still, in his last act before leaving office, he blocked further federal spending on internal improvements by vetoing the Bonus Bill of 1817, arguing that it unduly exceeded the limits of the General Welfare Clause concerning such improvements.

### Native American policy

Upon becoming president, Madison said the federal government's duty was to convert Native Americans by the "participation of the improvements of which the human mind and manners are susceptible in a civilized state". In 1809, General Harrison began to push for a treaty to open more land for white American settlement. The Miami, Wea, and Kickapoo were vehemently opposed to selling any more land around the Wabash River. To motivate those groups to sell their land, Harrison decided, against the wishes of Madison, to first conclude a treaty with the tribes who were willing to sell and use those treaties to help influence those who held out. In September 1809, Harrison invited the Potawatomie, Delaware, Eel Rivers, and the Miami to a meeting in Fort Wayne. During the negotiations, Harrison promised large subsidies and direct payments to the tribes if they would cede the other lands under discussion. On September 30, 1809, little more than six months into his first term, Madison agreed to the Treaty of Fort Wayne, negotiated and signed by Indiana Territory's Governor Harrison. In the treaty, the American Indian tribes were compensated $5,200 in goods and $500 in cash, with $250 in annual payments, in return for the cession of 3 million acres of land (approximately 12,140 square kilometers), with incentivized subsidies paid to individual tribes for exerting their influence over less cooperative tribes. The treaty angered Shawnee leader Tecumseh, who said, "Sell a country!
Why not sell the air, the clouds and the great sea, as well as the earth?" Harrison responded that tribes were the owners of their land and could sell it to whomever they wished. Like Jefferson, Madison had a paternalistic attitude toward American Indians, encouraging them to become farmers. Madison believed the adoption of European-style agriculture would help Native Americans assimilate the values of British–U.S. civilization. As pioneers and settlers moved west into large tracts of Cherokee, Choctaw, Creek, and Chickasaw territory, Madison ordered the U.S. Army to protect Native lands from intrusion by settlers, much to the chagrin of his military commander Andrew Jackson, who wanted Madison to ignore Indian pleas to stop the invasion of their lands.

Tensions between the United States and Tecumseh continued to mount over the 1809 Treaty of Fort Wayne. The divisions among the Native American leaders were bitter, and before leaving the discussions, Tecumseh informed Harrison that unless the terms of the negotiated treaty were largely nullified, he would seek an alliance with the British. The situation escalated into hostilities between Tecumseh's followers and American settlers, ultimately leading to Tecumseh's alliance with the British and culminating in the Battle of Tippecanoe on November 7, 1811, in the Northwest Territory, during a period sometimes called Tecumseh's War. Tecumseh was defeated, and Indians were pushed off their tribal lands, replaced entirely by white settlers. In addition to the Battle of the Thames and the Battle of Horseshoe Bend, other wars with American Indians took place, including the Peoria War and the Creek War. In the aftermath of the Creek War, the Treaty of Fort Jackson, negotiated by Jackson and signed on August 9, 1814, added approximately 23 million acres (93,000 square kilometers) of land in Georgia and Alabama to the United States.

Privately, Madison did not believe American Indians could be fully assimilated to the values of Euro-American culture. He believed that Native Americans may have been unwilling to make "the transition from the hunter, or even the herdsman state, to the agriculture". Madison feared that Native Americans had too great an influence on the settlers they interacted with, who in his view were "irresistibly attracted by that complete liberty, that freedom from bonds, obligations, duties, that absence of care and anxiety which characterize the savage state". Later in Madison's term, in March 1816, Madison's Secretary of War William Crawford advocated for the government to encourage intermarriages between Native Americans and whites as a way of assimilating the former. This prompted public outrage and exacerbated anti-indigenous bigotry among white Americans, as seen in hostile letters sent to Madison, who remained publicly silent on the issue.

### Election of 1816

In the 1816 presidential election, Madison and Jefferson both favored the candidacy of Secretary of State James Monroe, who defeated Secretary of War William H. Crawford in the party's congressional nominating caucus. As the Federalist Party continued to collapse, Monroe easily defeated the Federalist candidate, New York Senator Rufus King, in the 1816 election.
Madison left office as a popular president; former president John Adams wrote that Madison had "acquired more glory, and established more union, than all his three predecessors, Washington, Adams, and Jefferson, put together".

## Post-presidency (1817–1836)

When Madison left office in 1817 at age 65, he retired to Montpelier, not far from Jefferson's Monticello. As with both Washington and Jefferson, Madison left the presidency a poorer man than when he came in. His plantation experienced a steady financial collapse due to price declines in tobacco and his stepson's mismanagement. In his retirement, Madison occasionally became involved in public affairs, advising Andrew Jackson and other presidents. He remained out of the public debate over the Missouri Compromise, though he privately complained about the North's opposition to the extension of slavery. Madison had warm relations with all four of the major candidates in the 1824 presidential election, but, like Jefferson, largely stayed out of the race. During Jackson's presidency, Madison publicly disavowed the Nullification movement and argued that no state had the right to secede. Madison also helped Jefferson establish the University of Virginia. In 1826, after the death of Jefferson, Madison was appointed as the second rector of the university. He retained the position of rector for ten years, until his death in 1836.

In 1829, at the age of 78, Madison was chosen as a representative to the Virginia Constitutional Convention for revision of the commonwealth's constitution. It was his last appearance as a statesman. For the western districts of Virginia, apportionment of adequate representation was the central issue at the convention: the increased population in the Piedmont and western parts of the state was not proportionately represented in the legislature. Western reformers also wanted to extend suffrage to all white men, in place of the prevailing property ownership requirement. Madison made modest gains but was disappointed at the failure of Virginians to extend suffrage to all white men.

In his later years, Madison became highly concerned about his historical legacy. He resorted to modifying letters and other documents in his possession, changing days and dates, and adding and deleting words and sentences. By his late seventies, Madison's self-editing of his own archived letters and older materials had become almost an obsession. As an example, he edited a letter written to Jefferson criticizing Gilbert du Motier, Marquis de Lafayette; Madison not only inked out original passages but in other correspondence he even forged Jefferson's handwriting. Historian Drew R. McCoy wrote that "During the final six years of his life, amid a sea of personal [financial] troubles that were threatening to engulf him ... At times mental agitation issued in physical collapse. For the better part of a year in 1831 and 1832 he was bedridden, if not silenced ... Literally sick with anxiety, he began to despair of his ability to make himself understood by his fellow citizens."

### Death

Madison's health slowly deteriorated through the early-to-mid-1830s. With the Fourth of July approaching, he died of congestive heart failure at Montpelier on the morning of June 28, 1836, at the age of 85. According to one common account of his final moments, he was given his breakfast, which he tried to eat but was unable to swallow. His favorite niece, who sat by to keep him company, asked him, "What is the matter, Uncle James?"
Madison died immediately after he replied, "Nothing more than a change of mind, my dear." He was buried in the family cemetery at Montpelier. He was one of the last prominent members of the Revolutionary War generation to die. His last will and testament left significant sums to the American Colonization Society, Princeton, and the University of Virginia, as well as \$30,000 (\$897,000 in 2021) to his wife, Dolley. Left with a smaller sum than James had intended, Dolley suffered financial troubles until her death in 1849. In the 1840s Dolley sold Montpelier, its remaining slaves, and the furnishings to pay off outstanding debts. Paul Jennings, one of Madison's younger slaves, later recalled in his memoir,

> In the last days of her life, before Congress purchased her husband's papers, she was in a state of absolute poverty, and I think sometimes suffered for the necessaries of life. While I was a servant to Mr. Webster, he often sent me to her with a market-basket full of provisions, and told me whenever I saw anything in the house that I thought she was in need of, to take it to her. I often did this, and occasionally gave her small sums from my own pocket, though I had years before bought my freedom of her.

## Political and religious views

### Federalism

During his first stint in Congress in the 1780s, Madison came to favor amending the Articles of Confederation to provide for a stronger central government. In the 1790s, he led the opposition to Hamilton's centralizing policies and the Alien and Sedition Acts. Madison's support of the Virginia and Kentucky Resolutions in the 1790s has been referred to as "a breathtaking evolution for a man who had pleaded at the Constitutional Convention that the federal government should possess a veto over state laws".

### Religion

Although baptized as an Anglican and educated by Presbyterian clergymen, young Madison was an avid reader of English deist tracts. As an adult, Madison paid little attention to religious matters. Though most historians have found little indication of his religious leanings after he left college, some scholars indicate he leaned toward deism. Others maintain that Madison accepted Christian tenets and formed his outlook on life with a Christian worldview. Regardless of his own religious beliefs, Madison believed in religious liberty, and he advocated for Virginia's disestablishment of religious institutions sponsored by the state. He also opposed the appointments of chaplains for Congress and the armed forces, arguing that the appointments produce religious exclusion as well as political disharmony.

## Slavery

Throughout his life, Madison's views on slavery were conflicted. He was born into a plantation society that relied on slave labor, and both sides of his family profited from tobacco farming. While he viewed slavery as essential to the Southern economy, he was troubled by the instability of a society that depended on a large slave population. Madison also believed slavery was incompatible with American Revolutionary principles, though he owned over one hundred African American slaves.

### History

Madison grew up on Montpelier, his family's plantation in Virginia. Like other southern plantations, Montpelier depended on slave labor. When Madison left for college on August 10, 1769, he arrived at Princeton accompanied by his slave Sawney, who was charged with Madison's expenses and with relaying messages to his family back home.
In 1783, fearing the possibility of a slave rebellion at Montpelier, Madison emancipated one slave, Billey, selling him into a seven-year apprentice contract. After his manumission, Billey changed his name to William Gardner, married and had a family, and became a shipping agent, representing Madison in Philadelphia. In 1795, Gardner was swept overboard and drowned on a voyage to New Orleans. Madison inherited Montpelier and its more than one hundred slaves after his father's death in 1801. That same year, Madison was appointed Secretary of State by President Jefferson, and he moved to Washington D.C., running Montpelier from afar and making no effort to free his slaves. After his election to the presidency in 1808, Madison brought his slaves to the White House. During the 1820s and 1830s, Madison sold some of his land and slaves to repay debt. In 1836, at the time of Madison's death, he owned 36 taxable slaves. In his will, Madison gave his remaining slaves to his wife Dolley and charged her not to sell the slaves without their permission. For reasons of necessity, Dolley did not comply and sold the slaves without their permission to pay off debts.

### Treatment of slaves

As was consistent with the "established social norms of Virginia society," Madison was known from his farm papers for advocating the humane treatment of his slaves at Montpelier. He instructed an overseer to "treat the Negroes with all the humanity and kindness consistent with their necessary subordination and work." Madison also ensured that his slaves had milk cows and meals for their daily food. By the 1790s, Madison's slave Sawney was an overseer of part of the plantation. Madison ordered Sawney by letter to ready fields for growing apples, corn, tobacco, and Irish potatoes. Like Sawney, some slaves at Montpelier could read. Enslaved people at Montpelier worked six days a week from dawn to dusk, with a mid-day break, and got Sundays off. Paul Jennings was a slave of the Madisons for 48 years. Jennings, born into slavery in 1799 at the Montpelier plantation, served as Madison's footman at the White House. In his memoir A Colored Man's Reminiscences of James Madison, published in 1865, Jennings said that he "never knew [Madison] to strike a slave, although he had over one hundred; neither would he allow an overseer to do it." As a house slave, Jennings had a basic education: he was literate, had been taught mathematics, and played the violin. Although Jennings condemned slavery, he said that James was "one of the best men that ever lived", and that Dolley was "a remarkably fine woman."

### Views on slavery

Madison called slavery "the most oppressive dominion" that ever existed, and he had a "lifelong abhorrence" for it. In 1785 Madison spoke in the Virginia Assembly favoring a bill that Thomas Jefferson had proposed for the gradual abolition of slavery, and he also helped defeat a bill designed to outlaw the manumission of individual slaves. As a slaveholder, Madison was aware that owning slaves was not consistent with revolutionary values, but such self-contradiction, which he accepted as a pragmatist, was a common feature of his political career. Historian Drew R. McCoy said that Madison's antislavery principles were indeed "impeccable." Historian Ralph Ketcham said, "[a]lthough Madison abhorred slavery, he nonetheless bore the burden of depending all his life on a slave system that he could never square with his republican beliefs." There is no evidence Madison thought black people were inferior.
Madison believed blacks and whites were unlikely to co-exist peacefully due to "the prejudices of the whites" as well as feelings on both sides "inspired by their former relation as oppressors and oppressed." As such, he became interested in the idea of freedmen establishing colonies in Africa and later served as the president of the American Colonization Society, which relocated former slaves to Liberia. Madison believed that this solution offered a gradual, long-term, but potentially feasible means of eradicating slavery in the United States. He did not think that peaceful co-existence between the two racial groups could be achieved even in the long run. Madison initially opposed the Constitution's 20-year protection of the foreign slave trade, but he eventually accepted it as a necessary compromise to get the South to ratify the document. He also proposed that apportionment in the House of Representatives be according to each state's free and enslaved population, eventually leading to the adoption of the Three-fifths Compromise. Madison supported the extension of slavery into the West during the Missouri crisis of 1819–1821, asserting that the spread of slavery would not lead to more slaves, but rather diminish their generative increase through dispersing them, thus substantially improving their condition, accelerating emancipation, easing racial tensions, and increasing "partial manumissions." Madison thought of slaves as "wayward (but still educable) students in need of regular guidance." According to historian Paris Spies-Gans, Madison's anti-slavery thought was strongest "at the height of Revolutionary politics. But by the early 1800s, when in a position to truly impact policy, he failed to follow through on these views." Spies-Gans concluded, "[u]ltimately, Madison's personal dependence on slavery led him to question his own, once enlightened, definition of liberty itself."

## Legacy

### Historical reputation

Regarded as one of the Founding Fathers of the United States, Madison had a wide influence on the founding of the nation and upon the early development of American constitutional government and foreign policy. Historian J.C.A. Stagg writes that "in some ways—because he was on the winning side of every important issue facing the young nation from 1776 to 1816—Madison was the most successful and possibly the most influential of all the Founding Fathers." Though he helped found a major political party and served as the fourth president, his legacy has largely been defined by his contributions to the Constitution; even in his own lifetime he was hailed as the "Father of the Constitution". Law professor Noah Feldman writes that Madison "invented and theorized the modern ideal of an expanded, federal constitution that combines local self-government with an overarching national order". Feldman adds that Madison's "model of liberty-protecting constitutional government" is "the most influential American idea in global political history". Surveys of historians and political scientists tend to rank Madison as an above-average president; a 2018 poll of the American Political Science Association's Presidents and Executive Politics section ranked Madison as the twelfth-best president. Various historians have criticized Madison's tenure as president. In 1968, Henry Steele Commager and Richard B. Morris said the conventional view of Madison was of an "incapable President" who "mismanaged an unnecessary war".
A 2006 poll of historians ranked Madison's failure to prevent the War of 1812 as the sixth-worst mistake made by a sitting president. Regarding Madison's consistency and adaptability of policy-making during his many years of political activity, historian Gordon S. Wood says that Lance Banning, in his Sacred Fire of Liberty (1995), is the "only present-day scholar to maintain that Madison did not change his views in the 1790s". During and after the War of 1812, Madison came to support several of the policies that he opposed in the 1790s, including the national bank, a strong navy, and direct taxes. Wood notes that many historians struggle to understand Madison, but Wood looks at him in terms of Madison's own times—as a nationalist but one with a different conception of nationalism than that of the Federalists. Gary Rosen uses other approaches to suggest Madison's consistency. Historian Garry Wills wrote, "Madison's claim on our admiration does not rest on a perfect consistency, any more than it rests on his presidency. He has other virtues. ... As a framer and defender of the Constitution he had no peer. ... No man could do everything for the country—not even Washington. Madison did more than most, and did some things better than any. That was quite enough."

### Popular culture

Madison, portrayed by Burgess Meredith, is a key protagonist in the 1946 Hollywood film Magnificent Doll, which focuses on a fictionalized account of Dolley Madison's romantic life. Madison is also portrayed in the popular musical Hamilton, played by Joshua Henry in the original 2013 Vassar version and then by Okieriete Onaodowan in the 2015 Broadway opening. In the Broadway musical, Madison, joined by Jefferson and Burr, confronts Hamilton about his affaire de cœur in the Reynolds affair by intoning the rap lyrics to the song "We Know". Onaodowan won a Grammy Award for his portrayal of Madison.

### Memorials

Montpelier, the Madison family's plantation, has been designated a National Historic Landmark. The James Madison Memorial Building is part of the United States Library of Congress and serves as the official memorial to Madison. In 1986, Congress created the James Madison Memorial Fellowship Foundation as part of the bicentennial celebration of the Constitution. Other memorials include Madison, Wisconsin, and Madison County, Alabama, both named for Madison, as were Madison Square Garden, James Madison University, and the USS James Madison. In 2021, the Madison Metropolitan School District renamed James Madison Memorial High School following community opposition to commemorating someone who used slave labor.
929,458
Suffolk Punch
1,115,123,169
English breed of draught horse
[ "Animal breeds on the RBST Watchlist", "Conservation Priority Breeds of the Livestock Conservancy", "Horse breeds", "Horse breeds originating in England" ]
The Suffolk Horse, also historically known as the Suffolk Punch or Suffolk Sorrel, is an English breed of draught horse. The first part of the name is from the county of Suffolk in East Anglia, and the word "Punch" is an old English word for a short stout person. It is a heavy draught horse which is always chestnut in colour, traditionally spelled "chesnut". Suffolk Punches are known as good doers, and tend to have energetic gaits. The breed was developed in the early 16th century, and remains similar in phenotype to its founding stock. The Suffolk Punch was developed for farm work, and gained popularity during the early 20th century. However, as agriculture became increasingly mechanised, the breed fell out of favour, particularly from the middle part of the century, and almost disappeared completely. The breed's status is listed as critical by the UK Rare Breeds Survival Trust and the American Livestock Breeds Conservancy. The breed pulled artillery and non-motorised commercial vans and buses, as well as being used for farm work. It was also exported to other countries to upgrade local equine stock. Today, they are used for draught work, forestry and advertising. ## History The Suffolk Punch registry is the oldest English breed society. The first known mention of the Suffolk Punch is in William Camden's Britannia, published in 1586, in which he describes a working horse of the eastern counties of England that is easily recognisable as the Suffolk Punch. This description makes them the oldest breed of horse that is recognisable in the same form today. A detailed genetic study shows that the Suffolk Punch is closely genetically grouped not only with the Fell and Dales British ponies, but also with the European Haflinger. They were developed in Norfolk and Suffolk in the east of England, a relatively isolated area. The local farmers developed the Suffolk Punch for farm work, for which they needed a horse with power, stamina, health, longevity, and docility, and they bred the Suffolk to comply with these needs. Because the farmers used these horses on their land, they seldom had any to sell, which helped to keep the bloodlines pure and unchanged. The foundation sire of the modern Suffolk Punch breed was a 157 centimetres (15.2 h) stallion foaled near Woodbridge in 1768 and owned by Thomas Crisp of Ufford. At this time, the breed was known as the Suffolk Sorrel. This horse was never named, and is simply known as "Crisp's horse". Although it is commonly (and mistakenly) thought that this was the first horse of the breed, by the 1760s, all other male lines of the breed had died out, resulting in a genetic bottleneck. Another bottleneck occurred in the late 18th century. In his History and Antiquities of Hawsted, in the County of Suffolk of 1784, Sir John Cullum describes the Suffolk Punch as "... generally about 15 hands high, of a remarkably short and compact make; their legs bony; and their shoulders loaded with flesh. Their colour is often of a light sorrel". During its development, the breed was influenced by the Norfolk Trotter, Norfolk Cob, and later the Thoroughbred. The uniform colouring derives in part from a small trotting stallion named Blakes Farmer, foaled in 1760. Other breeds were crossbred in an attempt to increase the size and stature of the Suffolk Punch, as well as to improve the shoulders, but they had little lasting influence, and the breed remains much as it was before any crossbreeding took place. 
The Suffolk Horse Society, formed in Britain in 1877 to promote the Suffolk Punch, published its first stud book in 1880. The first official exports of Suffolks to Canada took place in 1865. In 1880, the first Suffolks were imported into the United States, with more following in 1888 and 1903 to begin the breeding of Suffolk Punches in the US. The American Suffolk Horse Association was established and published its first stud book in 1907. By 1908, the Suffolk had also been exported from England to Spain, France, Germany, Austria, Russia, Sweden, various parts of Africa, New Zealand, Australia, Argentina and other countries. By the time of the First World War, the Suffolk Punch had become a popular workhorse on large farms in East Anglia due to its good temperament and excellent work ethic. It remained popular until the Second World War, when a combination of the need for increased wartime food production (which resulted in many horses being sent to the slaughterhouse) and the increased farm mechanisation that followed the war decimated population numbers. Only nine foals were registered with the Suffolk Horse Society in 1966, but a revival of interest in the breed has occurred since the late 1960s, and numbers have risen continuously. The breed nonetheless remained rare, and in 1998 only 80 breeding mares were in Britain, producing around 40 foals per year. In the United States, the American Suffolk Horse Association became inactive after the war and remained so for 15 years, but restarted in May 1961 as the draught-horse market began to recover. In the 1970s and early 1980s, the American registry allowed some Belgians to be bred to Suffolk Punches, but only the fillies from these crosses were permitted registry with the American Suffolk Horse Association. As of 2001, horses bred with American bloodlines were not allowed to be registered with the British Association, and the breed was considered the rarest horse breed in the United Kingdom. Although the Suffolk Punch population has continued to increase, the Rare Breeds Survival Trust of the UK considers their survival status critical; in 2011, between 800 and 1,200 horses were in the United States and around 150 were in England. The American Livestock Breeds Conservancy also lists the breed as critical. The Suffolk Horse Society recorded the births of 36 purebred foals in 2007, and a further 33 foals as of March 2008. By 2016, about 300 Suffolk Punches were in the UK, with 30 to 40 purebred foals born annually.

## Characteristics

Suffolk Punches generally stand 165 to 178 centimetres (16.1 to 17.2 h), weigh 900 to 1000 kilograms (2000 to 2200 lb), and are always chestnut in colour. The traditional spelling, still used by the Suffolk Horse Society, is "chesnut" (with no "t" in the middle of the word). Horses of the breed come in many different shades of chestnut, ranging from dark to red to light. Suffolk horse breeders in the UK use several different colour terms specific to the breed, including dark liver, dull dark, red, and bright. White markings are rare and generally limited to small areas on the face and lower legs. Equestrian author Marguerite Henry described the breed by saying, "His color is bright chestnut – like a tongue of fire against black field furrows, against green corn blades, against yellow wheat, against blue horizons. Never is he any other color."
The Suffolk Punch tends to be shorter but more massively built than other British heavy draught breeds, such as the Clydesdale or the Shire, as a result of having been developed for agricultural work rather than road haulage. The breed has a powerful, arching neck; well-muscled, sloping shoulders; a short, wide back; and a muscular, broad croup. Legs are short and strong, with broad joints; sound, well-formed hooves; and little or no feathering on the fetlocks. The movement of the Suffolk Punch is said to be energetic, especially at the trot. The breed tends to mature early and be long-lived, and is economical to keep, needing less feed than other horses of similar type and size. They are hard workers, said to be willing to "pull a heavily laden wagon till [they] dropped." In the past, the Suffolk was often criticised for its poor feet, having hooves that were too small for its body mass. This was corrected by the introduction of classes at major shows in which hoof conformation and structure were judged. This practice, unique among horse breeds, resulted in such an improvement that the Suffolk Punch is now considered to have excellent foot conformation. ## Uses The Suffolk Punch was used mainly for draught work on farms but was also often used to pull heavy artillery in wartime. Like other heavy horses, they were also used to pull non-motorised vans and other commercial vehicles. Today, they are used for commercial forestry operations, for other draught work, and in advertising. They are also used for crossbreeding, to produce heavy sport horses for use in hunter and show jumping competition. As a symbol of the county in which they are based, Ipswich Town F.C. incorporate a Suffolk Punch as a dominant part of their team crest. The Suffolk Punch contributed significantly to the creation of the Jutland breed in Denmark. Oppenheimer LXII, a Suffolk Punch imported to Denmark in the 1860s by noted Suffolk dealer Oppenheimer of Hamburg, was one of the founding stallions of the Jutland. Oppenheimer specialised in selling Suffolk Punches, importing them to the Mecklenburg Stud in Germany. The stallion Oppenheimer founded the Jutland breed's most important bloodline, through his descendant Oldrup Munkedal. Suffolks were also exported to Pakistan in the 20th century, to be used in upgrading native breeds, and they have been crossed with Pakistani horses and donkeys to create army remounts and mules. Suffolks have adapted well to the Pakistani climate, despite their large size, and the programme has been successful. The Vladimir Heavy Draft, a draught breed from the former USSR, is another which has been influenced by the Suffolk.
51,466,749
Lazarus (comics)
1,147,234,964
Comic book series
[ "2013 comics debuts", "Dystopian comics", "Image Comics titles", "Science fiction comics" ]
Lazarus is an American dystopian science fiction comic book series created by writer Greg Rucka and artist Michael Lark. The two began developing the idea in 2012 and partnered with colorist Santi Arcas to finish the art. Image Comics has been publishing the book since the first issue was released in June 2013. Other creators were brought in later to assist with lettering and inking. A six-issue spin-off limited series, Lazarus: X+66, was released monthly in 2017 between issues 26 and 27 of the regular series. Rucka initially said the series could run for up to 150 issues, but later reduced the estimate by half. Lazarus is being collected into paperback and hardcover editions, which sell better than the monthly issues. In the series, the world has been divided among sixteen rival families, who run their territories in a feudal system. The main character is Forever Carlyle, the military leader of the Carlyle family. The major themes of Lazarus are the meaning of "family" and nature versus nurture. Critics have given it mostly positive reviews and have praised its worldbuilding. It has received particular attention for its political themes. Lazarus is being adapted into other media. Green Ronin Publishing is adapting the setting for their Modern AGE role-playing game in 2018. A television adaptation is in development with Legendary Television and Amazon Studios.

## Publication history

### Early development

American writer Greg Rucka and artist Michael Lark had previously collaborated on the comic series Gotham Central for DC Comics between 2002 and 2004 and various small projects for Marvel Comics in the years following. Lark wanted to work with Rucka on a creator-owned comic because he felt he was at his best drawing the kind of stories Rucka writes. In June 2012, Rucka was in Dallas as part of a book-signing tour. He had dinner with Lark, who lived nearby, and shared an idea for a scene involving a woman who had been shot rising from the dead and pursuing her attackers. Lark liked the story and committed to drawing the comic as soon as a full script was ready. Although Rucka had previously published his creator-owned material through Oni Press, his friend Ed Brubaker had been pushing him to work with Image Comics. When they contacted Image's Eric Stephenson and pitched the project as "The Godfather meets Children of Men", he immediately expressed interest. The project, titled Lazarus, was officially announced at the San Diego Comic Con on July 14, 2012. The announcement was accompanied by promotional artwork colored by the American colorist Elizabeth Breitweiser and featured a prototype logo design and typeface. Image Comics provided David Brothers to serve as the series' editor. Unlike traditional comic editors who focus on coordinating schedules and pushing deadlines, Brothers only reviews the work and provides responses that help the team create better work with more internal consistency. Eric Trautmann, who had previously edited two of Rucka's novels, was recruited to help with research, timelines, and design work. Lark wanted to work with a European colorist to provide a look distinct from traditional American comics. Rucka suggested Santi Arcas, a Spanish colorist he had worked with in the past, and Lark particularly liked Arcas' skies and textures.

### Production

Rucka and Lark developed the setting for Lazarus by looking at the Occupy movement and the underlying economics, then asking themselves "What happens if it goes horribly wrong?"
They decided how the story would end before work began on the first issue. They initially gave their lead character the name Endeavor, but Rucka changed the name to Forever to avoid a conflict with a different comic being developed at the same time about a young Inspector Morse. Lark based her body type on the soccer player Hope Solo. Lark was disappointed by the first script as he felt none of the characters were likable, and the scene described to him over dinner was not included. In response, Rucka wrote a new draft restoring the missing opening scene. Lark began drawing the first issue in January 2013, basing the opening scene on the reconstruction sequence in 1997 film The Fifth Element. When writing a new script, Rucka tries to follow the world-building model used by William Gibson in his 1984 novel Neuromancer and provide information about the environment through context instead of exposition. His biggest struggle is delivering details while maintaining a proper narrative pace. He sometimes self-censors "exceptionally dark" material because he does not want to make Lark draw it. After Lark receives a new script, the collaboration between them is "immediate and constant". Lark questions Rucka about characterization and the direction of the story, leading Rucka to rewrite scripts resulting in what he believes is a better final product. Lark refuses to read scripts in advance so he will stay focused on what is in front of him, not what he will be drawing next. Rucka says Lark intuitively knows what is happening in the story even when it isn't clearly scripted. Rucka and Lark have an ongoing conversation about how to show the injustice in the way Forever is treated without being complicit in it themselves. For example, medics must remove Forever's clothes to treat her wounds. Lark wanted to avoid sexualizing the images, but also avoid being "coy" by simply blocking parts of her body with another character's arm. The script gives Lark no direction for aspects like architecture, clothing, or vehicle design. Designing these technical details involves research into prototype technology and takes almost as long as drawing the actual pages for the comics. The time required to create the sets is the primary reason Lark sometimes falls behind schedule. Lark works on Lazarus ten or more hours per day. He uses photo references and digital tools in the early stages of his art, but the layouts and drawing are done with traditional tools. He is more involved with the coloring on Lazarus than any other comic he has illustrated. The logo design was finalized by Trautmann and Lark. Lark initially did all the lettering and inking for Lazarus, but doing so made it impossible to release new issues on a regular schedule. To give him more time to focus on drawing, some of the smaller tasks like logo and type design were given to other people. Brian Level assisted with inking on issues three through ten, when he was replaced by Tyler Boss. Beginning with issue ten, Jodi Wynne took over the lettering duties and Owen Freeman started creating the cover art. Fake advertisements found on the back covers and many of the computer screens and holographic images in the artwork are created by Trautmann. Lark and Rucka often discuss whether to use sound effects in scenes or limit their use. Lark does not want to rely on them to convey information because they may become a "crutch" in place of including important details in the art. Issue 15 features a silent, thirteen-page fight between two characters. 
Rucka, who used to be a choreographer, filmed himself acting out the battle with a friend. Lark used the film for reference as he drew. Following the 2016 United States presidential election, the creators' vision for the comic shifted. Rucka, who had used the letter columns in the series to discuss his concerns about then-candidate Donald Trump, told Oregon Public Broadcasting that after the election results Lazarus had changed from a dystopian science fiction story to a documentary. During a discussion panel at the 2017 Chicago Comic & Entertainment Expo, Rucka described Lazarus as being "about the blood red rage that leads to a Trump administration" before joking that he had "tried to warn you three years ago!" Although the overall plan for Lazarus did not change, Rucka said he had a growing interest in writing about a brighter future instead. ### Publication Brubaker advised Rucka to create a four-page "trailer" to promote the book, a strategy Brubaker had used with The Fade Out. Rucka was not initially interested, but Lark liked the idea. The trailer debuted at the 2013 Emerald City Comic Con before appearing online and in Previews, the catalog for Diamond Distribution. The scene was not reproduced in any issue of the series. Most comics sold to specialty stores in the direct market are non-returnable. To reduce the financial risk for retailers who were uncertain about its sales potential, unsold copies from qualifying orders of the first three issues of Lazarus could be returned to the publisher. The first issue went on sale on June 26, 2013 and sold out of its approximately 48,000 copy print run at the distributor level. A second printing was announced to coincide with the release of issue two. After a second sellout, it was added to the "Image Firsts" program, a line of discounted first-issue reprints continuously available for retailers to order. By the end of 2013, the first issue had sold an estimated 50,200 copies. The second issue, which also went through multiple printings, sold an estimated 30,600 copies. Over the next two years, sales fell steadily to about 14,500 copies. Because of scheduling issues, Rucka and Lark were behind on the series from the start. The problems were exacerbated by illness and poor communication during the "Lift" arc, causing issue 9 to be delayed by more than a month. Further late issues led retailers to reduce their orders for new issues. In fall 2015, the team announced a four-month hiatus between issues 21 and 22 to allow Lark to get ahead of schedule. They said they would not solicit any more issues until the next story arc was completed, and the hiatus actually lasted six months, in part because of miscommunication between Image Comics and Diamond Distribution. During the hiatus, they released a sourcebook providing additional, non-essential background on the Carlyle family. The sourcebook was created with input from Robert Mackenzie and David Walker, who had been providing annotations for the series at NerdSpan. Despite the break, five months passed between the fourth and fifth chapters of the Cull arc. A second sourcebook detailing the Hock Family was released in April 2017. In the letter column of issue 26, Rucka announced a six-issue limited series titled Lazarus: X+66 would be released monthly beginning July 2017. The series was written by Rucka and Trautmann, and each issue focused on different supporting characters from the main series. Lark was involved as a consultant, but each issue was drawn by a new artist. 
This decision gave Lark time to work on something unrelated to Lazarus, which had been his only project since the series began. A four-page preview of the first installment, drawn by Steve Lieber, was included with the book's solicitation in Image Plus \#16. A third source book, this time covering the Vassalovka family, was released one week after the limited series ended. Lark returned to Lazarus in April 2018 with issue 27. In the letter column of issue 27, Rucka announced the series would change to a 64-page quarterly format beginning with issue 29 in September 2018. The new format features 44 pages of comic story and the remaining 20 pages are a variety of prose material including short stories and role-playing game supplements. When the proposed issue 29 was released, it was retitled and renumbered as Lazarus: Risen \#1. When the series began, Rucka estimated it would take between 100 and 150 issues to reach the ending. In May 2016, he revised his estimate downward, saying Lazarus was "25–30% complete at issue 21". ### Collected editions The series has been compiled in six trade paperbacks and three hardcovers. The first paperback collection appeared on the New York Times Best Seller List for Paperback Graphic Books in eighth position for two weeks in November 2013. The second appeared in the tenth spot for one week in August 2014. The hardcovers include introductions from notable comic creators like Warren Ellis and behind-the-scenes material not otherwise available. Rucka and Lark take the extra content in them "very seriously" because hardcovers are expensive. In 2015, Rucka said sales of single issues "aren't great", but went on to say the series is selling better in a collected format. That year, the first paperback collection sold close to the same number of copies to comic specialty shops as it did in 2013, the year it was released. Lazarus has been translated into several European languages by Italian publisher Panini Comics and released in hardcover formats containing the same material as the English paperbacks. ## Plot ### Synopsis Lazarus is a coming of age story for a young woman named Forever Carlyle who is questioning her identity. Its major themes are the meaning of "family" and nature versus nurture. It is set in a bleak future a number of decades from now after the current world order has broken down, possibly due to climate change. Sixteen families each control the territory, resources, and technology in their part of the world, as per mutual agreement, though each family has their own technological strengths and may govern their territory through differing methodology. The Carlyle family rules the western half of North America in a feudal system, dividing people into three tiers: "family", "serfs" (skilled laborers), and "waste" (everyone else). The families have formed alliances to protect themselves from other families, and each family has a chosen warrior, trained and modified as per the family's strengths, known as a "Lazarus" who represents them in combat. Forever is the Carlyle Lazarus. She obeys the family patriarch, Malcolm Carlyle, and has four siblings: Steven, Beth, and twins Jonah and Johanna. The original source of the Carlyle's fortune and power is from their various developments in genetic technology. Among other advancements, their modified seeds provide food for most of the world. 
The Carlyles have also altered their own genetics, which has allowed all of them to grow old without suffering the consequences of age, thereby engendering jealousy and fear in many of the other families.

### Plot

All issues written by Greg Rucka and illustrated by Michael Lark unless otherwise stated.

## Critical reception

Lazarus has received positive reviews since its debut. According to review aggregator Comic Book Roundup, critics gave the first issue an average score of 8.7/10 based on 32 reviews. The series as a whole averages 8.6/10 based on 284 reviews. Critics and fans often praise the world-building in Lazarus, but Lark and Rucka see it as secondary and think it receives too much focus. Publishers Weekly said Forever's "fascinating complexity" made Lazarus stand out from other graphic novels. Writing for Comics Alliance, KM Bezner said every character, including the diabolical ones, displayed humanity and "[blurred] the lines between shades of morality". On Broken Frontier, Tyler Chin-Tanner described "Lift", the series' second story arc, as "a moving tale of family sacrifice". The series has appeared on many comic critics' "best of" lists. Many critics compared Lazarus with other genre works. The timeliness of Rucka's premise made the series stand out among dystopian fiction for IGN reviewer Melissa Grey. Garrett Martin wrote in Paste Magazine that the series was unlike other contemporary class warfare genre fiction like the novels Hunger Games or Blackacre because it is told from the oppressors' point of view. Oliver Sava reviewed the series for The A.V. Club and said it stood out from Image's other science fiction comics "because it's more grounded in current political and economic trends". Rucka specifically addressed fan-drawn parallels to the television series Game of Thrones, saying he had not read the books and purposely avoids watching the show to avoid unintentionally borrowing ideas from it. Lark thinks the comparison to Game of Thrones works to some extent, but points out that Lazarus concentrates more on a single character. Lark was praised for being equally good at depicting violence and introspection, and Martin said it was Lark's finest work. According to Lark, the characters in Lazarus rarely say what they mean, and some vital story beats are depicted by wordless art. Arcas received notice for adding texture and depth to Lark's art and using palette changes to help tell the story.

### Political themes

Because of its economic themes, Bezner warned that the political elements of Lazarus would not be for everyone. In The Jersey Journal, critic William Kulesa believed the "deeply considered speculation on society, technology, and the future" is what made the series high-quality science fiction. Chin-Tanner found it to be a character-driven story even though it dealt with political and scientific issues, and Newsarama reviewer Vanessa Gabriel felt Lazarus "engages the reader with plausibility". Following the election of Donald Trump as President of the United States in 2016, Salon writer Mark Peters called the series "newly relevant" and compared Trump to the Carlyle family. In an in-depth review of the series for the Los Angeles Review of Books in 2017, Evan McGarvey praised the research and thought that went into Lazarus, but expressed concern that the visual requirements of the art conflicted with the political themes.
He specifically noted the ruling families and their soldiers "simply look cooler" than the waste with whom the audience is meant to identify, and concluded that this dissonance may skew the real message Rucka and Lark want to send. McGarvey went on to compare the Carlyles to the Mercer family and the lift to China's Gaokao. ## Collected editions ## Adaptations in other media ### Television Legendary Television bought the rights to adapt Lazarus following a competitive bidding war in March 2015. Rucka and Lark will be executive producers along with David Manpearl and Matt Tolmach. A pilot script written by Rucka entered its final draft in late 2015 and Legendary began looking for a network willing to purchase it. During the hiatus between issues 21 and 22, Rucka and Lark were able to devote more time to developing the adaptation. Rucka said the development process for Lazarus has been better than any of his previous Hollywood experiences, and that he hopes the show will be able to explore characters more deeply using scenes cut from the book. In September 2017, Deadline Hollywood reported the adaptation was being developed as a potential series for Amazon Studios, who made a "significant production investment" in it. In the letter column of Lazarus X+66 \#4 (November 2017), Rucka said this announcement included some inaccuracies, and emphasized the show is still a long way from being released. He said the casting process had not yet begun. ### Role playing In the Spring of 2017, Green Ronin Publishing announced The World of Lazarus, a campaign setting in their Modern AGE role-playing game. Although initially planned for a November 2017 release, it was delayed until 2018 to allow more time for development. Rucka said role playing games had an important part in his development as a writer, and that having one of his ideas turned into one "might just possibly be the greatest compliment I could ever receive."
26,161,597
History of Tottenham Hotspur F.C.
1,169,884,624
History of an English football club
[ "History of association football clubs in England", "History of sport in London", "Tottenham Hotspur F.C." ]
Tottenham Hotspur Football Club is a football club based in Tottenham, north London, England. Formed in 1882 as "Hotspur Football Club" by a group of schoolboys, it was renamed to "Tottenham Hotspur Football Club" in 1884, and is commonly referred to as "Tottenham" or "Spurs". Initially amateur, the club turned professional in 1895. Spurs won the FA Cup in 1901, becoming the first, and so far only non-League club to do so since the formation of the Football League. The club has won the FA Cup a further seven times, the Football League twice, the League Cup four times, the UEFA Cup twice and the UEFA Cup Winners' Cup in 1963, the first UEFA competition won by an English team. In 1960–61, Tottenham became the first team to complete The Double in the 20th century. Tottenham played in the Southern League from 1896 until 1908, when they were elected to the Football League Second Division. They won promotion to the First Division the following year, and stayed there until the late 1920s. The club played mostly in the Second Division until the 1950s, when it enjoyed a revival, reaching a peak in the 1960s. Fortunes dipped after the early 1970s, but resurged in the 1980s. Tottenham was a founding member of the Premier League in 1992; they finished in mid-table most seasons, but now rank as one of the top six clubs. Of the club's thirty-two managers, John Cameron was the first to win a major trophy, the 1901 FA Cup. Peter McWilliam added a second FA Cup win for the club in 1921. Arthur Rowe developed the "push and run" style of play in the 1950s and led the club to its first league title. Bill Nicholson oversaw the Double winning side as well as the most successful period of the club's history, in the 1960s and early 1970s. Later managers include Keith Burkinshaw, the second most successful in terms of major trophies won, with two FA Cups and a UEFA Cup, and Terry Venables, under whom the club won the FA Cup in 1991. Spurs played their early games on public land at Tottenham Marshes, but by 1888 they were playing on rented ground at Northumberland Park. In 1899, the club moved to White Hart Lane, where a stadium was gradually developed. Spurs remained there until 2017. Its replacement, Tottenham Hotspur Stadium, was completed in 2019 on the same site; during its construction, home matches were played at Wembley Stadium. ## Formation The Hotspur Football Club was formed in 1882 by a group of schoolboys from Saint John's Middle Class School and Tottenham Grammar School. Mostly aged 13 to 14, the boys were members of the Hotspur Cricket Club formed two years earlier. Robert Buckle with his two friends Sam Casey and John Anderson conceived the idea of a football club so they could continue to play sport during the winter months. Club lore states that the boys gathered one night under a lamppost along Tottenham High Road about 100 yards/metres from the now-demolished White Hart Lane ground, and agreed to form a football club. The date of their meeting is unknown, so the Hotspur Football Club is taken to have been formed on 5 September 1882, when the eleven boys had to start paying their first annual subscriptions of sixpence. By the end of the year the club had eighteen members. Although the name Northumberland Rovers was mooted, the boys settled on the name Hotspur. 
As with the cricket club, it was chosen in honour of Sir Henry Percy (better known as Harry Hotspur, the rebel of Shakespeare's Henry IV, part 1), whose Northumberland family once owned land in the area, including Northumberland Park in Tottenham, where the club is located. The boys initially held their meetings under lampposts in Northumberland Park or in half-built houses on the adjoining Willoughby Lane in Tottenham. In August 1883 the boys sought the assistance of John Ripsher, the warden of Tottenham YMCA and Bible-class teacher at All Hallows Church, who became the first president of the club and its treasurer. A few days later he presided over a meeting of twenty-one club members in the basement kitchen of the YMCA at Percy House or its annex on the High Road, which became the club's first headquarters. Ripsher, who continued as president until 1894, supported the boys through the club's formative years, reorganising it and establishing the club's ethos. The stability he provided in the early years helped the club survive, unlike many others of the period that did not. Ripsher found new premises for the club when the boys were evicted in 1884 after a YMCA council member was accidentally hit by a soot-covered football: first at 1 Dorset Villa, a church-owned property on Northumberland Park where they stayed for two years, then to the Red House on High Road around 1885–86 after they were again asked to leave, this time for playing cards in church. The Red House, which stood beside the entrance gate to White Hart Lane but was demolished in 2016 during the ground's redevelopment, was the club's headquarters until its move to 808 High Road in 1891. It was later bought by the club in 1921 and became its official address (748 Tottenham High Road) in 1935. In April 1884, owing to letters for another London club named Hotspur being misdirected to North London, the club was renamed Tottenham Hotspur Football Club to avoid any further confusion. The team were referred to as "Spurs" in press reports as early as 1883 (the use of "Spurs" for a team called Hotspur predates the formation of Tottenham Hotspur), and they have also been called "the Lilywhites" after the white jerseys they have worn since the late 19th century. ## Early years The boys played their early matches on public ground at Tottenham Marshes, where they needed to mark out and prepare their own pitch, and on occasions had to defend against other teams who might try to take it over. Local pubs were used as dressing rooms. Robert Buckle was the team's first captain, and for two years the boys largely played games among themselves, but the number of friendly fixtures against other clubs gradually increased. The first recorded match took place on 30 September 1882 against a local team named the Radicals, a game Hotspur lost 2–0. The first game reported by the local press was on 6 October 1883 against Brownlow Rovers, which Spurs won 9–0. The team played their first competitive match on 17 October 1885 against St Albans, a company works team, in the London Football Association Cup. The match was attended by 400 spectators, and Spurs won 5–2. In the early days Spurs were essentially a schoolboy team, sometimes playing against adults, but older players later joined, and the squad strengthened as they absorbed players from other local clubs. 
Some of the early members such as Buckle, Sam Casey, John Thompson and Jack Jull stayed with the club for many years as players, committee members or directors; for example Jull played until 1896, while Buckle served in various roles on the club committee and was on the first board of directors of Tottenham formed in 1898 until he left in 1900. Spurs attracted the interest of the local community soon after their formation, and the number of spectators for their matches grew to 4,000 within a few years. As their games were played on public land, no admission fees could be charged for spectators. In 1888 Tottenham moved their home fixtures from the Tottenham Marshes to Northumberland Park, where they rented an enclosed ground and were able to charge for admission and control the crowd. The first match there was on 13 October 1888, a reserve match that yielded gate receipts of 17 shillings. A week later they were beaten 8–2 by Old Etonians in their first senior game at the ground. Spectators were usually charged 3d a game, raised to 6d for cup ties. By the early 1890s, a cup tie may draw a few thousand paying supporters. In the early days there were no stands except for a couple of wagons as seats and wooden trestles for spectators to stand on, but for the 1894–95 season, the first stand with just over 100 seats and a changing room underneath was built on the ground. The club attempted to join the Southern League in 1892 but failed when it received only one vote. Instead, the club played in the short-lived Southern Alliance for the 1892–93 season. Spurs initially played in navy-blue shirts with a letter H on a scarlet shield on the left breast and white breeches. The club colours were changed in 1884 or 1885 to light blue and white halved jerseys and white shorts, inspired by watching Blackburn Rovers win the FA Cup at the Kennington Oval in 1884, before returning to the original dark blue shirts for the 1889–90 season. From 1890 to 1895, the club had red shirts and blue shorts, changed in 1895 to chocolate brown and gold narrow striped shirts and dark-blue shorts. Finally, in the 1898–99 season, the strip was changed to the now familiar white shirts and navy-blue shorts. ### Professional status The club became unwittingly involved in a controversy known as the Payne Boots Affair in October 1893. A reserve player from Fulham, Ernie Payne, agreed to play for Spurs but arrived without any kit as it had apparently been stolen at Fulham. As no suitable boots could be found, the club gave him 10 shillings to buy his boots. Fulham then complained to the London Football Association that Tottenham had poached their player, and accused them of professionalism breaching amateur rules. On the latter charge, the London Football Association found Tottenham guilty, as the payment for the boots was judged an "unfair inducement" to attract the player to the club. Spurs were suspended for a fortnight, had to forfeit their second-round match against Clapham Rovers in the FA Amateur Cup and were therefore eliminated from the competition. Despite the punishment, the controversy had an unexpected positive effect on the club. Press coverage of the incident raised the national profile of what was then a local amateur club, and gained them sympathy for what many thought was unfair treatment. Invitations from other clubs to play games increased, and attendance at their matches rose. 
The publicity also brought on board local businessman John Oliver, who became chairman in 1894 after Ripsher retired, and provided funding for the club. With an increasing number of teams to play against, the quality of Spurs' opposition also improved. To compete against better teams, the club committee, led by the second club president John Oliver, agreed that the club should turn professional. Robert Buckle made the proposal at a meeting on 16 December 1895, which was accepted after a vote, and the club gained its professional status on 20 December 1895. Spurs made a failed attempt to join the Football League, but they were admitted to Division One of the Southern League in mid-1896. They also participated in other leagues, initially the United League and later Western League, in which they did not always field the full first team. The team was almost entirely rebuilt over the next two years; the first two professional players, Jock Montgomery and J. Logan, were quickly recruited from Scotland (a number of Scottish footballers would have significant influences in the club's history), and in 1897 they signed their first international, Jack Jones. On 2 March 1898, Spurs decided to become a limited company—the Tottenham Hotspur Football and Athletic Company—to raise funds and to limit the personal liability of its members. Eight thousand shares were issued at £1 each, but only 1,558 were taken up by the public in the first year. A board of directors was formed with Oliver as chairman, but he retired after the company reported a loss of £501 at the end of the season in 1899. Charles Roberts took over as chairman and remained in the post until 1943. Soon after the club became a limited company, on 14 March 1898, Frank Brettell was appointed the first manager of Spurs. He signed several players from northern clubs, including Harry Erentz, Tom Smith and, in particular, John Cameron, who signed from Everton in May 1898 and was to have a considerable impact on the club. Cameron became player-manager the following February after Bretell left to take a better-paid position at Portsmouth, and led the club to its first trophies: the Southern League title in 1899–1900 and the 1901 FA Cup. In his first year as manager, Cameron signed seven players: George Clawley, Ted Hughes, David Copeland, Tom Morris, Jack Kirwan, Sandy Tait and Tom Pratt. The following year Sandy Brown replaced Pratt, who wanted to return to the North despite being the top goalscorer. They, together with Cameron, Erentz, Smith and Jones, formed the 1901 Cup-winning team. ### Move to White Hart Lane On Good Friday 1898 a match was held against Woolwich Arsenal at Northumberland Park, attended by a record crowd of 14,000. In the overcrowded ground, fans climbed up onto the roof of the refreshment stand to get a better view. The stand then collapsed under their weight causing a few injuries, which prompted the club to start looking for a new ground. In 1899 the club moved a short distance to a piece of land behind the White Hart pub. The White Hart Lane site, actually located behind Tottenham High Road, was a nursery owned by the brewery chain Charringtons. The club initially leased the ground from Charringtons, but development of the ground was restricted by the terms of the lease. In 1905, after issuing shares towards the cost of purchase, Spurs bought the freehold for £8,900 and paid a further £2,600 for a piece of land at the northern end. 
By then the ground had a covered stand on the west side and earth mounds on the other three sides. The ground was never officially named, but it became popularly known as White Hart Lane, also the name of a local thoroughfare. The first game at White Hart Lane was a friendly against Notts County on 4 September 1899 that Spurs won 4–1, and the first competitive game on the ground was held five days later against Queens Park Rangers, a game won by Spurs 1–0. With an effective attacking triangle of Jones, Kirwan and Copeland, Tottenham won 11 of their first 13 games in the 1899–1900 season. On 28 April 1900, they finished top of the Southern League, winning the club its first trophy. After the win, the club was dubbed "Flower of the South" by the press.

### 1901 FA Cup

Tottenham first took part in the FA Cup in the 1894–95 season, but never got beyond the third round proper in six years. In the 1901 FA Cup, Spurs managed to reach the final after beating Preston North End, Bury, Reading and West Bromwich Albion; all apart from Reading were Football League Division One teams of the 1900–01 season. The final against Division One Sheffield United was played at Crystal Palace and attended by 110,820 spectators, at that time the largest crowd ever for a football match. The game ended in a 2–2 draw, with both of Spurs' goals coming from Sandy Brown, and a disputed goal from Sheffield. The final was the first to be filmed, and it contained the first referee decision demonstrated by film footage to be incorrect, as it showed that the ball did not cross the line for the Sheffield goal. In the replay at Burnden Park, Bolton, on 27 April 1901, Spurs won 3–1 with goals from Cameron and Smith and another from Brown. By winning the FA Cup, Spurs became the only non-League club to have achieved the feat since the formation of the Football League in 1888. The club also inadvertently introduced the tradition of tying ribbons in the colours of the winning team on the FA Cup when the wife of a Spurs director tied blue and white ribbons to the handles of the cup. The win started a trend for Spurs' success in years ending in a one, with further FA Cup wins in 1921, 1961, 1981 and 1991, the League Cup in 1971, and the league title in 1951 and in 1961, the Double-winning year. Following the 1901 Cup win, Spurs failed to repeat the success in the next few seasons, but they were runners-up in the Southern League twice, and won the London League in the 1902–03 season as well as the Western League in the 1903–04 season. They went on their first overseas tour, to Austria and Hungary, in May 1905. Cameron left on 13 March 1907, citing "differences with the directorate", and he was replaced in April 1907 by Fred Kirkham, a referee with no managerial experience. Kirkham was disliked by players and fans alike, and he left on 20 July 1908 after a year as manager.

## Early decades in the Football League (1908–1949)

### Election to the Football League

Tottenham resigned from the Southern League in 1908 and sought to join the Football League. Their initial application was unsuccessful, but after the resignation of Stoke from the league for financial reasons, Tottenham won election to the Second Division of the Football League for the 1908–09 season to replace them. As Spurs had no manager following Kirkham's departure, the directors took on the role of choosing the team, while the club secretary Arthur Turner was tasked with overseeing the team's affairs.
Spurs played their first league game in September 1908 against Wolverhampton Wanderers and won 3–0, with their first ever goal in the Football League scored by Vivian Woodward. Woodward was also instrumental in the club's immediate promotion to the First Division, as they finished runners-up in their first year. Before the start of the following season, Woodward left football to pursue other interests, although he soon returned to the game and joined Chelsea. Spurs struggled in their first year in the First Division, but avoided relegation by beating Chelsea in the last game of the season with goals from Billy Minter and former Chelsea player Percy Humphreys, sending their opponents down instead. The club started an ambitious plan to redevelop White Hart Lane in 1909, beginning with the construction of the West Stand designed by Archibald Leitch. The North and South stands were then built in the early 1920s, with the East Stand completed in 1934, bringing the capacity of the finished stadium to almost 80,000. A bronze cockerel was placed on top of the West Stand in 1909. The cockerel was adopted as an emblem because Harry Hotspur, after whom the club was named, was believed to have gained the nickname wearing fighting spurs in battles, and spurs were also worn by fighting cocks. Tottenham had initially used spurs as a symbol in 1900, as Harry Hotspur was said to have charged into battles by digging in his spurs to make the horse go faster, a symbol that evolved into a fighting cock. In April 1912 Jimmy Cantrell, Bert Bliss and Arthur Grimsdell arrived at the club. Late that year Peter McWilliam was appointed manager. He became a significant and popular figure at the club, managing the team in two separate periods, both interrupted by world wars. The first significant signing by McWilliam was winger Fanny Walden, who was signed for a record £1,700 in April 1913; later signings included goalkeeper Bill Jaques and right-back Tommy Clay. McWilliam's record in the early years was poor, and Tottenham were bottom of the league at the end of the 1914–15 season when league football was suspended owing to the First World War, which had started a year earlier. During the war years, White Hart Lane was taken over by the government and turned into a factory for making gas masks, gunnery and protection equipment. The London clubs organised their own matches, and Tottenham played their home games at Arsenal's Highbury and Clapton Orient's Homerton grounds. When football resumed in 1919, the First Division was expanded from 20 to 22 teams. The Football League offered one of the additional places to 19th-placed Chelsea, who would otherwise have been relegated with Spurs for the 1915–16 season, and the other, controversially, to Arsenal, who had finished only sixth in Division Two the previous season. This decision cemented a bitter rivalry that continues to this day, a rivalry that had begun six years earlier when Arsenal relocated from Plumstead to Highbury, a move opposed by Tottenham, who considered Highbury their territory, as well as by Clapton Orient and Chelsea. Matches between the two clubs, called the North London derby, became the most fiercely contested derby in London.

### Interwar years

In the first season after the war McWilliam took Tottenham straight back to Division One when they became Division Two Champions of the 1919–20 season. They won with what was then a league record of 70 points, losing only four games all season and scoring 102 goals.
Two players signed that season, Jimmy Dimmock and Jimmy Seed, became crucial members of the team together with Grimsdell. Other notable players of the period include Tommy Clay, Bert Smith and Charlie Walters. The following year Spurs reached their second FA Cup Final after beating Preston North End in the semi-final with two goals from Bliss. On 23 April 1921, in a game dominated by Walters, Spurs beat Wolverhampton Wanderers 1–0 in the final at Stamford Bridge, with 20-year-old Dimmock scoring the winning goal. They also won their first Charity Shield the same year. Following their FA Cup victory in 1921, Spurs players started to wear the cockerel emblem on their shirts. At McWilliam's instigation a nursery club was established at Northfleet in about 1922, an arrangement that was formalised in 1931 and lasted until the Second World War. Thirty-seven Spurs players, nine of whom became internationals, started their playing career at Northfleet. They include Bill Nicholson, Ron Burgess, Taffy O'Callaghan, Vic Buckingham and Ted Ditchburn. In the 1921–22 season, Spurs finished second to Liverpool in the league, their first serious challenge for the title. After the success of the two post-war seasons, Spurs only managed to finish mid-table in the next five. The team had begun to deteriorate, and new signings Jack Elkes and Frank Osborne could not overcome weaknesses in other positions. They were in first place for a while in 1925, until Grimsdell broke his leg and they dropped down the table. McWilliam left for Middlesbrough in February 1927 when Middlesbrough offered him significantly better pay while the Tottenham board refused his request for a smaller increase. Billy Minter, who first joined the club as a player in 1907 and became the first player to score 100 goals for the club in 1919, took over as manager. In the 1927–28 season, his first full season in charge, Spurs were unexpectedly relegated despite finishing with 38 points, only 6 points behind 4th-placed Derby County. One factor in their relegation may have been the sale of Jimmy Seed to Sheffield Wednesday—Wednesday had looked certain to be relegated, but Seed helped them escape, beating Tottenham twice along the way, and they went on to win the League Championship title in each of the next two seasons. Minter struggled to return Spurs to the top flight despite signing Ted Harper in February 1929. Harper was a prolific goalscorer in the few years he was at the club; his 36 league goals scored in a season in 1930–31 remained a record until 1963, when it was broken by Jimmy Greaves. The stress of being manager affected Minter's health; he resigned, and was given another position at the club in November 1929. Percy Smith was appointed manager at the start of 1930, and he strengthened the team with imported and home-grown talents including George Hunt, Willie Hall and Arthur Rowe. The team, aided by the goal-scoring exploits of Hunt, were nicknamed the "Greyhounds" in the 1932–33 season as they raced up the Second Division from near the bottom of the table, and won promotion after finishing second to Stoke City. Spurs only managed to stay in Division One for two seasons; injuries (especially to Rowe and Hall) left the team weakened and at the bottom of the table in the 1934–35 season by April 1935. Smith then resigned, claiming that the club's directors had interfered with his team selection. 
Jack Tresadern took over from caretaker manager Wally Hardinge in July 1935, but failed to lift the club out of the Second Division in the three years of his tenure. He promoted centre-forward Johnny Morrison in place of fans' favourite George Hunt, and decided to sell Hunt to rivals Arsenal in 1937, a decision that made him unpopular. He left in April 1938 for Plymouth Argyle when it became apparent that he would likely be sacked at the end of the season. Peter McWilliam was brought back as manager, and he tried to rebuild the team by promoting young players from Northfleet, including Nicholson, Burgess and Ditchburn, but his second stint at the club was again interrupted by world war. Spurs also failed to advance beyond the quarter-finals of the FA Cup in the 1930s, getting that far three years running from 1935 to 1938. Despite Tottenham's lack of success in this period, 75,038 spectators still squeezed into White Hart Lane in March 1938 for a cup tie against Sunderland—the club's largest gate until it was surpassed in 2016, when more than 85,000 attended the 2016–17 UEFA Champions League home match against Monaco held at Wembley Stadium.

### War and post-war lull

On 3 September 1939 Neville Chamberlain declared war, and league football was abandoned with only three games played. Nevertheless, matches continued to be arranged and played during the Second World War. The London clubs first played in the Wartime League and Football League War Cup, and Spurs won the Regional League South 'C' in 1940. After a reorganisation in 1941, they also competed in the Football League South. Owing to the difficult wartime conditions, Spurs, along with other London clubs, refused to travel long distances for the matches drawn up by the Football League and decided to run their own competitions: the London War League and the London War Cup. They (eleven London clubs and five other clubs from the south) were temporarily expelled from the Football League; after paying a £10 fine they were readmitted, but they played in the Football League South in the way the London clubs had suggested. McWilliam went back to the North during the war and the team was managed by Arthur Turner; under him Spurs won the regional league twice. Charles Roberts, who had been chairman since 1898, died in 1943 and was replaced by Fred Bearman. As it was difficult to field a full squad in the war years, many young players, among them Sonny Walters, Les Medley, Les Bennett and Arthur Willis, gained their first playing experience for the club in this period. Spurs also shared White Hart Lane with Arsenal when Highbury was requisitioned by the government and used as an Air Raid Precautions centre. After the war ended, McWilliam decided that he was too old to return to the club, and former Arsenal player Joe Hulme was given the job of managing Tottenham. Hulme failed to win promotion for the club, although Spurs managed to stay in the top half of the Second Division for the three seasons he was manager, and they reached the semi-final of the FA Cup in 1948. Players who joined the squad under Hulme included Eddie Baily, Len Duquemin, Harry Clarke and Charlie Withers. Football was popular in the post-war era, and although Spurs languished in the Second Division for the three post-war seasons, some games still drew large crowds, particularly for cup ties. Hulme was sacked after he refused a suggestion to resign following an illness in March 1949.
## Arthur Rowe and title win (1949–1958)

### League title

In May 1949 Rowe became Spurs manager at a salary of £1,500 a year. He inherited the squad assembled by Hulme, adding one crucial signing of his own when he took over: Alf Ramsey. Rowe introduced a new style of play, "push and run", which proved to be highly successful, and transformed the players into a team that were hard to beat in the 1949–50 season. Rowe started his tenure as manager with a 4–1 victory at Brentford, the beginning of an unbeaten run of 23 League and Cup games between 27 August 1949 and 14 January 1950. With a free-scoring attack, the team won the Second Division convincingly with six games still to play. They ended the season nine points in front, elevating them back into the top flight. After a shaky start to their 1950–51 season, when they were trounced 4–1 at home by the Blackpool side of Stanley Matthews, Tottenham won eight consecutive games in October and November, a run that included a 7–0 defeat of Jackie Milburn's Newcastle. Some considered the team of this season the best in Tottenham's history: guarding the goal was Ditchburn, one of Spurs' best ever goalkeepers, who was aided in defence by Ramsey, Clarke, and Willis; also influential were the captain Burgess and Nicholson, with the five forwards of Baily, Medley, Duquemin, Bennett and Walters completing the regular starting eleven. They finished the season ahead of Manchester United by three points, having won the First Division Championship title in the penultimate game of the season by beating Sheffield Wednesday with a solitary goal from Duquemin. That was Tottenham's first ever League title.

### Spurs Way

The push and run tactics developed by Rowe were successful in his early years as manager. Rowe credited McWilliam with teaching him to play a quick passing game, which he developed into a style built around players forming triangles, quickly laying the ball off to a teammate and running past the marking tackler to collect the return pass. Keeping to Rowe's maxim of "make it simple, make it quick", this method proved an effective way of moving the ball at pace with players' positions and responsibilities being totally fluid. It became an attractive, fast-moving attacking style of play regarded by Tottenham fans as the Spurs Way, which was adjusted and perfected in the later period under Bill Nicholson. Spurs finished second in the 1951–52 season, beaten to the title by a young Manchester United team. A bad winter and the poor state of the White Hart Lane pitch were contributory factors, as the "push and run" style of play required a good firm playing surface. The following years witnessed a period of decline: great players aged while younger players such as Tony Marchi and Tommy Harmer were not yet experienced enough, and injuries, together with other teams adapting to Spurs' revolutionary style of play, meant a struggle for the once-dominant team. Spurs could only finish tenth in the 1952–53 season. The "push and run" team also started to disperse; Medley, Willis, Bennett and Burgess left to join other teams while Nicholson moved into coaching. In 1954 Rowe signed future captain Danny Blanchflower for a club record fee of £30,000. Blanchflower won the FWA Footballer of the Year twice while at Tottenham.

### Post-Rowe

The stress of managing the team had affected Rowe's health, and he suffered a breakdown in 1954; he resigned after falling ill again the following year.
Rowe's assistant and long-time club servant Jimmy Anderson stepped into the breach as manager, but the season ended with Spurs in the lower half of the table. With Blanchflower in the team, Ramsey was dropped from the line-up, and he soon left to start his managerial career with Ipswich Town, later guiding England to a World Cup win. Spurs were almost relegated at the end of the 1955–56 season, finishing just two points above the drop zone. In the 1956–57 season, under the guidance of Bill Nicholson as coach, the creative pairing of Blanchflower and Harmer, and the scoring prowess of Bobby Smith, the club experienced a revival and finished in second place, albeit eight points behind the winners, the "Busby Babes" of Manchester United. Tottenham also fared well in the following season, finishing third. As manager, Anderson started to build a new team by signing, bringing in or promoting some of the players who became members of the team that saw major success later: Cliff Jones, Terry Medwin, Peter Baker, Ron Henry, Terry Dyson, Maurice Norman and Smith. ## Bill Nicholson and the glory years (1958–1974) In October 1958, with the pressure of a poor start to the season and failing health, Anderson resigned, to be replaced by Bill Nicholson. Nicholson had joined Tottenham as an apprentice in 1936, and the following sixty-eight years saw him serve the club in every capacity from boot room to president. He became the most successful Spurs manager, guiding Tottenham to major trophy success three seasons in a row in the early 1960s: the double in 1961, the FA Cup and European Cup semi-final in 1962 and the Cup Winners' Cup in 1963. In Nicholson's first game as manager, on 11 October 1958, Spurs beat Everton 10–4, their then record win, but the team finished 18th in the league in his first season in charge. In the following 1959–60 season, Spurs improved to third place in the league, two points behind the champions Burnley. They also beat Crewe Alexandra 13–2 in the 1959–60 FA Cup with five goals coming from Les Allen. It remains the highest scoring FA Cup tie of the 20th century and is still the club's record win. In his first two years in charge, Nicholson made several signings—Dave Mackay and John White, the two influential players of the Double-winning team, as well as Allen and goalkeeper Bill Brown. ### Double winners The 1960–61 season started with a 2–0 home win against Everton, the beginning of a run of eleven wins. The winning run was interrupted by a 1–1 draw against Manchester City, followed by another four wins before the unbeaten streak was broken by a loss to Sheffield Wednesday at Hillsborough in November. It remained the best ever start by any club in the top flight of English football until it was surpassed by Manchester City in 2017. The title was won on 17 April 1961 when Spurs beat the eventual runners-up Sheffield Wednesday at home 2–1, with three games still to play. In this Double-winning season only seventeen players were used in all forty-nine league and cup games, three of them playing only once. The team was built around the quartet of Blanchflower, Mackay, Jones, and White; completing the side were Smith (top scorer of the season), Allen, Henry, Norman, Baker, Dyson and Brown, with Medwin, Marchi, and a young Frank Saul among the reserves. Spurs reached the final of the 1960–61 FA Cup, beating along the way Sunderland 5–0 in the sixth-round replay and Burnley 3–0 in the semi-final. 
Spurs met Leicester City in the 1961 FA Cup Final and won 2–0, with Smith scoring the first and setting up the second for Dyson, helped in part by Leicester being effectively reduced to ten men owing to injury (no substitutions were allowed at that time). Spurs became the first team in England to win the Double in the 20th century, and the first since Aston Villa achieved the feat in 1897.

### First European triumph

Tottenham competed for the first time in a European competition in the 1961–62 European Cup. Their first opponents were Górnik Zabrze, who beat Spurs 4–2. After the match the Polish press described Spurs players as "no angels"; in response, in the return leg at White Hart Lane, some Spurs fans dressed up as angels holding placards with slogans such as "Glory be to shining White Hart Lane", and other fans joined in by singing the refrains of "Glory Glory Hallelujah". Spurs won 8–1 to the sound of fans singing "Glory glory hallelujah, and the Spurs go marching on", which became an anthem for Tottenham from that night onwards. Spurs eventually lost in the semi-final to the holders Benfica, who went on to win the competition. A month later Spurs won the FA Cup again after beating Burnley in the 1962 FA Cup Final. The first goal of the game was scored by Jimmy Greaves, who had been signed in December 1961 for £99,999 (so as not to be the first £100,000 player). Greaves became the top goal scorer for Tottenham with 220 league goals and 321 goals in all appearances for the club, and the most prolific scorer ever in the top tier of English football. In the 1962–63 European Cup Winners' Cup, Spurs reached the final, beating Rangers 8–4, Slovan Bratislava 6–2, and OFK Belgrade 5–2, all on aggregate. Spurs' 5–1 win in the Cup Winners' Cup Final against Atlético Madrid in Rotterdam on 15 May 1963, during which Terry Dyson scored a goal from 25 yards out, made Tottenham the first British team to win a European trophy.

### Continuing success

By 1964 the Double-winning side was beginning to break up owing to age, injuries and transfers. Captain Danny Blanchflower hung up his boots that spring at the age of 38, troubled by a knee injury, and Dave Mackay was sidelined for a long period after breaking his leg twice—the first break occurred during Spurs' defence of the Cup Winners' Cup against Manchester United, resulting in the ten-man Spurs being eliminated from the competition. In the summer of 1964, John White was tragically killed by lightning on a golf course. Nicholson rebuilt the team with new players, most of them imports, including Alan Mullery, Pat Jennings, Cyril Knowles, Mike England, Terry Venables, Jimmy Robertson, Phil Beal, Joe Kinnear, and Alan Gilzean, who formed a goal-scoring partnership with Greaves. The rebuilding culminated in victory over Chelsea in the 1967 FA Cup Final, with goals from wingers Robertson and Saul, and a third-place finish in the league. The team failed to make much of an impact in the following two seasons, only reaching the semi-final of the League Cup in 1969, a competition created in 1960 but one in which Tottenham did not participate until 1966. Mackay, Jones and Robertson left the club in 1968, followed by Venables in 1969. In their place came the home-grown players Steve Perryman, Ray Evans and John Pratt, and new signing Martin Chivers. Martin Peters also arrived from West Ham United for a record fee of £200,000 in 1970 in part-exchange for a reluctant Greaves, while Ralph Coates was signed in the summer of 1971 for £192,000 from Burnley.
Gilzean recreated with Chivers the goal-scoring partnership that he had had with Greaves, this time aided by the blindside runs of Peters. The revitalised team reached the League Cup final in 1971, where they beat Aston Villa 2–0 to win their first League Cup, with Chivers scoring both goals. They won the League Cup again in 1973 after beating Norwich City by a single goal from Coates in the final. The 1971 Cup win and a third-place finish in the 1970–71 season earned Spurs a place in the inaugural UEFA Cup. They reached the final after a battling performance to draw against A.C. Milan at the San Siro stadium in the second leg of the semi-final, giving them a 3–2 win on aggregate. In the first leg of the UEFA Cup final against Wolverhampton Wanderers, Chivers scored twice to give Spurs a 2–1 lead. In the return leg at White Hart Lane, team captain Mullery headed in a goal in his last game for the club to seal a 3–2 win on aggregate. With this victory, Spurs became the first British team to win two different European trophies. In total Nicholson had won eight major trophies in sixteen years; his spell in charge was the most successful period in the club's history.

## Decline and revival under Keith Burkinshaw (1974–1984)

Although Tottenham reached four cup finals in the four years from 1971 to 1974, winning three of them, the team began to decline as Nicholson was unable to sign the players he wanted, in part because of his refusal to meet demands for under-the-counter payments. The early seventies also marked the beginning of a period of increasing football violence; rioting by Spurs fans in Rotterdam during their loss to Feyenoord in the 1974 UEFA Cup Final added to his disillusionment. Nicholson resigned after a poor start to the 1974–75 season and a 4–0 loss to Middlesbrough in the League Cup, and his tenure ended on a sour note. He had sought to be succeeded by Blanchflower as manager and Johnny Giles as player-coach, but the chairman Sidney Wale was angered by Nicholson contacting the pair without informing him first. The club then severed all ties with a £10,000 payoff, even though Nicholson had wanted to stay on as an advisor, and refused him a testimonial (Nicholson was later brought back as advisor by Keith Burkinshaw and was only given a testimonial in 1983 under a different chairman). Terry Neill was appointed manager by the board, and Spurs narrowly avoided relegation at the end of the 1974–75 season. Spurs performed better the following season, in which Glenn Hoddle played his first game for the club. A former Arsenal player, Neill was never accepted by the fans, and he left to manage Arsenal in mid-1976, replaced by his assistant Keith Burkinshaw, whom he had recruited the previous year.

### Relegation

In Burkinshaw's first year as manager, the 1976–77 season, Tottenham slipped out of the First Division after 27 years in the top flight. Many of the early '70s Cup-winning team had by now left or retired. Relegation was followed in the summer of 1977 by the sale of their Northern Ireland international goalkeeper Pat Jennings to Arsenal for a bargain £45,000, as Burkinshaw had started to use Barry Daines, a move that shocked the club's fans and one that Burkinshaw later admitted was a great error. Jennings played on for another seven seasons for Spurs' arch-rivals. Despite relegation, the board kept faith with Burkinshaw and the team immediately won promotion back to the top flight, although promotion was not secured until the final league game of the season.
A sudden loss of form at the end of the 1977–78 season meant the club needed a point in the last game at Southampton. To Tottenham's great relief, the game ended 0–0 and Spurs returned to the First Division. Early in the season, Spurs had won 9–0 at home to Bristol Rovers, with four of their goals coming from debutant striker Colin Lee. The glut of goals proved significant later on as Tottenham won promotion through goal difference. ### Cup wins and European success In the summer of 1978, Burkinshaw caused a stir by signing for £750,000 two Argentinian internationals Osvaldo Ardiles and Ricardo Villa—players from beyond the British Isles in English football were rare at the time. This was also a period of rebuilding as young players were brought in from the youth team, such as Mark Falco, Paul Miller, Chris Hughton and Micky Hazard, as well as other players signed from other clubs such as Graham Roberts, Tony Galvin, and in particular the twin strike force of Garth Crooks and Steve Archibald. Spurs opened the 1980s by reaching the 100th FA Cup Final in 1981 against Manchester City, and won the replay 3–2 in a match notable for the winning goal from Ricardo Villa. They lifted the FA Cup again the next season, beating Queens Park Rangers in the 1982 Final. Although they were also in contention for three other trophies that season, they finished fourth in the First Division, lost to Liverpool in the League Cup final in extra time, while Barcelona won at home in the Cup Winners' Cup semi-final after a 1–1 draw at White Hart Lane. The club began a new phase of redevelopment at White Hart Lane in 1980, starting with the rebuilding of the West Stand initiated by a new chairman Arthur Richardson. The West Stand was demolished and a new stand, which took 15 months to complete, opened in 1982. Cost overruns on the project, which rose from £3.5 million to £6 million, as well as the cost of rebuilding the team in the return to Division One resulted in financial difficulties for the club. In this period, property developer and Spurs fan Irving Scholar began buying up shares in the club. He took advantage of a rift in the boardroom between Richardson and former chairman Sidney Wale, and persuaded Wale to sell his shares. Scholar bought up 25% of the club for £600,000, and with the help of Paul Bobroff who had bought 15% of shares from the family of previous chairman Bearman, took control in December 1982. Scholar inherited a club in debt to the tune of nearly £5 million, what was then the largest debt in English football, but a rights issue after he took over brought in a million pounds. In 1983, a new holding company, Tottenham Hotspur plc, was formed with the football club run as a subsidiary of the company. With a valuation of £9 million, the company was floated on the London Stock Exchange, the first sports club to do so. Together with Martin Edwards of Manchester United and David Dein of Arsenal, Scholar transformed English football clubs into business ventures that applied commercial principles to the running of the clubs to maximise their revenues, a process that eventually led to the formation of the Premier League. In 1984, Tottenham reached the final of the UEFA Cup. After 1–1 scores in both legs against Anderlecht, Tottenham emerged the victor when Tony Parks saved a penalty in the penalty shootout. This was the third of the major trophies won by the club under Burkinshaw in the 1980s. 
Several weeks before this victory, Burkinshaw announced that he would be leaving at the end of that season after disagreements with the directors and becoming disenchanted with the club. ## Shreeves and Pleat (1984–1987) Burkinshaw was succeeded as manager by his assistant Peter Shreeves in June 1984. According to Scholar, Aberdeen manager Alex Ferguson, who joined Manchester United two years later, had reneged on an agreement to take over. Tottenham enjoyed a strong start to the 1984–85 season and looked poised to win the league title by the winter, but a series of poor home results in 1985 resulted in the team being leapfrogged by eventual champions Everton and runners-up Liverpool. Their final position of third place in the league should have secured them a UEFA Cup place, but the Heysel disaster on 29 May 1985, which saw 39 spectators crushed to death when Liverpool fans rioted at the 1985 European Cup Final, resulted in all English clubs being banned from European competitions. In 1986, Perryman departed after 19 years at the club (17 years in the first team) and a record 655 league appearances. The same year Spurs also sold its training ground at Cheshunt that the club had owned since 1952 for over £4 million. The 1985–86 season proved disappointing despite the signings of Chris Waddle and Paul Allen in the summer of 1985, and Shreeves was sacked at the end of the season. Luton Town manager David Pleat was appointed the new manager following Shreeves' dismissal. For much of 1986–87, Spurs played with what was for that time an unusual five-man midfield formation: Hoddle, Ardiles, Allen, Waddle and Steve Hodge. The lone striker Clive Allen scored 49 goals in all competitions that season, still a club record. Tottenham remained in contention for all three major domestic honours throughout the season, but finished the season empty-handed. In the League Cup, Tottenham lost to eventual competition winners Arsenal in the semi-final. Spurs also missed out on the First Division title to Everton, and stumbled to a 3–2 loss in the FA Cup final to Coventry City, whose winning goal was deflected off Gary Mabbutt's knee for an own goal in the final minutes. The close season of 1987 saw the sale of Glenn Hoddle to Monaco after a decade as the driving force in Tottenham's midfield. ## Terry Venables (1987–1993) In October 1987, Pleat quit the club following allegations about his private life. He was succeeded by former player Terry Venables, who had by then built up an impressive managerial record. He inherited a Spurs side that was struggling in the league with a quarter of the 1987–88 season played. Earlier in the season veteran goalkeeper Ray Clemence had to retire after suffering an Achilles tendon injury, and new signings by Venables Terry Fenwick and Paul Walsh failed to lift the team, which finished in 13th place. Striker Clive Allen was also less prolific in attack in this season, and he was sold to French club Bordeaux in March 1988. To invigorate the Tottenham side, Venables paid a national record £2 million for Newcastle midfielder Paul Gascoigne in June 1988, and also signed striker Paul Stewart from Manchester City for £1.7 million. Spurs made a shaky start to the 1988–89 season; incomplete refurbishment of the East Stand caused the postponement of the opening game against Coventry just a few hours before kickoff, which earned the club a two-point deduction (later replaced by a £15,000 fine), and this was followed by a string of losses in October. 
They were second from bottom at the end of October, but improved to ninth place by the turn of 1989 and finished sixth in the final table. ### Cup win and boardroom drama By the end of the 1980s and the beginning of 1990s, Spurs had become mired in considerable financial difficulties, with a debt reported to be £20 million in 1991. The East Stand was refurbished in 1989 but its cost had doubled to over £8 million, while the company's attempts to diversify into other businesses such as the clothing firms Hummel UK and Martex failed to generate the income expected and were in fact losing them money. July 1989 saw the arrival at White Hart Lane of England striker Gary Lineker from Barcelona for a fee of £1.2 million; however, the cash-strapped club was unable to pay Barcelona in full even though Waddle was sold days later to Marseille for £4.25 million. Scholar had to organise a secret £1.1 million loan from Robert Maxwell, which caused an uproar and resulted in an attempt to oust Scholar from the boardroom when it was revealed. Maxwell, who first owned Oxford United and then Derby County, became interested in the club, putting Derby County up for sale so that he could acquire Tottenham. Venables, who had previously attempted to buy the club but failed, then joined forces with businessman Alan Sugar in June 1991 to forestall a takeover by Maxwell and gain control of Tottenham Hotspur plc, buying out Scholar for £2 million. With Lineker and Gascoigne in the team, Spurs finished third in the 1989–90 title race won by Liverpool. In the following 1990–91 season, they began the season unbeaten in ten games, but failed to rediscover their earlier league form in the second half of the season, eventually finishing tenth in the final table. Despite the middling ranking, this season remains a highlight for Tottenham for their performances in the 1990–91 FA Cup. A 3–1 semi-final win over Arsenal featured a 30-yard free kick from Gascoigne considered to be one of the best goals ever seen in the competition. In the final against Nottingham Forest, Gascoigne suffered serious cruciate ligament damage in his knee when he made a reckless tackle on opponent Gary Charles. Winning the match 2–1 after extra time, Spurs became the first team to collect eight FA Cups, a record later surpassed by Manchester United in 1996. The excitement in North London over the win also had the unexpected result of prompting Sugar, who had little knowledge of the club's history (and was alleged to have asked "What Double?" when someone mentioned Tottenham's Double), to contact Venables and jointly buy the club. Gascoigne was a transfer target for Italian club Lazio, but his knee injury (aggravated later in a nightclub incident) meant that he missed the 1991–92 season, and his transfer to Lazio was put on hold. By the summer of 1992, his knee had recovered and he completed his move to Lazio for £5.5 million, reduced from the £7.9 million fees agreed before his injury. Gary Lineker then announced in November that he would be leaving Spurs at the end of the season to play in Japan, while Paul Walsh and Paul Stewart also left the club. In the 1991–92 season, Venables became chief executive of the club, and Shreeves again took charge of first-team duties. During the summer of 1992, Venables decided to return to team management; Shreeves was sacked, and a European style of management was instituted with Doug Livermore the head coach and Clemence the assistant. 
Although Sugar and Venables began as equal partners, with each investing £3.25 million in the club, Sugar's financial clout allowed him to increase his stake to £8 million in December 1991, thereby gaining control of the club. In May 1993, after a row at a board meeting, Terry Venables was controversially dismissed from the Tottenham board by Sugar; the dismissal was initially overturned in the High Court, but that ruling was reversed on appeal. Although Sugar had initially been seen as a saviour of the club, his ousting of a popular figure, later aggravated by a perceived lack of investment in the club, earned him long-lasting animosity from some fans, who repeatedly called for his resignation.

## Beginning of Premier League football (1992–2004)

Spurs were one of the five clubs that pushed for the founding of the Premier League, created with the approval of The Football Association to replace the Football League First Division as the highest division of English football. To coincide with the massive changes in English football, Tottenham made major signings, including winger Darren Anderton, defender Neil Ruddock, and striker Teddy Sheringham for what was then a club record £2.1 million from Nottingham Forest. The Sheringham transfer was later the subject of allegations of "bungs" against Forest manager Brian Clough. In the first ever Premier League season—Venables' final year as Spurs' manager—Spurs finished eighth. Teddy Sheringham was the division's top scorer with 22 goals, 21 of which were scored for Tottenham.

### Ardiles, Francis and Gross

The departure of Venables saw Tottenham return to a conventional management setup after two seasons of a two-tier structure. Former player Osvaldo Ardiles took charge of the first team. In the 1993–94 season, Sheringham's injury in October 1993 impacted on Spurs' performance, and relegation became a real possibility. In the end, the club managed a 15th-place finish, its survival only guaranteed by a win in the penultimate game of the season. By this time Spurs had come under investigation for financial irregularities alleged to have taken place in the 1980s while Irving Scholar was chairman, and in June 1994 the club was found guilty of making illegal payments to players. Spurs were fined £600,000, had twelve league points deducted for the 1994–95 season and were banned from the 1994–95 FA Cup. Following an appeal the number of points deducted was reduced, but the fine was increased to £1.5 million. A further arbitration (after Sugar threatened to sue the FA) quashed the points deduction and FA Cup ban, although the fine remained in place. Despite the penalty, the club aimed to have a successful season in 1994–95, and signed three players who had appeared at that summer's World Cup: German striker Jürgen Klinsmann and two Romanians, Ilie Dumitrescu and Gheorghe Popescu. Forward players in the Spurs line-up already included Sheringham, Anderton and Nick Barmby, and Ardiles chose to play five attacking players, dubbed the "Famous Five" of Klinsmann, Sheringham, Anderton, Barmby, and Dumitrescu. Their debut in the 1994–95 season against Sheffield Wednesday in August 1994, which Spurs won 4–3, was described as a "breathtaking exhibition of football", but the imbalance in the team meant they also leaked goals (33 in 15 games). Spurs struggled in September with a series of defeats, and after the team lost 3–0 in the League Cup in October, Ardiles was dismissed.
At that time Spurs were just two places above the relegation zone, although they would have been 11th in the table with the deducted points that were later restored. Ardiles was replaced by Gerry Francis, who alleviated relegation fears and oversaw the club's climb to seventh place in the league, just missing out on a 1995–96 UEFA Cup place. When the FA Cup ban was lifted, Spurs reached the FA Cup semi-final, where they were defeated 4–1 by eventual winners Everton. Klinsmann was top scorer at the club with twenty-nine goals in all competitions, but he felt that Spurs would not be able to challenge for the title in future seasons, and returned to his homeland to sign with Bayern Munich. Barmby, Dumitrescu and Popescu also departed, and Francis signed the likes of Ruel Fox and Chris Armstrong for more than £4 million each but spurned the chance to sign Dennis Bergkamp, who was a fan of Glenn Hoddle and was interested in joining Spurs. Other signings included future captain Ledley King, who did not start in the first team for a few years. Francis' transfer dealings failed to deliver European qualification or higher, and Spurs finished eighth in 1996 and tenth in 1997. Sheringham left for Manchester United in the summer of 1997, while Les Ferdinand joined the team for a record £6 million and David Ginola for £2.5 million, both from Newcastle. Ferdinand was soon hit by injury, and in November 1997, Francis decided to resign after Spurs were beaten 4–0 by Liverpool. Christian Gross, coach of Swiss champions Grasshoppers, was chosen as the successor to Francis. Gross failed to turn around the club's fortunes in the 1997–98 season, and the team battled against the drop for the remainder of the campaign. Klinsmann returned to Spurs in December on loan, and his four goals in a 6–2 win away to Wimbledon in the penultimate game of the season were enough to secure survival. By the end of the season, the renovation of the White Hart Lane stadium was completed. White Hart Lane was converted into an all-seater stadium in the 1990s: the South Stand was rebuilt, and a new tier was added to the North Stand, leaving the stadium with a capacity of about 36,240. The stadium remained in this form, bar some minor changes, until 2016.

### George Graham and League Cup win

After losing two of the first three games of the 1998–99 season, Gross was sacked, and former Arsenal manager George Graham was hired to take over. Graham signed Steffen Freund, who became a fans' favourite, as well as Tim Sherwood. Fans were critical of Graham due to his association with Arsenal and disliked his defensive style of football, especially when Arsène Wenger was starting to achieve major success at Arsenal with an attacking style previously associated with Spurs. Nevertheless, in Graham's first season as Spurs manager, 1998–99, the club secured a mid-table finish and won the League Cup. In the final against Leicester City at Wembley Stadium, full-back Justin Edinburgh was sent off after an altercation with Robbie Savage, but the ten-man Spurs secured a dramatic victory through Allan Nielsen's diving header in the 93rd minute of the game. To cap a good season, David Ginola won both the PFA Players' Player of the Year and Football Writers' Association Footballer of the Year awards in the year Manchester United won the Treble. The club finished mid-table the following year. In May 2000, Tottenham signed Ukrainian striker Serhii Rebrov from Dynamo Kyiv for a club record £11 million.
However, Rebrov was not a success at White Hart Lane, managing just ten goals over the next four seasons.

### New ownership and Glenn Hoddle

In late 2000, Sugar decided to sell his shareholding in the club, a decision he blamed on the hostility of fans towards him and his family. He sold 27% of his 40% shareholding in Tottenham for £22m to ENIC Sports PLC, and he stepped down as chairman in February 2001 on the completion of the sale. The rest of his shares were sold in 2007 for £25m. ENIC, owned by Joe Lewis and Daniel Levy, with the latter responsible for the running of the club, would eventually acquire 85% of Tottenham, and the club was taken into private ownership in 2012. A month after Levy took over as chairman, George Graham was sacked as manager by vice-chairman David Buchler for an alleged breach of contract after Graham commented on the financial position of Spurs. Team management passed to former Tottenham player Glenn Hoddle, who took over in the final weeks of the 2000–01 season from caretaker manager David Pleat. His first game was a defeat to Arsenal in the 2001 FA Cup semi-final. That summer, club captain Sol Campbell joined Arsenal on a Bosman free transfer. The loss of a transfer fee by Spurs, the move to their bitterest rivals, and the perceived underhanded fashion in which he negotiated his move (claimed to be worth a record £100,000 per week) led to long-term enmity towards Campbell from Spurs fans. Hoddle turned to more experienced players in the shape of Teddy Sheringham, who returned to Spurs in May 2001, as well as new signings Gus Poyet and Christian Ziege. Spurs played some encouraging football in the opening months of his management, but they ended the 2001–02 season in ninth place. They reached the League Cup final, where they lost to Blackburn, despite being seen as the favourites after their 5–1 defeat of Chelsea in the previous round. The only significant outlay before the following campaign was the £7 million signing of Robbie Keane, who joined from Leeds United; Jamie Redknapp had joined earlier on a free transfer. The 2002–03 season started well, with Tottenham top of the league after three successive wins, and Hoddle voted the division's manager of the month for August. They were still in the top six as late as early February, but the season ended with a tenth-place finish, the result of a barren final ten games of the league campaign that delivered a mere seven points. Several players publicly criticised Hoddle's management and communication skills. Spurs started the following 2003–04 season poorly, gaining only four points from their first six games. With Spurs struggling third from bottom of the table, Hoddle was sacked by Levy, and Pleat again took over as caretaker manager.

## Resurgence and the Champions League (2004–2014)

In June 2004, Tottenham appointed France national team manager Jacques Santini as head coach, with Martin Jol as his assistant and Frank Arnesen as sporting director. In early November, after only 13 games in charge, Santini decided to quit the club, making his the shortest stint by any Spurs manager. Santini was replaced by Jol, and the team managed to secure a ninth-place finish in the 2004–05 season. In June 2005, when Arnesen moved to Chelsea, Spurs appointed Damien Comolli as sporting director. Among the players signed by Jol were Edgar Davids in 2005, Dimitar Berbatov in 2006, and Gareth Bale in May 2007. Players who left included Michael Carrick, who went to Manchester United for a reported £18.6 million in 2006.
During the 2005–06 season, Spurs spent six months in the top four. Going into the final game of the season, they led Arsenal by a point, but were defeated in their final match—away to West Ham—after many players, including Keane and Carrick, succumbed to food poisoning from a meal they had eaten the night before. Spurs were pipped to a UEFA Champions League place, but gained a place in the UEFA Cup and achieved their highest finish for 16 years. In 2006–07, they finished fifth for the second successive year. The 2007–08 season saw the club win only one of their first ten League matches, their worst start in 19 years. Jol was sacked, learning of his removal just before a UEFA Cup game on 25 October 2007. Juande Ramos, formerly of Sevilla, then replaced the Dutchman as manager. Captained by Ledley King, Spurs went on to win the League Cup, beating Chelsea 2–1 in the League Cup Final in February 2008. Luka Modrić was signed towards the end of the season, but Berbatov and Keane were sold to Manchester United and Liverpool respectively in the summer as both wanted to leave. Tottenham made their worst start to a season in the club's history in 2008–09, their failure to register a win in eight League games leaving them bottom of the Premier League. Ramos and director of football Damien Comolli were dismissed on 25 October 2008, amid Levy's criticism of the failure to recruit suitable replacements for Berbatov and Keane.

### Harry Redknapp

Portsmouth manager Harry Redknapp was appointed as Ramos' replacement, and Tottenham reverted to a traditional setup with Redknapp responsible for both coaching and player transfers. Redknapp took the club out of the relegation zone by winning 10 of the 12 points available in his first two weeks in charge, and finished the 2008–09 campaign eighth in the league table. The January transfer window saw the return of Keane and Jermain Defoe to the club after spells at Liverpool and Portsmouth respectively; they were later joined by Peter Crouch, signed in the summer, and Rafael van der Vaart in 2010. The team's performance improved in the two following seasons. Notable matches in the 2009–10 season included the 9–1 home win against Wigan Athletic, a record top-flight win for the club in which Defoe scored five, and a 2–1 win against Arsenal, with goals from Bale and debutant Danny Rose, which gave them their first Premier League victory against their rivals. Spurs finished the 2009–10 season in fourth place, and reached the qualifying rounds of the Champions League for the first time in their history. In the 2010–11 season, they qualified for the group stages of the Champions League, came top of their group and went on to beat A.C. Milan 1–0 on aggregate in the knockout stage. In the quarter-finals, Spurs suffered a heavy defeat against Real Madrid: after Crouch was sent off early in the game, the ten-man team was beaten 4–0 at the Santiago Bernabéu Stadium, and 5–0 on aggregate. Earlier in the season they had won at the Emirates with goals from van der Vaart and Bale, their first win at Arsenal in 17 years. At the start of the 2011–12 season, the home league game against Everton was postponed because of rioting in Tottenham a week before the game, and Spurs then lost the next two games. Tottenham managed to record ten wins and one draw in their next eleven Premier League matches, and finished the season in fourth place in the Premier League but failed to qualify for the Champions League.
On 13 June 2012, after brief contract renewal talks during which Redknapp and the Tottenham board failed to agree terms, Redknapp was dismissed by the club.

### Villas-Boas and Sherwood

Following Redknapp's departure, the club appointed former Chelsea and Porto coach André Villas-Boas as manager. Shortly after his appointment, the club pipped Liverpool to the signature of Swansea City midfielder Gylfi Sigurðsson. Several days later, the club also resolved the protracted transfer saga surrounding Ajax defender Jan Vertonghen. They were soon followed by Hugo Lloris and Mousa Dembélé, while Modrić left for Real Madrid. In the 2012–13 season, they finished fifth. Although Spurs won a dramatic final match of the season against Sunderland with a goal from Gareth Bale, Arsenal won their last match to take the final 2013–14 Champions League spot, and Spurs dropped into the Europa League for the second successive season. In their concurrent 2012–13 Europa League campaign, they were eliminated in the quarter-finals by Swiss side Basel on penalties. At the beginning of the 2013–14 season, Bale moved to Real Madrid for what was then a world record transfer fee of €100.8 million (£85.1 million). This money paid for several players signed in that transfer window, including Christian Eriksen and Erik Lamela. Following a 6–0 defeat against Manchester City and a 5–0 defeat against Liverpool, Villas-Boas was dismissed from his role on 16 December 2013. A week later, former Spurs player Tim Sherwood became Villas-Boas' replacement as manager as a stopgap measure. Although Sherwood led Spurs to a sixth-place finish in the Premier League, his results against the top teams were disappointing, and he was sacked on 13 May 2014.

## Pochettino era (2014–2019)

Mauricio Pochettino was appointed Tottenham manager on 27 May 2014, on a five-year contract. In his first season Spurs finished fifth in the 2014–15 Premier League with 64 points, and were runners-up in the 2015 Football League Cup Final. Pochettino chose to promote young players, and in his second season in charge, Spurs had the youngest team in the Premier League. A new generation of players, including Harry Kane, Dele Alli and Eric Dier, were all aged 22 or younger that season. Spurs had a much improved 2015–16 season, but their title challenge ended with a 2–2 draw at Stamford Bridge on 2 May 2016, and they finished third behind winners Leicester. The following 2016–17 season began with a series of 12 unbeaten league matches, but the team performed inconsistently during the first half of the season. They put in a much better performance in 2017, including a win in the North London derby that ensured a higher finish in the Premier League than their rivals Arsenal for the first time in 22 years. Their early inconsistency meant that Spurs were unable to overhaul the lead of the eventual champions Chelsea (13 points over Spurs at one stage in March), and they finished the season in second place with 86 points, their highest points tally since the Premier League began. This was their highest league finish in the 54 years since the 1962–63 season under Bill Nicholson, and the team also recorded their first unbeaten home league season in the 52 years since 1964–65.

### New stadium

The construction of a new stadium was initiated on land adjacent to White Hart Lane in 2015. The new stadium has a seating capacity of 62,062, considerably greater than the 36,000 of White Hart Lane.
A section of the North Stand was removed to allow building work on the new stadium to proceed next to the old stadium. The removal of part of the stand reduced the capacity of the stadium, and European matches were held at Wembley Stadium for the 2016–17 season to comply with the ticketing requirement for European games. A club attendance record of 85,512 spectators was reported for the 2016–17 UEFA Champions League game against Bayer Leverkusen, which Spurs lost 1–0. Spurs played their last game at White Hart Lane on 14 May 2017, a 2–1 victory over Manchester United that secured their second place in the Premier League. In the summer of 2017, White Hart Lane was demolished to allow the new stadium to be completed, and all Tottenham's home games were played at Wembley for the 2017–18 season. As Wembley has a higher capacity, the season saw a series of record attendances for Premier League games, the highest at the North London derby on 10 February 2018, when 83,222 spectators witnessed Spurs' 1–0 win over Arsenal. Tottenham failed to make any new signings in the summer transfer window of 2018, becoming the first Premier League club not to sign a player in a summer window. They also failed to sign anyone in the next transfer window in January. They nevertheless made their best ever start to a Premier League season in 2018–19, which was ended by a home defeat against Manchester City. After some delay, the new stadium, Tottenham Hotspur Stadium, was completed and opened on 3 April 2019. The first game at the stadium, a Premier League game against Crystal Palace, was won by Tottenham 2–0, with Son Heung-min scoring the first-ever official goal at the new stadium. Tottenham started the 2018–19 UEFA Champions League poorly, gaining only one point in the first three games of the group stage. They managed to qualify for the knockout phase with a late equalising goal against Barcelona. After beating Borussia Dortmund in the Round of 16, the team staged a dramatic win on away goals (4–4 on aggregate) against Manchester City in the quarter-final, followed by a last-gasp victory over Ajax in the semi-final, in which a second-half hat-trick by Lucas Moura overturned a 3–0 aggregate deficit, the winning goal coming in the dying seconds of the return leg. Tottenham thus reached their first ever Champions League final. However, they were beaten 2–0 by Liverpool in an uninspiring final, and finished runners-up.

## José Mourinho, Nuno Espírito Santo and Antonio Conte (2019–present)

The 2019–20 season started poorly for Tottenham, who won only three of their first 12 league games and struggled in the cup competitions, including a disastrous 7–2 defeat to Bayern Munich in the Champions League group stage. As a result, Pochettino was sacked on 19 November 2019, to be replaced by José Mourinho as coach the following day. In a season disrupted by the COVID-19 pandemic, the team under Mourinho failed to qualify for the Champions League for the first time in five years, but did qualify for the 2020–21 Europa League. In the following 2020–21 season, Tottenham had a good run of form in the first few months, with Harry Kane and Son Heung-min forming a formidable goal-scoring partnership that saw Spurs top the league table in late November following a 2–0 win against Manchester City. However, a string of bad results saw them drop down the league table through the second half of the season, and Mourinho's defensive style of football drew further criticism.
Mourinho was dismissed from Tottenham on 19 April 2021, just six days before the League Cup Final, and was replaced for the remainder of the season by former Tottenham player Ryan Mason as interim head coach. Tottenham lost the League Cup Final 1–0 to Manchester City and finished seventh in the league table, thereby qualifying for the inaugural 2021–22 UEFA Europa Conference League (UECL) but failing to qualify for either the Champions League or the Europa League for the first time since the 2009–10 season. On 30 June 2021, Nuno Espírito Santo was named the new manager of Tottenham, but he was replaced by Antonio Conte in early November after only four months in charge, the shortest tenure of any permanent manager in the club's history since Jacques Santini. Despite an early exit from the Conference League, Conte guided Spurs to fourth place and back to a Champions League spot at the end of the 2021–22 season, after two seasons out of that competition. Conte left on 26 March 2023 following his criticism of the players and the club; his assistant Cristian Stellini then served as acting head coach, followed once again by Ryan Mason.

## See also

- List of Tottenham Hotspur F.C. records and statistics
- List of Tottenham Hotspur F.C. players
- List of Tottenham Hotspur F.C. managers
739,672
Douglas Jardine
1,168,243,660
English cricketer
[ "1900 births", "1958 deaths", "Alumni of New College, Oxford", "British Army personnel of World War II", "Cricketers from Hertfordshire", "Cricketers from Mumbai", "Deaths from cancer in Switzerland", "Deaths from lung cancer", "England Test cricket captains", "England Test cricketers", "English cricketers", "English cricketers of 1919 to 1945", "English expatriate sportspeople in Switzerland", "English people of Scottish descent", "Free Foresters cricketers", "Gentlemen cricketers", "H. D. G. Leveson Gower's XI cricketers", "Harlequins cricketers", "L. H. Tennyson's XI cricket team", "Marylebone Cricket Club cricketers", "North v South cricketers", "Oxford University cricketers", "People educated at Winchester College", "People from Radlett", "Royal Berkshire Regiment officers", "Surrey cricket captains", "Surrey cricketers", "Wisden Cricketers of the Year" ]
Douglas Robert Jardine (23 October 1900 – 18 June 1958) was an English cricketer who played 22 Test matches for England, captaining the side in 15 of those matches between 1931 and 1934. A right-handed batsman, he is best known for captaining the English team during the 1932–33 Ashes tour of Australia. During that series, England employed "Bodyline" tactics against the Australian batsmen, headed by Donald Bradman, wherein bowlers pitched the ball short on the line of leg stump to rise towards the bodies of the batsmen in a manner that most contemporary players and critics viewed as intimidatory and physically dangerous. As captain, Jardine was the person responsible for the implementation of Bodyline. A controversial figure among cricketers, partially for what was perceived by some to be an arrogant and patrician manner, he was well known for his dislike of Australian players and crowds, and thus was unpopular in Australia, especially so after the Bodyline tour. However, many who played under his leadership regarded him as an excellent and dedicated captain. He was also famous in cricket circles for wearing a multi-coloured Harlequin cap. After establishing an early reputation as a prolific schoolboy batsman, Jardine played cricket for Winchester College, attended the University of Oxford, playing for its cricket team, and then played for Surrey County Cricket Club as an amateur. He developed a stubborn, defensive method of batting which was considered unusual for an amateur at the time, and he received occasional criticism for negative batting. Nonetheless, Jardine was selected in Test matches for the first time in 1928, and went on to play with some success in the Test series in Australia in 1928–29. Following this tour, his business commitments prevented him from playing as much cricket. However, in 1931, he was asked to captain England in a Test against New Zealand. Although there were some initial misgivings about his captaincy, Jardine led England in the next three cricket seasons and on two overseas tours, one of which was the Australian tour of 1932–33. Of his 15 Tests as captain, he won nine, drew five and lost only one. He retired from all first-class cricket in 1934 following a tour to India. Although Jardine was a qualified solicitor he did not work much in law, choosing instead to devote most of his working life to banking and, later on, journalism. He joined the Territorial Army in the Second World War and spent most of it posted in India. After the war, he worked as company secretary at a paper manufacturer and also returned to journalism. While on a business trip in 1957, he became ill with what proved to be lung cancer and died, aged 57, in 1958. ## Early life Douglas Jardine was born on 23 October 1900 in Bombay, British India, to Scottish parents, Malcolm Jardine—a former first-class cricketer who became a barrister—and Alison Moir. At the age of nine, he was sent to St Andrews in Scotland to stay with his mother's sister. He attended Horris Hill School, near Newbury, Berkshire, from May 1910. There, Jardine was moderately successful academically, and from 1912, he played cricket for the school first eleven, enjoying success as a bowler and as a batsman. He led the team in his final year, and the team were unbeaten under his captaincy. As a schoolboy, Jardine was influenced by the writing of former England captain C. B. Fry on batting technique, which contradicted the advice of his coach at Horris Hill. 
The coach disapproved of Jardine's batting methods, but Jardine did not back down and quoted a book by Fry to support his viewpoint. In 1914, Jardine entered Winchester College. At the time, life for pupils at Winchester was arduous and austere; discipline was harsh. Sport and exercise were vital parts of the school day. In Jardine's time, preparing the pupils for war was also important. According to Jardine's biographer, Christopher Douglas, the pupils were "taught to be honest, practical, impervious to physical pain, uncomplaining and civilised." All pupils were required to be academically competent and as such Jardine was able to get along satisfactorily without exhibiting academic brilliance; successful sportsmen, on the other hand, were revered. Jardine enjoyed a slightly better position than some pupils, already possessing a reputation as a very fine cricketer and excelling at other sports; he represented the school at football as a goalkeeper and rackets, and played Winchester College football. But it was at cricket that he particularly excelled. He was in the first eleven for three years from 1917 and received coaching from Harry Altham, Rockley Wilson and Schofield Haigh, the latter two of whom were distinguished cricketers. In 1919, his final year, Jardine came top of the school batting averages with 997 runs at an average of 66.46. He also became captain despite some doubts within the school about his ability to unify the team. Under Jardine, Winchester won their annual match against Eton College in 1919, a fixture in which Eton had usually held the upper hand. Jardine's batting (35 and 89 in the match) and captaincy were key factors in his side's first victory over Eton for 12 years. Years later, after his retirement from cricket, he named his 89 in that match as his personal favourite innings. Jardine went on to score 135 not out against Harrow School. Jardine's achievements in the season were widely reported in the local and national press. He played two representative matches, for the best schoolboy cricketers, at Lord's Cricket Ground, scored 44, 91, 57 and 55 and won favourable reviews in the press. Wisden, in 1928, described Jardine at this time as being obviously of a much higher standard than his contemporaries, particularly in defence and on side batting. However, he was criticised for being occasionally too cautious and not using all the batting strokes of which he was capable—his good batting technique gave the impression that he could easily score more quickly if he so desired. ## First-class career ### Oxford University Jardine entered New College, Oxford, in September 1919 at a time when the university was more crowded than usual due to the arrival of men whose entrance had been delayed due to the war. He took part in several sports, representing New College as goalkeeper in matches between the colleges, and being given a trial for the university football team, although he was not chosen. He continued to play rackets and began to play real tennis, making such progress and showing such promise that he went on to represent the university successfully and won his Blue. In cricket, Jardine came under the coaching of Tom Hayward who influenced his footwork and defence. Wisden commented in 1928 that Jardine had come with an excellent reputation, but did not quite achieve the success which was expected. His batting ability, particularly defensively, remained unquestioned. 
In the 1920 season, Jardine made his first-class debut, played eight first-class matches and scored two fifties. Playing mainly as an opening batsman, he won his Blue, appearing in the University Match against Cambridge but fell short of expectations, and continued to be criticised for over-caution with the bat. In all, he scored 217 runs at an average of 22.64. In the match for Oxford against Essex, he took six wickets for six runs in a bowling spell of 45 balls, bowling leg breaks, to have bowling figures of six for 28. It was the only occasion in his career where he took five or more wickets in an innings. Playing more confidently and fluently in 1921, Jardine began the season well, scoring three fifties in his first three first-class matches. Oxford then played against the Australian touring side which dominated the season. In the second innings, Jardine scored 96 not out to save the game but was unable to complete his century before the game ended. The innings was praised by those who saw it and the Australians were criticised in the press for not allowing Jardine to reach his hundred, particularly as the match had been reduced from three days to two at their request. They had tried to help him with some easy bowling but the situation was confusing as batsmen's scores were not displayed on the ground's scoreboard. Some critics have speculated that this incident led to Jardine's later hatred of Australians, although Christopher Douglas does not believe this. Cricket historian David Frith believes that Australian captain Warwick Armstrong may have addressed sarcastic comments to Jardine but Wisden blamed Jardine himself for batting too slowly to score a century. The Australian manager expressed regret that he missed out. This innings was the highest that had been played to that point in the season against the Australians, and only one higher score was made before the first Test. Consequently, Plum Warner, an influential figure who had recently captained Middlesex, suggested in The Cricketer magazine that Jardine should play for England in the first Test, which followed the Oxford match. Warner had been previously impressed by Jardine. The latter remained in Test contention for a short time, but was not selected. In the meantime, he scored his maiden first-class hundred against The Army and another followed against Sussex. Both innings were cautious, with defence his main priority for much of the innings, but he failed in the match against Cambridge. Jardine played for Surrey, for whom he was qualified, in the remainder of the season. He replaced the injured Jack Hobbs as an opening batsman before dropping down the order to number five on Hobbs' return. In a situation of great pressure, Jardine scored a vital 55 in an important match against reigning County Champions Middlesex, although Surrey lost the game. Jardine finished the season with 1,015 first-class runs at an average of 39.03, although critics argued that he was still yet to fulfill his full potential. Jardine missed most of the 1922 season owing to a serious knee injury; he played only four matches at a time when he was expected to make a big impression. He missed Oxford's match against Cambridge and was unable to play for Surrey at all that season. Even so, in 1922 he was selected by The Isis as one of its men of the year. After some problems with his troublesome knee, Jardine returned to cricket by May of the 1923 season. 
He was not given the Oxford captaincy in his final year, which has led to later speculation that his manner and unfriendliness was held against him. However, his persistent injury and the availability of other deserving candidates may have provided some of the explanation. Jardine gradually found his batting form, and contributed to Oxford's only win over Cambridge in the decade. During one innings of another match, he received criticism for using his pads to stop the ball from hitting the wickets: this was fully within the laws of the game but was considered controversial, being seen by critics to be against the spirit of the game. Christopher Douglas traces Jardine's hostility towards the press and critics to this incident. He also received criticism for his slow batting for Oxford, again being singled out due to his known ability to play attacking shots. Partly this was because Jardine held a responsible position, with the team often reliant on his personal success. The complaints against him were a manifestation of wider criticism of young amateur batting at the time for its supposed lack of verve and enterprise, as older commentators began to hark back to the "golden age" before the war. Jardine left Oxford in 1923 having scored a total of 1,381 runs and was awarded a fourth class degree in modern history. When Jardine went on to play for Surrey that season, and now in an already strong batting side, he played with more freedom. Batting at number five, he had to adapt his style depending on the match situation. He was successful, playing either long defensive innings or sacrificing his innings in an attempt to hit quick runs. His captain Percy Fender retained him in the role for the rest of the season. He scored his first century for Surrey against Yorkshire and was awarded his County Cap, making 916 runs at an average of 38.16 in the whole season. ### County cricketer Once Jardine left Oxford, he began to qualify as a solicitor while still playing for Surrey. He made steady progress over the next three seasons but was overshadowed by other amateur batsmen. His contemporaries at Oxford and Cambridge attracted more attention in the press, as did the next generation of amateur batsmen. He was appointed vice-captain to Fender for the 1924 season. Several professionals, such as Jack Hobbs, could have been made vice-captain, but Jardine was preferred as an amateur. In that season, Jardine was selected for the Gentlemen v Players match for the first time and came third in the Surrey averages. In all first-class matches, he scored 1,249 runs at an average of 40.29. In the following season, Jardine was less successful, scoring fewer runs at a lower average and with a highest score of 87 (1,020 runs at 30.90). Suggestions made in the press that Jardine should captain the Gentlemen with a view towards the future of the England Test team, were ignored. In the event, owing to an ankle injury sustained playing village cricket, he was unable to appear in the Gentlemen v Players match at Lord's. In 1926, Jardine had his most successful season to date, with 1,473 runs (average 46.03), although he was again overshadowed by other players and by the attention given to the Ashes series being played. Towards the end of the season, his batting became more attractive and his rate of scoring increased as he began to play more attacking shots. His assurance and judgement against all bowling, even international bowlers, increased and he scored 538 runs in his final ten innings. 
In 1927, Jardine achieved his highest average in a season, scoring 1,002 runs and averaging 91.09 in a very wet summer which led to difficult wickets to bat on. Wisden named him as one of its Cricketers of the Year, commenting that he had improved his style and footwork. That season, he only played 11 matches due to work commitments as a clerk with Barings Bank, for whom he had worked since qualifying as a solicitor. Despite his comparative lack of practice, he scored centuries in his first three matches and came top of the Surrey batting averages. He scored a century in the Gentlemen v Players match, which impressed influential observers at Lord's, and represented England in a trial match against The Rest. In this latter match, when Percy Chapman withdrew at the last minute, Jardine took over the captaincy, earning praise in the press for his performance. By this stage, he was considered a certainty to tour Australia the following winter. ### Test cricketer Jardine's batting performance in 1928 was similar to that from the previous season. He played 14 matches, scoring 1,133 runs at an average of 87.15. He was successful in high-profile matches, scoring 193 for Gentlemen at the Oval, where the crowd had booed his slow start (at one stage, he took half an hour to score two runs) but later cheered him as his last fifty runs were scored in half an hour. For the same team at Lord's, he scored 86 and 40. He captained The Rest against England in a Test trial and made the highest score in each innings, scoring 74 not out in the fourth innings to help his team to draw the game on a difficult pitch, against international bowlers Maurice Tate and Harold Larwood. Immediately after this match, Jardine made his Test debut against the West Indies who were touring England that season. This was West Indies' first ever Test match. The team possessed several fast bowlers who had enjoyed some success on the tour of England. Many batsmen only played them with difficulty, particularly on the occasional fast-paced pitch, but Jardine played them confidently. Jardine played in the first two Tests, both of which were won by England by an innings, but missed the third for reasons that were not revealed. He scored 22 on his debut, but was more successful in the second Test, scoring 83. During this innings, when he had scored 26, he accidentally hit his wicket when setting off for a run but was given not out. At the time, the laws of the game stated that a batsman was not out if he had completed his shot and was setting off for a run; the West Indian cricketer Learie Constantine believed that Jardine was only given not out because he told the umpire his shot was complete. Later, while he was batting with Tate, a player with whom he did not have a good relationship, Jardine was run out when Tate refused to go for a run. ### First tour to Australia Jardine was selected to tour Australia with the M.C.C. team in 1928–29 as part of a very strong batting side, playing in all five Test matches and scoring 341 runs at an average of 42.62. In all first-class matches, he scored 1,168 runs (average 64.88). He was also on the five-man selection committee for the tour, which chose teams to play in specific games but had not chosen the touring party. Wisden judged that he had been as great a success as had been expected and impressed everyone with the strength of his defensive shots and his play on the back foot. It said that he played some delightful innings. 
Percy Fender, covering the tour as a journalist, believed that Jardine never had the chance to play a normal innings in the Test, having to provide the stability to the batting, and often seeming to come out to bat in a crisis. Jardine was the centre of attention at the start of the tour. He began the tour with three consecutive hundreds and was seen as one of the main English threats. In his first hundred, the crowd engaged in some good-natured joking at Jardine's expense, but he was jeered by the crowd in his second hundred for batting too slowly. His third hundred was described by Bradman as one of the finest exhibitions of strokeplay that he had seen; Jardine accelerated after another slow start, during which he was again barracked, to play some excellent shots. The crowds took an increasing dislike to him, partially for his success with the bat, but mainly for his superior attitude and bearing, his awkward fielding, and particularly his choice of headwear. His first public action in South Australia was to take out the members of the South Australian team who had been to Oxford or Cambridge Universities. Then, he wore a Harlequin cap, given to people who played good cricket at Oxford. It was not unusual for Oxford and Cambridge cricketers to wear similar caps while batting, as both Jardine and M.C.C. captain Percy Chapman did so on this tour, although it was slightly unorthodox to wear them while fielding. However, this was neither understood nor acceptable to the Australian crowds. They quickly took exception to the importance he seemed to place on class distinction. Although Jardine may simply have worn the cap out of superstition, it conveyed a negative impression to the spectators, with his general demeanour drawing one comment of "Where's the butler to carry the bat for you?" Jardine's cap became a focus for criticism and mockery from the crowds throughout the tour. Nevertheless, Jack Fingleton later claimed that Jardine could still have brought the crowds onto his side by exchanging jokes or pleasantries with them. It is certain that Jardine by this stage had developed an intense dislike for Australian crowds. During his third century at the start of the tour, during a period of abuse from the spectators, he observed to a sympathetic Hunter Hendry that "All Australians are uneducated, and an unruly mob". After the innings, when Patsy Hendren said that the Australian crowds did not like Jardine, he replied "It's fucking mutual". Due to the large number of good close fielders in the side, Jardine did not field in the slips, his usual position for Surrey, but next to the crowd on the boundary. There, he was roundly abused and mocked, particularly when chasing the ball: he was not a good fielder on the boundary. In one of the Test matches, he spat towards the crowd while fielding on the boundary as he changed position for the final time. In the first Test, Jardine scored 35 and 65 not out. His first innings began with England in an uncertain position, having lost three wickets for 108 on a very good batting wicket. His innings led England to a stronger position. He played very cautiously, being troubled by Clarrie Grimmett and Bert Ironmonger, the Australian spinners. Jardine believed that Ironmonger threw the ball, and this bowler gave him considerable trouble throughout his career. Thanks to the bowling of Harold Larwood, England took a huge first innings lead. In his second innings, although he played well in his 65, Jardine was not under much pressure. 
He scored a large number of singles, giving his partners most of the bowling and building up the lead to the point where England achieved a massive victory by 675 runs. This victory surprised and troubled the Australian cricketing public. Jardine played a similar role in the second Test; in his only innings, he batted with Wally Hammond to retrieve a poor start for England as they won by eight wickets. Jardine scored 62 in the third Test, supporting Hammond, who made a double century. However, when Australia batted a second time, they built up a big lead and left England needing 332 to win on an exceptionally bad wicket which had been damaged by rain. Jack Hobbs and Herbert Sutcliffe, in one of their most famous partnerships, put on 105. Hobbs sent a message to the team that Jardine should be the next batsman to come in, even though he usually batted later on, as he was the batsman most likely to survive in the conditions. When Hobbs was dismissed, Jardine came in to bat. He survived, although finding batting exceptionally difficult, until the day's play ended. Percy Fender believed that Jardine was the only batsman in the side who could have coped with the difficult conditions. He went on to make 33 the next day, and England won by three wickets. During the team's brief visit to Tasmania, Jardine made his highest first-class score of 214. In the fourth Test, Jardine scored only one run in the first innings before he was given out leg before wicket (lbw) despite obviously hitting the ball. In the second innings, coming out to bat with the score 21 for two, Jardine scored 98 in a partnership of 262 with Hammond, which was then the highest partnership for the third wicket in all Test matches. The scoring was very slow, and the crowd protested throughout Jardine's innings, even though he scored faster than Hammond. He was out when Wisden believed he looked certain to reach a century. England went on to win the match by 12 runs. Jardine was not successful in the final Test, won by Australia. He was used as an opener, owing to an injury to Sutcliffe, and made just 19 and a first-ball duck. Once both of his innings were completed, on the fifth day of a match which lasted eight days, he left the match and set off across Australia to catch a boat to India for a holiday. It is not clear if this was planned or if he had simply had enough. Jardine never provided an explanation, either to the Australian press at the time or afterwards. Later, Jardine wrote about the Australian crowds, complaining about their involvement but praising their knowledge and judgement of the game and describing them as more informed than English crowds. He also later expressed reservations to Bob Wyatt about Percy Chapman, saying that he would have shot him had a gun been available. Jardine did not appear in first-class cricket in the 1929 season due to business commitments.

## England captain

### Appointment as captain

At the beginning of the 1930 season, Jardine was offered the vice-captaincy of Surrey. He was unable to accept owing to business commitments and played just nine matches for the season, scoring 402 runs at an average of 36.54 and managing one century and one fifty. He was never in the running for Test selection that season, although his presence may have been missed as the English batting was unreliable in the Tests. Christopher Douglas argues that had Jardine been playing regularly, he would have been made captain for the final Test, when Chapman was dropped in favour of the sounder batsman Bob Wyatt.
The sensation of the Test series was Donald Bradman, who dominated the English bowling to score 974 runs with unprecedented speed and certainty, making the English selectors realise that something had to be done to counter his skill. With Bradman at the fore, Australia regained the Ashes 2–1. Jardine played a full season of cricket in 1931. In June, he was appointed as captain for the Test against New Zealand (two more Tests were later added). The English selectors were searching for possible captains for the 1932–33 tour of Australia, with Bradman and Australia's strong batting line-up foremost in their minds. Christopher Douglas believes that, as Jardine was not a regular county captain, the selectors wanted to assess his leadership ability but had probably not settled on him as a final choice. He was also chosen as a dependable, proven batsman. While Percy Fender approved of his appointment, The Times correspondent believed that he was unproven and that others were more deserving of the leadership. Ian Peebles, writing 40 years later, claimed that Jardine's appointment was popular but that cricket administrators had misgivings. Alan Gibson believed that Jardine was chosen because the other candidates were either not worth their place in the side, too old, or had controversy attached to them. Furthermore, Jardine impressed the chairman of selectors, Pelham Warner, who stated that he was very effective in selection meetings through his knowledge of cricket history and went into great detail to choose the correct players; it seems that Warner was the driving force behind Jardine's appointment. In his first Test as captain, Jardine clashed with several players. Frank Woolley was unhappy with his captain's manner, feeling humiliated at his treatment in the field at one point. Jardine also rebuked Ian Peebles and Walter Robins, two young amateur bowlers, for their amusement over an incident in the match. The home team's fortunes were mixed, as New Zealand put up a very good fight in their first Test in England, and both sides could have won. The New Zealanders were so successful that a further two Tests were arranged. Jardine was criticised in the press for not instructing his batsmen to score quickly enough to win in the fourth innings, although this strategy was unlikely to succeed, and the match was drawn. England won the second Test by an innings and the third Test was drawn, sealing the series 1–0. Jardine had a top score of just 38 in the series, but he batted only four times and was not out in three of the innings. At the beginning of the following season, Wisden's editor believed that, as Jardine had failed to impress (unspecified) people with his captaincy, he was no longer a certainty to lead the side to Australia, and only Percy Chapman's lack of form prevented his reinstatement at Jardine's expense. As a batsman, Jardine was more impressive in Wisden's opinion, showing himself to be good in defence despite his lack of cricket in the past two seasons. A notable innings was his 104 for The Rest to prevent defeat against champion county Yorkshire. The opposition bowling, particularly from Bill Bowes, was short and hostile, but Jardine survived for over four hours. He scored 1,104 first-class runs for the season at an average of 64.94. At the beginning of the 1932 season, Jardine became captain of Surrey. There was much speculation that Fender had been replaced owing to disputes with the Surrey committee, but it was some time before his removal, and Jardine's appointment, were confirmed.
Fender was supportive of Jardine and happy to play under him. Jardine overcame a cautious beginning to develop a more aggressive captaincy style, and Surrey finished in their highest position in the championship for six years. England played one international match that season, India's first ever Test match, and Jardine was selected as captain. India possessed a very effective bowling attack on this tour, which surprised many teams, and England's batsmen struggled against them. Jardine, who had played a long innings against the tourists for M.C.C. earlier in the season, was the only English batsman to pass 30 in both innings. He scored 79 and 85 not out, and was praised for two excellent defensive innings in a difficult situation by Wisden and The Cricketer. During the match, Jardine again clashed with his team. He gave Bill Bowes and Bill Voce the very unusual instruction to bowl one full toss each over to take advantage of the batsmen's trouble seeing the ball against the crowd. The bowlers did not do so, and were later reprimanded by Jardine who told them to obey orders. Jardine himself went on to score 1,464 runs in the season at an average of 52.28. ### Planning for the 1932–33 tour A week after the Test, it was announced that Jardine would captain the M.C.C. team to Australia that winter, although he seemed to have had last minute doubts about accepting. Others were also concerned about whether he was the best choice. For example, Rockley Wilson is reputed to have said that with Jardine as captain, "We shall win the Ashes ... but we may well lose a Dominion". However, the selectors thought that a determined leader was needed to defeat the Australians and a more disciplined approach than that of Percy Chapman on the previous tour was needed. Jardine began to plan tactics from this point, discussing ideas with various people. He was aware that Bradman, Australia's star batsman and the main worry of the selectors, had occasionally shown vulnerability to pace bowling. During the final Test of the 1930 Ashes at the Oval, during Bradman's innings of 232, the wicket became difficult for a time following rain. Bradman was briefly seen to be uncomfortable facing deliveries that bounced higher than usual at a faster pace. Percy Fender was one of many cricketers who noticed, and he discussed this with Jardine in 1932. When Jardine later saw film footage of the Oval incident and noticed Bradman's discomfort, he shouted, "I've got it! He's yellow!" Further details that developed his plans came from letters Fender received from Australia in 1932 describing how Australian batsmen were increasingly moving across the stumps towards the off-side to play the ball on the on-side. Fender showed these letters to Jardine. It was also known in England that Bradman had shown some discomfort during the 1931–32 Australian season against pace bowling. Following Jardine's appointment, a meeting was arranged with Nottinghamshire captain Arthur Carr and his two fast bowlers Larwood and Voce at London's Piccadilly Hotel. Jardine explained his belief that Bradman was weak against bowling directed at leg stump and that if this line of attack could be maintained, it would restrict Bradman's scoring to one side of the field, giving the bowlers greater control of his scoring. Jardine asked Larwood and Voce if they could bowl accurately on leg stump and make the ball rise up into the body of the batsman. 
The bowlers agreed that they could, and that it might prove effective, but Jardine stressed that bowling accurately was vitally important, or Bradman would dominate the bowling. Larwood believed that Jardine saw Bradman as his main target and wished to attack him psychologically as well as in a cricketing sense. At the same time, other Australian batsmen were also discussed. Larwood and Voce practised the plan over the remainder of the 1932 season with mixed success. Jardine also visited Frank Foster who had toured Australia in 1911–12 to discuss field placings appropriate to Australian conditions. Foster had bowled leg theory on that tour with his fielders placed close in on the leg-side, as had George Hirst in 1903–04. During the second half of the season, the team to tour Australia was announced. The selection of four fast bowlers and a few medium pacers was very unusual at the time, and it was commented on by the hosts' media, including Bradman. The selection of Eddie Paynter, who did not have a strong record, to replace the ill Kumar Shri Duleepsinhji was very likely a choice of Jardine. He had a history of good performances against Yorkshire, and Jardine considered that a player's record against northern counties was a good indication of his potential at international level. ## Bodyline tour ### Beginning of the tour In Jardine's obituary, Wisden described this tour as "probably the most controversial tour in history. England won four of the five Tests, but it was the methods they employed rather than the results which caused so much discussion and acrimony." On the journey to Australia, by the boat Orontes, Jardine kept away from his team. He issued some instructions on their conduct, such as giving autographs or keeping out of the sun. He also began to have disagreements with Plum Warner, who was one of the two team managers along with Richard Palairet. He discussed tactics with Harold Larwood and other bowlers, spoke to Hedley Verity about his role in the team, and he may have met batsmen Wally Hammond and Herbert Sutcliffe. Some players reported that Jardine told them to hate the Australians in order to defeat them, while instructing them to refer to Bradman as "the little bastard." At this stage, he seems to have settled on leg theory, if not full Bodyline, as his main tactic. Once the team arrived in Australia, Jardine quickly alienated the press by refusing to give team details before a match and being uncooperative when interviewed by journalists. The press printed some negative stories as a result and the crowds barracked as they had done on his previous tour, which made him angry. Jardine still wore his Harlequin cap and began the tour well with 98 and 127 before the first Test. Once again, he clashed with paceman Bill Bowes, refusing to give his bowler the requested field placings in an early match. As a result, Bowes deliberately gave away easy runs in an attempt to get his way, but following a discussion, Bowes was converted to Jardine's tactics and ultimately to his ability as a captain. In a tour match, Jardine also instructed Hammond to attack the bowling of Chuck Fleetwood-Smith, whom he considered dangerous and thus did not want him to play in the Tests. Up until this point, there had been little unusual about the English bowling except the number of fast bowlers. Larwood and Voce were given a light workload in the early matches by Jardine. 
This changed in the match against an Australian XI, from which Jardine rested himself, where the bowlers first used the tactics that came to be known as Bodyline. Under the captaincy of Wyatt, the bowlers bowled short and around leg stump, with fielders positioned close by on the leg side to catch any deflections. Wyatt later claimed that this was not planned beforehand and that he simply reported to Jardine after the match what had happened. These tactics continued in the next match, and several players were hit. Many commentators criticised this style of bowling: although bowlers had previously used leg theory, bowling outside leg stump with a concentration of fielders on the leg side, combining these tactics with fast bowlers dropping the ball short was almost unprecedented. It was seen as dangerous and against the spirit of the game. In a letter, Jardine told Fender that his information about the Australian batting technique was correct and that it meant he was having to move more and more fielders onto the leg side. He said that "if this goes on I shall have to move the whole bloody lot to the leg side." Jardine increasingly came into disagreement with Warner over Bodyline as the tour progressed, but his tactics were successful in one respect: in six innings against the tourists ahead of the Tests, Bradman had scored only 103 runs, causing concern among the Australian public, who expected much more from him.

### Test matches

When the first Test began, Jardine persisted with Bodyline tactics, even though Bradman, the main target, did not play in the match. David Frith has pointed out that Bradman would have been watching and seeing the tactics that England were using. However, when Stan McCabe was scoring 187 not out, Jardine was briefly seen to be unsettled as runs came quickly, and he may not have been fully convinced that the tactics would be successful. England eventually won the match comfortably. In the second Test, Jardine completely misjudged the pitch and left out a specialist spinner when conditions later in the match favoured one. The match seemed to be going well when Bill Bowes unexpectedly bowled the returning Bradman first ball in the first innings; Jardine was so delighted that he clasped his hands above his head and performed a "war dance". This was an extremely unusual reaction in the 1930s, particularly from Jardine, who rarely showed any emotion while playing cricket. In the second innings, Bradman scored an unbeaten century which helped Australia to win the match and level the series at one match each. This made it seem to critics that Bodyline was not quite the threat that had been perceived, and Bradman's reputation, which had suffered slightly with his earlier failures, was restored. On the other hand, the pitch was slightly slower than was customary throughout the series, and Larwood was suffering from problems with his boots which reduced his effectiveness. Jardine had clashed with more of his team by this stage: he had argued with Gubby Allen at least twice about Allen's refusal to bowl Bodyline (although Allen did bowl bouncers and fielded in the "leg trap", the fielders who waited for catches close in on the leg side); and the Nawab of Pataudi had refused to field in the "leg trap", to which Jardine responded, "I see his highness is a conscientious objector", and subsequently allowed Pataudi to play little part in the tour. The teams went into the third Test with the series level; England won that match but the controversy nearly ended the tour.
Jardine, concerned by his poor run of batting form, had promoted himself to open the batting but was part of a drastic England collapse to 30 for four in the first innings. However, the trouble began when Bill Woodfull was struck on the chest by a Larwood delivery, drawing from Jardine the comment "Well bowled, Harold", aimed mainly at Bradman, who was also batting at the time. At the start of Larwood's next over, the fielders moved into the Bodyline positions for the next ball faced by Woodfull. Jardine wrote that Larwood had asked for the field to be moved, while Larwood said that it was Jardine's decision. The crowd became noisily angry as the ill feeling caused by the English bowling tactics spilled out, and Jardine later expressed regret that he had moved the fielders when he did. There was further anger later in the innings when Bert Oldfield suffered a fractured skull. At this point, several of the players feared that there might be a riot and that the crowd would jump onto the field to attack them; mounted police were deployed as a precaution, but the spectators remained behind the fences. Jardine then batted very slowly in an innings of 56, during which he was continuously barracked by the crowd. During the match, one of the Australian team called Jardine a "bastard", and Jardine went to the Australian dressing room to demand an apology. The Australian vice-captain Vic Richardson, who answered the door, turned to his team and asked, "OK, which of you bastards called this bastard a bastard?". Despite England's win, Wisden believed that it was probably the most unpleasant match ever played. However, it commended Jardine's courage, claimed that praise of his leadership was unanimous, and said that "above all he captained his team in this particular match like a genius". In the immediate aftermath, journalists in England and Australia took up viewpoints both for and against Jardine. The M.C.C. sent a telegram congratulating him on winning the match. Following the third Test, strongly worded cables passed between the Australian Board of Control and the M.C.C. at Lord's. The Australian Board accused the English team of unsportsmanlike tactics, stating that "Bodyline bowling has assumed such proportions as to menace the best interests of the game, making protection of the body the main consideration." The M.C.C. responded angrily to the accusations of unsporting conduct, played down the Australian claims about the danger of Bodyline and threatened to call off the tour. The series was becoming a major diplomatic incident by this stage, and many people saw Bodyline as damaging to an international relationship that needed to remain strong. Public reaction in both England and Australia was outrage directed at the other nation. The Governor of South Australia, Alexander Hore-Ruthven, who was in England at the time, expressed his concern to British Secretary of State for Dominion Affairs James Henry Thomas that the dispute could have a significant impact on trade between the two nations. The standoff was settled only when Australian Prime Minister Joseph Lyons met members of the Australian Board and outlined to them the severe economic hardships that could be caused in Australia if the British public boycotted Australian trade. Given this understanding, the Board withdrew the allegation of unsportsmanlike behaviour two days before the fourth Test, thus saving the tour. However, correspondence continued for almost a year.
Jardine was shaken by the events and by the hostile reactions that his team were receiving. Stories appeared in the press, possibly leaked by the disenchanted Nawab of Pataudi, about fights and arguments between the England players. Jardine offered to stop using Bodyline if the team did not support him, but after a private meeting (not attended by Jardine or either of the team managers) the players released a statement fully supporting Jardine and the Bodyline tactics. It was subsequently revealed that several of the players had private reservations, but they did not express them publicly at the time. Even so, Jardine would not have played in the fourth Test without the withdrawal of the unsportsmanlike accusation. Once the fourth Test got underway, England won the match to take the series. Partly prompted by Jardine, Eddie Paynter scored 83 after releasing himself from hospital. Jardine went on to make a painstaking 24, at one point facing 82 balls without scoring a single run. He was not proud of his batting performance, appearing shamefaced in front of Australian Test opener Jack Fingleton and describing his batting to Bill O'Reilly as being "like an old maid defending her virginity." England also won the final Test, which ended on 28 February, with a final clash taking place between Jardine and Larwood. After a long bowling spell, Larwood was furious when Jardine sent him in to bat as nightwatchman, but he went on to score 98 runs. Later, Larwood broke his foot while bowling in the second innings, but Jardine was not convinced that he was seriously injured and made him stay on the field until Bradman was out. Larwood, partly through this injury and partly through political repercussions from this series, never played another Test. Also in this match, Jardine enraged Harry Alexander by asking him not to run on the pitch as he was damaging it and giving his side an advantage. Alexander proceeded to bowl hostile bouncers at Jardine, who was struck painfully, to the delight of the crowd. While Jardine won the series as captain, he contributed just 199 runs at an average of 22.11 in the Tests, and 628 runs (average 36.94) in all first-class cricket in Australia. Owing to rheumatism, Jardine played only in the first Test of the short series which followed in New Zealand. All the players enjoyed the short tour, although rain ruined the cricket, and Jardine was observed to show signs of paranoia towards all things Australian. Pelham Warner, although he later stated that he disapproved of Bodyline bowling, praised Jardine's captaincy on the tour and believed that he was cruelly treated by the Australian crowds. He further believed that Jardine was convinced that the tactics were legitimate.

### Aftermath and 1933 season

Controversy over Bodyline continued throughout the following summer. Jardine himself contributed his opinion in a book, In Quest of the Ashes, a first-hand account of the Bodyline tour. He defended his tactics and heavily criticised the Australian barrackers, to the extent of suggesting that fixtures between England and Australia should be halted until this problem was solved. While arguments continued to rage in print and discussion, even at government level, Jardine received a hero's welcome on his return to England, making several public appearances. Despite his fears that the M.C.C. might sack him in the face of criticism, he was appointed as England captain for the series against the West Indies in 1933.
He continued to captain Surrey during his infrequent first-class appearances that summer, although business commitments prevented him from playing a full season. He was cheered by the crowd or given a standing ovation when he came out to bat as M.C.C. captain against the West Indians in May, at Sheffield for Surrey against Yorkshire, and in the first and second Test matches. In all first-class cricket that season, Jardine scored 779 runs at an average of 51.93, including three hundreds. One of these centuries came in the second Test (Jardine missed the third with an injury that ended his season). Some bowlers had experimented with Bodyline during the season, and the West Indian team, 1–0 down in the series and frustrated by the lack of pace in the pitches, decided to experiment with the tactic. Facing a good West Indies total, England suffered a batting collapse, at one point falling to 134 for four. With Les Ames in difficulty against the short-pitched bowling, Jardine said, "You get yourself down this end, Les. I'll take care of this bloody nonsense." He went right back to the bouncers, standing on tiptoe, and stopped them with a dead bat, sometimes playing the ball one-handed for more control. Wisden described how he never flinched despite facing the greatest amount of Bodyline. It also believed that he played it "probably better than any other man in the world was capable of doing." He batted for nearly five hours, scoring 127, his only Test century. England then retaliated by bowling Bodyline in the West Indies' second innings, but the slow pitch meant that the match was drawn. However, this performance played a large part in turning English opinion against Bodyline. For the first time, The Times used the word "Bodyline" without inverted commas or the qualification "so-called". Wisden said that "most of those watching it for the first time must have come to the conclusion that, while strictly within the law, it was not nice."

### Retirement

During the 1933 season, Jardine was appointed as captain for the M.C.C. tour of India that winter, which would feature the hosts' first Tests at home. This continued support for Jardine in the face of growing unhappiness with Bodyline bowling came with some reservations, as the President and Secretary of the M.C.C. met Jardine for discussions prior to his appointment. This was probably about the need for diplomacy and tact on what might prove to be a sensitive tour. With only two players from the Bodyline tour, Jardine and Verity, taking part, it was not a full-strength side, but it won the Test series 2–0. India were weaker than expected and lacked a large group of quality players. Jardine nevertheless won praise from Wisden for his captaincy and his batting. He approached the matches with a very competitive spirit, seeking to gain every advantage with his tactics and research. At the same time, he was far more willing to take up speaking engagements than on the Bodyline tour, showed an appreciation and regard for Indian crowds which he had never extended to Australia, and played the diplomatic role that was usually expected of an M.C.C. captain at the time. He often spoke of his affection for India, describing it as the land of his birth, and seemed relaxed and happy on this tour. Jardine contributed three fifties in four innings in the series, scoring 221 runs at an average of 73.66. He scored 60, 61 and 65 before his final Test innings ended at 35 not out.
Jardine scored 831 first-class runs on the Indian leg of the tour—he played one match in Ceylon (now Sri Lanka)—averaging 55.40. Although Jardine enjoyed the tour, there were still clashes evident. There was an argument with the Viceroy over Jardine selecting the Maharaja of Patiala to play for the M.C.C. in one match; in a subsequent match, Jardine complained that the pitch was rolled for too long. He also clashed, later on, with the umpire Frank Tarrant, initially due to suspicion over the number of lbw decisions given against the M.C.C., but also because Tarrant had warned him against using Bodyline and was employed by Indian princes. Jardine threatened to stop him umpiring and sent a telegram to Lord's, with the result that Tarrant, having officiated the first two Tests, was not used in the third. For much of the time, Jardine used different tactics to those employed in Australia. Slow bowling, particularly that of Hedley Verity, played a key part in the bowling attack. At times, the faster bowlers Nobby Clark and Stan Nichols bowled Bodyline, resulting in several injuries. In this case, the Indian bowlers Mohammad Nissar and Amar Singh retaliated with Bodyline bowling of their own. As the tour went on, there was discussion at a high level over Jardine's future. The M.C.C. authorities had realised that Bodyline was dangerous and should not be continued, but some figures such as Lord Hawke did not want to let Jardine down. Australians saw him as more of a problem; the likes of Alexander Hore-Ruthven wanted guarantees that Jardine would not use Bodyline and even that he not play. Plum Warner also believed that Jardine should no longer captain. Jardine himself saved the English selectors from any possible dilemma. In March 1934, he first told Surrey that he would be unable to play regularly any more and he resigned as captain. Then in an announcement in the Evening Standard, he stated that "I have neither the intention nor the desire to play cricket against Australia this summer." It is unclear whether this was due to the pressure over Bodyline, over assurances that the M.C.C. may have asked him to give or simply due to financial worries. This decision effectively ended his first-class career. He never played another Test and played only two more first-class matches in England, in 1937 and 1948, and one in India in 1943–44. Jardine played in 22 Test matches for England, scoring 1,296 runs at an average of 48.00. In his first-class cricket career, he played 262 matches, scoring 14,848 runs at an average of 46.83. His occasional bowling brought him 48 wickets at an average of 31.10. ## Style and personality ### Batting Jardine was seen as having a classical technique. While batting, he stood very straight and side on to the bowler. His off-driving was powerful, his defence was excellent, and he was superb at judging the line of the ball and letting it pass by if it was going to miss his wickets. His on-side play was also excellent, being able to place the ball between fielders for easy runs. Christopher Douglas described Jardine as "the epitome of the old-fashioned amateur". However, he also comments that his approach to batting was like that of a professional and that his back-foot batting was of a quality that few amateurs could manage. In 1928, Wisden's correspondent described Jardine as the most secure amateur batsman of the time, and identified his greatest strength as his defence and his "mental gifts." 
He played very straight and hit the ball hard in defence, but could not play all the strokes, particularly on the off side. R. C. Robertson-Glasgow believed that Jardine had modelled himself on C. B. Fry. He also noted that Jardine displayed good concentration, a strong desire to improve his batting and a fighting spirit that brought out his best in a crisis. He also said that Jardine could play every recognised cricket shot, but would not do so in a match and Robertson-Glasgow believed it was Jardine's one weakness as a batsman. The more important the occasion, the more defensive and restricted Jardine's batting became: "In general, as the task grew greater, the strokes grew fewer." Christopher Douglas argues that Jardine liked to make his runs when his side was in difficulty and enjoyed being tested; his approach would often lead his team to recovery from an unfavourable situation. Douglas comments that Jardine held his place in the England side despite strong competition from other batsmen. His defensive technique rescued England from weak positions in around a dozen innings and only played in two losses with England (which were his two least successful games with the bat). He also excelled in the main Gentlemen v Players fixture at Lord's, making a good score in each of his appearances in this match. Jack Hobbs classed him as a great batsman and believed that he was under-rated by his contemporaries. Wisden believed that Jardine's effective batting technique meant that fast bowlers troubled him less than other batsmen. He did have difficulties with a few bowlers. Alec Kennedy, a medium paced inswing bowler, took Jardine's wicket eleven times, eight of these occasions before the batsman had scored 20 runs. Kennedy found that Jardine had slightly slow footwork, often bowling him or trapping him lbw. Bert Ironmonger also troubled Jardine, taking his wicket in five of the eleven Test innings in which they faced each other. Jardine displayed a slight weakness against Australian slow bowlers, not moving his feet well enough against them. In 16 Test innings in Australia, he was out to slow bowlers ten times, but he rarely experienced similar difficulties against English spinners. One other bowler to cause Jardine problems was the Australian paceman Tim Wall, who took his wicket five times on the nine occasions he bowled to him. ### Captaincy As a captain, Jardine inspired great loyalty in his players, even if they did not approve of his tactics. Christopher Douglas judges that Jardine did very well to keep the team united and loyal on the Bodyline tour. He points out that team spirit was always excellent and the players showed great determination and resolve. Jardine particularly impressed Yorkshiremen who played under him, as they believed he thought about cricket in a similar way to their county colleagues. He became close to Herbert Sutcliffe during the Bodyline tour, even though Sutcliffe was sceptical about Jardine on the previous Australian tour in 1928–29. Hedley Verity was impressed by Jardine's tactical understanding and named his younger son Douglas after the captain. Bill Bowes expressed approval of his leadership after initial misgivings, and went on to call him England's greatest captain. Nevertheless, some players such as Arthur Mitchell who played under Jardine believed he was intolerant and unsupportive of players of lesser talent, expecting everyone to perform at world-class standards. 
Jardine insisted on strict discipline from his players but in return he went to great lengths to look after them, such as organising dental treatment or providing champagne for his tired bowlers. Critics praised his skill in field placing, although his frequent changes when the batsmen were on top were sometimes interpreted as panic. He also displayed great physical courage, such as when he was struck by a ball hard enough to draw blood on the Bodyline tour, but refused to show pain before reaching the dressing room. On the same tour, he instructed his men not to be friendly or to socialise with the Australian players; Gubby Allen even claimed that Jardine instructed the team to hate the Australians. Robertson-Glasgow wrote that Jardine made thorough preparation for games in which he was captain, studying individual batsmen at great length to find weaknesses. He had very clear plans, judged the strengths and weaknesses of his teams and knew how to get the best out of individual players. However, Robertson-Glasgow considered it a grave misjudgement to make Jardine captain of England, particularly given his known antipathy towards Australia. Pelham Warner described how Jardine "was a master of tactics and strategy, and was especially adept in managing fast bowlers and thereby preserving their energy. He possessed a great capacity for taking pains, which, it has been said, is the mark of a genius ... As a field tactician and selector of teams he was, I consider, surpassed by no one and equalled by few, if any." Laurence Le Quesne argues that one of Jardine's greatest talents, and at the same time one of his greatest weaknesses, was his ability to formulate a winning strategy without consideration of wider contexts such as the social aspect of the game. On the Bodyline tour, he ignored the diplomacy required of an M.C.C. delegation. Instead, according to Le Quesne, he set out to win the Tests and settle personal scores with the Australians. Jardine was personally incapable of reacting to the crowds or responding to the controversy in a way that would have eased tensions, and so was not a good choice as captain given what the selectors already knew of him. Nevertheless, Le Quesne believed that when trouble arose, Jardine conducted himself with "great moral courage and an impressive degree of dignity and restraint." In his Wisden obituary, Jardine was described as one of England's best captains, while Jack Hobbs rated him the second-best captain after Percy Fender. Warner also said that he was a fine captain on and off the field, and in dealing with administrators. In fact, he stated that "If ever there was a cricket match between England and the rest of the world and the fate of England depended upon its result, I would pick Jardine as England captain every time."

### Personality

Jardine divided opinion among those with whom he played. He could be charming and witty or ruthless and harsh, while many people who knew him believed him to be innately shy. David Frith describes him as a complex figure who could change moods quickly. Although he could be friendly off the field, he became hostile and determined once he stepped onto it. At his memorial service, he was described by Hubert Ashton as being "provocative, austere, brusque, shy, humble, thoughtful, kindly, proud, sensitive, single-minded and possessed of immense moral and physical courage," and Frith argues that these varied qualities are easily proven by what was said about him.
Harold Larwood maintained great respect for Jardine, treasuring a gift his captain gave him after the Bodyline tour and believing him to be a great man. Jardine showed affection for Larwood in return even after both of their retirements; he expressed his concern for the way Larwood was treated, hosted a lunch for the former fast bowler shortly before Larwood emigrated to Australia, and met him there in 1954. On the other hand, Donald Bradman would never speak to journalists about Bodyline or Jardine, and refused to give a tribute when Jardine died in 1958. Jack Fingleton admitted that he had liked Jardine and stated that Jardine and Larwood had each done their job on the Bodyline tour, and expressed regret at the way both left cricket in acrimonious circumstances. Fingleton also described Jardine as an aloof individual who preferred to take his time in judging a person before befriending them, a quality that caused problems in Australia. Bill O'Reilly stated that he disliked Jardine at the time of Bodyline, but on meeting him later found him agreeable and even charming. Alan Gibson said that Jardine had "irony rather than humour". He sent Herbert Sutcliffe an umbrella as a joke on the day of his benefit match, when rain would have ruined the match and cost Sutcliffe a considerable amount of money. Many people who knew Jardine later in his life described him as having a sense of humour. Robertson-Glasgow noted that while he could curse very eloquently, Jardine displayed "dislike of waste in material or words." He also commented that "if he has sometimes been a fierce enemy, he has also been a wonderful friend."

## Later life

### Career after cricket

Shortly before the tour of India in 1933–34, Jardine became engaged, and on 14 September 1934 he married Irene "Isla" Margaret Peat in London. She had met Jardine at shooting parties at her father's Norfolk home. According to Gerald Howat, Jardine's marriage was the probable reason for his giving up playing first-class cricket. Jardine's father-in-law was keen for him to pursue his law career but he instead continued as a bank clerk and began to work as a journalist. He reported on the 1934 Ashes for the Evening Standard. His writing for the press, and in a follow-up book on the series, was critical of selectors but less so of the players. In 1936, he wrote Cricket: how to succeed, an instruction book for the National Union of Teachers. There was a possibility of his going to Australia as a journalist to cover the M.C.C. tour of 1936–37, to the dismay of Hore-Ruthven, but nothing came of this. With alterations to the laws in 1935, which changed the lbw rule and prevented Bodyline bowling, Jardine became increasingly disillusioned with top-level cricket. He had grown uncomfortable with the nationalism stirred up by Tests, the greed of clubs and the large public following of individual players, particularly Bradman. At the same time, Jardine seemed to be ostracised by cricket writers and commentators, who simply ignored him. For example, Wisden made no mention of his retirement. Christopher Douglas believes that Jardine was used as a scapegoat for Bodyline once the M.C.C. stopped supporting the tactic and that a stigma was attached to him for the rest of his life and beyond. Although Jardine had retired from regular first-class competition, he continued to play club cricket. Jardine and his wife initially lived in Kensington but moved to Reading after the birth of their first child, daughter Fianach.
A second daughter, Marion, followed, but the family suffered from financial worries. Jardine, as well as working in journalism, earned money from playing bridge. The family also tried unsuccessfully to engage in market gardening. To make more money, Jardine became a salesman with Cable & Wireless before working for a coal mining company in the late 1930s. In 1939, he returned to cricket journalism and, according to Christopher Douglas, achieved his highest standard as a writer.

### Career in the Second World War

Jardine joined the Territorial Army in August 1939. Once World War II began, he was commissioned into the Royal Berkshire Regiment and went with the British Expeditionary Force to France. He served at Dunkirk, where he was fortunate to escape but suffered some injuries. After serving as staff captain at St. Albans, he was posted to India for the remainder of the war. He served in Quetta, and then in Simla as a major in the Central Provisions Directorate. He became fluent in the Hindustani language and, although friendly, never formed close relationships with other officers. He gave lectures and played some cricket while in India. He left the army in 1945 only to find his job with the coal mining company was no longer available. In the meantime, his wife had moved to Somerset. In 1940, she gave birth to a son, Euan, who had many medical problems, and in 1943 she bore a third daughter, Iona. The pressure of running the household and caring for Euan led Isla to have a nervous breakdown after Iona's birth. When Jardine returned from the war, the family moved to Radlett to be closer to London. Isla recovered and Jardine found a job with paper manufacturers Wiggins Teape. In 1946, Jardine was chosen to play for Old England in a popular and successful fund-raising match against Surrey. He displayed much of his old batting skill but did not show much involvement with his team-mates. By 1948, Jardine was more accepted in the cricket world. This was partly because the English perceived the short-pitched fast bowling of the Australian pair Ray Lindwall and Keith Miller as hostile. England's poor performances in the 1946–47 and 1948 Ashes also caused writers to remember Jardine more fondly as an icon of past English success.

### Final years

In 1953, Jardine resumed journalism for the Ashes series and expressed a high opinion of Len Hutton's captaincy. He also did some broadcasting and wrote short stories to supplement his income; Isla was in poor health and her medical care was expensive. In the same year, he became the first President of the Umpires Association, while from 1955 to 1957 he was President of the Oxford University Cricket Club. Also in 1953, he travelled, with some trepidation, as a board member of the Scottish Australian Company to inspect some land in Australia. While there, he struck up a friendship with Fingleton and was surprised to be well received in the country, in his own words, as "an old so-and-so who got away with it." In 1957, Jardine travelled to Rhodesia, again to inspect some land, with his daughter Marion. While there, he became ill with tick fever. He showed no improvement upon his return to England and further tests revealed that he had advanced lung cancer. After some treatment, he travelled with his wife to a clinic in Switzerland but it was discovered that the cancer had spread and was incurable. He died in Switzerland on 18 June 1958 and his ashes were scattered at the summit of Cross Craigs overlooking Loch Rannoch in Perthshire, Scotland.
His family had enquired about having his ashes dispersed at Lord's, but this honour was restricted to war dead. When he died, his estate was valued at just over £71,000, which would have been worth around £1,765,000 in 2022 terms.

## Legacy

Jardine is inextricably associated with Bodyline. John Arlott wrote in 1989 that "It is no exaggeration to say that, among Australians, Douglas Jardine is probably the most disliked of cricketers." In the view of Christopher Douglas, his name "stands for the legendary British qualities of cool-headed determination, implacable resolve, patrician disdain for crowds and critics alike – if you're English that is. To Australians the name is synonymous with the legendary British qualities of snobbishness, cynicism and downright Pommie arrogance." He also argues that Bodyline, which was legal at the time, was a necessary step to overcome the unfair advantage which batsmen of the time enjoyed. After the Bodyline tour, according to cricket writer Gideon Haigh, Jardine was seen as "the most reviled man in sport." This perception faded from the 1950s onwards, and in more recent times, Jardine has been viewed more sympathetically. In 2002, the England captain Nasser Hussain was compared to Jardine as a compliment when he displayed ruthlessness against the opposition.
14,466,100
République-class battleship
1,142,784,091
Pre-dreadnought French battleships
[ "Battleship classes", "République-class battleships", "Ship classes of the French Navy" ]
The République class consisted of a pair of pre-dreadnought battleships—République, the lead ship, and Patrie—built for the French Navy in the early 1900s. They were ordered as part of a naval expansion program directed at countering German warship construction authorized by the German Naval Law of 1898. The French program called for six new battleships; the last four became the very similar Liberté class. République and Patrie, designed by Louis-Émile Bertin, were a significant improvement over previous French battleships. They carried a similar offensive armament of four 305 mm (12 in) guns and eighteen 164 mm (6.5 in) guns, though most of the 164 mm guns were now mounted in more flexible gun turrets rather than in casemates. They also had a much more effective armor protection arrangement that remedied the tendency of earlier battleships to lose stability from relatively minor damage. Both ships entered service with the fleet in 1907, after the revolutionary British battleship HMS Dreadnought had been commissioned into the Royal Navy and made all existing battleships obsolescent. They nevertheless served as front-line units in the French fleet for most of their careers, well into World War I. Their peacetime careers were largely uneventful, consisting of a normal routine of training exercises, visits to French and foreign ports, and naval reviews for French politicians and foreign dignitaries. At the outbreak of war in August 1914, the ships escorted troop ship convoys carrying units of the French Army from French North Africa to France before joining the rest of the main fleet to seek battle with the Austro-Hungarian Navy; this resulted in the minor Battle of Antivari in August, where the French battle fleet caught and sank the Austro-Hungarian cruiser SMS Zenta. The fleet thereafter patrolled the southern end of the Adriatic Sea until repeated attacks from Austro-Hungarian U-boats forced it to withdraw. Patrie was transferred to the Gallipoli campaign in May 1915 and République joined her there in January 1916 to cover the Allied evacuation from the Gallipoli Peninsula. The two ships thereafter became involved in Greece, where they assisted a coup against the neutral but pro-German government that ultimately led to Greece's entry into the war on the side of the Allies. République and Patrie were then sent to Mudros, but they saw no further action during the war. In January 1918, République had two of her 305 mm guns removed for use by the army and was converted into a training ship. After the war ended, Patrie was similarly converted for training purposes. République was decommissioned in 1921 and broken up in Italy, but Patrie lingered on in her training role until 1936, when she was decommissioned. She was sold for scrap the following year.

## Design

République ("Republic") and Patrie ("Fatherland") were authorized by the Fleet Law of 1900, which called for a total of six battleships. The law was a reaction to the German 1898 Naval Law, which marked a significant expansion of the German fleet under Admiral Alfred von Tirpitz. Since Germany was France's primary enemy, this considerable strengthening of the German fleet pressured the French parliament to authorize a similar program. Louis-Émile Bertin, who had become the Directeur central des constructions navales (DCCN—Central Director of Naval Construction) in 1896, was responsible for preparing the new design.
Bertin had campaigned through the early 1890s for revisions to the battleships then being built, as he correctly determined that their shallow belt armor would render them vulnerable to flooding from hits above the belt that could dangerously destabilize the vessels. Upon becoming the DCCN, Bertin was in a position to advance his ideas on battleship construction. In November 1897, he called for a battleship displacing 13,600 metric tons (13,400 long tons), a significant increase in size over earlier battleships, which would allow him to incorporate the more comprehensive armor layout he deemed necessary to protect against contemporary armor-piercing shells. The new ship would be protected by a tall belt that covered most of the length of the hull topped with a flat armored deck; combined, these created a large armored box which was highly subdivided with watertight compartments to reduce the risk of uncontrollable flooding. Design work on the ship continued for the next two years as the staff worked out various particulars. The staff submitted a revised proposal on 20 April 1898, with the displacement now increased to 15,000 t (14,800 long tons), which was on par with contemporary British designs. To ensure passage through the Suez Canal, draft was limited to 8.4 m (28 ft). The staff specified the standard main armament of four 305 mm (12 in) guns in two twin-gun turrets. The naval command approved the submission, but requested alterations to the design, particularly to the arrangement of the secondary battery layout. These proved difficult to incorporate, as the requested changes increased top weight, which necessitated reductions in armor thicknesses to keep the ship from becoming too top-heavy. The navy refused to allow the reductions, so further rearrangements were considered. On 23 December, the designers evaluated a pair of proposals for the secondary gun turrets from Schneider-Creusot and the government-run Direction de l'artillerie (Artillery Directorate), and that from the latter was adopted for the new ship. These were new two-gun turrets that allowed for more secondary weapons to be carried in turrets, which were more flexible mounts than traditional casemates. Another meeting on 28 April 1899 settled on the final characteristics of the design, and on 29 May, Bertin was directed to alter the design to conform to the adopted specifications. Final design work took another two months, and Bertin submitted the finalized version on 8 August. After nearly a year of inaction, Jean Marie Antoine de Lanessan, the Minister of the Navy approved the design on 10 July 1900, and on 9 December parliament approved the 1900 Fleet Law that authorized a total of six ships. The French originally planned to build six vessels of the class, which is sometimes referred to as the Patrie class, but developments abroad, particularly the construction of the British King Edward VII-class battleships, led to a re-design of the last four members of the class. Foreign battleships began to carry a heavy secondary battery, such as the 9.2 in (234 mm) guns of the King Edward VIIs, which prompted an increase in French secondary batteries from 164.7 to 194 mm (6.48 to 7.64 in), producing the Liberté class, though these are sometimes considered to be a sub-class of the République class rather than a distinct class of their own. 
Unfortunately for République and Patrie, they entered service shortly after the revolutionary all-big-gun battleship HMS Dreadnought entered service with the Royal Navy, rendering pre-dreadnoughts like them obsolescent. ### General characteristics The ships were 131 m (429 ft 9 in) long at the waterline, 133.8 m (439 ft) long between perpendiculars, and 135.25 m (443 ft 9 in) long overall. They had a beam of 24.25 m (79 ft 7 in) at the waterline and an average draft of 8.2 m (26 ft 11 in). The République-class ships had a designed displacement of 14,870 metric tons (14,640 long tons), though in service République displaced 14,605 metric tons (14,374 long tons) at full load, and Patrie displaced slightly more, 14,900 metric tons (14,660 long tons) at full load. The ships' hulls were modelled on the Gloire-class cruisers, which Bertin had also designed. The hulls were divided into 15 watertight compartments below the lower armor deck. Bilge keels were fitted to improve their stability. République and Patrie were built with a tall forecastle deck that extended all the way to the mainmast. République and Patrie retained a small fighting mast for the foremast, but had a lighter pole mast for the mainmast. The forward superstructure consisted of a four-deck structure erected around the forward mast and the conning tower. The charthouse, commander's quarters, and bridge were located here. In service, the arrangement proved to have several problems; the conning tower was too small to accommodate the crew, and the bridge wings obstructed views aft, which forced the commander to leave the safety of the armored conning tower to see all around the ship. In 1912–1913, the wings were removed to reduce the problem. Similar problems caused difficulties in the aft superstructure as well, particularly with the rear fire control system. They had a crew of 32 officers and 710 enlisted men, though while serving as a flagship, their crews were increased to 44 officers and 765 enlisted men to include an admiral's staff. Each battleship carried eighteen smaller boats, including pinnaces, cutters, dinghies, whalers, and punts. As a flagship, these boats were augmented with an admiral's gig, another cutter, and three more whalers. As completed, the ships wore the standard paint scheme of the French fleet: green for the hull below the waterline and black above, and buff for the superstructure. This scheme was replaced in 1908 with a medium blue-gray that replaced the black and buff, while the green hull paint was eventually replaced with dark red. ### Machinery The ships were powered by three vertical triple-expansion steam engines with twenty-four Niclausse boilers. République's engines were four-cylinder models, while Patrie had three-cylinder machinery. The boilers were divided into four boiler rooms, the forward three trunked into two funnels and the aft room ducted into the rear funnel. The engines were located amidships in separate watertight compartments, between the forward group of three boiler rooms and the aft one. Each engine drove a bronze, three-bladed screw; the centerline propeller was 4.85 m (15 ft 11 in) in diameter for both ships, and République had 4.8 m (15 ft 9 in) outer screws while Patrie had 5 m (16 ft 5 in) screws. The ships were equipped with six electric generators; two 500-amp generators were used to power the main battery turrets and ammunition hoists and four 800-amp generators provided power for the rest of the ships' systems. 
The propulsion system was rated at 17,500 metric horsepower (17,260 ihp) and provided a top speed of 18 knots (33 km/h; 21 mph) as designed. On speed trials shortly after entering service, both vessels handily exceeded these figures, République reaching 19.15 knots (35.47 km/h; 22.04 mph) from 19,898 metric horsepower (19,626 ihp) and Patrie making 19.13 knots (35.43 km/h; 22.01 mph) from 18,107 metric horsepower (17,859 ihp). Coal storage amounted to 900 t (890 long tons) normally and up to 1,800 t (1,800 long tons) at full load. At an economical cruising speed of 10 kn (19 km/h; 12 mph), the ships could steam for 8,400 nautical miles (15,600 km; 9,700 mi). ### Armament The main battery for the République-class ships consisted of four Canon de 305 mm Modèle 1893/96 guns mounted in two twin-gun turrets, one forward and one aft. These guns fired a 350-kilogram (770 lb) shell at a muzzle velocity of 865 meters per second (2,840 ft/s). At their maximum elevation of 12 degrees, the guns had a range of 12,500 m (13,700 yd). Their rate of fire was one round per minute. Both the turrets and the guns were electrically operated; both guns were typically elevated together, but they could be decoupled and operated independently if the need arose. The guns had to be depressed to a fixed loading position, −5 degrees, between shots. Ready ammunition storage amounted to eight rounds per turret. Though earlier French battleships had carried a mix of several types of shells, including armor-piercing (APC), semi-armor-piercing (SAPC), cast iron, high-explosive, and shrapnel shells, République and Patrie standardized on a load-out of just APC and SAPC shells. In peacetime, each gun was supplied with 65 shells, for a total of 260 per ship, of which 104 were APC and the remaining 156 were SAPC. The wartime supply was three times that, at 780 shells in total. The secondary battery consisted of eighteen Canon de 164 mm Modèle 1893 guns; twelve were mounted in twin wing turrets and six in casemates in the hull. The turret guns had a maximum range of 10,800 m (11,800 yd) while the casemate guns could engage targets out to 9,000 m (9,800 yd). They were supplied with APC and SAPC ammunition, weighing 54.9 kg (121 lb) and 52.3 kg (115 lb), respectively, which was fired at a muzzle velocity of 900 m/s (3,000 ft/s). Their rate of fire was three rounds per minute. As with the main battery turrets, the secondary turrets were electrically operated, though elevation was done by hand. Unlike the main battery guns, they could be loaded at any angle. The casemate guns were entirely hand-operated. Though designed with a tertiary battery of twenty-four 47 mm (1.9 in) guns for defense against torpedo boats, during construction it had become clear that the gun was no longer adequate for use against the latest torpedo boats. Accordingly, on 22 August 1905, the navy ordered that sixteen of those guns, all of which were to be mounted in the hull, be replaced with thirteen 65 mm (2.6 in) Modèle 1902 guns, which had a rate of fire of 15 shots per minute and a maximum range of 8,000 m (8,700 yd). The remaining eight 47 mm Modèle 1902 guns, which were located in the foremast fighting top and in the forward and aft superstructure, were retained. These guns had the same rate of fire as the 65 mm guns, but their range was less, at 6,000 m (6,600 yd). They also fired a significantly lighter shell, 2 kg (4.4 lb), compared to the 4.17 kg (9.2 lb) shell of the larger gun. 
Ammunition stowage amounted to 450 rounds per gun for the 65 mm weapons and 550 shells per gun for the 47 mm guns. The ships were also armed with two 450 mm (17.7 in) torpedo tubes submerged in the hull, abreast the forward 164.7 mm gun turrets. They were arranged at a fixed angle, 19 degrees forward of the beam. Each tube was supplied with three Modèle 1904 torpedoes, which had a range of 1,000 m (1,100 yd) at a speed of 32.5 kn (60.2 km/h; 37.4 mph), carrying a 100 kg (220 lb) warhead. Each ship carried twenty naval mines that could be laid by the vessels' pinnaces.

### Armor

The ships' main-belt armor consisted of two strakes of cemented steel that was 280 mm (11 in) thick amidships, which was reduced to 180 mm (7.1 in) toward the bow and stern. The belt terminated close to the stern and was capped with a transverse bulkhead that was 200 mm (7.9 in) thick backed with 80 mm (3.1 in) of teak planking, which was in turn supported by two layers of 10 mm (0.39 in) steel plating. Forward, it continued all the way to the stem. It extended from 0.5 m (1 ft 8 in) below the waterline to 2.3 m (7 ft 7 in) above the line, and along the upper edge of the belt, it tapered slightly to 240 mm (9.4 in). A third, thinner strake of armor covered the upper hull at the main deck and 1st deck levels; it consisted of 64 mm (2.5 in) of steel plating on 80 mm of teak. It was connected to the forward main battery barbette by a 120 mm (4.7 in) bulkhead. Horizontal protection consisted of two armored decks. The upper deck, at main deck level, covered almost the entire ship, from the bow to the aft transverse bulkhead. It consisted of three layers of 18 mm (0.71 in) steel for a total thickness of 54 mm (2.1 in). Below that, the lower deck was flat over the engine and boiler rooms, consisting of three layers of 17 mm (0.67 in) steel, the total thickness being 51 mm (2 in). On the sides of the deck, it angled down to connect to the lower edge of the main belt. The sloped sides were two layers of 36 mm (1.4 in) steel. Sandwiched between the two decks and directly behind the belt was an extensively subdivided cofferdam, which Bertin intended to limit flooding in the event of battle damage. Coal storage bunkers were placed behind the cofferdam to absorb shell splinters or armor fragments. The main-battery turrets received the heaviest armor; the faces of the gunhouses were 360 mm (14 in) thick and the sides and rears were 280 mm thick, all cemented steel. Behind each plate were two layers of 20 mm (0.79 in) thick steel. The roof consisted of three layers of 24 mm (0.94 in) of steel. Their barbettes were 246 mm (9.7 in) thick above the main deck and reduced to 66 mm (2.6 in) below the deck; for the forward barbette, a transitional thickness of 166 mm (6.5 in) was used where the barbette was covered by the thin upper belt. The secondary turrets had 138 mm (5.4 in) faces and sides of cemented steel and rears of 246 mm (9.7 in) mild steel, the greater thickness being used to counterbalance the weight of the guns. The roof consisted of three layers of 13 mm (0.51 in) of steel. The secondary casemates were 140 mm (5.5 in) thick, backed with two layers of 10 mm of steel; the guns themselves were fitted with gun shields of the same thickness as the casemate wall. The forward conning tower had 266 mm (10.5 in) of steel on the front and side, with a 216 mm (8.5 in) thick rear wall. All four sides were backed by two layers of 17 mm plating.
Access to the rear entrance to the tower was shielded by a curved bulkhead that was 174 mm (6.9 in) thick. A heavily armored tube that consisted of 200 mm thick steel protected the communication system that connected the conning tower with the transmitting station lower in the ship. Below the upper deck, it was reduced to 20 mm on two layers of 10 mm steel. ### Modifications Tests were carried out to determine whether the main-battery turrets could be modified to increase the elevation of the guns (and hence their range), but the modifications proved to be impractical. The Navy did determine that tanks on either side of the vessel could be flooded to induce a heel of 2 degrees. This increased the maximum range of the guns from 12,500 to 13,500 m (41,000 to 44,300 ft). New motors were installed in the secondary turrets in 1915–1916 to improve their training and elevation rates. Also in 1915, the 47 mm guns located on either side of the bridge were removed and the two on the aft superstructure were moved to the roof of the rear turret. On 8 December 1915, the naval command issued orders that the light battery was to be revised to just four of the 47 mm guns and eight 65 mm (2.6 in) guns. The light battery was revised again in 1916, the four 47 mm guns being converted with high-angle anti-aircraft mounts. They were placed atop the rear main battery turret and the number 5 and 6 secondary turret roofs. In 1912–1913, each ship received two 2 m (6 ft 7 in) Barr & Stroud rangefinders, though Patrie later had these replaced with 2.74 m (9 ft) rangefinders taken from the dreadnought battleship Courbet. Tests revealed the wider rangefinders were more susceptible to working themselves out of alignment, so the navy decided to retain the 2 m version for the other battleships of the fleet. By 1916, the command determined to modernize the fleet's rangefinding equipment, and Patrie was fitted with one 2.74 m and two 2 m rangefinders for her primary and secondary guns, and one 0.8 m (2 ft 7 in) Barr & Stroud rangefinder for her anti-aircraft guns. Details of République's later rangefinding equipment have not survived, and the historians John Jordan and Philippe Caresse note that "this [program] was never fully implemented", leaving it unclear whether République's equipment was altered at all. ## Ships ## History ### Prewar careers Despite having been built to counter German naval expansion, République and Patrie spent their careers in the Mediterranean Sea. In May 1907, France concluded an informal agreement with Britain and Spain after Germany had provoked the First Moroccan Crisis. It included plans to concentrate the British fleet against Germany, while the French fleet, with Spanish support, would face those of Italy and Austria-Hungary. The ships were assigned to the 1st Division of the Mediterranean Squadron after entering service, Patrie serving as the flagship. Toulon served as the squadron's home port, though they frequently also lay in Golfe-Juan and Villefranche-sur-Mer. Throughout the 1900s and early 1910s, the ships were occupied with routine peacetime training exercises in the western Mediterranean and Atlantic. They also held naval reviews for the President of France, other government officials, and foreign dignitaries during this period. The ships also made frequent visits to foreign ports in the Mediterranean, including visits to Spain, Monaco, and Italy, among others. 
By early 1911, the Danton-class battleships had begun to enter service, displacing République and Patrie to what was now the 2nd Squadron of the Mediterranean Fleet, Patrie still serving as the unit's flagship. Throughout their peacetime careers, the ships were involved in several accidents. During maneuvers in February 1910, Patrie accidentally hit République with a torpedo, forcing her to return to port for repairs. On 25 September 1911, République was damaged by the accidental explosion of the battleship Liberté in Toulon; the blast hurled a large section of Liberté's armor plate into the air, striking République near her forward main battery turret and killing twenty-three men. Repairs were nevertheless completed quickly and the ships conducted their typical training routine that year. Following the assassination of Archduke Franz Ferdinand in June 1914 and during the ensuing July Crisis, the ships remained close to Toulon to be prepared for the possibility of war.

### World War I

At the outbreak of World War I in August 1914, the French fleet was mobilized to defend the troop convoys carrying elements of the army from French North Africa to Metropolitan France. The German battlecruiser SMS Goeben was in the Mediterranean at the time, and the French high command feared it would try to interdict the convoys. The ships of the 2nd Squadron steamed to Algiers and escorted a convoy of troop ships carrying some 7,000 men until they were relieved midway to France by the dreadnoughts Jean Bart and Courbet. They thereafter joined the rest of the main French fleet and made a sweep into the Adriatic Sea in August to attempt to bring the Austro-Hungarian Navy to battle. The French encountered just the protected cruiser SMS Zenta and a torpedo boat, sinking the former in the Battle of Antivari. Patrols in the southern Adriatic followed, but after repeated attacks by Austro-Hungarian U-boats, the battleships of the fleet withdrew to Corfu and Malta, while lighter units continued the sweeps. In May 1915, Patrie was sent to reinforce the Dardanelles Division fighting Ottoman forces in the Gallipoli campaign; she provided gunfire support to Allied troops ashore until they were evacuated in January 1916, an operation that République was sent to help cover. The 2nd Squadron ships were then sent to Greece to put pressure on the neutral but pro-German government; they sent men ashore in December to support a coup launched by pro-Allied elements in the government, but were compelled to retreat by the Greek army. The Greek monarch, Constantine I, was forced to abdicate in June 1917 and his replacement led the country into the war on the side of the Allies. Both ships were then sent to Mudros off the Dardanelles to guard against the possibility of a sortie by Goeben, which had fled to the Ottoman Empire at the start of the war, been transferred to Ottoman service, and been renamed Yavuz Sultan Selim; the only sortie attempted ended in failure when the battlecruiser struck several mines and ran aground.

### Fates

In late January 1918, République steamed to Toulon for maintenance, and while there, had two of her main battery guns removed for use by the French Army. Since replacements were not available, she was reduced to a training ship. Patrie's crew suffered an outbreak of influenza that killed eleven men while at Mudros in July; the ship was then used as a barracks ship in Constantinople during the Allied intervention in the Russian Civil War in 1919.
She joined République in the Training Division in August, though the latter vessel was replaced by another ship in December 1920. Decommissioned in May 1921 and stricken from the naval register in June, République was thereafter sold to ship breakers in Italy. Patrie remained active until a pair of accidents in 1924 forced her out of service for repairs, after which she served as a stationary training vessel until 1936, when she too was decommissioned, sold in September 1937, and broken up.
927,116
Ace Books
1,171,236,103
American specialty publisher of science fiction and fantasy books
[ "1952 establishments in the United States", "Ace Books books", "American speculative fiction publishers", "Book publishing companies based in New York (state)", "Fantasy book publishers", "Lists of Ace Books books", "Pearson plc", "Publishing companies established in 1952", "Science fiction publishers" ]
Ace Books is a publisher of science fiction (SF) and fantasy books founded in New York City in 1952 by Aaron A. Wyn. It began as a genre publisher of mysteries and westerns, and soon branched out into other genres, publishing its first science fiction title in 1953. This was successful, and science fiction titles outnumbered both mysteries and westerns within a few years. Other genres also made an appearance, including nonfiction, gothic novels, media tie-in novelizations, and romances. Ace became known for the tête-bêche binding format used for many of its early books, although it did not originate the format. Most of the early titles were published in this "Ace Double" format, and Ace continued to issue books in varied genres, bound tête-bêche, until 1973. Ace, along with Ballantine Books, was one of the leading science fiction publishers for its first ten years of operation. The death of owner A. A. Wyn in 1967 set the stage for a later decline in the publisher's fortunes. Two leading editors, Donald A. Wollheim and Terry Carr, left in 1971, and in 1972 Ace was sold to Grosset & Dunlap. Despite financial troubles, there were further successes, particularly with the third Ace Science Fiction Specials series, for which Carr came back as editor. Further mergers and acquisitions resulted in the company becoming absorbed by Berkley Books. Ace later became an imprint of Penguin Group (USA). ## History ### 1952: Ace Doubles concept Editor Donald A. Wollheim was working at Avon Books in 1952, but disliked his job. While looking for other work, he tried to persuade A. A. Wyn to begin a new paperback publishing company. Wyn was already a well-established publisher of books and pulp magazines under the name A. A. Wyn's Magazine Publishers. His magazines included Ace Mystery and Ace Sports, and it is perhaps from these titles that Ace Books got its name. Wyn liked Wollheim's idea but delayed for several months; meanwhile, Wollheim was applying for other jobs, including assistant editor at Pyramid Books. Pyramid mistakenly called Wyn's wife Rose for a reference, thinking Wollheim had worked for her. When Rose told her husband that Wollheim was applying for another job, Wyn made up his mind: he hired Wollheim immediately as an editor. The first book published by Ace was a pair of mysteries bound tête-bêche: Keith Vining's Too Hot for Hell, backed with Samuel W. Taylor's The Grinning Gismo, priced at 35 cents, with serial number D-01. A tête-bêche book has the two titles bound upside-down with respect to each other, so that there are two front covers and the two texts meet in the middle. This format is generally regarded as an innovation of Ace's; it was not, but Ace published hundreds of titles bound this way over the next twenty-one years. Books by established authors were often bound with those by lesser-known writers. Ace was "notorious for cutting text", in the words of bibliographer James Corrick: even some novels labeled "Complete and Unabridged" were cut. Isaac Asimov's The Stars Like Dust was one such: it was reprinted by Ace under the title The Rebellious Stars, and cuts were made without Asimov's approval. Similarly John Brunner repudiated the text of his novel Castaway's World because of unauthorized cuts to the text. Some important titles in the early D-series novels are D-15, which features William S. Burroughs's first novel, Junkie (written under the pseudonym "William Lee"), and many novels by Philip K. 
Dick, Robert Bloch, Harlan Ellison, Harry Whittington, and Louis L'Amour, including those written under his pseudonym "Jim Mayo". The last Ace Double in the first series was John T. Phillifent's Life with Lancelot, backed with William Barton's Hunting on Kunderer, issued August 1973 (serial \#48245). Although Ace resumed using the "Ace Double" name in 1974, the books were arranged conventionally rather than tête-bêche. ### 1953–1963: Genre specialization Ace's second title was a western (also tête-bêche): William Colt MacDonald's Bad Man's Return, bound with J. Edward Leithead's Bloody Hoofs. Mysteries and westerns alternated regularly for the first thirty titles, with a few books not in either genre, such as P. G. Wodehouse's Quick Service, bound with his The Code of the Woosters. In 1953, A.E. van Vogt's The World of Null-A, bound with his The Universe Maker, appeared; this was Ace's first foray into science fiction. (Earlier in 1953, Ace had released Theodore S. Drachman's Cry Plague!, with a plot that could be regarded as science fiction, but the book it was bound with—Leslie Edgley's The Judas Goat—is a mystery.) Another science fiction double followed later in 1953, and science fiction rapidly established itself, alongside westerns and mysteries, as an important part of Ace's business. By 1955, the company released more science fiction titles each year than in either of the other two genres, and from 1961 onward, science fiction titles outnumbered mysteries and westerns combined. Ace also published a number of lurid juvenile delinquent novels in the 1950s that are now very collectible, such as D-343, The Young Wolves by Edward De Roo and D-378, Out for Kicks by Wilene Shaw. With Ballantine Books, Ace was the dominant American science fiction paperback publisher in the 1950s and 1960s. Other publishers followed their lead, catering to the increasing audience for science fiction, but none matched the influence of either company. Ace published, during this period, early work by Philip K. Dick, Gordon R. Dickson, Samuel R. Delany, Ursula K. Le Guin, and Roger Zelazny. ### 1964–1970: Financial struggles In 1964, science fiction author Terry Carr joined the company, and in 1967, he initiated the Ace Science Fiction Specials line, which published critically acclaimed original novels by such authors as R. A. Lafferty, Joanna Russ and Ursula K. Le Guin. Carr and Wollheim also co-edited an annual Year's Best Science Fiction anthology series; and Carr also edited Universe, a well-received original anthology series. Universe was initially published by Ace, although when Carr left in 1971 the series moved elsewhere. In 1965, Ace published an unauthorized American paperback edition of The Lord of the Rings by J. R. R. Tolkien, believing that the copyright had expired in the U.S. Tolkien had not wanted to publish a paperback edition, but changed his mind after the Ace edition appeared, and an authorized paperback edition was subsequently published by Ballantine Books, which included on the back cover of the paperbacks a message urging readers not to buy the unauthorized edition. Ace agreed to pay royalties to Tolkien and let its still-popular edition go out of print. Wyn died in 1967, and the company grew financially overextended, failing to pay its authors reliably. Without money to pay the signing bonus, Wollheim was unwilling to send signed contracts to authors. 
On at least one occasion, a book without a valid contract went to the printer, and Wollheim later found out that the author, who was owed \$3,000 by Ace, was reduced to picking fruit for a living. ### 1971–2015: Ace becomes a subsidiary Both Wollheim and Carr left Ace in 1971. Wollheim had made plans to launch a separate paperback house, and in cooperation with New American Library, he proceeded to set up DAW Books. Carr became a freelance editor; both Carr and Wollheim went on to edit competing Year's Best Science Fiction anthology series. In 1969 Ace Books was acquired by Charter Communications in New York City. In 1977 Charter Communications was acquired by Grosset & Dunlap, and in 1982, Grosset & Dunlap was in turn acquired by G. P. Putnam's Sons. Ace was reputedly the only profitable element of the Grosset & Dunlap empire by this time. Ace soon became the science fiction imprint of its parent company. Carr returned to Ace Books in 1984 as a freelance editor, launching a new series of Ace Specials devoted entirely to first novels. This series was even more successful than the first: it included, in 1984 alone, William Gibson's Neuromancer, Kim Stanley Robinson's The Wild Shore, Lucius Shepard's Green Eyes, and Michael Swanwick's In the Drift. All were first novels by authors now regarded as major figures in the genre. Other prominent science fiction publishing figures who have worked at Ace include Tom Doherty, who left to start Tor Books, and Jim Baen, who left to work at Tor and who eventually founded Baen Books. Writers who have worked at Ace include Frederik Pohl and Ellen Kushner. In 1996, Penguin Group (USA) acquired the Putnam Berkley Group, and has retained Ace as its science fiction imprint. As of December 2012, recently published authors included Joe Haldeman, Charles Stross, Laurell K. Hamilton, Alastair Reynolds, and Jack McDevitt. Penguin merged with Random House in 2013 to form Penguin Random House, which continues to own Berkley. Ace's editorial team is also responsible for the Roc Books imprint, although the two imprints maintain a separate identity. ## People The following people have worked at Ace Books in various editorial roles. The list is sorted in order of the date they started working at Ace, where known. It includes editors who are notable for some reason, as well as the most recent editors at the imprint. - A. A. Wyn, owner (1952–1967) - Donald A. Wollheim, editor (1952–1971) - Terry Carr, editor (1964–1971); freelance editor (1983–1987) - Pat LoBrutto, mail room (1969–1972); science fiction editor (1974–1977) - Frederik Pohl, executive editor (December 1971 – July 1972) - Tom Doherty, publisher (1975–1980) - Jim Baen, complaints department (c. 1973–1974); gothics editor (c. 1974); science fiction editor (c. 1977–1980) - Ellen Kushner - Terri Windling, editor (1979–1987) - Harriet McDougal, editorial director - Susan Allison, editor (1980–1982); editor-in-chief (1982–2006); vice president (1985 – July 2015) - Beth Meacham, editorial assistant (1981–1982); editor (1982–1983) - Ginjer Buchanan, editor (1984–1987); senior editor (1987–1994); executive editor, science fiction and fantasy (1994 – January 1996); senior executive editor and marketing director (January 1996 – 2006); editor-in-chief (2006–2014). - Peter Heck (c. 1991–1992) - Laura Anne Gilman (c. 1991) - Lou Stathis, editor (? – c. 
1994)
- Anne Sowards, editorial assistant/associate editor (1996–2003); editor (2003 – February 2007), senior editor (from February 2007), executive editor (by September 2010)

## Ace nomenclature

Until the late 1980s, Ace titles had two main types of serial numbers: letter series, such as "D-31" and "H-77", and numeric, such as "10293" and "15697". The letters were used to indicate a price. The following is a list of letter series with their date ranges and prices.

- D-series: 35¢, 1952 to 1962.
- S-series: 25¢, 1952 to 1956.
- T-series: 40¢. This series is listed in Tuck's Encyclopedia, but he gives no examples in his index and there are none cited in other bibliographic sources. This series may, therefore, not exist.
- F-series: 40¢, 1961 to 1967.
- M-series: 45¢, 1964 to 1967.
- G-series: 50¢, 1958 to 1960 (D/S/G series); 1964 to 1968 (later series).
- K-series: various prices, 1959 to 1966.
- H-series: 60¢, 1966 to 1968.
- A-series: 75¢, 1963 to 1968.
- N-series: 95¢, 1968.

The first series of Ace books began in 1952 with D-01, a pair of mysteries in tête-bêche format: Keith Vining's Too Hot for Hell backed with Samuel W. Taylor's The Grinning Gismo. That series continued until D-599, Patricia Libby's Winged Victory for Nurse Kerry, but the series also included several G and S serial numbers, depending on the price. The D and S did not indicate "Double" (i.e., tête-bêche) or "Single"; there are D-series titles that are not tête-bêche, although none of the tête-bêche titles have an S serial number. Towards the end of this initial series, the F series began (at a new price), and thereafter there were always several different letter series in publication simultaneously. The D and S prefixes did not appear again after the first series, but the G prefix acquired its own series starting with G-501. Hence the eight earlier G-series titles can be considered part of a different series to the G-series proper. All later series after the first kept independent numbering systems, starting at 1 or 101. The tête-bêche format proved attractive to book collectors, and some rare titles in mint condition command prices over \$1,000.
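The letter-to-price scheme above is essentially a small lookup table, and it can be sketched in a few lines of code. The snippet below is purely illustrative: the `LETTER_SERIES` mapping and the `describe_serial` helper are hypothetical names invented for this sketch, not part of any Ace or bibliographic tool, and the prices and date ranges are simply those listed above.

```python
# Illustrative sketch of decoding Ace serial numbers as described in this section.
# Letter-series prefixes map to a cover price (in cents) and a date range;
# later numeric serials (e.g. "10293") carry no price information.

LETTER_SERIES = {
    "D": (35, "1952 to 1962"),
    "S": (25, "1952 to 1956"),
    "T": (40, "dates unknown; existence doubtful"),
    "F": (40, "1961 to 1967"),
    "M": (45, "1964 to 1967"),
    "G": (50, "1958 to 1960 and 1964 to 1968"),
    "K": (None, "1959 to 1966"),   # various prices
    "H": (60, "1966 to 1968"),
    "A": (75, "1963 to 1968"),
    "N": (95, "1968"),
}

def describe_serial(serial: str) -> str:
    """Return a one-line description of an Ace serial number."""
    if serial.isdigit():
        return f"{serial}: numeric serial (used until the late 1980s)"
    prefix, _, number = serial.partition("-")
    if prefix.upper() in LETTER_SERIES and number.isdigit():
        price, dates = LETTER_SERIES[prefix.upper()]
        price_text = f"{price}¢" if price is not None else "various prices"
        return f"{serial}: {prefix.upper()}-series, {price_text}, {dates}"
    return f"{serial}: not a recognised Ace serial format"

if __name__ == "__main__":
    for example in ("D-31", "H-77", "10293"):
        print(describe_serial(example))
```

Run on the examples quoted above ("D-31", "H-77", "10293"), the sketch reports the series, cover price, and date range for the two letter serials and identifies the third as a numeric serial.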
58,158,363
Panagiotis Kavvadias
1,171,279,076
Greek archaeologist (1850–1928)
[ "1850 births", "1928 deaths", "Archaeologists from Athens", "Cawadias family", "Ephors General of Greece", "Greek archaeologists", "Members of the Academy of Athens (modern)", "People from Cephalonia" ]
Panagiotis Kavvadias or Cawadias (Greek: Παναγιώτης Καββαδίας; 1850 – 20 July 1928) was a Greek archaeologist. He was responsible for the excavation of ancient sites in Greece, including Epidaurus in Argolis and the Acropolis of Athens, as well as archaeological discoveries on his native island of Kephallonia. As Ephor General (the head of the Greek Archaeological Service) from 1885 until 1909, Kavvadias oversaw the expansion of the Archaeological Service and the introduction of Law 2646 of 1899, which increased the state's powers to address the illegal excavation and smuggling of antiquities. Kavvadias's work had a particular impact on the Acropolis of Athens, and has been credited with completing its "transformation [...] from castle to monument". Between 1885 and 1890, he removed almost all of the Acropolis's remaining medieval and modern structures, uncovering many ancient monuments in the process. He also played a role in the extensive reconstruction of the site by the architect and engineer Nikolaos Balanos. Though praised initially, the work caused considerable damage to several monuments and was almost completely deconstructed and rebuilt during the later 20th and early 21st centuries. Kavvadias oversaw the opening of the National Archaeological Museum in Athens, organised its first collections, and wrote some of its first catalogues. As an administrator, Kavvadias was regarded as energetic, centralising and autocratic. His career saw significant modernisation in the practice of archaeology in Greece, and he reformed and professionalised the Archaeological Service. His patronage of Athens's foreign archaeological schools was credited with promoting the development of Greek archaeology, but was also criticised by native Greek archaeologists. He created further discontent among the Archaeological Society of Athens by reducing its role in favour of the governmental Archaeological Service. After the Goudi coup of 1909, dissatisfaction in the Greek press and among his subordinates in the Archaeological Service led to his removal from office, from the Archaeological Society and from his professorship at the University of Athens, though he was able to return to public and academic life from 1912, and remained active in Greek archaeology until his death in 1928.

## Early life and education

Panagiotis Kavvadias was born in 1850 in Kothreas [el], a village on the island of Kephallonia. His family had been prominent during the Venetokratia, the period of Venetian occupation which lasted from 1500 until the French conquest of 1797. At the time of his birth, Kephallonia and the other Ionian Islands were a protectorate of the United Kingdom; they were transferred to Greece in 1864. Kavvadias studied philology at the National University of Athens, and was awarded a scholarship by the Greek government for postgraduate study at the University of Munich. At Munich, he studied archaeology under Heinrich Brunn. Brunn, credited as "perhaps the foremost German archaeologist of [his] era", had revolutionised the study of Greek art history in the 1850s through his methodical, analytical study of literary texts alongside works of art. His use of the anatomical details of ancient sculpture to draw conclusions about its chronology, place of origin and authorship has been called the most important influence in the nineteenth-century "narrowing and sharpening" of the discipline of Classical art history, and therefore in moving the basis of the discipline away from connoisseurship towards empirical observation.
Kavvadias later credited Brunn as a great influence on his own archaeological practice. Kavvadias also followed a course in epigraphy at the Collège de France in Paris under Paul Foucart, a French epigrapher later credited as "the doyen of our field" by the Classical archaeologist Salomon Reinach, and also studied in Berlin, London and Rome. ## Archaeological career After finishing his studies, Kavvadias returned to Greece, where he entered the Archaeological Service. In 1879, he was appointed as an ephor, an official with the responsibility of supervising, managing and protecting archaeological heritage – the first such official to be retained by the Archaeological Service in addition to the Ephor General, its professional head. In 1881, he published a short history of Greek archaeology. One of his first postings was to the excavations of the French School at Athens on the island of Delos, which had been running since 1873: he was there in 1882, working alongside Reinach, who later wrote that Kavvadias had seemed "full of enthusiasm and ambition". The first major excavations Kavvadias led personally were at Epidaurus in Argolis, which began in March 1881. In their first year, the excavations uncovered the theatre, and subsequently revealed several buildings and inscriptions within the Sanctuary of Asclepius and the nearby Sanctuary of Apollo Maleatas on Mount Kynortion. Following the retirement of Panagiotis Efstratiadis in 1884, Kavvadias was elevated to the position of Ephor General in 1885. He handed over responsibility for the site to his protégé Valerios Stais, but continued both to work at the site and publish the results of its excavation until his death in 1928. Kavvadias excavated frequently around Kephallonia, aiming to discover so-called 'Homeric' sites and remains of Odysseus' Ithaca. He made his first excavations on the acropolis of the island of Same, near the island known in modern times as Ithaca, in 1883. Kavvadias uncovered a gate, but considered his finds insignificant as the only material uncovered dated from the Archaic to the Roman period (that is, c. 850 BCE – c. 500 CE), rather than the 'Homeric' Late Bronze Age (c. 1600 – c. 1180 BCE). In 1889, he discovered Mycenaean chamber tombs and fragments of Mycenaean vessels in the area of Leivatho, near the village of Mazarakata, which provided the first proof of the presence of Mycenaean civilisation on the island. He excavated again at Same and Leivatho in 1899, with funding from Adriaan Goekoop, a wealthy Dutch amateur archaeologist, finding more structures on Same but none which predated the Classical period. He carried out further work on the island in 1908, in 1909 – when he discovered two small Mycenaean tholos tombs at Kokolata [el] – and in 1913. Kavvadias published the first reports of his excavations in the Government Gazette (Greek: Ἐφημερίς τῆς Κυβερνήσεως), an official publication normally used for laws and royal decrees. The Archaeological Service had lacked an official publication since 1860, when it had ceased to produce the Archaeological Newsletter (Ἀρχαιολογικὴ Ἐφημερίς), taken on in 1862 by the Archaeological Society of Athens as its own journal. Instead, news of its excavations and activities was normally released in journals or newsletters. In 1888, Kavvadias began to publish the monthly Archaeological Bulletin (Ἀρχαιολογικὸν Δελτίον) on behalf of the Service. He edited all of its volumes between 1885 and 1892 himself, after which publication of the journal ceased until 1915. 
During his period as Ephor General between 1885 and 1909, Kavvadias's main project was the excavation and subsequent restoration of the Acropolis of Athens. Until 1890, in collaboration with the German archaeologist and architect Georg Kawerau, he excavated or re-excavated almost the whole site, removing nearly all of its remaining post-Classical structures and discovering dozens of works of ancient sculpture, particularly Archaic korai. After 1890, the work on the Acropolis primarily consisted of restoration, particularly of the Parthenon, Erechtheion and Propylaia, overseen by Nikolaos Balanos, who directed the project largely independently. Kavvadias initiated the excavation of the Kabeirion in Boeotia in 1887, later continued by the German Archaeological Institute at Athens. In 1889, he conducted excavations at the sanctuary of Lycosura, which he took to be the sanctuary of Despoina described by the Ancient Greek geographer Pausanias. He discovered part of a cult group of statues – the work of the Messenian sculptor Damophon – showing Despoina seated on a double throne alongside Demeter, accompanied by Artemis and the Titan Anytos. In 1900, during rescue excavations in the Outer Kerameikos, he uncovered the Nessos Amphora, a late seventh-century BCE amphora which he took to be a container for a cremation burial from the nearby Dipylon cemetery. In modern times, the vase has become the name-piece of the Nessos Painter, and was described by John Beazley as the "chief example" of early black-figure vase painting, as well as establishing the Nessos Painter as "the earliest Greek artist whose personality we can grasp." In 1902–1903, he excavated the Heraion of Samos alongside the future prime minister, Themistoklis Sofoulis, then a lecturer at the University of Athens. He also oversaw the first reconstruction of the Temple of Apollo at Bassae, excavated by Konstantinos Kourouniotis [el], between 1902 and 1908. According to his obituary in the Greek newspaper Skrip, he also served as director of the archaeological department of the Ministry of Education.

## Excavations at Epidaurus (1881–1928)

In March 1881, Kavvadias began excavations on behalf of the Archaeological Society of Athens at Epidaurus, with the aim of uncovering the theatre described by Pausanias. These were the first excavations undertaken by the Society outside Athens, apart from minor and small-scale rescue excavations. In 1881, the excavations uncovered the theatre, as well as two stelae (inscribed stone slabs) in the Sanctuary of Asclepius. The stelae, dating to the late fourth or early third century BCE and sometimes called 'miracle inscriptions', recorded the names of at least twenty individuals and the means by which they were healed – usually miraculous dreams or visions. The excavation and publication of these stelae contributed significantly to Kavvadias's early archaeological reputation. In 1882, Kavvadias uncovered the tholos (circular temple) and the Temple of Asclepius, followed by the abaton in 1883. In 1884, he excavated the Temple of Artemis and the Great Propylaia, and reconstructed a row of columns in the western stoa of the abaton. The excavations continued until 1927: Valerios Stais, whom Kavvadias appointed as an ephor of the Archaeological Service in 1885, joined them as a supervisor in early 1886, after Kavvadias's elevation to Ephor General, and became field director in 1887. In 1896, he excavated the first parts of the nearby Sanctuary of Apollo Maleatas on Mount Kynortion.
That year, the French architectural historian Charles Chipiez described the excavation of Epidaurus as "of capital importance to the history of Greek architecture", though he criticised the restrained and limited reconstructions drawn up by the German Wilhelm Dörpfeld, who worked with Kavvadias and illustrated his publication of the excavations, in favour of the more lavish reconstructions created in 1895 by the French architect Alphonse Defrasse – reconstructions which, by the later 20th century, were considered largely erroneous. Kavvadias's report on his excavations of the Roman-period odeion at the site, which he published in 1900, has been described as "invaluable" for the amount of evidence it preserves, much of which has been lost through later deterioration in the building's condition.

Kavvadias returned to Epidaurus throughout his career: in a 1929 obituary, the British archaeologist Robert Carr Bosanquet wrote that the summer excavation season there was "almost the only holiday [Kavvadias] permitted himself". In 1902, he discovered the first parts of a building adjacent to the stadium (which had already been discovered by 1893), connected directly to it by means of an entrance tunnel. The findings from the building's excavation were never fully published; in 1992, the archaeologist Stephen G. Miller suggested that it may have been an apodyterium (changing room) for the athletes. In 1903, Kavvadias published part of the inscription upon a third stele, detailing further accounts of miraculous healings; he published the inscription in full in 1918. In his last excavation season at Epidaurus, which lasted from June 1928 until shortly before his death in July, he uncovered an elaborate building, possibly used by athletes preparing for competition, to the north of the stadium.

The excavation of Epidaurus has been described as a "landmark", both for its nature as the first state-led excavation in Greece outside Athens and for the finds uncovered there. Reinach called the excavations one of Kavvadias's "two immortal daughters", the other being his work on the Acropolis of Athens. Kavvadias was more ambivalent about his work there: when showing a fellow archaeologist, Stratis Paraskeviadis, around the site, he pointed to the theatre and said "there I sacrificed and destroyed". Vasileios Petrakos, a historian of Greek archaeology, has suggested that he may have been alluding to the clearing of an expansive forest which had originally covered the ruins.

## Excavations and restorations on the Acropolis (1885–1909)

### Excavations with Kawerau (1885–1890)

Kavvadias's predecessor as Ephor General of Antiquities, Panagiotis Stamatakis, had planned to complete the excavation of the Acropolis of Athens, but died suddenly in 1884 before work could commence. Kavvadias therefore carried out the excavations with funding from the Archaeological Society of Athens. He undertook the work, which lasted from 1885 until the end of 1890, in collaboration with the German architect Georg Kawerau. Kavvadias excavated the entire Acropolis down to bedrock, leaving, as he claimed, "not the slightest quantity of soil ... which has not been investigated." All remaining post-Classical buildings on the site were demolished. The excavation has been described as "unsystematic": it has also been criticised for keeping no record of stratigraphy, and for only making partial records through drawing and photography.
Throughout 1885, the excavations moved from the western side of the Acropolis, beginning near the Propylaia (the monumental gateway to the site), towards the east. In 1886, three areas were added: the part of the North Circuit Wall between the Erechtheion and the Propylaia; the area between the Parthenon and the Erechtheion (which contained the remains of the Hekatompedon, or 'Old Parthenon', first discovered in 1882) and the area east of the Parthenon. In 1887, Kavvadias excavated the area to the east of the Erechtheion, along the East Circuit Wall to the Belvedere tower, and from the Belvedere tower to the area between the Parthenon and the Acropolis Museum. In 1888, he excavated the area around the museum, as well as the area between the Parthenon and the South Circuit Wall, and uncovered the Parthenon's stylobate to its full depth of 14 metres (46 ft), or twenty-two levels of masonry. In 1889, most of the southern and western part of the Acropolis was cleared of post-Classical remains, as were the interiors of the Parthenon and the Pinakotheke (a chamber in the monument's northern wing) of the Propylaia. Finally, in 1890, Kavvadias cleared the route onto the Acropolis from the Beulé Gate. The excavations uncovered thousands of fragments of Archaic and pre-Classical art – the largest quantity of such material ever discovered. In particular, the 1887 and 1888 excavations found the remaining parts of the sculptures of 'Heracles and the Hydra', which once formed a pediment of the Hekatompedon. Much of this material came from the so-called Perserschutt, the layer of debris left by the Persian destruction of the Acropolis in 480 BCE and the ritual burial of the damaged statuary by the Athenians after the Persian Wars. A particularly fruitful area was the so-called 'kore pit', north-west of the Erechtheion, which is the major known source for kore and kouros sculptures of the Archaic period: Kavvadias uncovered somewhere between nine and fourteen korai in the initial excavation alone. Other notable finds from the Perserschutt included the Persian Rider sculpture. Kavvadias also excavated an early Christian church, as well as significant remains of Mycenaean fortification of the western side of the Acropolis near the Propylaia. On the northern side of the Acropolis, Kavvadias excavated in 1887 a cave (later identified by the archaeologist Oscar Broneer as part of the Sanctuary of Eros and Aphrodite) in which he found pieces of black-figure pottery, the head of a female sculpture, and what he believed were traces of the secret route described by Pausanias as being used by the arrephoroi during the rite of the Arrhephoria. Modern research by Rachel Rosenzweig has questioned whether this secret route, only vaguely described by Pausanias, ever truly existed. His excavations also uncovered the remains of the Archaic 'Building B' beneath the Pinakotheke of the Propylaia, as well as the Brauroneion, the Chalkotheke and the Temple of Roma and Augustus. The archaeological finds from the excavations, including sculptures, vases, architectural remains, figurines and inscriptions, became the core of the collection of the Old Acropolis Museum. Kavvadias's work has been described by the archaeological historian Fani Mallouchou-Tufano as finishing "the transformation of the [Acropolis] from castle to monument". 
The demolished structures included a late Roman reservoir near the Propylaia, a structure known as the tholos near the Erechtheion, a medieval building to the south of the Parthenon, as well as various Late Roman fortifications. Kavvadias also removed the 'walls' or 'panels' (Greek: πίνακες, romanized: pinakes), built by his predecessor Kyriakos Pittakis from various scattered antiquities. Pittakis had intended the pinakes to prevent looting, but had been criticised in the contemporary press for presenting artefacts of different periods and provenances together, and for breaking up groups of sculptures that originally formed single ensembles into different pinakes.

Kavvadias made minor excavations in the caves on the northern side of the Acropolis during 1896 and 1897, uncovering one with what he believed to be the remains of an altar, as well as ten marble plaques with inscriptions marking them as a dedication to Apollo, who was identified by the epithet 'under the cliffs' (Greek: ὑπὸ Μάκραις or ὑπ' Ἄκρας). The inscriptions, dating from between 40 CE and the later 3rd century CE, identified the dedicators as senior Athenian officials ('archons') and their secretaries from the Roman period, which has given the site the name of the 'Archons' Cult'.

Between 1887 and 1888, a second museum, nicknamed the 'little one' (μικρό), was built by Kawerau to the east of the main Acropolis museum, in the area of the Sanctuary of Pandion. During the Acropolis Museum's expansion in the 1950s, it was demolished and the space incorporated into the main structure.

### Balanos's restorations (1894–1909)

The Atalanti earthquakes of 1894 damaged the Parthenon, causing the fall of parts of its opisthodomos (rear porch). The Archaeological Service, led by Kavvadias, commissioned the architects Francis Penrose, Josef Durm and Lucien Magne to investigate possible responses, and decided upon a partial reconstruction which would strengthen the damaged parts and replace, where necessary, ancient marble with modern. They also decided to use, as far as possible, the original building methods (dry-stone masonry held together with metal clamps) in the restoration work, and Kavvadias later wrote in favour of this approach. A full-scale reconstruction was ruled out, and the primary aim of the project was defined as strengthening the extant parts of the building which had suffered damage.

Penrose, Durm and Magne formed a supervising committee, but the operational direction was delegated to the 'Committee for the Conservation of the Parthenon', a body which included academics, members of Athens's foreign schools of archaeology, and representatives of the Greek government. Nikolaos Balanos, Athens's Chief Engineer of Public Works, was invited to join this committee after its formation, and effectively took control of the reconstructions, operating, according to Mallouchou-Tufano, "independently and unchecked". Between 1898 and 1909, Balanos worked almost continuously on the Parthenon, the Erechtheion and the Propylaia. His work was financed by the General Ephorate of Antiquities, of which Kavvadias was head, and by the Archaeological Society of Athens, of which Kavvadias was secretary. The restorations were initially praised by contemporaries, but were later criticised for their invasive methodology and for the lack of archaeological expertise shown in some of the work.
Balanos's use of reinforced concrete to fill the gaps in marble masonry led to water ingress and the corrosion of the iron clamps used to reinforce the structure, cracking the marble and causing blocks to fall apart. In the Erechtheion, this problem was compounded by corrosion caused by the exposure of the original Caryatid sculptures to air pollution. In 1977, a programme was announced to address the consequences of Balanos's restorations, which included the removal of the Caryatids to the Acropolis Museum and their replacement on the temple by replicas, and eventually involved (at least partially) dismantling and rebuilding every structure on which Balanos had worked.

## Ephor General of Antiquities (1885–1909)

In 1885, Kavvadias, the favoured candidate of Prime Minister Charilaos Trikoupis, succeeded Panagiotis Stamatakis as Ephor General of Antiquities, the head of the Greek Archaeological Service. Kavvadias's time as Ephor General saw the opening of the National Archaeological Museum of Athens in 1889. He took a centralising approach to its collection, which he assembled from material from all over Greece, except for Olympia and Delphi. He produced two catalogues of its sculptures, published in 1890 and 1892, assisted by Christos Tsountas for the prehistoric material. Under his leadership, the Archaeological Service expanded its portfolio of museums in Greece, building on the work of his predecessor Stamatakis in opening museums for local archaeological collections around the country. Kavvadias assisted with the planning and design of the Heraklion Archaeological Museum in Ottoman-ruled Crete, which opened in 1883, drawing up the plan for the museum's Neoclassical buildings in collaboration with Wilhelm Dörpfeld. In 1909, he was invited, along with the historian George Soteriadis and other members of the Archaeological Society, to arrange the first collections of the Cyprus Museum.

Between 1901 and 1905, Kavvadias organised the First International Archaeological Conference, which was held in Athens in 1905. The conference has been described as a "flanking move" by Kavvadias to diminish the influence of the Archaeological Society in favour of the Archaeological Service: the Archaeological Society protested at the government's ownership of the conference, represented by Kavvadias and the Minister for Education, Emmanuel Stais. Pressure from the Society also forced Kavvadias to reverse his decision to exclude Greek archaeologists from the conference.

Kavvadias spoke at the funeral of the German archaeologist Heinrich Schliemann in the First Cemetery of Athens, giving a short eulogy in Greek. He credited Schliemann with much of the creation of the study of Greek prehistory, and expressed his view that Greek archaeology was both "peculiarly Greek" and had "the whole civilised world for its home." He was elected as a professor of the University of Athens, alongside Christos Tsountas, by a vote of seventeen to two of the nineteen professor-electors present.

### Reorganisation of the Archaeological Service

Kavvadias's own appointment in 1879, made by Panagiotis Efstratiadis, had marked the beginning of the expansion of the Archaeological Service, raising the number of its ephors from one to two.
Kavvadias continued the recruitment of new ephors: by the end of his tenure, the Service had recruited over a dozen (having previously employed only the Ephor General between 1836 and 1866), including Habbo Gerhard Lolling and Konstantinos Kourouniotis, and established operations on the island of Crete, then an autonomous province of the Ottoman Empire. He also imposed the first formal academic criteria for ephors – his predecessor as Ephor General, Panagiotis Stamatakis, had received no university education or formal archaeological training – requiring that all ephors be graduates of the University of Athens and either have undertaken postgraduate study in archaeology or pass an examination in archaeology, history, Ancient Greek and Latin. In 1887, he imposed the stricter requirement that all potential ephors hold a doctorate in either philology or archaeology, and that they subsequently pass an interview before a board composed of professors of classics, archaeology and history, which included the Ephor General.

Kavvadias created much of the bureaucratic apparatus of the modern Archaeological Service. Through a royal decree, he established the Archaeological Receipts Fund, which used the proceeds of the sales of tickets, casts and catalogues by museums to fund the conservation and restoration of ancient monuments. He was also behind the royal decree of 1886, which created the first systematic division of Greece into archaeological regions.

## Archaeological Society of Athens

Kavvadias was an active member of the Archaeological Society of Athens, a learned society with a significant role in organising excavations and protecting cultural heritage in Greece. In particular, he mounted a long-running campaign to become secretary of the society, which has been interpreted by Vasileios Petrakos, a later secretary of the society and historian of Greek archaeology, as a means of bringing its financial revenues under the effective control of the state. From at least 1886, when Kavvadias intervened on behalf of the government in an investigation into the society's financial mismanagement, he acted to increase the influence of the General Ephorate over its affairs, creating animosity between the state and the society which, by 1888, was being noticed and regularly remarked upon in the press. Although the Archaeological Society had traditionally supported the aims of the state, tensions had already begun to develop between the society and the Archaeological Service, particularly as the society often bore the cost for work initiated by Kavvadias in his capacity as the Service's Ephor General, and ephors employed by the society were often seconded to work for the government: the society voted to end this practice in 1882.

Kavvadias intensified his efforts to gain control of the society in 1894, using his own allies in the press and within the society to attack its secretary, Stefanos Koumanoudis. In December 1894, elections were held for the society's officers: Koumanoudis was re-elected as secretary, but resigned in protest after one of Kavvadias's allies was also appointed to the council. Several of the newly elected officers followed Koumanoudis: Dimitrios Filios resigned, followed by the numismatist Ioannis Svoronos and the folklorist Nikolaos Politis. Kavvadias therefore became secretary by a near-unanimous vote. Svoronos was briefly imprisoned later in 1895 after Kavvadias sued him for insulting remarks Svoronos made about him at the society's general assembly.
As secretary, Kavvadias increased the society's revenues as well as its activities in both excavations and restorations. He initiated the drafting in 1895 of a new constitution for the society, which expanded its sphere of operations and made the Crown Prince of Greece, Constantine, its president. He also oversaw the society's move to new premises in 1899, and wrote a history of it to commemorate the occasion.

## Efforts against antiquities crime

By the 1880s, it was clear that the legal mechanisms available for the protection of cultural heritage were inadequate to the challenge posed by illegal excavations and export of antiquities. The main law governing antiquities was the Archaeological Law of 1834, which has been described as "loosely interpreted and even more loosely enforced": antiquities from unauthorised, illegal excavations were openly advertised for sale both within and outside Greece. Under the 1834 law, antiquities discovered on private land could remain in private possession, despite legally being jointly owned by the state and the private 'owners': this created an ambiguity which reduced the state's ability to control antiquities. In the early 1870s, the looting of the necropolis of Tanagra had seen some 10,000 tombs robbed and hundreds of antiquities, including vases and figurines, sold abroad, which outraged the Greek press and raised the issue of archaeological crime among the general population.

During his tenure as Ephor General between 1864 and 1884, Panagiotis Efstratiadis had attempted to work against looters and smugglers, but was hamstrung by the legal framework then in place. In 1866, he was legally forced to permit an excavation on private land by two Athenian art dealers, despite his distrust of their intentions. He was also unable to prevent the export of significant antiquities, such as the Aineta aryballos (a seventh-century BCE Corinthian vase sold to the British Museum in 1866 by the epigrapher and art dealer Athanasios Rousopoulos) and a series of funerary plaques, painted by Exekias, sold illegally to the German archaeologist Gustav Hirschfeld by the art dealer Anastasios Erneris in 1873.

Kavvadias has been credited with shaping Law 2646 of 1899, subtitled On Antiquities (Περὶ Ἀρχαιοτήτων). Under the new law, all antiquities ever discovered in Greece, whether on public or private land, were considered property of the state, closing the previous 'joint ownership' loophole. The law was followed by a series of six royal decrees, giving the state additional powers to oversee the excavation of antiquities and to prevent their sale overseas. These included the power to confiscate any antiquities not declared to the state within six months of excavation, a total prohibition on unauthorised excavations, and severe legal penalties for those contravening the new law. The law of 1899 also centralised power in the hands of the Ephor General, who was given the final decision over most critical matters. It also, for the first time, formally identified the Byzantine period as part of "Hellenism" – the idea of Greek history and culture – and has been described as part of "the rehabilitation and incorporation of Byzantium ... into the national narrative". Greece's first ephor of Byzantine antiquities, Adamantios Adamantiou, was appointed under Kavvadias in 1908. Kavvadias was known for his determination to oppose the export of antiquities: Reinach wrote in his obituary of the "fever of confiscations" that Kavvadias launched.
However, Reinach also judged that his efforts "produced hardly any useful effects", pointing to an 1886 case in which Kavvadias seized a group of fake terracotta plaques, which were being exported from Athens to Paris wrapped in pages from a journal with only a single subscriber in Athens, a merchant by the name of Lambros. Lambros had influence with King George of Greece, and had a relative who was a tutor to the future King Constantine I; Kavvadias therefore abandoned the case.

In 1887, several items were stolen from the Numismatic Museum of Athens by a thief named Periklis Raftopoulos, who was apprehended by police in Paris. Kavvadias dismissed the founder and director of the museum, Achilleus Postolakas, and accused him of complicity in the theft. Kavvadias also dismissed Ioannis Svoronos, Postolakas' deputy, and sought to prosecute the French buyers who had attempted to purchase Raftopoulos' stolen antiquities. One of those buyers killed himself before the French cabinet minister Édouard Lockroy, through his subordinate Gustave Larroumet, made clear to Kavvadias that the French government would not pursue what he considered to be a matter for Greek customs. Postolakas was acquitted by an Athenian court in April 1889, and the affair made Kavvadias several enemies: Reinach later wrote that the Ephor General had "lost his head a little".

## Dismissal, exile and return (1909–1928)

In 1909, a group of army officers known as the 'Military League' carried out the Goudi coup, which led to popular demonstrations against the political establishment and the resignation of the prime minister, Dimitrios Rallis. Kavvadias's subordinates launched their own so-called "mutiny of the ephors", angered by his style of leadership, which has since been described as both "authoritarian" and "tyrannical". Another source of opposition to Kavvadias within Greece was his support of the foreign schools of archaeology, which he was accused of privileging above the interests of native Greek archaeologists.

Discontent with Kavvadias reached the Greek press with an article in the newspaper Chronos, considered the mouthpiece of the Military League. Entitled "Need for Honesty" (Ἀνάγκη Εἰλικρινείας), the article accused Kavvadias of "humiliating Greek science for the profit of foreign science" through being overly "accommodating" to the foreign schools, supporting them from Greek resources, and giving them access to the best archaeological sites in preference to Greek archaeologists. Kavvadias's long-time opponent Ioannis Svoronos was accused of being behind the article, though he denied any involvement and expressed his support for the foreign schools. Three days later, the directors of the foreign schools published a joint riposte in the journal Estia, denying the accusations made in Chronos and praising Kavvadias for his "dominant role" in "render[ing] Athens its former prestige as metropolis for ancient studies".

Nevertheless, criticism of Kavvadias continued to build. Svoronos subsequently wrote a letter in Chronos in which he accused Kavvadias of misappropriating 80,000 drachmas from the sale of his museum catalogues, and another journal accused Kavvadias of improperly favouring foreign archaeologists, and criticised the Archaeological Society for its inaction. Kavvadias also came under pressure from within the government, and the Minister for Education advised him to step down. He asked the board of the Archaeological Society for temporary leave from his role as secretary, which was granted.
Before the end of 1909, he had been removed from his post as Ephor General, and ordered to leave Greece by the Military League, who labelled him "a dangerous reactionary" and had him escorted to the harbour of Piraeus by a military non-commissioned officer. He left for Vienna, and subsequently settled in Paris. His protégé Stais, who had served on the Archaeological Society's council since 1896, was forced to resign at the same time, and in 1910 Kavvadias was stripped of his professorship at the University of Athens. Kavvadias's downfall was met by protests from many of the foreign schools, which had benefitted from his liberal attitudes towards their activities. In Britain, the university professors Robert Carr Bosanquet, Percy Gardner and Ernest Arthur Gardner organised a collection of funds to support him.

Following Kavvadias's ousting, the Greek government reorganised the Archaeological Service. Kavvadias's duties were given to the archaeologist Gabriel Byzantinos, who was shortly afterwards replaced by Vasileios Leonardos, the director of Athens's Epigraphical Museum. Under Law 3721, the General Ephorate was abolished, in favour of a more collective system of management where the function of the Ephor General was assumed by the 'Archaeological Board', a ten-member committee of university professors, ephors and the directors of Athens's museums, on which the newly titled Director of the Archaeological Service had a single vote. The country was re-divided into seven archaeological districts, replacing the nine established by Kavvadias in 1886. A further outcome of the reforms was that the Archaeological Society was no longer permitted to conduct restoration work, which now had to be undertaken by the Archaeological Service.

When the National Assembly was elected – a measure negotiated in exchange for the Military League's disbandment by Eleftherios Venizelos, a Cretan politician invited by the League's leaders to help negotiate a political settlement to the coup – Kavvadias was elected by the people of Kephallonia as their representative. On the dissolution of the Assembly in 1912, Kavvadias regained his post as secretary of the Archaeological Society, and subsequently held it until 1920. He also regained his professorship at Athens, which he would hold until 1922, and became chairman of the Archaeological Board, a post he held until his resignation in 1920. In 1920, he began work on a corpus of Greek mosaics, funded by the Greek government and the Union Académique Internationale, which remained unfinished at the time of his death. Kavvadias returned to Epidaurus for the final time in June 1928. There, he suffered a seizure and returned to Athens, where he died on 20 July.

## Impact upon Greek archaeology

### As an archaeologist

In 1910, the journal of the British Classical Association described Kavvadias as a "household name" among archaeologists. His work at Epidaurus was recognised in his lifetime as a crowning achievement: a 1909 American handbook, written while much of Kavvadias's work there remained unfinished, described Epidaurus as "one of the most important sites in Greece", and the excavations of the Acropolis under Kavvadias as the most important achievement of the Archaeological Society of Athens.
Writing of his work on the Acropolis in the Archaeological Bulletin, Kavvadias boasted of having "deliver[ed] the Acropolis back to the civilised world, cleansed of all barbaric additions, a noble monument to the Greek genius, a modest and unique treasury of superb works of ancient art". Within the field of Greek art history, his discovery of red-figure pottery in the debris of the Perserschutt provided a terminus ante quem demonstrating that this style had been in use before 479 BCE, which contradicted the then-current understanding of the chronological relationship between red-figure and black-figure vase painting. For his own part, Kavvadias described the findings of his work on the Acropolis as "most significant, unexpected and awesome". He was also notable in the study of epigraphy, a field of archaeology closely linked with the identity of the nineteenth-century Greek state. In 1906, he was listed in a French academic journal as one of the three "particularly illustrious" epigraphers of his day, alongside Koumanoudis and Konstantinos Karapanos. Nikolaos Papazarkadas, a modern historian of Greek epigraphy, has also praised Kavvadias's work on the inscriptions he uncovered on the Acropolis, as well as the 'miracle inscriptions' from Epidaurus.

As Ephor General, Kavvadias oversaw other significant archaeological excavations, particularly that of Delphi, which was undertaken between 1892 and 1903 by the French School at Athens as the Archaeological Society lacked the funding for the work. He was also responsible for the recovery and study of the Antikythera mechanism under Valerios Stais from 1900 to 1902. In Athens, Kavvadias's period as Ephor General saw significant excavations beyond the Acropolis: the partial excavation and identification of Hadrian's Library by Koumanoudis in 1885–1886, and excavations in the Roman Agora in 1890–1891, which involved the expropriation and demolition of several residential, religious and military buildings, including the total removal of Epameinondas Street. According to Mallouchou-Tufano, his excavations, particularly on the Acropolis, provided "a huge impetus, both internationally and in Greece ... to the scientific investigation of the history of the [Acropolis] ... [and to] epigraphy, pottery and the history of ancient art." Although the restorations made under Kavvadias's supervision by Nikolaos Balanos were later criticised and mostly reversed, the vision of the Acropolis and its monuments they created has been termed "the 'trade-mark' of modern Greece".

### As an archaeological administrator

Kavvadias has been termed "a dominant personality" in Greek archaeology around the turn of the 20th century, and his position has been described as a "virtual dictatorship ... in archaeological matters". The rapid expansion of the Archaeological Service between 1883 and 1908, nearly all of which Kavvadias oversaw, has been described as "the beginning of a new era in [its] history". His approach to its organisation has been described as "centralising", and as marked by the energetic way in which he pursued his objectives. Several archaeologists hired as ephors under Kavvadias became significant figures in Greek archaeology: Kourouniotis, for example, would be director of the National Archaeological Museum (from 1922 to 1925) and serve two terms as director of the Archaeological Service (1914–1920 and 1925–1933).
As an administrator, Kavvadias has been praised for his archaeological and legal expertise, and the administrative and legal structures he created within the Archaeological Service and through laws and royal decrees have been credited with creating "the shape, in miniature" of the twenty-first-century administration of antiquities in Greece. Kavvadias was recognised for his support of Athens's foreign archaeological institutes, which multiplied in number and activity during his tenure. The British School at Athens was founded in 1886 and the Austrian Archaeological Institute at Athens in 1898; the Italian School of Archaeology at Athens was founded in July 1909, shortly before Kavvadias's removal as Ephor General. He had a particularly warm relationship with Charles Waldstein, director of the American School of Classical Studies at Athens from 1889 until 1893. Thomas Day Seymour, chairman of the school's managing committee, remarked that Kavvadias, whom he usually judged "surly", became "genial" in Waldstein's presence, and suggested that Kavvadias would make Waldstein "a present of the whole Acropolis, if it were in his power." It was at Waldstein's instigation that Kavvadias issued the American School its permit to excavate at Eretria on Euboea in January 1891. Kavvadias's contemporary Bosanquet wrote that his patronage of the foreign schools was a significant factor in promoting the "study and preservation of his country's heritage". In 2007, Vasileios Petrakos named the foreign schools, and the excavations, publications and lectures that have taken place under their auspices, as a substantial factor behind making Athens "the major centre of Greek archaeology".

Within the Archaeological Service and the Archaeological Society, Kavvadias's style of leadership – described in modern times as "tyrannical" and a "monocracy" – was unpopular. By the end of his tenure, he had made enemies within the Archaeological Society, among his subordinates and among university-based archaeologists. Petrakos has accused him of "deliberate defamation" in his handling of the Archaeological Society of Athens, and characterised his centralising approach to administration as "suffocating" the ephors who worked under him. In turn, this discontent was a major factor behind Kavvadias's removal under the Military League in 1909.

## Personal life and honours

Kavvadias had two sons: Alexander Polycleitos Cawadias, a medical doctor known for his work on intersexuality, and Epameinondas Kavvadias, an admiral in the Hellenic Navy who served as its commander during the Second World War. He was elected an honorary member of the British Society of Antiquaries in 1893, a corresponding member of the French Académie des Inscriptions et Belles-Lettres in 1894, a member of the Royal Academy of Belgium and a corresponding member of the Prussian Academy of Sciences in Berlin. He was also awarded an honorary doctorate by the University of Cambridge in 1904, as well as an honorary professorship at Leipzig University. In 1926, he was elected as a founding member of the Academy of Athens, Greece's national academy. He was also an honorary member of the Royal Society of Medicine.

## Selected publications
54,350,456
Pali-Aike volcanic field
1,146,567,116
Cluster of volcanoes in Argentina and Chile
[ "Andean Volcanic Belt", "Argentina–Chile border", "Cinder cones of Chile", "Geology of Patagonia", "Holocene volcanoes", "Inactive volcanoes", "Maars of Argentina", "Pleistocene South America", "Pleistocene volcanoes", "Quaternary South America", "Volcanic crater lakes", "Volcanic fields", "Volcanoes of Magallanes Region", "Volcanoes of Santa Cruz Province, Argentina" ]
The Pali-Aike volcanic field is a volcanic field along the Argentina–Chile border. It is part of a family of back-arc volcanoes in Patagonia, which formed from processes involving the collision of the Chile Ridge with the Peru–Chile Trench. It lies farther east than the Austral Volcanic Zone, the volcanic arc that makes up the Andean Volcanic Belt at this latitude. Pali-Aike formed over sedimentary rock of the Magallanes Basin, a Jurassic-age basin, starting in the late Miocene as a consequence of regional tectonic events.

The volcanic field consists of an older plateau basalt formation and younger volcanic centres in the form of pyroclastic cones, scoria cones, maars and associated lava flows. There are approximately 467 vents in an area of 4,500 square kilometres (1,700 square miles). The vents often form local alignments along lineaments or faults, and there are a number of maars and other lakes, both volcanic and non-volcanic. The volcanic field is noteworthy for the presence of large amounts of xenoliths in its rocks and for the maar Potrok Aike, from which palaeoclimate data have been obtained.

The field was active starting from 3.78 million years ago. The latest eruptions occurred during the Holocene, as indicated by the burial of archaeological artifacts; the Laguna Azul maar formed about 3,400 years before present. Humans have lived in the region for thousands of years, and a number of archaeological sites such as the Fell Cave are located in the field. Presently, parts of the volcanic field are protected areas in Chile and Argentina, and the city of Rio Gallegos in Argentina is within 23 kilometres (14 mi) of the volcanic field.

## Name

The name Pali-Aike comes from the Tehuelche language, where pale means "hunger" and aike means "location". Originally it was the name of a farm (estancia) and was later applied to the volcanic field.

## Human geography

The Pali-Aike volcanic field spans the border between Argentina and Chile, northwest of the Strait of Magellan. Most of the field lies in Argentina within the southernmost part of Santa Cruz Province, while the Chilean part is in the commune of San Gregorio. The cities of Rio Gallegos (Argentina) and Punta Arenas (Chile) lie northeast and southwest of Pali-Aike respectively. Unusually for Argentine volcanoes, the Pali-Aike vents are close to urban areas: the closest vent is only about 23–30 kilometres (14–19 mi) from Rio Gallegos, and the vents are easily observed from the city. The Monte Aymond border pass and the Paso Integración Austral border crossing lie next to the volcanic field, and Argentine National Route 3 passes through it. On the Chilean side there are hiking trails.

## Geography and structure

### Local

The Pali-Aike volcanic field covers a surface area of 4,500 square kilometres (1,700 square miles), and extends over 150 kilometres (93 mi) from northwest to southeast. It is formed by a plateau of lava flows that is up to 120 metres (390 ft) thick (in its northwestern reach), with an average relief of 20–100 metres (66–328 ft). The plateau consists of tables containing depressions and lakes, whose margins are steep-dipping slopes that accumulate blocks at their feet. It includes remnants of individual volcanic centres, and some volcanic necks situated in the west–central part of the field may be the formerly underground components of now-eroded volcanic edifices.
Among these volcanic necks are the Cuadrado, Domeyko, Gay and Philippi hills, which conspicuously stick out of the surrounding plains. The volcanic rocks were emplaced atop Tertiary-age sediments, which were smoothened by glacial action. The sediments are often unstable and prone to mass wasting and landslides. There are 467 volcanic vents in the field. Monogenetic volcanoes are emplaced on the lava plateau at elevations of 110–180 metres (360–590 ft) above sea level and include maars, tuff rings and scoria cones. These various centres rise between 20–160 metres (66–525 ft) above the surrounding terrain. Nested craters, breached craters and fissure vents are common among the various vents, as are lava flows, but there has been little research on the scoria cones. Lava flows embedded in valleys reach lengths of 8 kilometres (5 mi). Pyroclastic cones in Pali-Aike include Aymond, Colorado, Dinero, Fell and Negro. The vent Cerro del Diablo, a pyroclastic cone, is the youngest volcano in the field and has emitted both ʻaʻā and pahoehoe lava, which have a fresh appearance and no soil cover. The vents were origins of lava flows, which sometimes breached the vents. Some flows are older and covered with soil while younger ones are not. Such young lava flows also have surface features including lava tunnels, hornitos, tumuli and a wrinkled surface. Some of these are heavily eroded while the southeastern part of the field features fresh-looking centres, where they form the "Basaltos del Diablo". The individual volcanoes are subdivided into three groups, which are referred to as "U1" (the plateau lavas), "U2" (the older centres) and "U3" (for the more recent vents). Maars are depressions in the ground which are encircled by a ring of sediment that rises above the surrounding terrain; they typically form where frozen or liquid water interacts with rising magma and causes explosions. In Pali-Aike there are about 100 of them, with diameters ranging from 500 metres (1,600 ft) to about 4,000 metres (13,000 ft), and they make up the characteristic topography of the volcanic field. The periglacial ground is rich in ice and water, which might explain why there are so many maars in Pali-Aike. Notable among these lakes is Laguna Azul, a crater lake which is located within a pyroclastic ring at the side of a scoria cone. This maar formed during three stages in three separate craters and is also the source of a lava flow. Potrok Aike in comparison is much larger (crater diameter of 5 kilometres (3.1 mi)); its rim is barely recognizable and appears to be more akin to a maar. Additional maars in the southwestern part of the field are the so-called "West Maar" and "East Maar", which contain the lakes Laguna Salsa and Laguna del Ruido respectively, Bismarck, Carlota, Los Flamencos, Laguna Salida/Laguna Ana and Timone Lake. A number of vents form various alignments, usually along northwest–southeast and east–northeast–west–southwest lines; some older centres show a north–south pattern. Such alignments occur when local lineations act as a pathway for magma to ascend to the crust and control not only the position of the vents, but also the shape of the volcanoes forming on top of the vents. These lines match the strike of the Magallanes-Fagnano fault zone and the older Patagonian Austral Rift. Faults within the field have been active in the Tertiary and into the Holocene, and a graben in the southwestern part of the field has diverted lava flows. 
The Gallegos River passes north of the volcanic field, while its tributary Rio Chico crosses the volcanic field from southwest to northeast. The terrain of the field is highly permeable to water, which feeds wetlands that attract a number of birds and springs that are used as a source of water. Maars are not the only water bodies within the field; lakes formed by lava dams, glacial lakes and lakes formed by wind deflation also exist. Some of these water bodies dry up late in summer, allowing wind to remove sediments from their lakebeds, which thus become the origin of long dune fields, or windstreaks. Active growth of such windstreaks has been observed in Pali-Aike; windstreaks are an uncommon occurrence on Earth and are much more common on Mars.

### Regional

Pali-Aike is part of the Patagonian back-arc, a province of plateau lavas of Cenozoic age. These plateau lavas are of alkaline to tholeiitic composition; hawaiite, trachyandesite and trachyte are present in smaller amounts. From south to north these plateau lavas include Pali-Aike itself, Meseta Vizcachas, Meseta de la Muerte, Gran Meseta Central, Meseta Buenos Aires, Cerro Pedrero, Meseta de Somuncura, Pino Hachado and Buta Ranquil. Their activity began 16 million years ago, when the Chile Ridge collided with the Peru–Chile Trench and thus caused a tear in the subducting slab and the formation of a slab window beneath Patagonia. Another theory is that slab rollback might instead be the mechanism by which volcanism is triggered in the Pali-Aike region. The age trends of volcanism have been interpreted as indicating either a southward migration or a northeastward one in the case of the plateau lavas, following the movement of the triple junction to the north; in that case Pali-Aike would be an exception, probably due to local tectonic effects. However, some older plateau lavas in the north formed in response to an earlier ridge subduction event in the Eocene and Palaeocene.

The actual Andean volcanic arc is located 300 kilometres (190 mi) west of Pali-Aike, in the form of the Austral Volcanic Zone, a chain of stratovolcanoes and one volcanic field (Fueguino), which is South America's southernmost volcano. The Camusu Aike volcanic field, dated at 2.5–2.9 million years old, lies 200 kilometres (120 mi) northwest and the Morro Chico volcano about 50 kilometres (31 mi) west of Pali-Aike.

## Geology

At the southern end of South America, the Antarctic Plate subducts beneath South America at a rate of 2 centimetres per year (0.79 in/year) in the Peru–Chile Trench. This subduction process has caused adakitic volcanism on the western margin of southernmost South America, forming the Austral Volcanic Zone. Patagonia is a region where four tectonic plates, the Antarctic Plate, the Nazca Plate, the Scotia Plate and the South America Plate, interact. Starting 4 million years ago the Chile Ridge collided with the Peru–Chile Trench. This collision originally occurred west of Tierra del Fuego, but has since moved northward towards the Taitao Peninsula. Farther south the interaction between the Scotia and South America plates gave rise to the Deseado and Magallanes-Fagnano faults.

### Composition

The Pali-Aike volcanic field is mainly made up of alkali basalt and basanite, which form a sodium-rich alkaline suite; nephelinite has been reported and hawaiite is rare. The most important phenocrystic phase is olivine, which also appears as xenocrysts; other minerals include clinopyroxene, diopside and plagioclase.
The groundmass has a similar composition with the addition of augite, feldspar and magnetite and occasionally ilmenite and nepheline. Pali-Aike rocks typically feature ultramafic xenoliths containing augite, dunite, eclogite, garnet, harzburgite, lherzolite, peridotite, phlogopite, pyroxenite, spinel and wehrlite. The composition of these xenoliths indicates that they originated from both the crust and the mantle. In addition, rocks from Pali-Aike contain inclusions of fluids consisting of carbon dioxide. Elemental composition is typical for alkaline intraplate basalts.

The geochemistry of Pali-Aike rocks has been interpreted as originating from the melting of peridotite in the mantle along with fractionation of olivine and with residual garnet; there is no trace of geochemical influence of the adjacent Andean Volcanic Belt and the associated subduction zone. An older oceanic lithosphere emplaced in the area during the Proterozoic–Palaeozoic is also involved in magma genesis. The various isotope ratios are typical for so-called "cratonic" Patagonian back-arc basalts that are remote from the Andean Volcanic Belt and resemble ocean island basalts; a role of the Bouvet hotspot of the Atlantic in generating them has been discussed.

### Geologic record

The basement beneath Pali-Aike contains the Magallanes Basin of Jurassic age, which formed during the breakup of Gondwana and was later filled by volcanic and sedimentary rocks. The mantle underneath Pali-Aike is up to 2.5 billion years old. The partly Neoproterozoic Deseado Massif lies north of Pali-Aike and may extend beneath the field to Tierra del Fuego; there is no evidence that a Precambrian basement exists in the Pali-Aike area. During the Oligocene a marine transgression deposited the Patagonia Formation, and during the Miocene fluvial sediments formed the Santa Cruz Formation. Sedimentation ceased in the region 14 million years ago, probably because by that time the rain shadow of the Andes was effective in the area. At that time, the Chile Ridge first collided with the Peru–Chile Trench west of Tierra del Fuego; since then the collision zone has migrated north to the Taitao Peninsula off western Chile.

Moraines occur west and south of the volcanic field. The Pali-Aike area was glaciated during the middle Pleistocene, and glaciers eroded contemporary lava flows. In part on the basis of the dates of these lava flows, it was established that the older and larger glaciation (Bella Vista Glaciation) occurred between 1.17 and 1.02 million years ago. The last glaciation (Cabo Vírgenes, Río Ciaike and Telken VI-I) was less extensive but reached the Atlantic Ocean at times. This glaciation ended before 760,000 years ago; there is no evidence of glaciers from the Last Glacial Maximum (Llanquihue glaciation) in the area.

### Cause of volcanism

The origin of oceanic-type magmas close to plate boundaries, which occur in other places of the world as well, is usually attributed to slab-dependent processes. The most important among these is the formation of slab windows (gaps in the downgoing plate which allow asthenosphere to ascend) when spreading ridges collide with subduction zones. The slab window generated by the Chile Ridge's subduction passed at the latitudes of Pali-Aike about 4.5 million years ago; volcanic activity commenced soon afterwards but the time difference was enough for any subduction-influenced mantle to be displaced by fresher mantle moving through the window, which is the main source of the Pali-Aike volcanic rocks.
Between eight and six million years ago, a change in the motion of the South America Plate relative to the Scotia Plate caused the onset of a stretching tectonic regime in the Pali-Aike area, thus allowing the ascent of magmas. The large amounts of xenoliths and the primitive character of the magmas suggest that, once formed, the magmas rose very quickly through the crust to the surface.

## Eruptive history

Volcanic activity at Pali-Aike spans the late Pliocene to Holocene and has been subdivided into the three units U1, U2 and U3. The oldest U1 unit consists of basaltic plateaus, while U2 and U3 are individual vents with accompanying lava flows. An additional Miocene volcanic stage ("Basaltos Bella Vista") crops out at the northwestern end of the volcanic field and is heavily eroded. There is no evidence of a systematic migration of vent sites. Potassium–argon dating has yielded ages of between 3.78 and 0.17 million years ago. The age of Potrok Aike is not known with certainty but its minimum age on the basis of sediment core data is 240,000 years before present.

The youngest vent is Diablo Negro-La Morada del Diablo along the Chile-Argentina border, which covered an area of 100 square kilometres (39 sq mi) with lava. Volcanic deposits have covered archaeological artifacts at the Pali-Aike Cave, indicating volcanic activity between 10,000 and 5,000 years before present and within the last 15,000 years; the Global Volcanism Program mentions a 5,550 ± 2,500 BCE eruption. Sediment cores from Laguna Azul give an approximate age of 3,400 years before present, suggesting that this vent formed during the late Holocene. Tephra deposits in the region may have originated at Pali-Aike. A 2016 study rated the volcanic field as the 18th most dangerous of Argentina's 38 volcanoes.

## Climate, vegetation and fauna

The climate in the region is windy and cold, with mild winters owing to the oceanic influence, and dry, bordering on semi-desert, with precipitation ranging between 150 and 300 millimetres per year (5.9 and 11.8 in/year). These patterns are due to the closeness of Antarctica, the cold Humboldt and Falklands ocean currents, and the rain shadow of the Andes. Some maars and craters in Pali-Aike, such as Laguna Azul, Potrok Aike and the Magallanes Maar, have been used for palaeoclimatological research in the form of sediment core analysis.

The regional vegetation is grassland and shrubs. The dominant grass species is Festuca gracillima, although Festuca pallescens has been described as the dominant species in the wetter west. Festuca is accompanied by bushes of Chiliotrichum diffusum and red crowberry in the wetter regions and by bushes of Nardophyllum bryoides and Nassauvia ulicina in the drier regions. Various herbs and dicots complete the regional flora. The highly permeable basalts intercept precipitation, forming active aquifers that feed into wetlands.

Animal species present in the Chilean national park include armadillos, gray foxes, guanacos, Humboldt's hog-nosed skunks, pumas and red foxes. Bird species include Chloephaga and Theristicus species, black-chested buzzard-eagles, cinereous harriers, crested caracaras, harriers, kestrels, peregrine falcons, rheas and southern lapwings, as well as aquatic birds like Calidris species, Coscoroba swans, flamingos, two-banded plovers, yellow-billed pintails and yellow-billed teals. Palaeorecords indicate that ecological conditions in the wider region varied both from place to place and over the last 50,000 years.
Caves have yielded fossils of animals that lived there during the Holocene and Pleistocene, such as big cats and ground sloths, although the former fauna of the region is poorly studied. Since the arrival of Europeans in the late 19th century, invasive European weeds and sheep farming have altered the regional ecosystem.

## Archaeology and human history

Early humans have inhabited the Pali-Aike region since about 10,000 years ago, occupying various caves such as Fell Cave, Pali-Aike Cave, Condor 1, Cueva del Puma, Las Buitreras and Orejas de Burro, as well as non-cave sites such as Laguna Thomas Gould. Human use of Fell Cave goes back at least 8,000 years, and the human presence at Pali-Aike is among the oldest attested in Patagonia. Archaeological research in the volcanic field began in the 1930s.

Prehistoric human activity was concentrated in the southern, wetter sector of the volcanic field. The lakes and the volcanic landscape offered a reliable supply of water and refuge to these people, drawing them to the volcanic field; in turn, they might have settled the rest of the wider region starting from Pali-Aike. They left behind archaeological sites, petroglyphs, rock carvings and stone tools; even some ancient burials have been found. The volcanic field was a source of volcanic rocks such as obsidian for the manufacture of artifacts but, perhaps because of the low quality of the rocks, these saw only limited use. Weathered volcanic rocks from the Pali-Aike volcanic field were used as red pigments.

Today, sheep are farmed in the volcanic field. On the Chilean side, the Pali-Aike volcanic field is part of the Pali-Aike National Park, and a few volcanic centres have been investigated as possible geosites. Laguna Azul is already a provincial geosite and tourism target. The Pali-Aike National Park was created on the Chilean side in 1970, and the Laguna Azul Provincial Reserve, which encompasses Laguna Azul, on the Argentine side in 2005.

## See also

- Carrán-Los Venados, another volcanic field with very large maars in Chile
2,103,725
Charing Cross, Euston and Hampstead Railway
1,169,237,375
Underground railway company in London
[ "Predecessor companies of the London Underground", "Railway companies disestablished in 1933", "Railway companies established in 1891", "Railway lines opened in 1907", "Transport in the City of Westminster", "Transport in the London Borough of Barnet", "Transport in the London Borough of Camden", "Transport in the London Borough of Lambeth", "Transport in the London Borough of Southwark", "Underground Electric Railways Company of London" ]
The Charing Cross, Euston and Hampstead Railway (CCE&HR), also known as the Hampstead Tube, was a railway company established in 1891 that constructed a deep-level underground "tube" railway in London. Construction of the CCE&HR was delayed for more than a decade while funding was sought. In 1900 it became a subsidiary of the Underground Electric Railways Company of London (UERL), controlled by American financier Charles Yerkes. The UERL quickly raised the funds, mainly from foreign investors. Various routes were planned, but a number of these were rejected by Parliament. Plans for tunnels under Hampstead Heath were authorised, despite opposition by many local residents who believed they would damage the ecology of the Heath.

When opened in 1907, the CCE&HR's line served 16 stations and ran for 7.67 miles (12.34 km) in a pair of tunnels between its southern terminus at Charing Cross and its two northern termini at Archway and Golders Green. Extensions in 1914 and the mid-1920s took the railway to Edgware and under the River Thames to Kennington, serving 23 stations over a distance of 14.19 miles (22.84 km). In the 1920s the route was connected to another of London's deep-level tube railways, the City and South London Railway (C&SLR), and services on the two lines were merged into a single London Underground line, eventually called the Northern line.

Within the first year of opening, it became apparent to the management and investors that the estimated passenger numbers for the CCE&HR and the other UERL lines had been over-optimistic. Despite improved integration and cooperation with the other tube railways, and the later extensions, the CCE&HR struggled financially. In 1933 the CCE&HR and the rest of the UERL were taken into public ownership. Today, the CCE&HR's tunnels and stations form the Northern line's Charing Cross branch from Kennington to Camden Town, the Edgware branch from Camden Town to Edgware, and the High Barnet branch from Camden Town to Archway.

## Establishment

### Origin, 1891–1893

In November 1891, notice was given of a private bill that would be presented to Parliament for the construction of the Hampstead, St Pancras & Charing Cross Railway (HStP&CCR). The railway was planned to run entirely underground from Heath Street, Hampstead to Strand in Charing Cross. The route was to run beneath Hampstead High Street, Rosslyn Hill, Haverstock Hill and Chalk Farm Road to Camden Town and then under Camden High Street and Hampstead Road to Euston Road. The route then continued south, following Tottenham Court Road, Charing Cross Road and King William Street (now William IV Street) to Agar Street adjacent to Strand. North of Euston Road, a branch was to run eastwards from the main alignment under Drummond Street to serve the main line stations at Euston, St Pancras and King's Cross. Stations were planned at Hampstead, Belsize Park, Chalk Farm, Camden Town, Seymour Street (now part of Eversholt Street), Euston Road, Tottenham Court Road, Oxford Street, Agar Street, Euston and King's Cross.

Although a decision had not been made between cable haulage and electric traction as the means of pulling the trains, a power station was planned on Chalk Farm Road close to the London and North Western Railway's Chalk Farm station (later renamed Primrose Hill) which had a coal depot for deliveries. The promoters of the HStP&CCR were inspired by the recent success of the City and South London Railway (C&SLR), the world's first deep-tube railway.
This had opened in November 1890 and had seen large passenger numbers in its first year of operation. Bills for three similarly inspired new underground railways were also submitted to Parliament for the 1892 legislative session, and, to ensure a consistent approach, a Joint Select Committee was established to review the proposals. The committee took evidence on various matters regarding the construction and operation of deep-tube railways, and made recommendations on the diameter of tube tunnels, method of traction, and the granting of wayleaves. After preventing the construction of the branch beyond Euston, the Committee allowed the HStP&CCR bill to proceed for normal parliamentary consideration. The rest of the route was approved and, following a change of the company name, the bill received royal assent on 24 August 1893 as the Charing Cross, Euston, and Hampstead Railway Act, 1893. ### Search for financing, 1893–1903 Although the company had permission to construct the railway, it still had to raise the capital for the construction works. The CCE&HR was not alone; four other new tube railway companies were looking for investors – the Baker Street & Waterloo Railway (BS&WR), the Waterloo & City Railway (W&CR) and the Great Northern & City Railway (GN&CR) (the three other companies that put forward bills in 1892) and the Central London Railway (CLR, which had received assent in 1891). Only the W&CR, which was the shortest line and was backed by the London and South Western Railway with a guaranteed dividend, was able to raise its funds without difficulty. For the CCE&HR and the rest, much of the remainder of the decade saw a struggle to find investors in an uninterested market. A share offer in April 1894 had been unsuccessful, and by December 1899 only 451 of the company's 177,600 £10 shares had been part-sold, to eight investors. Like most legislation of its kind, the act of 1893 imposed a time limit for the compulsory purchase of land and the raising of capital. To keep the powers granted by the act alive, the CCE&HR submitted a series of further bills to Parliament for extensions of time. Extensions were granted by the Charing Cross, Euston and Hampstead Railway Acts, 1897, 1898, 1900, and 1902. A contractor was appointed in 1897, but funds were not available and no work was started. In 1900, foreign investors came to the rescue of the CCE&HR: American financier Charles Yerkes, who had been lucratively involved in the development of Chicago's tramway system in the 1880s and 1890s, saw the opportunity to make similar investments in London. Starting with the purchase of the CCE&HR in September 1900 for £100,000, he and his backers purchased a number of the unbuilt tube railways, and the operational but struggling Metropolitan District Railway (MDR). With the CCE&HR and the other companies under his control, Yerkes established the UERL to raise funds to build the tube railways and to electrify the steam-operated MDR. The UERL was capitalised at £5 million, with the majority of shares sold to overseas investors. Further share issues followed, which raised a total of £18 million to be used across all of the UERL's projects. ### Deciding the route, 1893–1903 While the CCE&HR raised money, it continued to develop the plans for its route. On 24 November 1894, a bill was announced to purchase additional land for stations at Charing Cross, Oxford Street, Euston and Camden Town. 
This was approved as the Charing Cross, Euston and Hampstead Railway Act, 1895 on 20 July 1895. On 23 November 1897, a bill was announced to change the route of the line at its southern end to terminate under Craven Street on the south side of Strand. This was enacted as the Charing Cross, Euston and Hampstead Railway Act, 1898 on 25 July 1898. On 22 November 1898, the CCE&HR published another bill to add an extension and to modify part of the route. The extension was a branch from Camden Town to Kentish Town where a new terminus was planned as an interchange with the Midland Railway's Kentish Town station. Beyond the terminus, the CCE&HR line was to come to the surface for a depot on vacant land to the east of Highgate Road (occupied today by the Ingestre Road Estate). The modification changed the Euston branch by extending it northwards from Euston to connect to the main route at the south end of Camden High Street. The section of the main route between the two ends of the loop was omitted. Included in the bill were powers to purchase a site in Cranbourn Street for an additional station (Leicester Square). It received royal assent as the Charing Cross, Euston and Hampstead Railway Act, 1899 on 9 August 1899. On 23 November 1900, the CCE&HR announced its most wide-ranging modifications to the route. Two bills were submitted to Parliament, referred to as No. 1 and No. 2. Bill No. 1 proposed the continuation of the railway north from Hampstead to Golders Green, the purchase of land and properties for stations and the construction of a depot at Golders Green. Also proposed were minor adjustments to route alignments previously approved. Bill No. 2 proposed two extensions: from Kentish Town to Brecknock Road, Archway Tavern, Archway Road and Highgate in the north and from Charing Cross to Parliament Square, Artillery Row and Victoria station in the south. The extension to Golders Green would take the railway out of the urban and suburban areas and into open farmland. While this provided a convenient site for the CCE&HR's depot, it is believed that underlying the decision was Yerkes' plan to profit from the sale of development land previously purchased in the area, which would rise in value when the railway opened. The CCE&HR's two bills were submitted to Parliament at the same time as a large number of other bills for underground railways in the capital. As it had done in 1892, Parliament established a joint committee under Lord Windsor to review the bills. By the time the committee had produced its report, the parliamentary session was almost over and the promoters of the bills were asked to resubmit them for the following 1902 session. Bills No. 1 and No. 2 were resubmitted in November 1901 together with a new bill – bill No. 3. The new bill modified the route of the proposed extension to Golders Green and added a short extension running beneath Charing Cross main line station to the Victoria Embankment, where it would provide an interchange with the existing MDR station (then called Charing Cross). The bills were again examined by a joint committee, this time under Lord Ribblesdale. The sections which dealt with the proposed north-eastern extension from Archway Tavern to Highgate and the southern extension from Charing Cross to Victoria were deemed not to comply with parliamentary standing orders and were struck out. #### Hampstead Heath controversy A controversial element of the CCE&HR's plans was the extension of the railway to Golders Green. 
The route of the tube tunnels took the line under Hampstead Heath, and strong opposition was raised by those concerned about the effect that the tunnels would have on the ecology of the Heath. The Hampstead Heath Protection Society claimed that the tunnels would drain the sub-soil of water and that the vibration of passing trains would damage trees. Taking its lead from the Society's objections, The Times published an alarmist article on 25 December 1900 claiming that "a great tube laid under the heath will, of course, act as a drain; and it is quite likely that the grass and gorse and trees on the Heath will suffer from the loss of moisture ... Moreover, it seems to be established beyond question that the trains passing along these deep-laid tubes shake the earth to its surface, and the constant jar and quiver will probably have a serious effect upon the trees by loosening their roots." In fact, the tunnels were to be excavated at a depth of more than 200 feet (61 m) below the surface, the deepest of any on the London Underground. In his presentation to the joint committee, the CCE&HR's counsel disparagingly dismissed the objections: "Just see what an absurd thing! Disturbance of the water when we are 240 feet down in the London clay – about the most impervious thing you can possibly find; almost more impervious than granite rock! And the vibration on this railway is to shake down timber trees! Could anything be more ludicrous than to waste the time of the Committee in discussing such things presented by such a body!" A second railway company, the Edgware & Hampstead Railway (E&HR), also had a bill before Parliament which proposed tunnels beneath the Heath as part of its planned route between Edgware and Hampstead. The E&HR had planned to connect to the CCE&HR at Hampstead but, to avoid the needless duplication of tunnels between Golders Green and Hampstead, the two companies agreed that the E&HR would instead connect to the CCE&HR at Golders Green. The Metropolitan Borough of Hampstead had initially objected to the line but gave consent on the condition that a station be constructed between Hampstead and Golders Green to provide access for visitors to the Heath. A new station was added to the plans at the northern edge of the Heath at North End, where it could also serve a new residential development planned for the area. Once Parliament was satisfied that the extension would not damage the Heath, the CCE&HR bills jointly received royal assent on 18 November 1902 as the Charing Cross, Euston and Hampstead Railway Act, 1902. On the same date, the E&HR bill received its assent as the Edgware and Hampstead Railway Act, 1902. ### Construction, 1902–1907 With the funds available from the UERL and the route decided, the CCE&HR started site demolitions and preparatory works in July 1902. On 21 November 1902, the CCE&HR published another bill which sought compulsory purchase powers for additional buildings for its station sites, planned the takeover of the E&HR and abandoned the permitted but redundant section of the line from Kentish Town to the proposed depot site near Highgate Road. This bill was approved as the Charing Cross, Euston and Hampstead Railway Act, 1903 on 21 July 1903. Tunnelling began in September 1903. Stations were provided with surface buildings designed by architect Leslie Green in the UERL house style. This consisted of two-storey steel-framed buildings faced with red glazed terracotta blocks with wide semi-circular windows on the upper floor. 
Each station was provided with two or four lifts and an emergency spiral staircase in a separate shaft. While construction proceeded, the CCE&HR continued to submit bills to Parliament. The Charing Cross, Euston and Hampstead Railway Act, 1904, which received assent on 22 July 1904, granted permission to buy additional land for the station at Tottenham Court Road, for a new station at Mornington Crescent and for changes at Charing Cross. The Charing Cross, Euston and Hampstead Railway Act, 1905 received assent on 4 August 1905. It dealt mainly with the acquisition of the subsoil under part of the forecourt of the South Eastern Railway's Charing Cross station so that the CCE&HR's station could be excavated beneath it; the work was carried out during the three-month closure that followed the collapse of the main line station's roof in December 1905. The sale of the building land at North End to conservationists to form the Hampstead Heath extension in 1904 meant a reduction in the number of residents who might use the station there. Work continued below ground at a reduced pace, and the platform tunnels and some passenger circulation tunnels were excavated, but North End station was abandoned in 1906 before the lift and stair shafts were dug and before a surface building was constructed. Tunnelling was completed in December 1905, after which work continued on the construction of the station buildings and the fitting-out of the tunnels with tracks and signalling equipment. As part of the UERL group, the CCE&HR obtained its electricity from the company's Lots Road Power Station, originally built for the electrification of the MDR; the proposed Chalk Farm generating station was not built. The final section of the approved route between Charing Cross and the Embankment was not constructed, and the southern terminus on opening was Charing Cross. After a period of test running, the railway was ready to open in 1907. ## Opening The CCE&HR was the last of the UERL's three tube railways to open and was advertised as the "Last Link". The official opening on 22 June 1907 was made by David Lloyd George, President of the Board of Trade, after which the public travelled free for the rest of the day. From its opening, the CCE&HR was generally known by the abbreviated names Hampstead Tube or Hampstead Railway, and these names appeared on the station buildings and on contemporary maps of the tube lines. The railway had stations at: - Charing Cross - Leicester Square - Oxford Street (now Tottenham Court Road) - Tottenham Court Road (now Goodge Street) - Euston Road (now Warren Street) - Euston - Mornington Crescent - Camden Town Golders Green branch - Chalk Farm - Belsize Park - Hampstead - Golders Green Highgate branch - South Kentish Town (closed 1924) - Kentish Town - Tufnell Park - Highgate (now Archway) The service was provided by a fleet of carriages manufactured for the UERL by the American Car and Foundry Company and assembled at Trafford Park in Manchester. These carriages were built to the same design used for the BS&WR and the Great Northern, Piccadilly and Brompton Railway (GNP&BR) and operated as electric multiple unit trains without the need for separate locomotives. Passengers boarded the trains via folding lattice gates at each end of the cars; the gates were operated by gatemen who rode on the outside platform and announced station names as trains arrived. The design became known on the Underground as the 1906 stock or Gate stock. 
## Co-operation and consolidation, 1907–1910 Despite the UERL's success in financing and constructing the Hampstead Railway in only seven years, its opening was not the financial success that had been expected. In the Hampstead Tube's first twelve months of operation it carried 25 million passengers, just half of the 50 million that had been predicted during the planning of the line. The UERL's pre-opening predictions of passenger numbers for its other new lines proved to be greatly over-optimistic, as did the improvement in passenger numbers expected on the newly electrified MDR – in each case achieving only around fifty per cent of their targets. The lower than expected passenger numbers were partly due to competition between the tube and sub-surface railway companies, but the introduction of electric trams and motor buses, replacing slower, horse-drawn road transport, took a large number of passengers away from the trains. The problem was not limited to the UERL; all of London's seven tube lines and the sub-surface MDR and Metropolitan Railway were affected to a degree, and the reduced revenues generated from the lower numbers of passengers made it difficult for the UERL and the other railways to pay back the capital borrowed and pay dividends to shareholders. In an effort to improve the financial situation, the UERL together with the C&SLR, the CLR and the GN&CR began, from 1907, to introduce fare agreements. From 1908, they began to present themselves through common branding as the Underground. The W&CR was the only tube railway that did not participate in the arrangement, as it was owned by the mainline London and South Western Railway. The UERL's three tube railway companies were still legally separate entities with their own management and shareholder and dividend structures. There was duplicated administration between the three companies and, to streamline the management and reduce expenditure, the UERL announced a bill in November 1909 that would merge the Hampstead Tube, the Piccadilly Tube and the Bakerloo Tube into a single entity, the London Electric Railway (LER), although the lines retained their own individual branding. The bill received assent on 26 July 1910 as the London Electric Railway Amalgamation Act, 1910. ## Extensions ### Embankment, 1910–1914 In November 1910, the LER published notice of a bill to revive the unused 1902 permission to continue the line from Charing Cross to Embankment. The extension was planned as a single tunnel, running in a loop under the Thames, connecting the ends of the two existing tunnels. Trains were to run in one direction around the loop, stopping at a single-platform station constructed to provide an interchange with the BS&WR and MDR at Embankment station. The bill received assent as the London Electric Railway Act, 1911 on 2 June 1911. The loop was constructed from a large excavation north-west of the MDR station, and the new deep-level station was connected to the sub-surface station by escalators. The station opened on 6 April 1914 as: - Charing Cross (Embankment) (now Embankment) ### Hendon and Edgware, 1902–1924 In the decade after the E&HR received royal assent for its route from Edgware to Hampstead, the company continued to search for finance and revised its plans in conjunction with both the CCE&HR and a third railway company, the Watford & Edgware Railway (W&ER), which had plans to build a line linking the E&HR to Watford. 
Following the enactment of the Watford and Edgware Railway Act, 1906, the W&ER briefly took over the powers of the E&HR to construct the line from Golders Green to Edgware. Struggling to find funds, the W&ER attempted a formal merger with the E&HR through a bill submitted to Parliament in 1906, with the intention of constructing and operating the whole of the route from Golders Green to Watford as a light railway, but the bill was rejected by Parliament and, when the W&ER's powers lapsed, control returned to the CCE&HR. The E&HR company had remained in existence and had obtained a series of acts to preserve and develop its plans. The Edgware and Hampstead Railway Acts, 1905, 1909 and 1912 granted extensions of time, approved changes to the route, gave permissions for viaducts and a tunnel and allowed the closure and re-routeing of roads to be crossed by the railway's tracks. It was intended that the CCE&HR would provide and operate the trains, and this was formalised by the London Electric Railway Act, 1912, which approved the LER's takeover of the E&HR. No immediate effort was made to start the works, and they were postponed indefinitely when World War I started; with wartime restrictions in place, no construction work on the railway could be carried out. Extensions to the earlier E&HR acts were granted under special wartime powers each year from 1916 until 1922, giving a final date of 7 August 1924 by which compulsory purchases had to be made. Although the permissions had been maintained, the UERL could not raise the money needed for the works. Construction costs had increased considerably during the war years and the returns produced by the company could not cover the cost of repaying loans. The project was made possible when the government introduced the Trade Facilities Act 1921, by which the Treasury underwrote loans for public works as a means of alleviating unemployment. With this support, the UERL raised the funds and work began on extending the Hampstead tube to Edgware. The UERL group's Managing Director/Chairman, Lord Ashfield, ceremonially cut the first sod to begin the works at Golders Green on 12 June 1922. The extension crossed farmland, so it could be constructed on the surface more easily and cheaply than a deep tube line. A viaduct was constructed across the Brent valley and a short section of tunnel was required at The Hyde, Hendon. Stations were designed in a suburban pavilion style by the UERL's architect Stanley Heaps. The first section opened on 19 November 1923 with stations at: - Brent (now Brent Cross) - Hendon Central The remainder of the extension opened on 18 August 1924 with stations at: - Colindale - Burnt Oak (opened 27 October 1924) - Edgware ### Kennington, 1922–1926 On 21 November 1922, the LER announced a bill for the 1923 parliamentary session. It included the proposal to extend the line from its southern terminus to the C&SLR's station at Kennington, where an interchange would be provided. The bill received royal assent as the London Electric Railway Act, 1923 on 2 August 1923. The work involved the rebuilding of the below-ground parts of the CCE&HR's former terminus station to enable through running, and the loop tunnel was abandoned. Tunnels were extended under the Thames to Waterloo station and then to Kennington, where two additional platforms were constructed to provide the interchange with the C&SLR. Immediately south of Kennington station, the CCE&HR tunnels connected to those of the C&SLR. 
The new service was opened on 13 September 1926 to coincide with the opening of the extension of the C&SLR to Morden. The Charing Cross to Kennington link had stations at: - Waterloo - Kennington The C&SLR had been under the control of the UERL since its purchase by the group in 1913. An earlier connection between the CCE&HR and the C&SLR had been opened in 1924, linking the C&SLR's station at Euston with the CCE&HR's at Camden Town. With the opening of the Kennington extension, the two railways began to operate as an integrated service using the newly built Standard Stock trains. On tube maps the combined lines were shown in a single colour, although the separate names continued in use into the 1930s. ## Move to public ownership, 1923–1933 Despite improvements made to other parts of the network, the Underground railways were still struggling to make a profit. The UERL's ownership of the highly profitable London General Omnibus Company (LGOC) since 1912 had enabled the UERL group, through the pooling of revenues, to use profits from the bus company to subsidise the less profitable railways. However, competition from numerous small bus companies during the early 1920s eroded the profitability of the LGOC and had a negative impact on the profitability of the whole UERL group. In an effort to protect the UERL group's income, Lord Ashfield lobbied the government for regulation of transport services in the London area. Starting in 1923, a series of legislative initiatives were made in this direction, with Ashfield and Labour London County Councillor (later MP and Minister of Transport) Herbert Morrison at the forefront of debates as to the level of regulation and public control under which transport services should be brought. Ashfield aimed for regulation that would give the UERL group protection from competition and allow it to take substantive control of the London County Council's (LCC) tram system; Morrison preferred full public ownership. After seven years of false starts, a bill was announced at the end of 1930 for the formation of the London Passenger Transport Board (LPTB), a public corporation that would take control of the UERL, the Metropolitan Railway and all bus and tram operators within an area designated as the London Passenger Transport Area. The Board was a compromise – public ownership but not full nationalisation – and came into existence on 1 July 1933. On this date, the LER and the other Underground companies were liquidated. ## Legacy Finding a suitable name for the combined CCE&HR and C&SLR routes proved a challenge for the LPTB, and a number of variations were used, including Edgware, Morden & Highgate Line in 1933 and Morden-Edgware Line in 1936. In 1937, the name Northern line was adopted in anticipation of the Northern Heights extension plan, which was never completed. Today, the Northern line is the busiest on the London Underground system, carrying 206.7 million passengers annually, a level of usage which led it to be known as the Misery line during the 1990s due to overcrowding and poor reliability.
28,283,124
Suillus pungens
1,171,005,992
Species of fungus in the family Suillaceae found in California
[ "Edible fungi", "Fungi described in 1964", "Fungi of California", "Fungi without expected TNC conservation status", "Suillus", "Taxa named by Alexander H. Smith" ]
Suillus pungens, commonly known as the pungent slippery jack or the pungent suillus, is a species of fungus in the genus Suillus. The fruit bodies of the fungus have slimy convex caps up to 14 cm (5.5 in) wide. The mushroom is characterized by the very distinct color changes that occur in the cap throughout development. Typically, the young cap is whitish, later becoming grayish-olive to reddish-brown or a mottled combination of these colors. The mushroom has a dotted stem (stipe) up to 7 cm (2.8 in) long and 2 cm (0.8 in) thick. On the underside of the cap is the spore-bearing tissue, consisting of minute vertically arranged tubes that appear as a surface of angular, yellowish pores. The presence of milky droplets on the pore surface of young individuals, especially in humid environments, is a characteristic feature of this species. S. pungens can usually be distinguished from other similar Suillus species by differences in distribution, odor and taste. The mushroom is considered edible, but not highly regarded. An ectomycorrhizal species, S. pungens forms an intimate mutualistic relationship between its underground mycelium and the young roots of the associated host tree. The fungus—limited in distribution to California—fruits almost exclusively with Monterey and bishop pine, two trees with small and scattered natural ranges concentrated on the West Coast of the United States. Several studies have investigated the role of S. pungens in the coastal Californian forest ecosystem it occupies. Although the species produces more fruit bodies than other competing ectomycorrhizal fungi in the same location, it is not a dominant root colonizer, and occupies only a small percentage of ectomycorrhizal root tips. The fungus's propensity to fruit prolifically despite minimal root colonization is a result of its ability to efficiently transfer nutrients from its host for its own use. ## Taxonomy, classification, and phylogeny The fungus was first described scientifically by American mycologists Harry D. Thiers and Alexander H. Smith in their 1964 monograph on North American Suillus species. The type collection was made on the campus of San Francisco State University in San Francisco. Smith and Thiers classified S. pungens in section Suilli—a grouping of related species characterized by the presence of either a ring on the stipe, a partial veil adhering to the cap margin, or a "false veil" not attached to the stipe but initially covering the tube cavity. A 1996 molecular analysis of 38 different Suillus species used the sequences of their internal transcribed spacers to infer phylogenetic relationships and clarify the taxonomy of the genus. The results suggested that S. pungens is genetically similar to S. collinitus, S. neoalbidipes, S. pseudobrevipes, S. luteus, S. brevipes, S. weaverae, and certain isolates of S. granulatus. The specific epithet is derived from the Latin pungens, and refers to the pungent aroma of the fruit bodies. The mushroom is commonly known as the "pungent slippery jack" or the "pungent suillus". It has also been referred to as the "slippery jack", a common name applied to several Suillus species. ## Description The cap of S. pungens is roughly convex when young, becoming plano-convex (flat on one side and rounded on the other) to somewhat flat with age, and reaches diameters of 4–14 cm (1.6–5.5 in). The cap surface is sticky to slimy when moist, becoming shiny when dried. The surface is smooth but is sometimes streaked with the sticky glue-like cap slime when older. 
The cap color is highly variable in this species, and the cap is often variegated with a mixture of light and dark colors. When young it is dirty-white to olive with pale olive splotches. Maturing caps can retain the color they had when young, or become tawny to orange-yellow to reddish-brown, or a combination of these colors. The cap margin is initially rolled inward and has a cottony roll of white tissue, but becomes naked and curves downward with age. The flesh is 1–2 cm (0.4–0.8 in) thick, white and unchanging in young fruit bodies, frequently changing to yellow when older. The tubes that comprise the hymenium (spore-bearing tissue) on the underside of the cap are up to 1 cm (0.4 in) long, adnate when young, becoming decurrent or nearly so with age. In young specimens, they are whitish to pale buff, and are covered with milky droplets that become brown to ochraceous when dried. As specimens mature the color of the pore surface changes to yellowish, and finally to dark yellow. The angular pores, which are 1–1.5 mm in diameter, are not radially arranged, and do not change color when bruised. The stipe is solid (rather than hollow), 3–7 cm (1.2–2.8 in) long, and 1–2 cm (0.4–0.8 in) thick near the top. Its shape is variable: either roughly equal in width throughout, thicker at the base, or somewhat thicker in the middle. Its surface is dry and smooth, and covered with irregularly shaped glandular dots. The dots—minute clumps of pigmented cells—are initially reddish before becoming brownish. The background color of the stipe is initially whitish (roughly the same color as the tubes), but becomes more yellow with age. It does not change color when bruised, and does not have a ring. The flesh of the stipe is white, and does not change color when exposed to air. The spore print is olive-brown to pale cinnamon-brown. Individual spores are thin-walled, hyaline (translucent), and smooth. Their shape is ellipsoid to roughly cylindrical in face view or inequilateral when viewed in profile, and they measure 9.5–10 by 2.8–3.5 μm. The basidia (spore-bearing cells of the hymenium) are hyaline, club-shaped and four-spored, with dimensions of 33–36 by 8–10 μm. The thin-walled cystidia are rare to scattered on the tube surface but abundant on the pores, where they usually occur in massive clusters. They appear dark brown when mounted in a dilute (3%) solution of potassium hydroxide (KOH), and are cylindric to roughly club-shaped, measuring 43–79 by 7–10 μm. They are usually encrusted with pigment, although some may be hyaline. The tissue comprising the tube is hyaline, and made of divergent to nearly parallel hyphae that are 3–5 μm wide. The pileipellis is a tissue type known as an ixotrichodermium (made of interwoven gelatinized hyphae); it stains brown in KOH, and is made of hyphae that are 4–5 μm wide. The stipe cuticle is made of clusters of cystidia similar to those found in the hymenium. Clamp connections are absent in the hyphae of S. pungens. Several chemical tests can be employed in the field to aid in the identification of S. pungens. With an application of a drop of KOH, the flesh will turn vinaceous (the color of red wine), the tubes red, the cap cuticle black, and the stipe cuticle pale vinaceous. With ammonium hydroxide (NH<sub>4</sub>OH), the flesh becomes very pale vinaceous, and the tubes turn bright red. Iron(II) sulfate (FeSO<sub>4</sub>) turns the flesh gray, the tubes dark gray to black, and the stipe cuticle light gray. ### Edibility The mushroom is considered edible, but not choice. 
Its taste is harsh, nauseating, and weakly acidic; the odor is strong and ranges from pleasant, resembling bananas, to pungent. When collecting for the table, young specimens are preferred, as older ones "literally seethe with fat, agitated maggots and sag with so much excess moisture that they practically demand to be wrung out like a sponge!" Michael Kuo's 100 Edible Mushrooms (2007) rates the mushroom's edibility as "bad" and warns that dishes cooked with the mushroom will assume an unpleasant taste. ### Similar species Suillus pungens is characterized by the very distinct color changes that occur in the cap as it develops. The range of color variation makes it possible to confuse the species with others whose colors overlap. Suillus pungens has been misidentified as S. placidus because of the white color of the young fruit bodies and the droplets of exudate. S. placidus has a wider distribution, is usually found in association with eastern white pine, is generally smaller, with a maximum cap diameter up to 9 cm (3.5 in), and has smaller spores, measuring 7–9 by 2.5–3.2 μm. It does not have any distinctive taste or odor. North American S. granulatus is another potential lookalike species, and at maturity it is nearly identical to Suillus pungens. The cap of S. granulatus is variable in color, ranging from pale yellow to various shades of brown, while the pore surface is initially whitish, later becoming yellowish; as with S. placidus, its typical host is eastern white pine. Unlike S. pungens, it lacks a characteristic odor and taste. The Californian species Suillus glandulosipes has veil material attached to the edge of the cap when young. It also lacks the distinctive changes in cap color during development, is associated with lodgepole pine, has smaller spores (6.6–8.8 by 2.5–3 μm), and lacks any obvious taste and odor. Another Californian species, Suillus quiescens, newly described in 2010, may resemble S. pungens, especially at maturity. S. quiescens can be distinguished by its lack of white or olive colors when young and by a less glandular stipe when mature. ## Ecology, habitat and distribution Suillus pungens is an ectomycorrhizal (EM) basidiomycete that forms symbiotic relationships almost exclusively with Monterey pine (Pinus radiata) and bishop pine (Pinus muricata); some collections have been made under knobcone pine (Pinus attenuata) and ponderosa pine (Pinus ponderosa), but only within the range of Monterey pine. All these trees have small scattered natural ranges largely restricted to California. An EM symbiosis is a mutualistic relationship between an EM fungus and the root tip of a compatible EM plant. The fruit bodies of Suillus pungens grow solitarily, scattered or in groups in humus. They are often found growing near fruit bodies of Chroogomphus vinicolor and Helvella lacunosa. Suillus pungens is often the most abundant Suillus in the San Francisco Bay Area. The type collection was made on the campus of San Francisco State University in San Francisco, where it occurs in abundance during the fall and winter seasons. Although it occurs most frequently in the autumn and winter, it is one of the few species of Suillus that continue to fruit sporadically throughout the year, especially in wet weather. It has also been identified in the southeastern Sierra Nevada and on Santa Cruz Island. A genet is a group of genetically identical individuals that have grown in a given location, all originating vegetatively from a single ancestor. 
Once established, genets vegetatively spread hyphae out from the root tip into the soil and may connect two or more trees to form a network of mycorrhizae. In field studies, the approximate size of fungal genets is typically estimated by collecting and mapping fruit bodies on a site, determining which fruit bodies are genetically identical by either somatic incompatibility (a method fungi use to distinguish self from non-self by delimiting their own mycelia from that of other individuals of the same species) or various molecular techniques, and then determining the distance between identical fruit bodies. In a 1996 study, mycologists Monique Gardes and Thomas Bruns hypothesized that S. pungens, an abundant fruiter in pine forests, would be dominant on the roots of the pine trees. However, by sampling underground ectomycorrhizae in addition to above-ground fruit bodies, they found that the fungus can fruit prolifically while occupying only a small fraction of the ectomycorrhizal root assemblage, which was otherwise dominated by Russula species and Tomentella sublilacina. Gardes and Bruns hypothesized that the disparity between above- and below-ground representation may be because the fungus invests less energy in vegetative growth and persistence and more in fruiting, or alternatively, because the species is particularly efficient at acquiring carbon from its hosts and so needs to colonize only a few rootlets to obtain enough to allow abundant fruiting. A 1998 study by Pierluigi Bonello and colleagues used single-strand conformation polymorphism analysis to detect minute genetic differences among S. pungens genets, and showed that most of the fruiting occurred from a single large genet. This result indicates that the fungus persists because of extensive vegetative growth, rather than frequent establishment of new genets from spores, and that it uses carbon resources efficiently. The study also described an S. pungens genet with an area of approximately 300 m<sup>2</sup> (3,200 sq ft) and a span greater than 40 m (130 ft) across, which was at the time the largest EM fungal genet reported. The large S. pungens genet was not detected after wildfire, demonstrating that it did not survive in the absence of a host, and suggesting that spores are the primary means by which the fungus recolonizes after a fire. ## See also - List of North American boletes
28,833
Sir Gawain and the Green Knight
1,172,160,894
14th-century Middle English chivalric romance
[ "14th-century books", "14th-century poems", "Arthurian literature in Middle English", "Cephalophores", "Cheshire in fiction", "Cotton Library", "Middle English poems", "Romance (genre)", "Works of unknown authorship" ]
Sir Gawain and the Green Knight is a late 14th-century chivalric romance in Middle English. The author is unknown; the title was given centuries later. It is one of the best-known Arthurian stories, with its plot combining two types of folk motifs: the beheading game and the exchange of winnings. Written in stanzas of alliterative verse, each of which ends in a rhyming bob and wheel, it draws on Welsh, Irish, and English stories, as well as the French chivalric tradition. It is an important example of a chivalric romance, which typically involves a hero who goes on a quest which tests his prowess. It remains popular in modern English renderings from J. R. R. Tolkien, Simon Armitage, and others, as well as through film and stage adaptations. The story describes how Sir Gawain, a knight of King Arthur's Round Table, accepts a challenge from a mysterious "Green Knight" who dares any knight to strike him with his axe if he will take a return blow in a year and a day. Gawain accepts and beheads him, at which point the Green Knight stands, picks up his head, and reminds Gawain of the appointed time. In his struggles to keep his bargain, Gawain demonstrates chivalry and loyalty until his honour is called into question by a test involving the lord and the lady of the castle where he is a guest. The poem survives in one manuscript, Cotton Nero A.x., which also includes three religious narrative poems: Pearl, Cleanness, and Patience. All four are written in a North West Midlands dialect of Middle English, and are thought to have been written by the same author, dubbed the "Pearl Poet" or "Gawain Poet". ## Synopsis In Camelot during the New Year celebrations, King Arthur's court is exchanging gifts and waiting for the feasting to start, when the king asks to see or hear of an exciting adventure. A gigantic figure, entirely green in appearance and riding a green horse, rides unexpectedly into the hall. He wears no armour but bears an axe in one hand and a holly bough in the other. Refusing to fight anyone there on the grounds that they are all too weak, he insists he has come for a friendly Christmas game: someone is to strike him once with his axe, on the condition that the Green Knight may return the blow in a year and a day. The axe will belong to whoever accepts this deal. King Arthur is prepared to accept the challenge when it appears no other knight will dare, but Sir Gawain, youngest of Arthur's knights and his nephew, asks for the honour instead. The giant bends and bares his neck before him and Gawain neatly beheads him in one stroke. However, the Green Knight neither falls nor falters, but instead reaches out, picks up his severed head, and mounts his horse. The Green Knight shows his bleeding head to Queen Guinevere, while it reminds Gawain that the two must meet again at the Green Chapel in a year and a day, before the knight rides away. Gawain and Arthur admire the axe, hang it up as a trophy, and encourage Guinevere to treat the whole matter lightly. As the date approaches, Sir Gawain leaves to find the Green Chapel and keep his part of the bargain. Many adventures and battles are alluded to but not described, until Gawain comes across a splendid castle, where he meets the lord of the castle and his beautiful wife, who are pleased to have such a renowned guest. Also present is an old and ugly lady, unnamed but treated with great honour by all. Gawain tells them of his New Year's appointment at the Green Chapel, and that he has only a few days remaining. 
The lord laughs, explaining that there is a path that will take him to the chapel less than two miles away, and proposes that Gawain rest at the castle until then. Relieved and grateful, Gawain agrees. The lord proposes a bargain to Gawain: he goes hunting every day, and he will give Gawain whatever he catches, on the condition that Gawain give him whatever he may gain during the day; Gawain accepts. After he leaves, his wife visits Gawain's bedroom and behaves seductively, but despite her best efforts he allows her nothing but a single kiss. When the lord returns and gives Gawain the deer he has killed, Gawain gives a kiss to him without divulging its source. The next day the lady returns to Gawain, who again courteously foils her advances, and later that day there is a similar exchange of a hunted boar for two kisses. She comes once more on the third morning, but once her advances are denied, she offers Gawain a gold ring as a keepsake. He gently but steadfastly refuses, but she pleads that he at least take her sash, a girdle of green and gold silk. The sash, the lady assures him, is charmed, and will keep him from all physical harm. Tempted, as he may otherwise die the next day, Gawain accepts it, and they exchange three kisses. The lady has Gawain swear that he will keep the gift secret from her husband. That evening, the lord returns with a fox, which he exchanges with Gawain for the three kisses; Gawain does not mention the sash. The next day, Gawain binds the sash around his waist. Outside the Green Chapel – only an earthen mound containing a cavern – he finds the Green Knight sharpening an axe. As promised, Gawain bends his bared neck to receive his blow. At the first swing, Gawain flinches slightly and the Green Knight belittles him for it. Ashamed of himself, Gawain does not flinch with the second swing, but again, the Green Knight withholds the full force of his blow. The knight explains he was testing Gawain's nerve. Angrily, Gawain tells him to deliver his blow, and so the knight does, causing only a slight wound on Gawain's neck, and ending the game. Gawain seizes his sword, helmet, and shield, but the Green Knight, laughing, reveals himself to be none other than the lord of the castle, Bertilak de Hautdesert, transformed by magic. He explains that the entire adventure was a trick of the unnamed "elderly lady" Gawain saw at the castle, who is the sorceress Morgan le Fay, Arthur's half-sister, who intended to test Arthur's knights and frighten Guinevere to death. The nick Gawain suffered at the third stroke resulted from his attempt to conceal the gift of the sash. Gawain is ashamed to have behaved deceitfully, but the Green Knight laughs and pronounces him the most blameless knight in all the land. The two part on cordial terms. Gawain returns to Camelot wearing the sash as a token of his failure to keep his promise. The Knights of the Round Table absolve him of the blame and decide that henceforth each will wear a green sash in recognition of Gawain's adventure and as a reminder to be honest. ## "Gawain Poet" Though the real name of the "Gawain Poet" (or poets) is unknown, some inferences about them can be drawn from an informed reading of their works. The manuscript of Gawain is known in academic circles as Cotton Nero A.x., following a naming system used by one of its owners, the 16th-century Sir Robert Bruce Cotton, a collector of medieval English texts. Before the Gawain manuscript came into Cotton's possession, it was in the library of Henry Savile in Yorkshire. 
Little is known about its previous ownership, and until 1824, when the manuscript was introduced to the academic community in a second edition of Thomas Warton's History, edited by Richard Price, it was almost entirely unknown. Even then, the Gawain poem was not published in its entirety until 1839, which is when it was given its present title. Now held in the British Library, it has been dated to the late 14th century, meaning the poet was a contemporary of Geoffrey Chaucer, author of The Canterbury Tales, though it is unlikely that they ever met, and the Gawain poet's English is considerably different from Chaucer's. The three other works found in the same manuscript as Gawain (commonly known as Pearl, Patience, and Cleanness or Purity) are often considered to be written by the same author. However, the manuscript containing these poems was transcribed by a copyist and not by the original poet. Although nothing explicitly suggests that all four poems are by the same poet, comparative analysis of dialect, verse form, and diction has pointed towards single authorship. What is known today about the poet is general. J.R.R. Tolkien and E.V. Gordon, after reviewing the text's allusions, style, and themes, concluded in 1925: > He was a man of serious and devout mind, though not without humour; he had an interest in theology, and some knowledge of it, though an amateur knowledge perhaps, rather than a professional; he had Latin and French and was well enough read in French books, both romantic and instructive; but his home was in the West Midlands of England; so much his language shows, and his metre, and his scenery. The most commonly suggested candidate for authorship is John Massey of Cotton, Cheshire. He is known to have lived in the dialect region of the Gawain Poet and is thought to have written the poem St. Erkenwald, which some scholars argue bears stylistic similarities to Gawain. St. Erkenwald, however, has been dated by some scholars to a time outside the Gawain Poet's era. Thus, ascribing authorship to John Massey is still controversial and most critics consider the Gawain Poet an unknown. ## Verse form The 2,530 lines and 101 stanzas that make up Sir Gawain and the Green Knight are written in what linguists call the "Alliterative Revival" style typical of the 14th century. Instead of focusing on a metrical syllabic count and rhyme, the alliterative form of this period usually relied on the agreement of a pair of stressed syllables at the beginning of the line and another pair at the end. Each line always includes a pause, called a caesura, at some point after the first two stresses, dividing it into two half-lines. Although following the form of his day, the Gawain Poet was freer with convention than his or her predecessors. The poet broke the alliterative lines into variable-length groups and ended these nominal stanzas with a rhyming section of five lines known as the bob and wheel, in which the "bob" is a very short line, sometimes of only two syllables, followed by the "wheel," longer lines with internal rhyme. ## Similar stories The earliest known story to feature a beheading game is the 8th-century Middle Irish tale Bricriu's Feast. This story parallels Gawain in that, like the Green Knight, Cú Chulainn's antagonist feints three blows with the axe before letting his target depart without injury. 
A beheading exchange also appears in the late 12th-century Life of Caradoc, a Middle French narrative embedded in the anonymous First Continuation of Chrétien de Troyes' Perceval, the Story of the Grail. A notable difference in this story is that Caradoc's challenger is his father in disguise, come to test his honour. Lancelot is given a beheading challenge in the early 13th-century Perlesvaus, in which a knight begs him to chop off his head or else put his own in jeopardy. Lancelot reluctantly cuts it off, agreeing to come to the same place in a year to put his head in the same danger. When Lancelot arrives, the people of the town celebrate and announce that they have finally found a true knight, because many others had failed this test of chivalry. The stories The Girl with the Mule (alternately titled The Mule Without a Bridle) and Hunbaut feature Gawain in beheading game situations. In Hunbaut, Gawain cuts off a man's head and, before he can replace it, removes the magic cloak keeping the man alive, thus killing him. Several stories tell of knights who struggle to stave off the advances of women sent by their lords as a test; these stories include Yder, the Lancelot-Grail, Hunbaut, and The Knight with the Sword. The last two involve Gawain specifically. Usually, the temptress is the daughter or wife of a lord to whom the knight owes respect, and the knight is tested to see whether or not he will remain chaste in trying circumstances. In the first branch of the medieval Welsh collection of tales known as The Four Branches of the Mabinogi, Pwyll exchanges places for a year with Arawn, the lord of Annwn (the Otherworld). Despite having his appearance changed to resemble Arawn exactly, Pwyll does not have sexual relations with Arawn's wife during this time, thus establishing a lasting friendship between the two men. This story may, then, provide a background to Gawain's attempts to resist the wife of the Green Knight; thus, the story of Sir Gawain and the Green Knight may be seen as a tale which combines elements of the Celtic beheading game and seduction test stories. Additionally, in both stories a year passes before the conclusion of the challenge or exchange. Some scholars disagree with this interpretation, however, as Arawn seems to have accepted the notion that Pwyll may reciprocate with his wife, making it less of a "seduction test" per se; seduction tests typically involve a Lord and Lady conspiring to seduce a knight, seemingly against the wishes of the Lord. After the writing of Sir Gawain and the Green Knight, several similar stories followed. The Greene Knight (15th–17th century) is a rhymed retelling of nearly the same tale. In it, the plot is simplified, motives are more fully explained, and some names are changed. Another story, The Turke and Gowin (15th century), begins with a Turk entering Arthur's court and asking, "Is there any will, as a brother, To give a buffett and take another?" At the end of this poem the Turk, rather than buffeting Gawain back, asks the knight to cut off his head, which Gawain does. The Turk then praises Gawain and showers him with gifts. The Carle of Carlisle (17th century) also resembles Gawain in a scene in which the Carle (Churl), a lord, takes Sir Gawain to a chamber where two swords are hanging and orders Gawain to cut off his head or suffer his own to be cut off. Gawain obliges and strikes, but the Carle rises, laughing and unharmed. Unlike in the Gawain poem, no return blow is demanded or given. 
## Themes ### Temptation and testing At the heart of Sir Gawain and the Green Knight is the test of Gawain's adherence to the code of chivalry. The typical temptation fable of medieval literature presents a series of tribulations assembled as tests or "proofs" of moral virtue. The stories often describe several individuals' failures, after which the main character is tested. Success in the proofs will often bring immunity or good fortune. Gawain's ability to pass the tests of his host is of utmost importance to his survival, though he does not know it. It is only by fortuity or "instinctive-courtesy" that Sir Gawain can pass his test. Gawain does not realise, however, that these tests are all orchestrated by the lord, Bertilak de Hautdesert. In addition to the laws of chivalry, Gawain must respect another set of laws concerning courtly love. The knight's code of honour requires him to do whatever a damsel asks. Gawain must accept the girdle from the Lady, but he must also keep the promise he has made to his host that he will give whatever he gains that day. Gawain chooses to keep the girdle out of fear of death, thus breaking his promise to the host but honouring the lady. Upon learning that the Green Knight is actually his host (Bertilak), he realises that although he has completed his quest, he has failed to be virtuous. This test demonstrates the conflict between honour and knightly duties. In breaking his promise, Gawain believes he has lost his honour and failed in his duties. ### Hunting and seduction Scholars have frequently noted the parallels between the three hunting scenes and the three seduction scenes in Gawain. They generally agree that the fox chase has significant parallels to the third seduction scene, in which Gawain accepts the girdle from Bertilak's wife. Gawain, like the fox, fears for his life and is looking for a way to avoid death from the Green Knight's axe. Like his counterpart, he resorts to trickery to save his skin. The fox uses tactics so unlike the first two animals, and so unexpectedly, that Bertilak has the hardest time hunting it. Similarly, Gawain finds the Lady's advances in the third seduction scene more unpredictable and challenging to resist than her previous attempts. She changes her evasive language, typical of courtly love relationships, to a more assertive style. Her dress, modest in earlier scenes, is suddenly voluptuous and revealing. The deer- and boar-hunting scenes are less clearly connected, although scholars have attempted to link each animal to Gawain's reactions in the parallel seduction scene. Attempts to connect the deer hunt with the first seduction scene have unearthed a few parallels. Deer hunts of the time, like courtship, had to be done according to established rules. Women often favoured suitors who hunted well and skinned their animals, sometimes even watching while a deer was cleaned. The sequence describing the deer hunt is unspecific and nonviolent, with an air of relaxation and exhilaration. The first seduction scene follows in a similar vein, with no overt physical advances and no apparent danger; the entire exchange is humorously portrayed. The boar-hunting scene is, in contrast, laden with detail. Boars were (and are) much more difficult to hunt than deer; approaching one with only a sword was akin to challenging a knight to single combat. In the hunting sequence, the boar flees but is cornered before a ravine. He turns to face Bertilak with his back to the ravine, prepared to fight. 
Bertilak dismounts and in the ensuing fight kills the boar. He removes its head and displays it on a pike. In the seduction scene, Bertilak's wife, like the boar, is more forward, insisting that Gawain has a romantic reputation and that he must not disappoint her. Gawain, however, is successful in parrying her attacks, saying that surely she knows more than he does about love. Both the boar hunt and the seduction scene can be seen as depictions of a moral victory: both Gawain and Bertilak face struggles alone and emerge triumphant. Masculinity has also been associated with hunting, and the theme of masculinity is present throughout the poem. In his article "Being a Male in the Middle Ages," Vern L. Bullough discusses Sir Gawain and how masculinity was normally viewed in terms of being sexually active; he notes that Sir Gawain does not fit this norm. ### Nature and chivalry Some argue that nature represents a chaotic, lawless order which is in direct confrontation with the civilisation of Camelot throughout Sir Gawain and the Green Knight. The green horse and rider that first invade Arthur's peaceful halls are iconic representations of nature's disturbance. Nature is presented throughout the poem as rough and indifferent, constantly threatening the order of men and courtly life. Nature invades and disrupts order in the major events of the narrative, both symbolically and through the inner nature of humanity. This element appears first with the disruption caused by the Green Knight, later when Gawain must fight off his natural lust for Bertilak's wife, and again when Gawain breaks his vow to Bertilak by choosing to keep the green girdle, valuing survival over virtue. Represented by the sin-stained girdle, nature is an underlying force, forever within man and keeping him imperfect (in a chivalric sense). In this view, Gawain is part of a wider conflict between nature and chivalry, an examination of the ability of man's order to overcome the chaos of nature. Several critics have made exactly the opposite interpretation, reading the poem as a comic critique of the Christianity of the time, particularly as embodied in the Christian chivalry of Arthur's court. In its zeal to extirpate all traces of paganism, Christianity had cut itself off from the sources of life in nature and the female. The green girdle represents all the pentangle lacks. The Arthurian enterprise is doomed unless it can acknowledge the unattainability of the ideals of the Round Table, and, for the sake of realism and wholeness, recognise and incorporate the pagan values represented by the Green Knight. The chivalry that is represented within Gawain is one which was constructed by court nobility. The violence that is part of this chivalry contrasts starkly with the fact that King Arthur's court is Christian and that the initial beheading takes place during the Christmas celebrations. The violence of an act of beheading seems contrary to chivalric and Christian ideals, and yet it is seen as part of knighthood. The question of politeness and chivalry is a main theme during Gawain's interactions with Bertilak's wife. He cannot accept her advances or else lose his honour, and yet he cannot utterly refuse her advances or else risk upsetting his hostess. Gawain walks a very fine line, and the only point at which he appears to fail is when he conceals the green girdle from Bertilak. ### Games The word gomen (game) is found 18 times in Gawain. 
Its similarity to the word gome (man), which appears 21 times, has led some scholars to see men and games as centrally linked. Games at this time were seen as tests of worthiness, as when the Green Knight challenges the court's right to its good name in a "Christmas game". The "game" of exchanging gifts was common in Germanic cultures. If a man received a gift, he was obliged to provide the giver with a better gift or risk losing his honour, almost like an exchange of blows in a fight (or in a "beheading game"). The poem revolves around two games: an exchange of beheading and an exchange of winnings. These appear at first to be unconnected. However, a victory in the first game will lead to a victory in the second. Elements of both games appear in other stories; however, the linkage of outcomes is unique to Gawain. ### Times and seasons Times, dates, seasons, and cycles within Gawain are often noted by scholars because of their symbolic nature. The story starts on New Year's Eve with a beheading and culminates one year later on the next New Year's Day. Gawain leaves Camelot on All Saints Day and arrives at Bertilak's castle on Christmas Eve. Furthermore, the Green Knight tells Gawain to meet him at the Green Chapel in "a year and a day"—in other words, the next New Year's Day. Some scholars interpret the yearly cycles, each beginning and ending in winter, as the poet's attempt to convey the inevitable fall of all things good and noble in the world. Such a theme is strengthened by the image of Troy, a powerful nation once thought to be invincible, which, according to the Aeneid, fell to the Greeks due to pride and ignorance. The Trojan connection shows itself in the presence of two nearly identical descriptions of Troy's destruction. The poem's first line reads: "Since the siege and the assault were ceased at Troy" and the final stanzaic line (before the bob and wheel) is "After the siege and the assault were ceased at Troy". ## Symbolism ### The Green Knight Scholars have puzzled over the Green Knight's symbolism since the discovery of the poem. British medievalist C. S. Lewis said the character was "as vivid and concrete as any image in literature" and J. R. R. Tolkien said he was the "most difficult character" to interpret in Sir Gawain. His major role in Arthurian literature is that of a judge and tester of knights; thus he is at once terrifying, friendly, and mysterious. He appears in only two other poems: The Greene Knight and King Arthur and King Cornwall. Scholars have attempted to connect him to other mythical characters, such as Jack in the Green of English tradition and Al-Khidr, but no definitive connection has yet been established. He represents a mix of two traditional figures in romance and other medieval narratives: "the literary green man" and "the literary wild man." The Green Knight challenges Gawain to rise to the ideals of honour and religious practices. His name, the Green Knight, embodies an opposition between nature and civilisation: the colour green represents the forces of nature, and the word "knight" connects him to society and civilisation. While the Green Knight represents the primitive and uncivilised side of man's nature, he also opposes nature. The description of the Green Knight, which he shares with his green horse, shows the central idea of human nature's potential. ### The colour green Given the varied and even contradictory interpretations of the colour green, its precise meaning in the poem remains ambiguous.
In English folklore and literature, green was traditionally used to symbolise nature and its associated attributes: fertility and rebirth. Stories of the medieval period also used it to allude to love and the base desires of man. Because of its connection with faeries and spirits in early English folklore, green also signified witchcraft, devilry and evil. It can also represent decay and toxicity. When combined with gold, as with the Green Knight and the girdle, green was often seen as representing youth's passing. In Celtic mythology, green was associated with misfortune and death, and therefore avoided in clothing. The green girdle, originally worn for protection, became a symbol of shame and cowardice; it is finally adopted as a symbol of honour by the knights of Camelot, signifying a transformation from good to evil and back again; this displays both the spoiling and regenerative connotations of the colour green. There is a possibility, as Alice Buchanan has argued, that the colour green is erroneously attributed to the Green Knight due to the poet's mistranslation or misunderstanding of the Irish word glas, which could either mean grey or green, or the identical word glas in Cornish. Glas has been used to denote a range of colours: light blues, greys, and greens of the sea and grass. In the Death of Curoi (one of the Irish stories from Bricriu's Feast), Curoi stands in for Bertilak, and is often called "the man of the grey mantle", which corresponds to the Welsh Brenin Llwyd or Gwynn ap Nudd. Though the words usually used for grey in the Death of Curoi are lachtna or odar, roughly meaning milk-coloured and shadowy respectively, in later works featuring a green knight, the word glas is used and may have been the basis of misunderstanding. ### Girdle The girdle's symbolic meaning, in Sir Gawain and the Green Knight, has been construed in a variety of ways. Interpretations range from sexual to spiritual. Those who argue for the sexual inference view the girdle as a "trophy". It is not entirely clear if the "winner" is Sir Gawain or the Lady, Bertilak's wife. The girdle is given to Gawain by the Lady to keep him safe when he confronts the Green Knight. When Bertilak comes home from his hunting trip, Gawain does not reveal the girdle to his host; instead, he hides it. This introduces a spiritual interpretation, that Gawain's acceptance of the girdle is a sign of his faltering faith in God, at least in the face of death. To some, the Green Knight is Christ, who overcomes death, while Gawain is the Every Christian who, in his struggles to follow Christ faithfully, chooses the easier path. In Sir Gawain, the easier choice is the girdle, which promises what Gawain most desires. Faith in God, alternatively, requires one's acceptance that what one most desires does not always coincide with what God has planned. It is arguably best to view the girdle not as an either–or situation, but as a complex, multi-faceted symbol that acts to test Gawain in many ways. While Gawain can resist Bertilak's wife's sexual advances, he is unable to resist the powers of the girdle. Gawain is operating under the laws of chivalry which, evidently, have rules that can contradict each other. In the story of Sir Gawain, Gawain finds himself torn between doing what a damsel asks (accepting the girdle) and keeping his promise (returning anything given to him while his host is away). ### Pentangle The poem contains the first recorded use of the word pentangle in English.
It contains the only representation of such a symbol on Gawain's shield in the Gawain literature. What is more, the poet uses a total of 46 lines to describe the meaning of the pentangle; no other symbol in the poem receives as much attention or is described in such detail. The poem describes the pentangle as a symbol of faithfulness and an endeles knot (endless knot). From lines 640 to 654, the five points of the pentangle relate directly to Gawain in five ways: five senses, his five fingers, his faith found in the five wounds of Christ, the five joys of Mary (whose face was on the inside of the shield) and finally friendship, fraternity, purity, politeness, and pity (traits that Gawain possessed around others). In line 625, it is described as a syngne þat salamon set (a sign set by Solomon). Solomon, the third king of Israel, in the 10th century BC, was said to have the mark of the pentagram on his ring, which he received from the archangel Michael. The pentagram seal on this ring was said to give Solomon power over demons. Along these lines, some academics link the Gawain pentangle to magical traditions. In Germany, the symbol was called a Drudenfuß (nightmare spirit's foot) and was placed on household objects to keep out evil. The symbol was also associated with magical charms that, if recited or written on a weapon, would call forth magical forces. However, concrete evidence tying the magical pentagram to Gawain's pentangle is scarce. Gawain's pentangle also symbolises the "phenomenon of physically endless objects signifying a temporally endless quality." Many poets use the symbol of the circle to show infinity or endlessness, but Gawain's poet insisted on using something more complex. In medieval number theory, the number five is considered a "circular number", since it "reproduces itself in its last digit when raised to its powers". Furthermore, it replicates itself geometrically; that is, every pentangle has a smaller pentagon that allows a pentangle to be embedded in it and this "process may be repeated forever with decreasing pentangles". Thus, by reproducing the number five, which in medieval number symbolism signified incorruptibility, Gawain's pentangle represents his eternal incorruptibility. ### The Lady's Ring Gawain's refusal of the Lady's ring has major implications for the remainder of the story. While the modern student may tend to pay more attention to the girdle as the more prominent object offered by her, readers in the time of Gawain would have noticed the significance of the offer of the ring, as they believed that rings, and especially the embedded gems, had talismanic properties, a belief the Gawain-poet similarly draws on in Pearl. This is especially true of the Lady's ring, as scholars believe it to be a ruby or carbuncle, indicated when the Gawain-poet describes it as a bryȝt sunne (fiery sun). This red colour can be seen as symbolising royalty, divinity, and the Passion of Christ, something that Gawain as a knight of the Round Table would strive for, but this colour could also represent the negative qualities of temptation and covetousness. Given the importance of magic rings in Arthurian romance, this remarkable ring would also have been believed to protect the wearer from harm just as the Lady claims the girdle will. ### Numbers The poet highlights number symbolism to add symmetry and meaning to the poem.
For example, three kisses are exchanged between Gawain and Bertilak's wife; Gawain is tempted by her on three separate days; Bertilak goes hunting three times, and the Green Knight swings at Gawain three times with his axe. The number two also appears repeatedly, as in the two beheading scenes, two confession scenes, and two castles. The five points of the pentangle, the poet adds, represent Gawain's virtues, for he is for ay faythful in fyue and sere fyue syþez (faithful in five and many times five). The poet goes on to list the ways in which Gawain is virtuous: all five of his senses are without fault; his five fingers never fail him, and he always remembers the five wounds of Christ, as well as the five joys of the Virgin Mary. The fifth five is Gawain himself, who embodies the five moral virtues of the code of chivalry: "friendship, generosity, chastity, courtesy, and piety". All of these virtues reside, as the poet says, in þe endeles knot (the endless knot) of the pentangle, which forever interlinks and is never broken. This intimate relationship between symbol and faith allows for rigorous allegorical interpretation, especially in the physical role that the shield plays in Gawain's quest. Thus, the poet makes Gawain the epitome of perfection in knighthood through number symbolism. The number five is also found in the structure of the poem itself. Sir Gawain is 101 stanzas long, traditionally organised into four 'fitts' of 21, 24, 34, and 22 stanzas. These divisions, however, have since been disputed; scholars have begun to believe that they are the work of the copyist and not of the poet. The surviving manuscript features a series of capital letters added after the fact by another scribe, and some scholars argue that these additions were an attempt to restore the original divisions. These letters divide the manuscript into nine parts. The first and last parts are 22 stanzas long. The second and second-to-last parts are only one stanza long, and the middle five parts are eleven stanzas long. The number eleven is associated with transgression in other medieval literature (being one more than ten, a number associated with the Ten Commandments). Thus, this set of five elevens (55 stanzas) creates the perfect mix of transgression and incorruption, suggesting that Gawain is faultless in his faults. ### Wounds At the story's climax, Gawain is wounded superficially in the neck by the Green Knight's axe. During the medieval period, the body and the soul were believed to be so intimately connected that wounds were considered an outward sign of inward sin. The neck, specifically, was believed to correlate with the part of the soul related to will, connecting the reasoning part (the head) and the courageous part (the heart). Gawain's sin resulted from using his will to separate reasoning from courage. By accepting the girdle from the lady, he employs reason to do something less than courageous—evade death in a dishonest way. Gawain's wound is thus an outward sign of an internal wound. The Green Knight's series of tests shows Gawain the weakness that has been in him all along: the desire to use his will pridefully for personal gain, rather than submitting his will in humility to God. The Green Knight, by engaging with the greatest knight of Camelot, also reveals the moral weakness of pride in all of Camelot, and therefore all of humanity. 
However, the wounds of Christ, believed to offer healing to wounded souls and bodies, are mentioned throughout the poem in the hope that this sin of prideful "stiffneckedness" will be healed among fallen mortals. ## Interpretations ### Gawain as medieval romance Many critics argue that Sir Gawain and the Green Knight should be viewed as a romance. Medieval romances typically recount the marvellous adventures of a chivalrous, heroic knight, often of super-human ability, who abides by chivalry's strict codes of honour and demeanour, embarks upon a quest and defeats monsters, thereby winning the favour of a lady. Thus, medieval romances focus not on love and sentiment (as the term "romance" implies today), but on adventure. Gawain's function, as medieval scholar Alan Markman says, "is the function of the romance hero ... to stand as the champion of the human race, and by submitting to strange and severe tests, to demonstrate human capabilities for good or bad action." Through Gawain's adventure, it becomes clear that he is merely human. The reader becomes attached to this human view amidst the poem's romanticism, relating to Gawain's humanity while respecting his knightly qualities. Gawain "shows us what moral conduct is. We shall probably not equal his behaviour, but we admire him for pointing out the way." In viewing the poem as a medieval romance, many scholars see it as intertwining chivalric and courtly love laws under the English Order of the Garter. A slightly altered version of the Order's motto, "Honi soit qui mal y pense", or "Shamed be he who finds evil here," has been added, in a different hand, at the end of the poem. Some critics describe Gawain's peers wearing girdles of their own as linked to the origin of the Order of the Garter. However, in the parallel poem The Greene Knight, the lace is white, not green, and is considered the origin of the collar worn by the Knights of the Bath, not the Order of the Garter. Still, a connection to the Order is not beyond the realm of possibility. ### Christian interpretations The poem is in many ways deeply Christian, with frequent references to the fall of Adam and Eve and to Jesus Christ. Scholars have debated the depth of the Christian elements within the poem by looking at it in the context of the age in which it was written, coming up with varying views as to what represents a Christian element of the poem and what does not. For example, some critics compare Sir Gawain to the other three poems of the Gawain manuscript. Each has a heavily Christian theme, causing scholars to interpret Gawain similarly. Comparing it to the poem Cleanness (also known as Purity), for example, they see it as a story of the apocalyptic fall of a civilisation, in Gawain's case, Camelot. In this interpretation, Sir Gawain is like Noah, separated from his society and warned by the Green Knight (who is seen as God's representative) of the coming doom of Camelot. Gawain, judged worthy through his test, is spared the doom of the rest of Camelot. King Arthur and his knights, however, misunderstand Gawain's experience and wear garters themselves. In Cleanness the men who are saved are similarly helpless in warning their society of impending destruction. One of the key points stressed in this interpretation is that salvation is an individual experience difficult to communicate to outsiders. In his depiction of Camelot, the poet reveals a concern for his society, whose inevitable fall will bring about the ultimate destruction intended by God.
Gawain was written around the time of the Black Death and Peasants' Revolt, events which convinced many people that their world was coming to an apocalyptic end and this belief was reflected in literature and culture. However, other critics see weaknesses in this view, since the Green Knight is ultimately under the control of Morgan le Fay, often viewed as a figure of evil in Camelot tales. This makes the knight's presence as a representative of God problematic. While the character of the Green Knight is usually not viewed as a representation of Christ in Sir Gawain and the Green Knight, critics do acknowledge a parallel. Lawrence Besserman, a specialist in medieval literature, explains that "the Green Knight is not a figurative representative of Christ. But the idea of Christ's divine/human nature provides a medieval conceptual framework that supports the poet's serious/comic account of the Green Knight's supernatural/human qualities and actions." This duality exemplifies the influence and importance of Christian teachings and views of Christ in the era of the Gawain Poet. Furthermore, critics note the Christian reference to Christ's crown of thorns at the conclusion of Sir Gawain and the Green Knight. After Gawain returns to Camelot and tells his story regarding the newly acquired green sash, the poem concludes with a brief prayer and a reference to "the thorn-crowned God". Besserman theorises that "with these final words the poet redirects our attention from the circular girdle-turned-sash (a double image of Gawain's "vntrawþe/renoun": untruth/renown) to the circular Crown of Thorns (a double image of Christ's humiliation turned triumph)." Throughout the poem, Gawain encounters numerous trials testing his devotion and faith in Christianity. When Gawain sets out on his journey to find the Green Chapel, he finds himself lost, and only after praying to the Virgin Mary does he find his way. As he continues his journey, Gawain once again faces anguish regarding his inevitable encounter with the Green Knight. Instead of praying to Mary, as before, Gawain places his faith in the girdle given to him by Bertilak's wife. From the Christian perspective, this leads to disastrous and embarrassing consequences for Gawain as he is forced to re-evaluate his faith when the Green Knight points out his betrayal. Another interpretation sees the work in terms of the perfection of virtue, with the pentangle representing the moral perfection of the connected virtues, the Green Knight as Christ exhibiting perfect fortitude, and Gawain as slightly imperfect in fortitude by virtue of flinching when under the threat of death. An analogy is also made between Gawain's trial and the Biblical test that Adam encounters in the Garden of Eden. Adam succumbs to Eve just as Gawain surrenders to Bertilak's wife by accepting the girdle. Although Gawain sins by putting his faith in the girdle and not confessing when he is caught, the Green Knight pardons him, thereby allowing him to become a better Christian by learning from his mistakes. Through the various games played and hardships endured, Gawain finds his place within the Christian world. ### Feminist interpretations Feminist literary critics see the poem as portraying women's ultimate power over men. Morgan le Fay and Bertilak's wife, for example, are the most powerful characters in the poem—Morgan especially, as she begins the game by enchanting the Green Knight. 
The girdle and Gawain's scar can be seen as symbols of feminine power, each of them diminishing Gawain's masculinity. Gawain's misogynist passage, in which he blames all his troubles on women and lists the many men who have fallen prey to women's wiles, further supports the feminist view of ultimate female power in the poem. In contrast, others argue that the poem focuses mostly on the opinions, actions, and abilities of men. For example, on the surface, it appears that Bertilak's wife is a strong leading character. By adopting the masculine role, she appears to be an empowered individual, particularly in the bedroom scene. This is not entirely the case, however. While the Lady is being forward and outgoing, Gawain's feelings and emotions are the focus of the story, and Gawain stands to gain or lose the most. The Lady "makes the first move", so to speak, but Gawain decides what is to become of those actions. He, therefore, is in charge of the situation and even the relationship. In the bedroom scene, both the negative and positive actions of the Lady are motivated by her desire. Her feelings cause her to step out of the typical female role and into that of the male, thus becoming more empowered. At the same time, those same actions make the Lady appear adulterous; some scholars compare her with Eve in the Bible. By convincing Gawain to take her girdle (the counterpart of the apple), the Lady leads him to break the pact he made with Bertilak, and therefore with the Green Knight. Based on this, Gawain portrays himself, in what is often called his later "antifeminist diatribe", as a "good man seduced". ### Postcolonial interpretations From 1350 to 1400—the period in which the poem is thought to have been written—Wales experienced several raids at the hands of the English, who were attempting to colonise the area. The Gawain poet uses a North West Midlands dialect common on the Welsh–English border, potentially placing him in the midst of this conflict. Patricia Clare Ingham is credited with first viewing the poem through the lens of postcolonialism, and since then a great deal of dispute has emerged over the extent to which colonial differences play a role in the poem. Most critics agree that gender plays a role but differ about whether gender supports the colonial ideals or replaces them as English and Welsh cultures interact in the poem. A large amount of critical debate also surrounds the poem as it relates to the bi-cultural political landscape of the time. Some argue that Bertilak is an example of the hybrid Anglo-Welsh culture found on the Welsh–English border. They therefore view the poem as a reflection of a hybrid culture that plays strong cultures off one another to create a new set of cultural rules and traditions. Other scholars, however, argue that historically much Welsh blood was shed well into the 14th century, creating a situation far removed from the more friendly hybridisation suggested by Ingham. To support this argument further, it is suggested that the poem creates an "us versus them" scenario contrasting the knowledgeable civilised English with the uncivilised borderlands that are home to Bertilak and the other monsters that Gawain encounters. In contrast to this perception of the colonial lands, others argue that the land of Hautdesert, Bertilak's territory, has been misrepresented or ignored in modern criticism. They suggest that it is a land with its own moral agency, one that plays a central role in the story.
Bonnie Lander, for example, argues that the denizens of Hautdesert are "intelligently immoral", choosing to follow certain codes and rejecting others, a position which creates a "distinction ... of moral insight versus moral faith". Lander thinks that the border dwellers are more sophisticated because they do not unthinkingly embrace the chivalric codes but challenge them in a philosophical, and—in the case of Bertilak's appearance at Arthur's court—literal sense. Lander's argument about the superiority of the denizens of Hautdesert hinges on the lack of self-awareness present in Camelot, which leads to an unthinking populace that frowns on individualism. In this view, it is not Bertilak and his people, but Arthur and his court, who are the monsters. ### Gawain's journey Several scholars have attempted to find a real-world correspondence for Gawain's journey to the Green Chapel. The Anglesey islands, for example, are mentioned in the poem. They exist today as a single island off the coast of Wales. In line 700, Gawain is said to pass the holy hede (Holy Head), believed by many scholars to be either Holywell or the Cistercian abbey of Poulton in Pulford. Holywell is associated with the beheading of Saint Winifred. As the story goes, Winifred was a virgin who was beheaded by a local leader after she refused his sexual advances. Her uncle, another saint, put her head back in place and healed the wound, leaving only a white scar. The parallels between this story and Gawain's make this area a likely candidate for the journey. Gawain's trek leads him directly into the centre of the Pearl Poet's dialect region, where the candidates for the locations of the Castle at Hautdesert and the Green Chapel stand. Hautdesert is thought to be in the area of Swythamley in the northwest Midlands, as it lies in the writer's dialect area and matches the topographical features described in the poem. The area is also known to have housed all of the animals hunted by Bertilak (deer, boar, fox) in the 14th century. The Green Chapel is thought to be in either Lud's Church or Wetton Mill, as these areas closely match the descriptions given by the author. Ralph Elliott located the chapel two myle henne (two miles hence) from the old manor house at Swythamley Park at þe boþem of þe brem valay (the bottom of a valley) on a hillside (loke a littel on þe launde on þi lyfte honde) in an enormous fissure (an olde caue / or a creuisse of an olde cragge). Several have tried to replicate this expedition, and others, such as Michael W. Twomey, have created a virtual tour of Gawain's journey, entitled 'Travels with Sir Gawain', that includes photographs of the landscapes and particular views mentioned in the text. ### Homoerotic interpretations According to queer scholar Richard Zeikowitz, the Green Knight represents a threat to homosocial friendship in his medieval world. Zeikowitz argues that the narrator of the poem seems entranced by the Knight's beauty, homoeroticising him in poetic form. The Green Knight's attractiveness challenges the homosocial rules of King Arthur's court and poses a threat to their way of life. Zeikowitz also states that Gawain seems to find Bertilak as attractive as the narrator finds the Green Knight. Bertilak, however, follows the homosocial code and develops a friendship with Gawain. Gawain's embracing and kissing Bertilak in several scenes thus represents not a homosexual but a homosocial expression. Men of the time often embraced and kissed, and this was acceptable under the chivalric code.
Nonetheless, Zeikowitz claims the Green Knight blurs the lines between homosociality and homosexuality, representing the difficulty medieval writers sometimes had in separating the two. Queer scholar Carolyn Dinshaw argues that the poem may have been a response to accusations that Richard II had a male lover—an attempt to re-establish the idea that heterosexuality was the Christian norm. Around the time the poem was written, the Catholic Church was beginning to express concerns about kissing between males. Many religious figures were trying to make the distinction between strong trust and friendship between males and homosexuality. She asserts that the Pearl Poet seems to have been simultaneously entranced and repulsed by homosexual desire. According to Dinshaw, in his other poem Cleanness, he points out several grievous sins, but spends lengthy passages describing them in minute detail, and she sees this alleged 'obsession' as carrying over to Gawain in his descriptions of the Green Knight. Beyond this, Dinshaw proposes that Gawain can be read as a woman-like figure. In her view, he is the passive one in the advances of Bertilak's wife, as well as in his encounters with Bertilak himself, where he acts the part of a woman in kissing the man. However, while the poem does have homosexual elements, these elements are brought up by the poet to establish heterosexuality as the normal lifestyle of Gawain's world. The poem does this by making the kisses between the Lady and Gawain sexual in nature but rendering the kisses between Gawain and Bertilak "unintelligible" to the medieval reader. In other words, the poet portrays kisses between a man and a woman as having the possibility of leading to sex, while in a heterosexual world, kisses between a man and a man are portrayed as having no such possibility. ## Modern adaptations ### Books Though the surviving manuscript dates from the fourteenth century, the first published version of the poem did not appear until as late as 1839, when Sir Frederic Madden of the British Museum recognised the poem as worth reading. Madden's scholarly Middle English edition of the poem was followed in 1898 by the first Modern English translation – a prose version by literary scholar Jessie Weston. In 1925, J. R. R. Tolkien and E. V. Gordon published a scholarly edition of the Middle English text of Sir Gawain and the Green Knight; a revised edition of this text was prepared by Norman Davis and published in 1967. The book, featuring a text in Middle English with extensive scholarly notes, is frequently confused with the translation into Modern English that Tolkien prepared, along with translations of Pearl and Sir Orfeo, late in his life. Many editions of the latter work, first published in 1975, shortly after his death, list Tolkien on the cover as author rather than translator. Many translations into Modern English are available. Notable translators include Jessie Weston, whose 1898 prose translation and 1907 poetic translation took many liberties with the original; Theodore Banks, whose 1929 translation was praised for its adaptation of the language to modern usage; and Marie Borroff, whose imitative translation was first published in 1967 and "entered the academic canon" in 1968, in the second edition of the Norton Anthology of English Literature. In 2010, her (slightly revised) translation was published as a Norton Critical Edition, with a foreword by Laura Howes.
In 2007, Simon Armitage, who grew up near the Gawain poet's purported residence, published a translation that attracted attention in the United Kingdom and the United States, where it was published by Norton. In 2021, author John Reppion and artist M. D. Penman released a graphic novel adaptation of Sir Gawain and the Green Knight in collaboration with Leeds Arts University. An expanded deluxe hardback edition of the book, with an introduction by Alan Moore, was released the following year. ### Film and television The poem has been adapted to film three times, twice by writer-director Stephen Weeks: first as Gawain and the Green Knight in 1973 and again in 1984 as Sword of the Valiant: The Legend of Sir Gawain and the Green Knight, featuring Miles O'Keeffe as Gawain and Sean Connery as the Green Knight. Both films have been criticised for deviating from the poem's plot; in both, Bertilak and the Green Knight are never connected. On 30 July 2021, The Green Knight was released, directed by American filmmaker David Lowery for A24 and starring Dev Patel as Gawain and Ralph Ineson as the Green Knight, albeit with some significant deviations from the original story. There have been at least two television adaptations, Gawain and the Green Knight in 1991 and the animated Sir Gawain and the Green Knight in 2002. The BBC broadcast a documentary presented by Simon Armitage in which the journey depicted in the poem is traced, using what are believed to be the actual locations. ### Theatre The Tyneside Theatre company presented a stage version of Sir Gawain and the Green Knight at the University Theatre, Newcastle at Christmas 1971. It was directed by Michael Bogdanov and adapted for the stage from the translation by Brian Stone. The music and lyrics were composed by Iwan Williams using medieval carols, such as the Boar's Head Carol, as inspiration and folk instruments such as the Northumbrian pipes, whistles and bodhrán to create a "rough" feel. Stone had referred Bogdanov to Cuchulain and the Beheading Game, a sequence which is contained in the Grenoside Sword dance. Bogdanov found the pentangle theme to be contained in most sword dances, and so incorporated a long sword dance while Gawain lay tossing uneasily before getting up to go to the Green Chapel. The dancers made the knot of the pentangle around his drowsing head with their swords. The interlacing of the hunting and wooing scenes was achieved by frequent cutting of the action from hunt to bedchamber and back again, while the locale of both remained on-stage. In 1992 Simon Corble created an adaptation with medieval songs and music for The Midsommer Actors' Company. It was performed as a walkabout production in the summer of 1992 at Thurstaston Common and Beeston Castle, and in August 1995 at Brimham Rocks, North Yorkshire. Corble later wrote a substantially revised version which was produced indoors at the O'Reilly Theatre, Oxford in February 2014. ### Opera Sir Gawain and the Green Knight was first adapted as an opera in 1978 by the composer Richard Blackford on commission from the village of Blewbury, Oxfordshire. The libretto was written for the adaptation by the children's novelist John Emlyn Edwards. The "Opera in Six Scenes" was subsequently recorded by Decca between March and June 1979 and released on the Argo label in November 1979. Sir Gawain and the Green Knight was adapted into an opera called Gawain by Harrison Birtwistle, first performed in 1991.
Birtwistle's opera was praised for maintaining the complexity of the poem while translating it into lyric, musical form. Another operatic adaptation is Lynne Plowman's Gwyneth and the Green Knight, first performed in 2002. This opera uses Sir Gawain as the backdrop but refocuses the story on Gawain's female squire, Gwyneth, who is trying to become a knight. Plowman's version was praised for its approachability, as it is aimed at family audiences and young children, but was criticised for its use of modern language and occasionally preachy tone.
45,094,397
Dr. Jekyll and Mr. Hyde (1887 play)
1,170,680,546
Stage play by Thomas Russell Sullivan
[ "1887 plays", "American plays adapted into films", "Broadway plays", "Plays based on Strange Case of Dr Jekyll and Mr Hyde", "Plays based on novels", "Science fiction theatre", "West End plays" ]
Dr. Jekyll and Mr. Hyde is a four-act play written by Thomas Russell Sullivan in collaboration with the actor Richard Mansfield. It is an adaptation of Strange Case of Dr Jekyll and Mr Hyde, an 1886 novella by the Scottish author Robert Louis Stevenson. The story focuses on the respected London doctor Henry Jekyll and his involvement with Edward Hyde, a loathsome criminal. After Hyde murders the father of Jekyll's fiancée, Jekyll's friends discover that he and Jekyll are the same person; Jekyll has developed a potion that allows him to transform himself into Hyde and back again. When he runs out of the potion, he is trapped as Hyde and commits suicide before he can be arrested. After reading the novella, Mansfield was intrigued by the opportunity to play a dual role. He secured the right to adapt the story for the stage in the United States and the United Kingdom, and asked Sullivan to write the adaptation. The play debuted in Boston in May 1887, and a revised version opened on Broadway in September of that year. Critics acclaimed Mansfield's performance as the dual character. The play was popular in New York and on tour, and Mansfield was invited to bring it to London. It opened there in August 1888, just before the first Jack the Ripper murders. Some press reports compared the murderer to the Jekyll-Hyde character, and Mansfield was suggested as a possible suspect. Despite significant press coverage, the London production was a financial failure. Mansfield's company continued to perform the play on tours of the U.S. until shortly before his death in 1907. In writing the stage adaptation, Sullivan made several changes to the story; these included creating a fiancée for Jekyll and a stronger moral contrast between Jekyll and Hyde. The changes have been adopted by many subsequent adaptations, including several film versions of the story which were derived from the play. The films included a 1912 adaptation directed by Lucius Henderson, a 1920 adaptation directed by John S. Robertson, and a 1931 adaptation directed by Rouben Mamoulian, which earned Fredric March an Academy Award for Best Actor. A 1941 adaptation, directed by Victor Fleming, was a remake of the 1931 film. ## Plot In the first act, a group of friends (including Sir Danvers Carew's daughter Agnes, attorney Gabriel Utterson, and Dr. and Mrs. Lanyon) has met up at Sir Danvers' home. Dr. Lanyon brings word that Agnes' fiancé, Dr. Henry Jekyll, will be late to the gathering. He then repeats a second-hand story about a man named Hyde, who injured a child in a collision on the street. The story upsets Utterson because Jekyll recently made a new will that gives his estate to a mysterious friend named Edward Hyde. Jekyll arrives; Utterson confronts him about the will, but Jekyll refuses to consider changing it. Jekyll tells Agnes that they should end their engagement because of sins he has committed, but will not explain. Agnes refuses to accept this, and tells Jekyll she loves him. He relents, saying that she will help him control himself, and leaves. Sir Danvers joins his daughter, and they talk about their time in Mangalore, India. When Hyde suddenly enters, Sir Danvers tells Agnes to leave the room. The men argue, and Hyde strangles Sir Danvers. In the second act, Hyde fears that he will be arrested for the murder. He gives his landlady, Rebecca, money to tell visitors that he is not home. Inspector Newcome from Scotland Yard offers Rebecca more money to turn Hyde in, which she promises to do. 
Hyde flees to Jekyll's laboratory, where Utterson is waiting to confront the doctor about his will; Hyde insults Utterson and leaves. Rebecca, who has followed Hyde, arrives and tells Utterson that Hyde murdered Sir Danvers. In the play's original version, the act ends with Jekyll returning to his laboratory. In later versions (revised after its premiere), the second act contains an additional scene in which Jekyll returns home; his friends think he is protecting Hyde. Agnes, who saw Hyde before her father was murdered, wants Jekyll to accompany her to provide the police with a description, and is distraught when he refuses. In the third act, Jekyll's servant, Poole, gives Dr. Lanyon a powder and liquid with instructions from Jekyll to give them to a person who will request them. While he waits, Lanyon speaks with Newcome, Rebecca, Agnes and Mrs. Lanyon. After the others leave, Hyde arrives for the powder and liquid. After arguing with Lanyon, he mixes them into a potion and drinks it; he immediately transforms into Jekyll. In the final act, Jekyll has begun to change into Hyde without using the potion. Although he still needs it to change back, he has exhausted his supply. Dr. Lanyon tries to help Jekyll re-create the formula, but they are unable to find an ingredient. Jekyll asks Lanyon to bring Agnes to him, but Jekyll turns into Hyde before Lanyon returns. Utterson and Newcome arrive to arrest Hyde; knowing he can no longer transform back into Jekyll, Hyde commits suicide by taking poison. ## Cast and characters The play was produced at the Boston Museum, Broadway's Madison Square Theatre and the Lyceum Theatre in London's West End. ## History ### Background and writing The Scottish author Robert Louis Stevenson wrote Strange Case of Dr Jekyll and Mr Hyde in 1885 when he was living in Bournemouth, on England's south coast. In January 1886 the novella was published in the United Kingdom by Longmans, Green & Co. and by Charles Scribner's Sons in the United States, where it was frequently pirated because of the lack of copyright protection in the U.S. for works originally published in the UK. In early 1887, actor Richard Mansfield read Stevenson's novella and immediately got the idea to adapt it for the stage. Mansfield was looking for material that would help him achieve a reputation as a serious actor in the U.S., where he lived, and in England, where he had spent most of his childhood. He had played dual roles as a father and son in a New York production of the operetta Rip Van Winkle, and saw another acting opportunity in playing Jekyll and Hyde. He thought the role was a theatrical novelty that would showcase his talents in a favorable way. Although U.S. copyright law would have allowed him to produce an unauthorized adaptation, Mansfield secured the American and British stage rights. Performing in Boston, he asked a local friend, Thomas Russell Sullivan, to write a script. Sullivan had previously only written in his spare time while working as a clerk for Lee, Higginson & Co., a Boston investment bank. Although he doubted that the novella would make a good play, he agreed to help Mansfield with the project, and they worked quickly to complete the adaptation before other, unauthorized versions could be staged. ### First American productions After just two weeks of rehearsals, the play opened at the Boston Museum on May 9, 1887, as the first American adaptation of Stevenson's novella. On May 14, it closed for rewrites.
The updated version opened at the Madison Square Theatre on Broadway on September 12. A. M. Palmer produced and Richard Marston designed the sets for the production. Sullivan invited Stevenson, who had moved to the U.S. that summer; Stevenson was ill, but his wife and mother attended and congratulated Sullivan on the play. The Madison Square production closed on October 1, when Mansfield took his company on a nationwide tour. The tour began at the Chestnut Street Theatre in Philadelphia and visited over a dozen cities, including several returns to Boston and New York. It ended at the Madison Square Theatre, where Dr. Jekyll and Mr. Hyde was performed for a final season matinee on June 29, 1888. In March 1888, while Mansfield's company was touring, Daniel E. Bandmann staged a competing production with the same name at Niblo's Garden. Bandmann's opening night (March 12) coincided with the Great Blizzard of 1888, and only five theatergoers braved the storm. One of the attendees was Sullivan, who was checking his competition. Bandmann's production inspired a letter from Stevenson to the New York Sun saying that only the Mansfield version was authorized and paying him royalties. ### 1888 London production The English actor Henry Irving saw Mansfield's performance in New York and invited him to bring the play to London, where Irving managed the Lyceum Theatre in the West End. Although the play was scheduled to premiere in September 1888, Mansfield discovered that Bandmann planned to open his competing version in August, and he rushed to recall his company from vacation. Mansfield also worked with Irving and Stevenson's publisher, Longmans, to block Bandmann's production and those of other competitors. In the UK, unlike the U.S., the novella had copyright protection. Longmans brought legal actions against the unauthorized versions; its efforts blocked a William Howell Poole production from opening at the Theatre Royal in Croydon on July 26. That day, Fred Wright's Company B presented one performance of its adaptation at the Park Theatre in Merthyr Tydfil before it was also closed. Bandmann had the Opera Comique theater reserved from August 6, but hoped to open his production earlier. Irving blocked this by reserving the theater for Mansfield's rehearsals, and Mansfield's Dr. Jekyll and Mr. Hyde opened at the Lyceum on August 4. Bandmann went ahead with his August 6 opening, but after two performances the production was shut down because of legal action by Longmans. Although Irving attended some of Mansfield's rehearsals, he did not remain in London for the opening. Mansfield worked with Irving's stage manager, H. J. Lovejoy, and "acting manager" Bram Stoker (who would later write the horror novel Dracula) to stage the production. He was dissatisfied with Lovejoy's stagehands and complained to a friend, the drama critic William Winter, that they were "slow" and "argumentative". Since the Lyceum crew had staged many productions and had a good reputation, Winter thought it more likely that they disliked Mansfield. Regardless of the crew's motives, Mansfield became antagonistic towards Irving during his time there. The Lyceum production of Dr. Jekyll and Mr. Hyde was scheduled to close on September 29, after which Mansfield intended to stage other plays. Initially, he followed this plan, introducing productions of Lesbia and A Parisian Romance at the beginning of October. However, Mansfield soon reintroduced Dr. Jekyll and Mr. 
Hyde to his schedule and added performances between October 10 and October 20. ### Association with Whitechapel murders On August 7, 1888, three days after the Lyceum opening of Dr. Jekyll and Mr. Hyde, Martha Tabram was discovered stabbed to death in London's Whitechapel neighborhood. On August 31 Mary Ann Nichols was found, murdered and mutilated, in the same neighborhood. Press coverage linking these and other Whitechapel murders of women created a furor in London. The public and police suspected that some or all of the murders were committed by one person, who became known as Jack the Ripper. Some press reports compared the unidentified killer with the Jekyll and Hyde characters, suggesting that the Ripper led a respectable life during the day and became a murderer at night. On October 5, the City of London Police received a letter suggesting that Mansfield should be considered a suspect. The letter writer, who had seen him perform as Jekyll and Hyde, thought that Mansfield could easily disguise himself and commit the murders undetected. Mansfield attempted to defuse public concern by staging the London opening of the comedy Prince Karl as a charity performance, despite Stoker's warning that critics would view it as an attempt to obtain favorable publicity for the production. Although some press reports suggested that Mansfield stopped performing Dr. Jekyll and Mr. Hyde in London because of the murders, financial reasons are more likely. Despite widespread publicity due to the murders and Mansfield's disputes with Bandmann, attendance was mediocre, and the production was losing money. On December 1, Mansfield's tenancy at the Lyceum ended. He left London, taking his company on a tour of England. In December they performed Dr. Jekyll and Mr. Hyde and other plays in Liverpool and Derby, then continued to other cities and performed other plays. ### After 1888 When Mansfield left the UK in June 1889, he was deeply in debt because of production losses there. His debts included £2,675 owed to Irving, which Mansfield did not want to pay because he felt that Irving had not supported him adequately at the Lyceum. Irving sued, winning the UK performance rights to Dr. Jekyll and Mr. Hyde, and Mansfield never performed in England again. In the U.S., the play became part of the repertory of Mansfield's company and was repeatedly performed during the 1890s and early 1900s. Mansfield continued in the title role, and Beatrice Cameron continued to play Jekyll's fiancée; the actors married in 1892. In later years he staged the play less often, after becoming fearful that something would go wrong during the transformation scenes. Mansfield's company last performed the play at the New Amsterdam Theatre in New York on March 21, 1907. He fell ill soon afterward, and died on August 30 of that year. The play was closely associated with Mansfield's performance; a 1916 retrospective on adaptations of Stevenson's works indicated that Sullivan's Dr. Jekyll and Mr. Hyde was no longer performed after Mansfield's death. Although Irving obtained the UK performance rights, he never staged the play. His son, Harry Brodribb Irving, produced a new adaptation by J. Comyns Carr in 1910. By that time, more than a dozen other stage adaptations had appeared; the most significant was an 1897 adaptation by Luella Forepaugh and George F. Fish, made available in 1904 for stock theater performances as Dr. Jekyll and Mr. Hyde, Or a Mis-Spent Life. 
## Dramatic analysis ### Jekyll-Hyde transformation Although Mansfield's transformations between Jekyll and Hyde included lighting changes and makeup designed to appear different under colored filters, they were mainly accomplished by the actor's facial contortions and changes in posture and movement. As Hyde, Mansfield was hunched over with a grimacing face and claw-like hands; he spoke with a guttural voice and walked differently than he did as Jekyll. The effect was so dramatic that audiences and journalists speculated about how it was achieved. Theories included claims that Mansfield had an inflatable rubber prosthetic, that he applied chemicals, and that he had a mask hidden in a wig, which he pulled down to complete the change. Mansfield denied such theories, emphasizing that he did not use any "mechanical claptrap" in his performance. In contrast to the novella, in which the physical transformation of Jekyll into Hyde is revealed near the end, most dramatic adaptations show it early in the play, because the audience is familiar with the story. As one of its first adapters, Sullivan worked in an environment where the transformation would still shock the audience, and he held the reveal until the third act. ### Changes from the novella There were several differences between Sullivan's adaptation and Stevenson's novella. Stevenson used multiple narrators and a circular narrative (allowing material presented at the end to explain material presented at the beginning), but Sullivan wrote a linear narrative in chronological order. This linear approach and the onstage action conveyed a stronger impression of realism, eliminating uncertainties in Stevenson's narrative. Although making the story more straightforward and less ambiguous was not necessary for a theatrical adaptation, Sullivan's approach was typical of the era's stage melodramas and made the material more acceptable to audiences. The stage presentation's realism also allowed Sullivan to drop the novella's scientific aspects. Although Stevenson used science to make the Jekyll-Hyde transformation plausible to readers, Sullivan could rely on the onstage transformations. In the novella, the transformation is only presented via the account of Lanyon, never as direct narration to the reader. The playwright strengthened the contrast between Jekyll and Hyde compared to Stevenson's original; Sullivan's Hyde was more explicitly evil, and his Jekyll more conventional. In Stevenson's novella Jekyll is socially isolated and neurotic, and his motives for experimenting with the potion are ambiguous. Sullivan's adaptation changed these elements of the character. His Jekyll is socially active and mentally healthy, and his motives for creating the potion are benign; Sullivan's Jekyll tells Lanyon that his discovery will "benefit the world". Later adaptations made further changes, representing Jekyll as noble, religious or involved in charitable work. Mansfield's portrayal of Jekyll is less stereotypically good than later versions, and he thought that making the characterizations too simplistic would hurt the play's dramatic quality. Sullivan's version added women to the story; there were no significant female characters in Stevenson's original. The presence of women (especially Jekyll's fiancée, Agnes Carew) placed Jekyll in traditional social relationships, which made him seem more normal by contemporaneous standards.
Hyde behaved lecherously towards Agnes and cruelly towards his landlady, and his behavior towards women in this and later adaptations led to new interpretations of the character. In Stevenson's novella and Sullivan's play, Hyde is said to have committed unspecified crimes. Interpreters began to identify the crimes as sexual, positing sexual repression as a factor in Hyde's characterization. However, Stevenson denied that this was his understanding of the character in his original story; he said Hyde's immoralities were "cruelty and malice, and selfishness and cowardice", not sexual. In some interpretations of the novella, the male characters represent patriarchal society, with Hyde signifying its moral corruption. Other interpretations suggest that the lack of female companionship for the male characters indicates their latent homosexuality and that Hyde is engaged in homosexual activity. These interpretations are harder to apply to the play because of Sullivan's addition of female characters and heterosexual relationships. Mansfield and his American theatre company pronounced Jekyll with a short e (/ɛ/) instead of the long e (/iː/) pronunciation Stevenson intended. The short e pronunciation is now used in most adaptations. ## Reception Reviewing the play's initial production at the Boston Museum, The Boston Post "warmly congratulated" Sullivan on his script and said that it overcame the difficulties of turning Stevenson's story into a drama with only a few flaws. Mansfield's performance was praised for drawing a clear distinction between Jekyll and Hyde, although the reviewer found his portrayal of Hyde better crafted than his portrayal of Jekyll. Audience reaction was enthusiastic, with long applause and several curtain calls for Mansfield. According to The Cambridge Tribune, the audience reaction affirmed "that the play and its production were a work of genius". The Broadway production also received positive reviews. When it opened at the Madison Square Theatre, a New York Times reviewer complimented Mansfield for his acting and for overcoming the difficulty of presenting the story's allegorical material onstage. According to a New-York Daily Tribune reviewer, Mansfield gave excellent performances as Jekyll and Hyde despite a few technical production flaws. A Life review praised Sullivan's adaptation, particularly his addition of a love interest for Jekyll, and complimented the performances of Mansfield, Cameron and Harkins. The Lyceum production received mixed reviews, complimenting Mansfield's performance but criticizing the play as a whole. A Sunday Times reviewer appreciated Mansfield's performance as Hyde and in the transformation scenes, but not as Jekyll, and called the overall play "dismal and wearisome in the extreme". According to a Daily Telegraph review, Stevenson's story was unsuitable for drama and Sullivan had not adapted it well, but the performances of Mansfield and his company were praiseworthy. A review in The Saturday Review criticized Sullivan's adaptation, saying that it presented only one aspect of the Jekyll character from Stevenson's story. The reviewer complimented Mansfield's acting, especially in the transformation scenes, but said that his performance could not salvage the play. A review in The Theatre said that the play itself was not good, but it was an effective showcase for Mansfield's performance. 
Sharon Aronofsky Weltman summarizes the reception for Mansfield's performance as being mostly critical of his presentation of Jekyll, but universally positive about his performance as Hyde and his handling of the transformation between the two personas. ## Legacy Dr. Jekyll and Mr. Hyde was a milestone in the careers of Sullivan and Mansfield. Sullivan left his banking job to become a full-time writer. He wrote three more plays (none successful), several novels, and a two-volume collection of short stories, many of which have Gothic elements. Sullivan attempted one more stage collaboration with Mansfield, a drama about the Roman emperor Nero, but they became estranged after its failure. For the actor, playing Jekyll and Hyde helped establish his reputation for dramatic roles; he had been known primarily for comedies. Mansfield continued to struggle financially (in part because of his elaborate, expensive productions) before he achieved financial stability in the mid-1890s with a string of successful tours and new productions. ### Influence on later adaptations As the most successful early adaptation of Strange Case of Dr Jekyll and Mr Hyde, Sullivan's play influenced subsequent versions. Later adaptations followed his simplification of the narrative, addition of women characters (especially a romance for Jekyll) and highlighting of the moral contrast between Jekyll and Hyde. Most versions retain the practice of having one actor play Jekyll and Hyde, with the transformation seen by the audience. Several early film versions relied more on Sullivan's play than Stevenson's novella. The Thanhouser Company produced a 1912 film version, directed by Lucius Henderson and starring James Cruze. The one-reel film, based on Sullivan's play, may be an exception to the custom of one actor playing Jekyll and Hyde. Although Cruze was credited with a dual role, Harry Benham (who played the father of Jekyll's fiancée) said in 1963 that he had played Hyde in some scenes. In 1920, Famous Players–Lasky produced a feature-length version directed by John S. Robertson. John Barrymore starred as Jekyll and Hyde, with Martha Mansfield as his fiancée. Clara Beranger's script followed Sullivan's play in having Jekyll engaged to Sir Carew's daughter, but also added a relationship between Hyde and an Italian dancer (played by Nita Naldi). The addition of a female companion for Hyde became a feature of many later adaptations. Weltman says the design for Hyde's residence in the movie may have been influenced by Mansfield's set decoration choices, based on descriptions given in contemporary reviews of the play. The first sound film based on Sullivan's play was a 1931 version, produced and directed by Rouben Mamoulian and distributed by Paramount Pictures. The film's writers, Samuel Hoffenstein and Percy Heath, followed much of Sullivan's storyline. Their screenplay adds a female companion for Hyde similar to Robertson's 1920 version, but Hyde murders her, a plot point that Weltman believes was influenced by the real-world association of the play with Jack the Ripper. Hoffenstein and Heath were nominated for the Academy Award for Best Adapted Screenplay, and the cinematographer Karl Struss was nominated for the Academy Award for Best Cinematography. Fredric March won the Academy Award for Best Actor for his portrayal of Jekyll and Hyde. Metro-Goldwyn-Mayer released a 1941 film which was a remake of Mamoulian's 1931 film. Victor Fleming directed, and Spencer Tracy starred.
This version was nominated for three Academy Awards: Joseph Ruttenberg for Best Cinematography, Black-and-White; Harold F. Kress for Best Film Editing; and Franz Waxman for Best Score of a Dramatic Picture. According to the film historian Denis Meikle, Robertson, Mamoulian, and Fleming's films followed a pattern set by Sullivan's play: making Hyde's evil sexual and the Jekyll-Hyde transformation central to the performance. Meikle views this as a deterioration of Stevenson's original narrative initiated by Sullivan. The literary scholar Edwin M. Eigner says of the play and movies based on it that "each [adaptation] did its bit to coarsen Stevenson's ideas". Weltman says the play's association with Jack the Ripper also affected many adaptations, such as the 1997 Broadway musical Jekyll & Hyde and the 1971 film Dr. Jekyll and Sister Hyde. However, many later adaptations diverged from the model established by Sullivan and the early films; some returned to Stevenson's novella, and others spun new variations from aspects of earlier versions.
42,943,433
Blue men of the Minch
1,130,286,680
Scottish mythological creatures
[ "Piscine and amphibian humanoids", "Scottish folklore", "Scottish legendary creatures", "Scottish mythology", "Water spirits" ]
The blue men of the Minch, also known as storm kelpies (Scottish Gaelic: na fir ghorma), are mythological creatures inhabiting the stretch of water between the northern Outer Hebrides and mainland Scotland, looking for sailors to drown and stricken boats to sink. They appear to be localised to the Minch and surrounding areas to the north and as far east as Wick, unknown in other parts of Scotland and without counterparts in the rest of the world. Apart from their blue colour, the mythical creatures look much like humans, and are about the same size. They have the power to create storms, but when the weather is fine they float sleeping on or just below the surface of the water. The blue men swim with their torsos raised out of the sea, twisting and diving as porpoises do. They are able to speak, and when a group approaches a ship its chief may shout two lines of poetry to the master of the vessel and challenge him to complete the verse. If the skipper fails in that task then the blue men will attempt to capsize his ship. Suggestions to explain the mythical blue men include that they may be a personification of the sea, or originate with the Picts, whose painted bodies may have given the impression of men raising themselves out of the water if they were seen crossing the sea in boats that might have resembled kayaks. The genesis of the blue men may alternatively lie with the North African slaves the Vikings took with them to Scotland, where they spent the winter months close to the Shiant Isles in the Minch. ## Etymology The Minch, a strait that separates the northwest Highlands of Scotland and the northern Inner Hebrides from the northern Outer Hebrides, is home to the blue men. The Scottish Gaelic term for the blue men is na fir ghorma (in the genitive fear gorm, for example sruth nam fear gorm "the stream of the blue men"). The blue men are also styled as storm kelpies. The most common water spirits in Scottish folklore, kelpies are usually described as powerful horses, but the name is attributed to several different forms and fables throughout the country. The name kelpie may be derived from the Scottish Gaelic calpa or cailpeach, meaning "heifer" or "colt". ## Folk beliefs ### Description and common attributes The mythical blue men may have been part of a tribe of "fallen angels" that split into three; the first became the ground-dwelling fairies, the second evolved to become the sea-inhabiting blue men, and the remainder the "Merry Dancers" of the Northern Lights in the sky. The legendary creatures are the same size as humans but, as the name implies, blue in colour. Writer and journalist Lewis Spence thought they were the "personifications of the sea itself" as they took their blue colouration from the hue of the sea. Their faces are grey and long in shape and some have long arms, which are also grey, and they favour blue headgear; at least one account claims they also have wings. The tempestuous water around the Shiant Isles, 19 kilometres (12 mi) to the north of Skye, an area subject to rapid tides in all weathers, flows beside the caves inhabited by the blue men, a stretch of water known as the Current of Destruction owing to the number of ships wrecked there. Although other storm kelpies are reported as inhabiting the Gulf of Corrievreckan, described by poet, writer, and folklorist Alasdair Alpin MacGregor as "the fiercest of the Highland storm kelpies", the blue men are confined to a very restricted area. According to Donald A.
Mackenzie they have no counterparts elsewhere in the world or even in other areas of Scotland; such limited range is rare for beliefs in spirits and demons. Folklorist and Tiree minister John Gregorson Campbell says they were unknown in Argyll on the nearby coast of the mainland for instance, although Church of Scotland minister John Brand, who visited Quarff in Shetland in mid-1700, recounts a tale of what may have been a blue man in the waters around the island. In the form of a bearded old man it rose out of the water, terrifying the passengers and crew of a boat it was following. In traditional tales the blue men have the power to create severe storms, but when the weather is fine they sleep or float just under the surface of the water. They swim with their torso from the waist upwards raised out of the sea, twisting and diving in a similar way to a porpoise. To amuse themselves the creatures play shinty when the skies are clear and bright at night. They are able to speak and converse with mariners and are especially vocal when soaking vessels with water spray, roaring with laughter as vessels capsize. When the blue men gather to attack passing vessels their chief, sometimes named as Shony, rises up out of the water and shouts two lines of poetry to the skipper, and if he cannot add two lines to complete the verse the blue men seize his boat. Mackenzie highlights the following exchange between the skipper of a boat and the chief of the blue men: > > Blue Chief: Man of the black cap what do you say As your proud ship cleaves the brine? Skipper: My speedy ship takes the shortest way And I'll follow you line by line Blue Chief: My men are eager, my men are ready To drag you below the waves Skipper: My ship is speedy, my ship is steady If it sank, it would wreck your caves. The quick responses took the blue chief by surprise; defeated and unable to do any damage to the vessel, the blue men returned to their underwater caves, allowing the vessel free passage through the strait. The blue men may alternatively board a passing vessel and demand tribute from its crew, threatening that if it is not forthcoming they will raise up a storm. ### Capture and killing No surviving tales mention attempts to kill the spirits, but a Gregorson Campbell story tells of the capture of a blue man. Sailors seize a blue man and tie him up on board their ship after he is discovered "sleeping on the waters". Two fellow blue men give chase, calling out to each other as they swim towards the ship: > > Duncan will be one, Donald will be two Will you need another ere you reach the shore? On hearing his companions' voices the captured spirit breaks free of his bonds and jumps overboard as he answers: > > Duncan's voice I hear, Donald too is near But no need of helpers has strong Ian More. Sailors thus believed all blue men have names by which they address each other. ## Origins Mackenzie's explanation of the legend of the blue men was based partly on research into the Annals of Ireland and goes back to the times of Harald Fairhair, the first Norse king, and his battles against the Vikings. The Scottish Gaelic term fir ghorma, meaning "blue men", is the descriptor for a black man according to Dwelly. Thus sruth nam fear gorm, one of the blue men's Gaelic names, literally translates as "stream of the blue men", or "river, tide or stream of the black man". Around the 9th century the Vikings took Moors they had captured and were using as slaves to Ireland. 
The Vikings spent winter months near the Shiant Isles, and Mackenzie attributes the story of the blue men to "marooned foreign slaves". He quotes an excerpt from historian Alan Orr Anderson's Early Sources of Scottish History, A.D. 500 to 1286: > These were the blue men [fir gorma], because Moors are the same as negroes; Mauritania is the same as negro-land [literally, the same as blackness]. More recent newspaper reports have repeated Mackenzie's hypothesis. Historian Malcolm Archibald agrees that the legend originates from the days when Norsemen had North African slaves, but speculates that the myth may have originated with the Tuareg people of Saharan Africa, who were known as the "blue men of the desert". The origin of the blue men of the Minch may alternatively lie with "tattooing people", specifically the Picts, whose Latin name picti means "painted people". If they were seen crossing the water in boats resembling the kayaks of the Finn-men, they may have given simple islanders and mariners the impression of the upper part of the body rising out of the water. ## See also - Kelpie - Water bull
5,532,406
1916 Texas hurricane
1,170,501,879
Category 4 Atlantic hurricane
[ "1910s Atlantic hurricane seasons", "1916 in Texas", "1916 meteorology", "1916 natural disasters in the United States", "August 1916 events", "Category 4 Atlantic hurricanes", "Hurricanes in Texas" ]
The 1916 Texas hurricane was an intense and quick-moving tropical cyclone that caused widespread damage in Jamaica and South Texas in August 1916. A Category 4 hurricane upon landfall in Texas, it was the strongest tropical cyclone to strike the United States in three decades. Throughout its eight-day trek across the Caribbean Sea and Gulf of Mexico, the hurricane caused 37 fatalities and inflicted \$11.8 million in damage. Weather observations were limited for most of the storm's history, so much of its growth has been inferred from scant data analyzed by the Atlantic hurricane reanalysis project in 2008. The precursor disturbance organized into a small tropical storm by August 12, shortly before crossing the Lesser Antilles into the Caribbean Sea. The storm skirted the southern coast of Jamaica as a hurricane on August 15, killing 17 people along the way. No banana plantation was left unscathed by the hours-long onslaught of strong winds. Coconut and cocoa trees also sustained severe losses. The southern parishes saw the severest effects, incurring extensive damage to crops and buildings; damage in Jamaica amounted to \$10 million. The storm then traversed the Yucatán Channel into the Gulf of Mexico and intensified further into the equivalent of a major hurricane on the modern-day Saffir–Simpson scale. On the evening of August 18, the hurricane struck southern Texas near Baffin Bay with winds of 130 mph (210 km/h). Buildings were razed at many coastal cities, the worst impacts being felt in Corpus Christi and surrounding communities. Beachfront structures were destroyed by a 9.2-foot (2.8 m) storm surge. Strong gusts and heavy rainfall spread farther inland across mainly rural sectors of southern Texas, damaging towns and their outlying agricultural districts alike. Railroads and other public utilities were disrupted across the region, with widespread power outages. Eight locations set 24-hour rainfall records; among them was Harlingen, which recorded the storm's rainfall maximum with 6 inches (150 mm) of precipitation. The deluge wrought havoc on military camps along the Mexico–United States border, forcing 30,000 garrisoned militiamen to evacuate. Aggregate property damage across Texas reached \$1.8 million, and 20 people were killed. The hurricane quickly weakened over southwestern Texas and dissipated near New Mexico by August 20. ## Meteorological history According to the U.S. Weather Bureau, the 1916 Texas hurricane "followed an average course for the type of August hurricanes that pass through the Yucatán Channel", but maintained an unusually brisk forward speed throughout its life. A possible precursor disturbance may have originated as early as August 8 near Africa, but observations were inconclusive in determining the formation of a tropical cyclone. The hurricane was first definitively detected as a tropical storm east of Barbados on August 12, based on a 40 mph (64 km/h) wind measurement from a nearby ship. There were no other observations of similarly gusty winds or low air pressures over the next three days while the system traced out the southern periphery of the Azores High westward into the eastern Caribbean Sea. Steady intensification was inferred during this period by the Atlantic hurricane reanalysis project in 2008, which estimated that the storm strengthened into a hurricane on August 15 while located south of Hispaniola.
Concurrently, the hurricane curved slightly towards the west-northwest, bringing it just south of Jamaica that day with winds of 85 mph (137 km/h). Near the Cayman Islands, a vessel recorded 55 mph (89 km/h) winds, ultimately proving to be the strongest offshore winds sampled in connection with the cyclone. Continuing to intensify, the hurricane emerged into the Gulf of Mexico through the Yucatán Channel on the morning of August 17. Weather observations remained scant in the open waters of the Gulf of Mexico, the strongest observed winds being limited to marginal gales. The storm reached major hurricane intensity just north of the Yucatán Peninsula on August 17, and reports on the developing hurricane proliferated as the storm neared the Texas coast. By August 18, the hurricane reached Category 4 intensity in the western Gulf of Mexico; the first outer bands began reaching the coast near Corpus Christi, Texas, early that morning. During the evening hours, the center of the hurricane made landfall near Baffin Bay, Texas, with maximum sustained winds of 130 mph (210 km/h) and a central pressure of 932 mbar (hPa; 27.52 inHg). These parameters made it the strongest hurricane of the 1916 Atlantic hurricane season. In terms of pressure, the 1916 Texas hurricane was stronger than any other landfalling tropical cyclone in the United States since 1886. It was larger than average upon landfall, with a 29 mi (47 km) radius of maximum wind. Neither the strongest winds nor lowest pressure were directly measured, and were instead extrapolated from peripheral data by the hurricane reanalysis project using storm surge modelling and pressure to wind relationships. Several other researchers in the 20th century made similar analyses of the landfall, all concluding that Texas was impacted by a major hurricane. Weakening ensued as the storm quickly progressed farther inland and into West Texas; by August 19, the system was a weakening tropical depression, opening into a trough of low pressure the following day near the border between Texas and New Mexico within the valley of the Pecos River. ## Preparations and impact ### Caribbean Sea Crossing the Lesser Antilles from August 12 to 13, the developing tropical cyclone produced breezy conditions; sustained winds peaked at 25 mph (40 km/h) on Antigua and reached low-end tropical storm intensity offshore. In San Fernando, Trinidad and Tobago, a station recorded 0.42 in (11 mm) of rain from the passing disturbance. A warning noting the likelihood of hurricane-force winds was issued for the Yucatán Channel near Cuba's Guanahacabibes Peninsula on August 16. Maritime traffic was briefly halted before being allowed to resume course to Cuban and Central American ports. The hurricane dealt a heavy blow to Jamaica when the storm passed south of the Crown colony on the night of August 15, killing seventeen people and leaving thousands homeless. Although the U.S. Weather Bureau did not indicate a landfall, reporting from The Daily Gleaner suggested that the storm's calm eye passed over Kingston and at least four of the island's southern parishes. Damage was consequently heaviest in the southern half of Jamaica, though some crops across the northern parishes were also affected; the overall damage toll was estimated at \$10 million. Among Jamaica's crops, banana cultivations were the most severely impacted; several communities and parishes documented a majority loss of their bananas, especially on the eastern half of the island.
According to the American consulate, the entirety of Jamaica's banana crop was damaged to some extent. In Bath, the storm was the most devastating since the 1903 Jamaica hurricane. The eastern banana-growing belt was thoroughly ruined; five thousand mature banana trees were toppled before the storm's closest approach to Bath, accounting for a near-total loss of the fruit there. Sugar plantations also suffered greatly, as did coconut and cocoa trees in Portland Parish. An estimated 30–50 percent of the cocoa crop was damaged. Winds peaked at 80 mph (130 km/h) during the evening hours of August 15 in Bowden, cutting telegraph communications and damaging many buildings and banana trees. Several hours of gusty winds downed telegraph lines and fruit trees of all varieties throughout Saint Thomas Parish. Homes were unroofed and displaced in Annotto Bay, constituting most of the property damage there. Heavy rainfall caused the Dry, Johnson, and Yallahs rivers to rise above their banks, washing over bridges and rendering them impassable. Communications between Kingston and other parishes were cut off for 48 hours after intense winds brought down telegraph and telephone lines, making the dissemination of damage reports in Jamaica increasingly difficult. The strongest sustained winds reached 72 mph (116 km/h) in Kingston, attended by higher gusts estimated at 85 mph (137 km/h). One station in Kingston measured 1.56 in (40 mm) of precipitation. Power was lost after falling trees struck critical electric wires, halting streetcars. One woman was killed after being electrocuted by a falling electric wire. Kingston was left in darkness overnight, prompting police to warn pedestrians to vacate the city streets. Most of the damage inflicted on property in Kingston and lower Saint Andrew Parish was minor and confined to the most vulnerable structures. Homes, fences, and signs were damaged in both residential and commercial districts of the metropolis. At wharves along the coast, iron-sheet roofs were torn away from lumber sheds. Debris littered roads, and in one case, a house was blown onto a highway. Rough surf generated by the strong winds sank or grounded vessels and lighters on the shores of Kingston Harbour, with one wreck resulting in two fatalities. Substantial losses befell crops in Saint Catherine Parish, including severe damage to banana trees between Kingston and Spanish Town. Damage was also wrought to coconut trees and other large trees in the region. In the hurricane's aftermath, the colonial government planned to assist growers in re-establishing damaged crops, and also allocated £21,000 to relief efforts. Owing to the widespread damage to banana crops, the reduced demand for rail service and subsequent cuts in revenue forced the Jamaica Railway Corporation to downsize. ### Texas General information on the hurricane's location and movement for American interests was first issued by the United States Weather Bureau on the morning of August 13, based on information from Saint Kitts. Alerts to vessels in the path of the storm prompted 20 steamers to anchor in New Orleans, Louisiana. Due to the storm's initially small size and the lack of data concerning it, the Weather Bureau lamented that "the location of the center of the storm was [...] a very unsatisfactory matter", as was the case with two other tropical cyclones monitored by the agency in the same month. 
On August 18, as the storm neared Texas landfall, hurricane warnings were first issued, advising coastal residents from Cameron County northward to Calhoun County by telegraph and telephone of the hurricane's imminent approach. Anticipating the hurricane's effects, prices closed 7–9 points higher than the previous day at the New Orleans Cotton Exchange and advanced by 10–12 points at the New York Cotton Exchange. Galveston residents evacuated for the mainland via interurban routes and special trains as seas began to rise, filling railcars to capacity; in total, thousands of people evacuated the insular city. Another set of Southern Pacific traincars was readied at Seabrook in case more evacuations were required. A hundred automobiles were used to escort women and children from vulnerable sections of Corpus Christi to the safer buildings of the business district on the afternoon of August 18, finding havens at banks, hotels, schools, and the city hall. However, many city residents did not take precautions in protecting their property, as conventional wisdom held that destructive storms did not affect Corpus Christi. Fearing a repeat of the 1915 Galveston hurricane, some visitors in Galveston headed toward Corpus Christi, only to be caught in the incoming storm. The United States Coast Guard stationed at Brazos Island evacuated summer residents of Padre Island, offshore of Port Isabel. Nearby ships were brought to the Port Isabel harbor. The coastal steamer Pilot Boy sank in the entrance to the harbor at Port Aransas after being battered by the hurricane's rough seas, killing six of her crew. Water levels along the coast of Texas rose, with storm surge heights reaching 9.2 ft (2.8 m) in Corpus Christi and 4 ft (1.2 m) in Galveston. Although the surge was attenuated by the hurricane's quick motion, the waves were nonetheless destructive, destroying every pier in Corpus Christi Bay and many boats. A large segment of the Corpus Christi causeway was washed away. Outhouses and a dwelling at the Aransas Pass Light Station were undermined. Driftwood was strewn across the coast of Laguna Madre for the first time in living memory. The Category 4 hurricane moved ashore near Baffin Bay at 5:00 p.m. CST (23:00 UTC) on August 18, roughly an hour earlier than forecast. Damage from the hurricane was inflicted over a wide expanse of southern Texas and was greatest along the coast. The cities of Bishop, Kingsville, and Corpus Christi sustained the greatest effects. All Western Union communication lines between San Antonio and Brownsville were severed by 1:30 p.m. CST (19:30 UTC) on August 18, preventing the transmission of early reports from the region and accounting for \$50,000 in damage. At Corpus Christi, approximately 45 mi (72 km) northeast of the storm's point of landfall, winds reached at least 90 mph (140 km/h) before the observing station's anemometer was knocked out of commission. Thunderstorms and squalls began affecting the city on the morning of August 18, preceding the onset of hurricane-force winds that evening; light winds prevailed by August 20. Damage was inflicted upon most buildings in the city. Summer cottages were destroyed and the business district incurred thousands of dollars in damage after it was entirely flooded. Many salt cedar plants were blown down. The waterfront area endured the worst effects, including the destruction of all wharves and their ancillary buildings. All bathing pavilions collapsed and a pleasure pier was left in ruins.
Much of a coastal apartment compound was reduced to rubble floating in Corpus Christi Bay. Corpus Christi also lost electricity during the storm, putting the city lights and other services out of commission. Conservative estimates placed financial losses for the city between \$250,000 and \$500,000, and three people drowned along the immediate coast. The nearby communities of Aransas Pass and Rockport sustained "considerable" damage. Nearly every building was affected and many were destroyed in Rockport, including the city hall. Many of Port Aransas's frame buildings, piers, and other coastal structures fell victim to the rough seas. Small shipping interests were hurt in Port Lavaca, particularly the fish and oyster industry. Waterfront homes in the port city were destroyed. Port O'Connor and surrounding locales were impacted by 75 mph (121 km/h) winds that damaged numerous homes and dislodged the roof of a hotel. Strong winds also forced the sea inland, grounding boats and submerging the nearby grounds of the Epworth League in Seadrift. A relief train was sent from Austwell to Port O'Connor to evacuate storm-stricken residents. Bay View College in Portland permanently closed following damage to its buildings. Intense winds in Kingsville unroofed homes and businesses. The collapse of a city garage crushed several cars beneath it. Governor of Illinois Edward Fitzsimmons Dunne was caught in the storm at Kingsville; he had been inspecting army camps along the Texas–Mexico border in the days before they were ravaged by the storm. Every house was damaged and most were destroyed in Riviera, located 15 mi (24 km) south of Kingsville. In the nearby resort town of Riviera Beach, the hurricane destroyed all businesses and amenities, as well as most of the residences, resulting in an exodus that led to the resort's demise. Farther north in Galveston, the hurricane produced 50 mph (80 km/h) winds that destroyed two homes. Moderate to heavy rains spread across southern Texas both ahead of the storm and to the right of the center's path. There were two foci of heavy rainfall: the first along the coast where a maximum of 6.0 in (150 mm) was reported in Harlingen, and a second borne of orographic lifting in the mountains of southwestern Texas. Eight towns, including Harlingen, set 24-hour rainfall records. Crops were badly injured by the winds and rain, with damage to cotton accounting for most of the financial loss. About one-third of the cotton crop around Shiner was lost, and in some locations more than half of the pecans were blown off trees. The storm proved beneficial for cotton harvesting in Victoria County by helping to clear excess foliage. Many farm buildings and small structures were leveled in Beeville. In Brownsville, plate glass windows were blown out and fences and trees were toppled. Some militia camps were also deluged by the heavy rainfall in the Brownsville area, destroying thousands of dollars' worth of government equipment after perishable munitions were exposed to the elements. Four militiamen were injured after U.S. Army tents were flattened by the storm. All military encampments in the area had to be temporarily abandoned, with 30,000 people seeking refuge in public buildings in Mercedes and Mission. Similar damage occurred farther upstream along the Rio Grande Valley in Laredo, where the hurricane tore down small buildings and communication poles. Downed wires forced the city to shut down power for most of the municipality.
Although damage was widespread, its overall magnitude along the Rio Grande remained slight. Sections of the San Antonio and Aransas Pass Railway and International–Great Northern Railroad were put out of commission, the former left mangled and obstructed by debris. Other trains in the region were delayed by 12–18 hours, and the total cost of damage to railroads and other public utilities exceeded \$300,000. Over a thousand workers were dispatched by the afflicted railroad companies to repair the railways. Wind damage was documented as far inland as Montell in Uvalde County where frame homes were damaged and windmills collapsed. Strong winds and intermittent rainfall extended into the Austin area, while 68 mph (109 km/h) winds swept through San Antonio. In total, twenty people were killed in Texas, and damage to property was estimated at \$1.8 million. ## See also - List of Texas hurricanes (1900–1949) - Hurricane Bret (1999) – quickly intensified in the western Gulf of Mexico before making landfall on a relatively unpopulated extent of the Texas coast - 1919 Florida Keys hurricane – deadly tropical cyclone that devastated areas of the Florida Keys and Texas, particularly in the Corpus Christi area - Hurricane Allen – struck southern Texas after tracking across the Caribbean Sea - Hurricane Beulah – a Category 5 hurricane which moved through the Yucatán Peninsula before striking southern Texas
21,432,665
Indian Camp
1,169,875,047
Short story by Ernest Hemingway
[ "1924 short stories", "1925 short stories", "Autobiographical short stories", "Fiction about suicide", "Native Americans in popular culture", "Short stories by Ernest Hemingway", "Works originally published in The Transatlantic Review (1924)" ]
"Indian Camp" is a short story written by Ernest Hemingway. The story was first published in 1924 in Ford Madox Ford's literary magazine Transatlantic Review in Paris and republished by Boni & Liveright in Hemingway's first American volume of short stories In Our Time in 1925. Hemingway's semi-autobiographical character Nick Adams—a child in this story—makes his first appearance in "Indian Camp", told from his point of view. In the story Nick Adams' father, a country doctor, has been summoned to a Native American or "Indian" camp to deliver a baby. At the camp, the father is forced to perform an emergency caesarean section using a jack-knife, with Nick as his assistant. Afterward, the woman's husband is discovered dead, having slit his throat during the operation. The story shows the emergence of Hemingway's understated style and his use of counterpoint. An initiation story, "Indian Camp" includes themes such as childbirth and fear of death, which permeate much of Hemingway's subsequent work. When the story was published, the quality of writing was noted and praised, and scholars consider "Indian Camp" an important story in the Hemingway canon. ## Plot summary The story begins in the pre-dawn hours as the young Nick Adams, his father, his uncle and their Indian guides row across a lake to a nearby Indian camp. Nick's father, a doctor, has been called out to deliver a baby for a woman who has been in labor for days. At the camp, they find the woman in a cabin lying on a bottom bunk; her husband lies above her with an injured foot. Nick's father is forced to perform a caesarean operation on the woman with a jack-knife because the baby is in the breech position; he asks Nick to assist by holding a basin. The woman screams throughout the operation, and when Nick's uncle tries to hold her down, she bites him. After the baby is delivered, Nick's father turns to the woman's husband on the top bunk and finds that he has fatally slit his throat from ear to ear with a straight razor during the operation. Nick is sent out of the cabin, and his uncle leaves with two Natives, never to return. The story ends with only Nick and his father on the lake, rowing away from the camp. Nick asks his father questions about birth and death, and thinks to himself that he will never die, as he watches his father row. ## Background and publication history In the early 1920s, Hemingway and his wife Hadley lived in Paris where he was foreign correspondent for the Toronto Star. When Hadley became pregnant they returned to Toronto. Hemingway biographer Kenneth Lynn suggests that Hadley's childbirth became the inspiration for the story. She went into labor while Hemingway was on a train, returning from New York. Lynn believes Hemingway likely was terrified Hadley would not survive the birth, and he became "beside himself with fear ... about the extent of her suffering and swamped by a sense of helplessness at the realization that he would probably arrive too late to be of assistance to her." Hemingway wrote "Indian Camp" a few months after John Hemingway was born in Toronto on October 10, 1923. While they were in Toronto, Hemingway's first book, Three Stories and Ten Poems, was published in Paris, followed months later by a second volume, in our time (without capitals), which included 18 short vignettes presented as untitled chapters. Hemingway, Hadley, and their son (nicknamed Bumby) returned to Paris in January 1924, moving into a new apartment on the Rue Notre Dame des Champs.
With Ezra Pound, Hemingway helped Ford Madox Ford edit his newly launched literary magazine, Transatlantic Review, which published pieces by modernists such as Pound, John Dos Passos, James Joyce, and Gertrude Stein, as well as Hemingway. "Indian Camp" began as a 29-page untitled manuscript that Hemingway cut to seven pages; at first he called the story "One Night Last Summer". In 1924, the seven-page story titled "Indian Camp" was published by the Transatlantic Review in the "Works in Progress" section, along with a piece from James Joyce's manuscript Finnegans Wake. A year later on October 5, 1925, "Indian Camp" was republished by Boni & Liveright in New York, in an expanded American edition of Hemingway's first collection of short stories titled In Our Time (with capitals), with a print run of 1,335 copies. "Indian Camp" was later included in Hemingway's collection The Fifth Column and the First Forty-Nine Stories published in October 1938. Two collections of short stories published after Hemingway's death included "Indian Camp": The Nick Adams Stories (1972) and The Complete Short Stories of Ernest Hemingway: The Finca Vigía Edition (1987). The Nick Adams Stories (1972), edited by Philip Young, included the story fragment titled "Three Shots" that Hemingway originally cut from "Indian Camp." ## Themes and genre ### Initiation and fear of death "Indian Camp" is an initiation story. Nick's father (Dr. Adams) exposes his young son to childbirth and, unintentionally, to violent death—an experience that causes Nick to equate childbirth with death. Hemingway critic Wendolyn Tetlow maintains that in "Indian Camp," sexuality culminates in "butchery-style" birth and bloody death, and that Nick's anxiety is obvious when he turns away from the butchery. The story reaches a climax when Nick's "heightened awareness" of evil causes him to turn away from the experience. Although Nick may not want to watch the caesarean, his father insists that he watch; according to Thomas Strychacz, the father does not want his son to be initiated into an adult world without toughness. Hemingway biographer Philip Young writes that Hemingway's emphasis in "Indian Camp" was not primarily on the woman who gives birth or the father who kills himself, but on young Nick Adams, who witnesses these events and becomes a "badly scarred and nervous young man". In "Indian Camp," Hemingway begins the events that shape the Adams persona. Young considers this single Hemingway story to hold the "master key" to "what its author was up to for some thirty-five years of his writing career". Critic Howard Hannum agrees. He believes the trauma of birth and suicide Hemingway paints in "Indian Camp" rendered a leitmotif that gave Hemingway a unified framework for the Nick Adams stories. "Indian Camp" is also about the fear of death. The section cut from the story highlights Nick's fear; the published version underscores it in a less obvious manner. In the cut section, later published as "Three Shots," the night before being taken to the Indian camp, Nick is left alone in the forest, where he is "overwhelmed by thoughts of death." Critic Paul Strong speculates that Hemingway may have intended the narrative to be structured so that Nick's father chose to take his fearful son to the Indian camp where Nick faced the grisly reality of death, which would have done "little to assuage Nick's fears."
Hannum believes Hemingway is intentionally vague about the details of the birth but not the death; he speculates that Nick would have likely "blocked out much of the caesarian but he had clearly seen the father's head tilted back." Critics have questioned why the woman's husband kills himself. Strong finds the arguments that the husband is driven to suicide by the wife's screaming to be problematic because the suicide occurs at the moment the screams are silenced. He points to Hemingway's statement in Death in the Afternoon, "if two people love each other there can be no happy end to it," as evidence that the husband may have killed himself because he is "driven frantic by his wife's pain, and perhaps his own." The story also shows the innocence of childhood; Nick Adams believes he will live forever and be a child forever; he is a character who sees his life "stretching ahead." At the end of the story, in the boat with his father, Nick denies death when he says he will never die. "Indian Camp" shows Hemingway's early fascination with suicide and with the conflict between fathers and sons. Young thinks there is an unavoidable focus on the fact that the two people the principal characters are based on—the father, Clarence Hemingway, and the boy, Ernest Hemingway—end up committing suicide. Kenneth Lynn writes that the irony to modern readers is that both characters in "that boat on the lake would one day do away with themselves." Hemingway shot himself on July 2, 1961; his father had shot himself on December 6, 1928. ### Primitivism, race, and autobiography In his essay "Hemingway's Primitivism and 'Indian Camp'", Jeffrey Meyers writes that Hemingway was very clear about the husband's role, because in this story he was writing about a familiar subject—the experiences of his boyhood in Michigan. The young father's role is to "deflate the doctor," who finds victory in slicing open the woman's belly to deliver the infant, and to provide a counterpoint to the mother's strength and resilience. The father's suicide serves as a symbolic rejection of the white doctor whose skill is necessary, but who brings with him destruction. In her paper "Screaming Through Silence: The Violence of Race in 'Indian Camp,'" Amy Strong writes that "Indian Camp" is about domination; the husband kills himself at the moment his wife is cut open by a white doctor. She thinks the theme of domination exists on more than one level: Nick is dominated by his father; the white outsiders dominate in the Indian camp; and the white doctor "has cut into the woman, like the early settlers leaving a gash in the tree." According to Hemingway scholar Thomas Strychacz, in the story Hemingway presents a re-enactment of the arrival of Europeans in the New World and the subsequent doctrine of manifest destiny. The white men in the story arrive on the water and are met at a beach by natives. The native husband and father of the baby loses everything, causing him to kill himself: his home is overtaken, and his wife ripped apart. The white doctor tells his son to ignore the woman's screams: "her screams are not important. I don't hear them because they are not important." The doctor's victory is to control nature by delivering a baby, a victory diminished by the suicide of the father, who through his death symbolically takes back control from the white doctor. Meyers claims the story is not autobiographical though it is an early example of Hemingway's ability to tell stories "true to life."
In the story, Nick Adams' father, who is portrayed as "professionally cool," is based on Hemingway's own father, Clarence Hemingway. Hemingway's paternal uncle, George, appears in the story, and is treated unsympathetically. Hannum suggests George may have been the child's father, writing that in the story there remains the "never-resolved implication of the paternity of the Indian child." During the surgery the mother bites Uncle George, the Indians laugh at him, and he leaves when the father is found dead. Jackson Benson writes in "Ernest Hemingway: The Life as Fiction and the Fiction as Life" that critics should refrain from finding connections between Hemingway's life and fiction and instead focus on how he uses biographical events to transform life into art. He believes the events in a writer's life have only a vague relationship to the fiction, like a dream from which a drama emerges. Of Hemingway's earliest stories, Benson claims "his early fiction, his best, has often been compared to a compulsive nightmare." In his essay "On Writing," Hemingway wrote that "Indian Camp" was a story in which imaginary events were made to seem real: "Everything good he'd ever written he'd made up. ... Of course he'd never seen an Indian woman having a baby. That was what made it good." ## Writing style Hemingway biographer Carlos Baker writes that Hemingway learned from his short stories how to "get the most from the least, how to prune language, how to multiply intensities, and how to tell nothing but the truth in a way that allowed for telling more than the truth." The style is considered exemplary of the iceberg theory, because, as Baker describes it, in Hemingway's writing the hard facts float above water while the supporting structure, including the symbolism, operates out of sight. Benson believes Hemingway used autobiographical details as framing devices to write about life in general—not only his life. The concept of the iceberg theory is sometimes referred to as the "theory of omission." Hemingway believed the writer could describe one thing though an entirely different thing occurs below the surface. Hemingway learned from Ezra Pound how to achieve a stripped-down style and how to incorporate the concepts of imagism in his prose. He said Pound "had taught him more 'about how to write and how not to write' than any son of a bitch alive", and his friend James Joyce told him "to pare down his work to the essentials." The prose is spare and lacks clear symbolism. Instead of more conventional literary allusions, Hemingway relied on repetitive metaphors or metonymy to build images. The caesarean is repeatedly associated with words such as "the blanket" and "the bunk" in a series of objective correlatives, a technique Hemingway learned from T.S. Eliot. Tetlow believes that in this early story Hemingway ignored character development; he simply places a character in a setting, and adds descriptive detail such as a screaming woman, men smoking tobacco, and an infected wound, which give a sense of truth. "Indian Camp" is constructed in three parts: the first places Nick and his father on a dark lake; the second takes place in the squalid and cramped cabin amid terrifying action; and the third shows Nick and his father back on the lake—bathed in sunlight. Hemingway's use of counterpoint is evident when, for example, at the end, Nick trails his hand in lake water that "felt warm in the sharp chill of the morning."
Paul Strong believes the deleted section may have provided context and additional counterpoint to the plot, with Nick's aloneness in the "stillness of the night" juxtaposed against the middle scene, crowded with people. Paul Smith writes that by cutting the piece, Hemingway focuses on the story's central point: the life-and-death initiation rituals, familiar to the residents of the Indian camp but alien to young Nick. Unable to express his feelings fully, in the end, Nick trails his hand in the water and "felt quite sure that he would never die." ## Reception and legacy Hemingway's writing style attracted attention when in our time (without capitals) was published in Paris in 1924—in a small print run from Ezra Pound's modernist series through Three Mountains Press. Edmund Wilson described the writing as "of the first distinction," enough to bring attention to Hemingway. When "Indian Camp" was published, it received considerable praise. Ford Madox Ford regarded "Indian Camp" as an important early story by a young writer. Critics in the United States claimed Hemingway reinvigorated the short story by his use of declarative sentences and his crisp style. Hemingway said In Our Time had "pretty good unity" and critics generally agree. In the 1970s Carlos Baker wrote of the stories from In Our Time, and specifically "Indian Camp," that they were a remarkable achievement. Hemingway scholars, such as Benson, rank "Indian Camp" as one of Hemingway's "greatest short stories," a story that is described as "best known," "violent" and "dramatic." In 1992, Frederick Busch wrote in The New York Times that Hemingway had gone out of fashion. While his antisemitism, racism, violence, and attitudes toward women and homosexuals made him objectionable by current standards, he turned violence into art unlike any other American writer of his time by showing that "the making of art is a matter of life or death, no less." Busch believes Hemingway's characters either faced life or chose death, a choice shown most starkly in "Indian Camp." The saving of a life in "Indian Camp" is at the center of much of Hemingway's fiction, Busch writes, and adds power to his work.
5,673,037
Mauritius sheldgoose
1,169,045,348
Extinct species of bird
[ "Alopochen", "Bird extinctions since 1500", "Birds described in 1893", "Birds of Mauritius", "Ducks", "Extinct animals of Africa", "Extinct animals of Mauritius", "Extinct birds of Indian Ocean islands", "Taxa named by Edward Newton", "Taxa named by Hans Friedrich Gadow" ]
The Mauritius sheldgoose (Alopochen mauritiana), also known as the Mauritius shelduck, is an extinct species of sheldgoose that was endemic to the island of Mauritius. While geese were mentioned by visitors to Mauritius in the 17th century, few details were provided by these accounts. In 1893, a carpometacarpus wing-bone and a pelvis from the Mare aux Songes swamp were used to name a new species of comb duck, Sarcidiornis mauritianus. These bones were connected to the contemporary accounts of geese and later determined to belong to a species related to the Egyptian goose and placed in the sheldgoose genus Alopochen. The Mauritius and Réunion sheldgeese may have descended from Egyptian geese that colonised the Mascarene islands. One contemporary account states that the Mauritius sheldgoose had wings that were half black and half white, and that the bird was not very large. The species may also be depicted in one illustration. Fossil elements show that it was smaller than the Egyptian goose, but with more robust legs. Little is known about the habits of the Mauritius sheldgoose; accounts indicate they were very tame, were grazers, lived in groups, and usually stayed on the north side of the island except during the dry season, when they were forced to the other side to drink. Their robust legs indicate they were becoming more terrestrial, which is supported by accounts stating they avoided water. The species was considered highly palatable by travellers, and while abundant in 1681, it declined quickly thereafter, being declared extinct in 1698. It was probably driven to extinction due to overhunting and predation by introduced animals, particularly cats. ## Taxonomy Geese were reported by visitors to the Mascarene island of Mauritius in the 17th century, but few details were provided by these accounts. In 1889, the Mauritius government requested exploration of the Mare aux Songes swamp for "historical souvenirs", where vast amounts of dodo remains had earlier been found. The new excavations, under the direction of the French naturalist Théodore Sauzier, were successful, and apart from dodo bones, remains of other extinct animals, previously known as well as new species, were found. These bones were sent to the Cambridge Museum, where they were examined and described by the British ornithologist Edward Newton and the German ornithologist Hans Gadow. Based on a left carpometacarpus wing-bone (part of the hand, and the holotype specimen), they determined the existence of a large member of the comb duck genus Sarcidiornis, which they considered a new species because it had been restricted to Mauritius, naming it S. mauritianus. They also considered the incomplete left half of a pelvis to belong to this species. Because the contemporary accounts of geese on Mauritius did not mention a caruncle (or knob) on their bill as is seen in Sarcidiornis comb ducks, the French zoologist Emile Oustalet doubted they belonged in that genus in 1896. When describing the Malagasy sheldgoose (then Chenalopex sirabensis, now in the genus Alopochen) based on fossils from Madagascar in 1897, the British palaeontologist Charles William Andrews suggested that when more remains were discovered of the Mauritian species, the two might turn out to be the same.
While the British zoologist Walter Rothschild noted Oustalet's objection to the species belonging in Sarcidiornis in 1907, he believed that it was merely an oversight that the caruncle was not mentioned in contemporary accounts, and that an allusion to the small size of these geese supported them being Sarcidiornis. The American ornithologist James Greenway listed the bird as a species of Sarcidiornis in 1967. In 1987, the British ornithologist Graham S. Cowles stated that an additional carpometacarpus from the Mare aux Songes, then recently identified in the British Museum of Natural History, confirmed Andrews' suggestion that the Mauritius bird did not belong in Sarcidiornis, but in the sheldgoose (or shelduck) genus Alopochen, to which the extant Egyptian goose (A. aegyptiaca) belonged. In his 1994 description of the Réunion sheldgoose (then Mascarenachen kervazoi) based on fossils from Réunion, Cowles again listed the Mauritius bird as A. mauritiana, noting that Andrews had implied it was close to the Malagasy sheldgoose. In 1997, the British ornithologists Hywel Glyn Young, Simon J. Tonge, and Julian P. Hume reviewed extinct wildfowl, and noted that the interrelationships of the four extinct sheldgeese from the region of Madagascar and the western Indian Ocean were unclear, and that they may not all have been full species. They also listed the Mauritius sheldgoose as a species of Alopochen. The French palaeontologist Cécile Mourer-Chauviré and colleagues stated in 1999 that while the Mauritius sheldgoose was similar to the Malagasy and Réunion sheldgeese, it may have been endemic to Mauritius, and may be distinguishable from those species if more remains of it are found. They also moved the Réunion sheldgoose to the same genus as the Egyptian goose and the Mauritius sheldgoose, Alopochen. The British writer Errol Fuller stated in 2000 that while the geese seen on Mauritius by 17th-century travellers may be connected to the species described from bones, it is possible that there is no connection. The British ecologist Anthony S. Cheke and Hume suggested in 2008 that the Mascarene sheldgeese were derived from Malagasy forms with African affinities, probably descended from the Egyptian goose after it had colonised the Mascarene islands. They added that fossils of the Mauritius sheldgoose were "extremely rare". In 2013, Hume noted that the first known tarsometatarsus (a lower leg bone) of the Mauritius sheldgoose was collected from the Mare aux Songes in 2006, and that he had reidentified a radius (a forelimb bone) as that of the sheldgoose, which had originally been assigned to the Mauritius night heron by Newton and Gadow in 1893. Reflecting changing historical classifications and definitions, the Mauritius sheldgoose has also been referred to by common names such as Mauritius shelduck and Mascarene swan, with further variations such as Mauritian shelduck and Mascarene sheldgoose. ## Description The best contemporary description of the Mauritius sheldgoose, and the only one that indicates what it looked like, is that of the English traveller John Marshall from 1668: > Here are many geese, the halfe of their wings towards the end, are black, and the other halfe white. They are not large but fat and good [to eat]. The holotype carpometacarpus of the Mauritius sheldgoose has a strongly projecting alular metacarpal (the hand bone to which the alula feathers attach) which ends in a callosity (with a rough and irregular surface).
The length of the carpometacarpus is 77 mm (3.0 in), within the size range of the Malagasy sheldgoose, and slightly larger than the largest individual of the Réunion sheldgoose. The carpometacarpus is similar in size to that of the brant goose (Branta bernicla), but considerably smaller than that of the domestic goose (Anser anser domesticus). There is no evidence that the Mauritius sheldgoose and its extinct island relatives were flightless. Additional fossil elements show that the Mauritius sheldgoose was smaller than the Egyptian goose, but with more robust legs, a feature it had in common with the Réunion sheldgoose. The pelvis of the Mauritius sheldgoose is also similar in size to that of the brant goose, measuring 70 mm (2.8 in) from the front brim of the acetabulum (the socket in the hip where the femur attaches) to the hind end of the ischium (which forms the back part of the pelvis), and generally agrees with the pelvis of ducks and geese. While the bill of the Mauritius sheldgoose is unknown, that of the Réunion sheldgoose was distinct in being shorter than that of the Egyptian goose. ### Possible depiction In 2004, Cheke attempted to identify a drawing of a bird that had been declared a dodo by the British historian Richard Grove in a 1995 book about western colonisation of oceanic islands. The bird was depicted in an illustration of a farm at Foul Bay, Mauritius, which showed agricultural practices, introduced animals, and birds and eels. Grove considered this to be the only illustration showing a dodo in its natural habitat and the last depiction of the species in life, and stated it was drawn by the commandant of the Dutch colony of Mauritius Isaac Lamotius in 1677. Grove believed that the drawing had been made to illustrate the overexploitation of the ebony forest to the Dutch East India Company, and that Lamotius had therefore been a sort of early conservationist. Cheke, who had previously researched the history of the dodo, found no documentary or ornithological arguments for this identification, and expressed puzzlement over it and other of Grove's conclusions. After contacting the Dutch national archives, he established that the illustration was unsigned, but had been accompanied by a 1670 letter written by the previous commandant G. F. van Wreeden and H. Klingenbergh. Cheke pointed out that the supposed dodo had a short, deep bill, webbed feet, normal wings, and a short, upturned tail, features inconsistent with it being a dodo. He suggested it was instead a better fit for the Mauritius sheldgoose, which would therefore make it the only known contemporary illustration of this bird in life. The new identification also implied that the dodo was already extinct by 1670, though the drawing had been used to support it surviving longer than generally assumed. Cheke identified two other waterbirds depicted in a stream as possible Mascarene teals, and a crow-like bird as a Mauritius bulbul. Cheke and the British palaeontologist Jolyon C. Parish stated in 2020 that the illustration "almost certainly" showed the Mauritius sheldgoose. ## Behaviour and ecology Little is known about the habits of the Mauritius sheldgoose. The Dutch soldier Johannes Pretorius' 1660s report about his stay on Mauritius is the most detailed contemporary account of its behaviour: > Geese are also here in abundance. They are a little larger than ducks, very tame and stupid, seldom in the water, eating grass, sometimes 40 or 50 or even a 100 together. 
When they are being shot, the ones that are not hit by the hail stay put and do not fly away. They usually keep to the north side of the island, far away from where the people live, except in the dry season when they are forced to drink on the other side of the island, and sometimes near the lodge. Hume and the British historian Ria Winters stated in 2015 that like many geese, the Mauritius sheldgoose was a grazer, and pointed out that Mauritius once had seven endemic species of grass, two of which are now extinct, as well as other species. Hume suggested in 2017 that the relatively robust legs of the Mauritius sheldgoose may indicate it was becoming more terrestrial, supported by the 1681 ship's log of the President, which stated: > Up a little within the woods are several ponds and lakes of water with great numbers of flamingoes and gray teal and geese; but for the geese these are most in the woods or dry ponds. Many other endemic species of Mauritius were lost after human colonisation of the island, so the ecosystem of the island is severely damaged and hard to reconstruct. Before humans arrived, Mauritius was entirely covered in forests, almost all of which have since been lost to deforestation. The surviving endemic fauna is still seriously threatened. The Mauritius sheldgoose lived alongside other recently extinct Mauritian birds such as the dodo, the red rail, the Mascarene teal, the broad-billed parrot, the Mascarene grey parakeet, the Mauritius blue pigeon, the Mauritius scops owl, the Mascarene coot, and the Mauritius night heron. Extinct Mauritian reptiles include the saddle-backed Mauritius giant tortoise, the domed Mauritius giant tortoise, the Mauritian giant skink, and the Round Island burrowing boa. The small Mauritian flying fox and the snail Tropidophora carinata lived on Mauritius and Réunion but became extinct in both islands. Some plants, such as Casearia tinifolia and the palm orchid, have also become extinct. ## Extinction Travellers to Mauritius and Réunion repeatedly mentioned highly palatable geese and ducks, and geese were listed among the favourite prey of hunters there. Cheke stated in 1987 that the Mauritius sheldgoose was considered abundant in 1681 but quickly declined thereafter; the French explorer François Leguat considered them rare in 1693, and the Dutch governor of Mauritius Roelof Deodati declared them extinct in 1698. Cheke added that since the number of men on these islands was low in the 1600s, it is unlikely they would have been responsible for the extinction of widespread animals, but those limited to certain habitats, such as geese and ducks, may have been exterminated by hunting, though reduced breeding would probably be due to introduced animals. Cheke elaborated in 2013 that cats were the main culprit, with hunting being secondary, and that the species survived introduced rats and pigs. Hume stated in 2017 that the Mauritius sheldgoose probably went extinct due to overhunting and possibly predation on its eggs and chicks by introduced mammals, particularly cats.
40,581,436
Frank Jenner
1,157,173,376
Australian evangelist
[ "1903 births", "1977 deaths", "20th-century evangelicals", "Australian Plymouth Brethren", "Australian evangelicals", "Australian gamblers", "British Plymouth Brethren", "British people of World War I", "Converts to Christianity from atheism or agnosticism", "Converts to evangelical Christianity", "Deaths from cancer in New South Wales", "Deaths from colorectal cancer", "English emigrants to Australia", "English evangelicals", "English gamblers", "Evangelists", "IBM employees", "Janitors", "Military personnel from Southampton", "People with Parkinson's disease", "People with narcolepsy", "Royal Australian Navy personnel of World War II", "Royal Australian Navy sailors", "Royal Navy sailors", "United States Navy sailors" ]
Frank Arthur "Bones" Jenner (surname often misspelled Genor; 2 November 1903 – 8 May 1977) was an Australian evangelist. His signature approach to evangelism was to ask people on George Street, Sydney, "If you died within 24 hours, where would you be in eternity? Heaven or hell?" Born and raised in England, he contracted African trypanosomiasis at the age of fourteen and suffered from narcolepsy for the rest of his life. After some time, he joined the Royal Navy, but deserted in New York and joined the United States Navy. When he was 24, he deserted again while in Australia. He subsequently served in the Royal Australian Navy until he bought his way out in 1937. That year, Jenner encountered a group of men from the Glanton Exclusive Brethren who were engaging in open-air preaching, and he converted to Christianity. For 28 years, from his initial conversion until his debility from Parkinson's disease, Jenner engaged in personal evangelism, probably speaking with more than 100,000 people in total. One person who became a Christian after encountering Jenner's question was Noel Stanton, who went on to found the Jesus Army in 1969. In 1952, the Reverend Francis Dixon of Lansdowne Baptist Church in Bournemouth, England, began hearing several testimonies from people who became Christians after Jenner accosted them on George Street, Sydney. The following year, Dixon met with Jenner in Australia and told him about the people he had met who had become Christians as a result of Jenner's evangelism, and Jenner, then fifty years old, cried because he had not previously known that even one of the people he had talked to had remained a Christian beyond their initial profession of faith. Jenner died from colorectal cancer in 1977. While he was alive, very few people knew of him, but after he died, stories of his evangelistic activities circulated widely, and elements of some of these stories contradicted others. In 2000, Raymond Wilson published Jenner of George Street: Sydney's Soul-Winning Sailor in an attempt to tell the story of Jenner's life accurately. Nonetheless, conflicting accounts of Jenner's life have continued to propagate, including an account from Ché Ahn in which Jenner is referred to as "Mr. Genor". ## Early life Frank Arthur Jenner was born on 2 November 1903 in Southampton, Hampshire, England. His father was a hotel pub owner and former sea captain. Jenner had four brothers. According to his posthumous biographer Raymond Wilson, Jenner was anti-authoritarian as a boy and, at the age of twelve, during World War I, he was sent to work aboard a training ship for misbehaving boys. When he was fourteen, the ship sailed from Southampton to Cape Town, South Africa. On the way, while the ship was docked at a port in West Africa, a tsetse fly bit Jenner and infected him with Trypanosoma; he therefore contracted African trypanosomiasis, which is also called "sleeping sickness". He subsequently entered a 15-day coma, but eventually recovered. From this point on, he suffered from excessive daytime sleepiness and was eventually diagnosed with narcolepsy, which prevented him from ever being able to drive a car. When the war ended, he returned to England. ## Navy career After some time, Jenner joined the Royal Navy, but deserted in New York City, United States. He soon joined the United States Navy. Jenner's daughter stated in an interview after his death that he learned how to gamble during this time and he soon developed the impulse control disorder of problem gambling. 
He became particularly attached to the game craps, which was popular in the United States at the time. He started to keep a rabbit's foot in the left upper pocket of his shirt, and would rub it with his left hand while he rolled the dice with his right. His shipmates therefore began calling him "Bones", a nickname he kept for the rest of his navy career. When he was 24, his work with the United States Navy involved going to Australia and he deserted again, this time in Melbourne. There, he met Charlie Peters, who invited him to his home to have a meal with his family including Jessie, Peters' 23-year-old daughter. Jessie and Jenner married a year later, on 6 July 1929, at HMAS Cerberus. They continued to live in Melbourne after their wedding and Jenner joined the Royal Australian Navy. He soon became one of the sailors assigned to travel to England to retrieve HMAS Canberra. He was serving on HMAS Australia in 1937 when he was legally discharged from the navy, buying his way out but not receiving a pension. In 1939, with the onset of World War II, Jenner was recalled to active duty. Because of his narcolepsy, he was given shore duties in Sydney. In this capacity, he participated in undercover operations and delivered sealed orders. After the war, he left the navy and became a janitor for IBM, a technology and consulting corporation. ## Conversion to Christianity In 1937, Jenner encountered a group of men from the Glanton Exclusive Brethren standing in front of the National Australia Bank on Collins Street. One of the men was engaging in open-air preaching. Jenner interrupted the man to say he would listen to the man's good news provided that he was allowed to share some good news first. The man agreed, so Jenner taught the group of Brethren how to play craps there on the pavement. One of the Brethren invited Jenner into his home for tea and told him about the gospel. Jenner converted to Christianity and, when he went home, told Jessie she was a sinner bound for hell and therefore in need of salvation. According to Wilson's biography of Jenner, Jessie thought Jenner had become manic or insane. They had a young daughter named Ann by this point and Jenner was gambling so much that he was not providing for his family. For both these reasons, Jessie left Jenner and moved to Corowa to work on a farm, taking Ann with her. She said she would return only when Jenner regained his sanity. On several occasions, he aggressively told Jessie's brothers they needed to become Christians, which angered them. On one of these occasions, their conversation became physical and they began punching each other. The brothers rejected Jenner and were never reconciled to him. He wrote to his family back in England informing them of his conversion and asking them to become Christians too, but he received no reply. Later in 1937, Jessie became seriously infected with boils and, while under the care of a Glanton Brethren family, became a Christian. Before the end of the year, Jenner and Jessie began living together again. Although Jenner gave up gambling, he was often unemployed because he would evangelise at his workplace and then be fired. In 1939, Jessie developed a peptic ulcer. At the time, it was believed that such ulcers were caused by stress, and Jessie's ulcer was therefore attributed to the stress induced by the family's lack of money. Consequently, she and Ann moved to India to live with Jenner's aunt Emily McKenzie, who ran the Kotagiri Keswick Missionary Home. 
Ann subsequently attended Hebron School in Ooty, Tamil Nadu, until she was ten years old. Once Jessie had recovered from her illness, they returned to Sydney on SS Oronsay. Jenner would normally wake up to pray at 5 am each day. In the 1940s, Jenner left the Glanton Brethren and joined the Open Brethren. For the rest of his life, Jenner attended Open Brethren churches: one on Goulburn Street in Sydney and the other in Bexley, New South Wales. At these churches, people did not understand what narcolepsy was and thought Jenner was consistently falling asleep during services because he lacked respect for God. The church on Goulburn Street also disapproved of his partnership with other Christian organisations and churches; Jenner actively partnered with The Navigators, Campaigners for Christ, Baptists, Anglicans, and Methodists. ## Evangelism Out of gratitude to God for giving him salvation, Jenner committed to consistently engaging in personal evangelism, and aimed to talk with ten different people every day thenceforward. For 28 years, from his initial conversion until his debility from Parkinson's disease, Jenner engaged in this form of evangelism. He probably spoke with more than 100,000 people in total, hundreds of whom made initial professions of commitment to Christianity. He kept religious tracts in his shirt pocket where he had previously kept his rabbit's foot, and he often gave these tracts to people he met. He also kept a card in his pocket with Philippians 4:13 on it in order to give himself courage in evangelising. This verse reads, "I can do all things through Christ who strengthens me." While engaging in these activities, Jenner would normally wear a white shirt, black shoes, and trousers, and sometimes a navy greatcoat. Usually evangelising on George Street, Sydney, Jenner asked many people the same question: "If you died within 24 hours, where would you be in eternity? Heaven or hell?" If they were willing to engage in conversation with him, he would invite them either to his home or to a local church. The question became known as "the Frank Jenner question". Jenner was most active in evangelism during World War II. On Saturday nights during the war, Jenner would invite groups of sailors to his home for a service consisting of some hymns and a short sermon. One of the people to whom Jenner posed his question was Noel Stanton, a man from Bedfordshire, England, who was serving in Sydney with the Royal Navy at the time. Stanton became preoccupied with the memory of this meeting for several months afterwards and, the next year, became a committed Christian. Stanton went on to found the Jesus Army in Northampton, England, in 1969. In 1945, Jenner approached Norrie Jeffs, who had just returned from participating in Operation Meridian at Palembang on Sumatra, and asked him his question; Jeffs responded that he was already a Christian. Jenner then invited Jeffs over to his house, where Jeffs met several other visitors, including the woman who would later become his wife. In 1952, another person Jenner accosted with his question on George Street was Ian Boyden, a man from Roseville who was serving in the Royal Australian Air Force. After having a brief conversation with Jenner, Boyden accepted Jenner's invitation to attend a church service at Renwick Gospel Hall, where he responded to the sermon by committing to living as a Christian thenceforward, which he did for at least fifty years. 
Many other people who had a brief encounter with Jenner on the street in Sydney also became Christians, but Jenner did not realise that any of the people he accosted had remained a Christian beyond their initial profession of faith until 1953, when Francis Dixon told him the stories of several such people. When Dave Rosten, another Sydney evangelist, attempted to imitate Jenner's method of evangelism, he was punched in the midriff by the first person he spoke to, so he decided that Jenner's approach to evangelism was not for others to emulate. In 1947, Jenner asked his question to a man named Angus Carruthers, who responded that he was a Christian and going to heaven. Jenner invited Carruthers back to his home, where Carruthers met Jenner's daughter, Ann. Carruthers and Ann married three years later. ## Discovery by Francis Dixon The Reverend Francis Willmore Dixon was the head pastor of Lansdowne Baptist Church in Bournemouth, England, and his youth pastor, Peter Culver, had become a Christian as a result of meeting Jenner on George Street on 2 September 1945. In 1952, at an All Nations Bible College event, Dixon and Culver heard Noel Stanton's Christian testimony, which included the episode in which Stanton had met Jenner. Dixon then realised that Culver and Stanton must have become Christians as a result of the same man. The following year, Dixon heard two different British sailors who did not know each other testify at Lansdowne Baptist Church, and both had told very similar stories to Culver and Stanton; both had been walking down George Street and had been asked Jenner's question. Dixon then travelled to Australia with his wife to engage in itinerant preaching there. Dixon hoped to find Jenner there, although Dixon did not yet know the name of the man he was looking for. In Adelaide, Dixon told the stories of Culver and Stanton while preaching. Murray Wilkes then approached Dixon and said he had also become a Christian after having been asked Jenner's question on George Street. At a Methodist church in Perth, Dixon told Culver's, Stanton's, and Wilkes' stories again, and met yet another person who had become a Christian after an encounter with Jenner. Finally arriving in Sydney, Dixon asked Alec Gilchrist of Campaigners for Christ if he knew a man who asked strangers on George Street whether or not they were headed for heaven or hell. Gilchrist was familiar with Jenner and informed Dixon about how to contact Jenner. Dixon visited Jenner at his house and told him about all the people he had met who had become Christians because of Jenner's evangelism. Jenner, now fifty years old, had never before heard of even one person living their lives as Christians as a result of his evangelism, and he cried upon hearing that there were several. After returning from Australia, Dixon went on to discover more people who had become Christians because of Jenner in Bournemouth, Cumbria, India, and Jamaica. By 1979, Dixon had discovered ten people who had become Christians as a result of Jenner's evangelism. It is because of Dixon that the story of Jenner's evangelism began to be told. Dixon's wife Nancy wrote an account of Jenner's evangelism, which she called "The Jenner Story". ## Later life In later years, Jenner developed Parkinson's disease and therefore retired from IBM. With money that Jessie had inherited, the couple moved to Bexley in 1953, where they began attending Bexley Gospel Hall. Towards the end of his life, Jenner developed dementia and his narcolepsy worsened. 
For six months, he was confined to a bed and was treated with amphetamine. He was then diagnosed with colorectal cancer and spent a subsequent ten days at Calvary Hospital, Kogarah, New South Wales, where he died at 11:45 pm on 8 May 1977 at the age of 73. Because he had befriended so many police officers towards the end of his life, his body was given a police escort to the burial, which took place at Woronora Lawn Cemetery. His wife died two years later. ## Legacy While Jenner was alive, very few people knew of him, and the effects of his evangelism were largely unrecognised. After his death, however, stories about his evangelism circulated widely. Stories of his evangelistic activities generated a largely oral tradition, and elements of some stories contradicted others. Many storytellers said Jenner was small in stature and that he had white hair; this description is contradicted by interviews with family members. In 2000, Raymond Wilson published a book called Jenner of George Street: Sydney's Soul-Winning Sailor in an attempt to tell the story of Jenner's life accurately. Jenner's family had been finding it painful to have alternate accounts of Jenner's life circulating around the world, so they gave Wilson all the information he desired. Wilson wrote that Jenner was "eccentric ... the very antithesis of the 'wise', 'mighty', and 'noble'," but that his life was therefore a good demonstration of 2 Corinthians 12:9, which states that God's "power is made perfect in weakness." Wilson wrote that Jenner's question of "heaven or hell" was very similar to that of Arthur Stace, another Australian street evangelist who wrote the word "Eternity" on the sidewalks so people would consider where they would be in eternity. Wilson called Jenner a battler and did not recommend that his readers emulate Jenner's evangelistic activities "unless Divinely fitted in a similar way." Wilson wrote that he "travelled and corresponded widely to ascertain the facts of the story" and that he personally verified the accuracy of the information by retrieving first-hand accounts from all the major figures in Jenner's life. The people Wilson interviewed included Nancy Dixon; Ann and Angus Carruthers, Jenner's daughter and son-in-law; Murray Wilkes; Ian Boyden; Tas McCarthy; Peter Culver; Noel Stanton; and Mary Stares. Nonetheless, conflicting accounts of Jenner's life continued to propagate at least as late as 2006. In some accounts of Jenner's evangelism, Jenner is referred to as "Mr. Genor". One such account was recorded by Ray Comfort on the Living Waters website and then repeated in the 2006 book Spirit-led Evangelism: Reaching the Lost through Love and Power by Ché Ahn. Ahn is one of the storytellers who refers to Jenner as a "little white-haired man", and Ahn concludes his story by writing that Jenner died two weeks after encountering Dixon, who is not named. These details contradict the information provided by Wilson, who writes in his biography that Jenner died more than twenty years after Dixon told him about the people who had become Christians as a result of his evangelism. In 2013, Gary Wilkinson produced and directed The Frank Jenner Question, a documentary film featuring interviews with Jenner's daughter and people who had become Christians because of Jenner's evangelism. Claire Goodwin encouraged people to emulate Jenner by including an account of his evangelism in her 2013 book Compelled to Tell: A Fascinating Journey from a New York Dead-End Street to a Lifetime of Ministry and Soul-Winning.
51,375,509
Wisconsin Territorial Centennial half dollar
1,154,383,328
1936 United States commemorative coin
[ "1936 establishments in Wisconsin", "Currencies introduced in 1936", "Early United States commemorative coins", "Fifty-cent coins", "Mammals in art", "Wisconsin Territory" ]
The Wisconsin Territorial Centennial half dollar is a commemorative half dollar designed by David Parsons and Benjamin Hawkins and minted by the United States Bureau of the Mint in 1936. The obverse depicts a pick axe and lead ore, referring to the lead mining in early Wisconsin, while the reverse depicts a badger and the territorial seal. Organizers of the territorial centennial celebration sought a commemorative half dollar as a fundraiser; at this time newly-issued commemorative coins found a ready market from collectors and speculators. Accordingly, legislation was introduced by Senator Robert M. La Follette Jr., which, though it was amended, passed Congress without opposition. When initial designs by Parsons were rejected by the Commission of Fine Arts, Hawkins was hired, and he executed the designs, though Parsons was also given credit. A total of 25,000 pieces were coined for public sale in July 1936. This did not occur until after the centennial celebrations had ended, and though the coins were promoted during them, sales were weak and the coins were sold by the Wisconsin Historical Society until the supply was exhausted in the late 1950s. The coins currently catalog for up to \$250. ## Background The state of Wisconsin, before its admission to the Union in 1848, was a territory. Much of the Wisconsin Territory had been part of the Northwest Territory, ceded by Great Britain in 1783 as part of the Treaty of Paris. The area then became part of the Michigan Territory, and gained importance during the 1820s, when large deposits of lead (commemorated on the obverse of the coin) were discovered in southwestern Wisconsin. Many of these early miners chose to live in their shafts, rather than building a separate house, leading to the nickname "badgers" for Wisconsinites. As more Americans moved in, the area became important enough to become a separate territory in 1836. Its first governor, Henry Dodge, was sworn in on July 4, 1836. Sparked by low-mintage issues which appreciated in value, the market for United States commemorative coins spiked in 1936. Until 1954, the entire mintage of such issues was sold by the government at face value to a group authorized by Congress, who then tried to sell the coins at a profit to the public. The new pieces then came on to the secondary market, and in early 1936 all earlier commemoratives sold at a premium to their issue prices. The apparent easy profits to be made by purchasing and holding commemoratives attracted many to the coin collecting hobby, where they sought to purchase the new issues. Congress authorized an explosion of commemorative coins in 1936; no fewer than 15 were issued for the first time. At the request of the groups authorized to purchase them, several coins minted in prior years were produced again, dated 1936, senior among them the Oregon Trail Memorial half dollar, first struck in 1926. In order to help fund various activities for the Wisconsin Centennial that year, the Wisconsin Centennial Commission appointed a Coinage Committee to call for commemorative half dollars commemorating the Centennial. This committee had been formed in February 1935, in part due to suggestions from the Madison Coin Club. As 1936 was the peak year for commemorative coins, the fact that a territorial centennial was hardly worthy of commemoration on United States coinage due to being of local or regional significance was not taken into consideration. As numismatic author Q. 
David Bowers put it, "the establishment of the territorial government [was] a rather obscure event to observe with a nationally-distributed coin". Numismatist Bob Bair, writing in 2021, deemed the Wisconsin Centennial "one of the many events of strictly local interest in the 1920s and 1930s that, with the assistance of Congress and the U.S. Mint, used commemorative coinage to enhance their revenue". ## Legislation Fred W. Harris of the Coinage Committee interested one of Wisconsin's senators, Robert M. La Follette Jr., in the coinage proposal. La Follette introduced legislation for a Wisconsin Territorial half dollar in the United States Senate on January 30, 1936; it was referred to the Committee on Banking and Currency. There, it was one of several commemorative coin bills to be considered on March 11, 1936, by a subcommittee led by Colorado's Alva B. Adams. Senator Adams had heard of the commemorative coin abuses of the mid-1930s, with issuers increasing the number of coins needed for a complete set by having them issued at different mints with different mint marks; authorizing legislation placed no prohibition on this. Lyman W. Hoffecker, a Texas coin dealer and official of the American Numismatic Association, testified and told the subcommittee that some issues, like the Oregon Trail half dollar, had been issued over the course of years with different dates and mint marks. Other issues had been entirely bought up by single dealers, and some low-mintage varieties of commemorative coins were selling at high prices. The many varieties and inflated prices for some issues that resulted from these practices angered coin collectors trying to keep their collections current. The Wisconsin bill emerged from the committee on March 26, 1936, with a report authored by Adams. The original bill would have allowed the Wisconsin committee selling the coins to decide how many would be struck, and they could be struck at any or all of the mints. Instead, the amendments required that they be struck at only one mint, that no fewer than 5,000 be struck at one time, and that they be issued within a year of the passage of the legislation, with all dated with the year of enactment. The Senate passed the bill on March 27, 1936, the fourth of six commemorative coin bills considered in succession, each passed without debate or dissent. The bill then went to the House of Representatives, and was referred to the Committee on Coinage, Weights, and Measures. On April 16, 1936, that committee reported back through Andrew Somers of New York, recommending it pass after being amended to increase the minimum number struck at one time from 5,000 to 25,000, thus setting the minimum mintage at 25,000. The House of Representatives passed the bill, with the committee amendments and without debate or dissent, on April 28, 1936, on the motion of Wisconsin's Gardner R. Withrow. As the two houses had not passed identical versions, this sent the bill back to the Senate. On May 4, Adams moved that the Senate agree to the House amendment, which it did; the bill became law, authorizing not fewer than 25,000 legal-tender Wisconsin half dollars, with the signature of President Franklin D. Roosevelt on May 15, 1936. Under the terms of the authorizing legislation, there was no upper limit to the number of coins that could be struck, so long as they were taken in tranches of not less than 25,000 coins, were of a single design dated 1936, and were issued by the government by May 15, 1937, one year from the bill's enactment. 
## Preparation In April 1936, with the bill still before Congress, the Wisconsin Centennial Commission selected David Parsons, a local art student, to design the coin, dictating that the seal of Wisconsin Territory be used for one side, and a badger for the other. The models were poorly executed and in very high relief; they were rejected by the Bureau of the Mint. The Commission asked for the name of a suitable artist, and the Treasury Department referred the issue to the Commission of Fine Arts (CFA), which then recruited New York sculptor Benjamin Hawkins. The CFA was charged by a 1921 executive order by President Harding with rendering advisory opinions on public artworks, including coins. On May 14, 1936, the CFA chair, Charles Moore, wrote to Hawkins informing him of the Centennial Commission's requirements and enclosing a copy of the territorial seal. He told Hawkins that the Centennial Commission expected the work to be done within three weeks. Hawkins submitted the finished models to the Mint on June 3, 1936; they were approved by the CFA two days later. The models were reduced to coin-size hubs by the Medallic Art Company of New York. Numismatic author Don Taxay felt it unjust that Parsons and Hawkins are given joint credit for the coin, since Hawkins did not work from Parsons's designs, but from the territorial seal, and the Hawkins badger was completely different from that of Parsons. Taxay deemed the crediting of Parsons symptomatic of the desire of such commissions to associate their work with local artists. ## Design The obverse is an inexact rendering of the territorial seal. A miner's forearm, holding a pickaxe, dominates the design, with a pile of lead ore and soil in the background. This represents the mining activities in southwestern Wisconsin that drew many settlers to the area in the 1820s. The date mentioned, July 4, 1836, is that on which the first governor, Dodge, took office. The date of the coin and WISCONSIN TERRITORIAL CENTENNIAL ring the design. The reverse features a badger, the Wisconsin state animal. Behind it are three arrows, symbolic of the conflict between settlers and Native Americans in the Black Hawk War, with an olive branch, marking the peace that paved the way for the establishment of the territory, or, as Anthony Swiatek and Walter Breen put it in their 1988 book on commemorative coins, "the massacre and expulsion of the Indians that made the area safe for white settlers". The name of the country, the coin's denomination, and the inscriptions required by law surround the pictorial design. Hawkins's initial H appears below the badger, which represents Wisconsin's early fur-trading days; both sides of the half dollar emphasize the natural resources of the state. According to Bowers, "The design was not a favorite with collectors, and relatively little enthusiasm was ever shown for it." Taxay deemed the Wisconsin coin "one of our poorest issues". Cornelius Vermeule, an art historian who wrote a book on American coins and medals, disliked the Wisconsin half dollar and compared its design to that on a box of baking soda: > The coin of 1936 that marks the hundredth anniversary of Wisconsin's formation as a territory smacks of amateurism. The models were the work of an art student at the University of Wisconsin ... This half dollar of the United States is, as a work of art, little more than a high school medal of the dullest variety. 
As a visual experience, it ranks with some of the worst local-society or small-occasion medals which have a timelessness if only in the mediocre level of their art. ## Distribution and collecting The Coinage Committee was confident enough that the authorizing legislation would become law that Harris, who served as the coin's distributor, began accepting orders in April 1936, a month before the bill passed. While the legislation authorizing the commemorative called for a minimum mintage of 25,000 coins with no limit on the maximum number of coins that could be minted, Harris chose to take a conservative approach and minted only 25,015 coins, which included 15 coins put aside for examination and testing at the 1937 meeting of the annual Assay Commission. These were struck at the Philadelphia Mint in July 1936. Although the coins were unlikely to have been available during the Wisconsin Centennial celebration from June 27 to July 5, most were sold for \$1.50 per coin by mail order through the work of the committee. The coin was marketed during the Centennial Cavalcade of Wisconsin, a historical pageant that could be seen from June 27 to July 5, 1936, at Camp Randall Stadium, the University of Wisconsin's football stadium. The coin was mentioned favorably in Wisconsin local newspapers, which claimed that the issue had immediately sold out due to orders from far and wide. However, due to their late release and primarily local appeal, the coins did not sell very well, and many remained unsold by the end of 1936. They continued to be sold for the next 16 years by the Wisconsin Historical Society at the reduced price of \$1.25 per coin, until the price was raised to \$3 per coin in 1952. The supply of coins eventually was exhausted in the late 1950s. The coins were sold in plain cardboard holders that, similar to holders for the York County, Maine, Tercentenary half dollar, contained slots for up to five coins. Orders of one or two coins were sealed in tissue paper and shipped in envelopes that were either imprinted "L.M. HANKS, FIRST NATIONAL BANK BUILDING, MADISON, WISC" or rubber-stamped "AFTER 10 DAYS RETURN TO STATE SUPERINTENDENT, STATE CAPITOL, MADISON, WISC". Like the coins, envelopes are collectibles, valued at \$50–\$75. The Wisconsin piece in uncirculated condition sold for about \$1.25 by 1940, up to \$3.00 by 1950, \$14 by 1960, and \$325 by 1985. The deluxe edition of R. S. Yeoman's A Guide Book of United States Coins, published in 2020, lists the coin for \$175 to \$250, depending on condition. A specimen in exceptional condition sold for \$17,625 in 2015.
6,969,063
Battle of Grand Port
1,172,535,569
1810 naval battle between the French Navy and the British Royal Navy
[ "1810 in France", "1810 in Mauritius", "August 1810 events", "Battles inscribed on the Arc de Triomphe", "Conflicts in 1810", "Isle de France (Mauritius)", "Military history of Mauritius", "Naval battles of the Napoleonic Wars", "Wars involving Mauritius" ]
The Battle of Grand Port was a naval battle between squadrons of frigates from the French Navy and the British Royal Navy. The battle was fought during 20–27 August 1810 over possession of the harbour of Grand Port on Isle de France (now Mauritius) during the Napoleonic Wars. The British squadron of four frigates sought to blockade the port to prevent its use by the French through the capture of the fortified Île de la Passe at its entrance. This position was seized by a British landing party on 13 August and, when a French squadron under Captain Guy-Victor Duperré approached the bay seven days later, the British commander, Captain Samuel Pym, decided to lure them into coastal waters where his forces could ambush them. Four of the five French ships managed to break past the British blockade, taking shelter in the protected anchorage, which was only accessible through a series of complicated routes between reefs and sandbanks that were impassable without an experienced harbour pilot. When Pym ordered his frigates to attack the anchored French on 22 and 23 August, his ships became trapped in the narrow channels of the bay: two were irretrievably grounded; a third, outnumbered by the combined French squadron, was defeated; and a fourth was unable to close to within effective gun range. Although the French ships were also badly damaged, the battle was a disaster for the British: one ship was captured after suffering irreparable damage, the grounded ships were set on fire to prevent their capture by French boarding parties and the remaining vessel was seized as it left the harbour by the main French squadron from Port Napoleon under Commodore Jacques Hamelin. The British defeat was the worst suffered by the Royal Navy during the entire war and left the Indian Ocean and its vital trade convoys exposed to attack from Hamelin's frigates. In response, the British authorities sought to reinforce the squadron on Île Bourbon under Josias Rowley by ordering all available ships to the region, but this piecemeal reinforcement resulted in a series of desperate actions as individual British ships were attacked by the confident and more powerful French squadron. In December an adequate reinforcement was assembled with the provision of a strong battle squadron under Admiral Albemarle Bertie, which rapidly invaded and captured the Isle de France. ## Background During the early 19th century, the Indian Ocean formed an essential part of the network of trade routes that connected the British Empire. Heavily laden East Indiamen travelled from British Indian port cities such as Bombay and Calcutta to the United Kingdom carrying millions of pounds worth of goods. From Britain, the ships returned on the same routes, often carrying soldiers for the growing British Indian Army, then under the control of the Honourable East India Company (HEIC). Following the outbreak of the Napoleonic Wars in 1803, the British Admiralty had made the security of these routes a priority, and by 1807, the Dutch bases at the Cape of Good Hope and Java had been neutralised by expeditionary forces to prevent their use by enemy raiders. The French Indian Ocean possessions, principally Île Bonaparte and Isle de France, were more complicated targets, protected from attack not only by the great distances involved in preparing an invasion attempt but also by heavy fortifications and a substantial garrison of French Army soldiers augmented by large local militias. 
The French had recognised the importance of these islands as bases for raiding warships during the French Revolutionary Wars (1793–1801), but by late 1807 the only naval resources allocated to the region were a few older frigates and a large number of local privateers. Following the reduction of these remaining naval forces on Isle de France during 1808, by defeat in battle and disarmaments due to age and unseaworthiness, the French naval authorities made a serious attempt to disrupt British trade in the region, ordering five large modern frigates to sail to Isle de France under Commodore Jacques Hamelin. Four of these ships broke through the British blockade of the French coast, arriving in the Indian Ocean in the spring of 1809, where Hamelin dispersed them into the Bay of Bengal with orders to intercept, attack and capture or destroy the heavily armed but extremely valuable convoys of East Indiamen. The first French success came at the end of the spring, when the frigate Caroline successfully attacked a convoy at the action of 31 May 1809, seizing two heavily laden merchant ships. Commodore Josias Rowley was given command of the British response to the French deployment, a hastily assembled force composed mainly of those ships available at the Cape of Good Hope in early 1809. Ordered to stop the French raiders, Rowley was unable to spread his limited squadron wide enough to pursue the roving French frigates, instead using his forces to blockade and raid the French Indian Ocean islands in anticipation of Hamelin's return. In August, Caroline arrived with her prizes at Saint-Paul on Île Bonaparte and Rowley determined to seize the frigate. He planned a successful invasion of the town, launched on 20 September, which resulted in the capture of the port's defences, Caroline and the captured East Indiamen. With his objectives complete, Rowley withdrew five days later. Almost a year later, Rowley returned with a larger task force and made a second landing around the capital of Île Bonaparte, Saint-Denis. Marching on the seat of government, Rowley's troops rapidly overwhelmed the defences and forced the island's garrison to surrender, renaming the island Île Bourbon and installing a British governor. Hamelin had used the British preoccupation with Île Bonaparte to send additional frigates to sea during 1809 and early 1810, including his flagship Vénus, which captured three East Indiamen at the action of 18 November 1809, and Bellone, which took the Portuguese frigate Minerva in the Bay of Bengal a few days later. Minerva, renamed Minerve in French hands, was subsequently involved in the action of 3 July 1810, when a further two East Indiamen were captured. The squadron in this action was commanded by Guy-Victor Duperré in Bellone, whose ships were so badly damaged that Duperré was forced to spend nearly a month repairing his vessels in the Comoros Islands before they were ready to return to Isle de France. ### Operations off Grand Port With Île Bourbon secured in July 1810, the British now occupied a large fortified island base within easy sailing distance of Isle de France. Even before Île Bourbon was completely in British hands, Rowley had detached HMS Sirius from the invasion squadron with orders to restore the blockade of Isle de France. Shortly afterwards, Sirius's captain Samuel Pym led his men in a raid on a coastal vessel moored off the southern side of the island. 
Two days after this successful operation, reinforcements arrived in the form of the frigates HMS Iphigenia, HMS Nereide and the small brig HMS Staunch. Nereide carried 100 specially selected soldiers from the 69th and 33rd Regiments and some artillerymen from the garrison at Madras, to be used in storming and garrisoning offshore islands, beginning with Île de la Passe off Grand Port, a well defended islet that protected a natural harbour on the southeastern shore. These fortified islands could be used to block entry to the ports of Isle de France and thus trap Hamelin's squadron. Grand Port was an easily defensible natural harbour because the bay was protected from the open sea by a large coral reef through which a complicated channel meandered, known only to experienced local pilots. Île de la Passe was vitally important in the control of Grand Port because it featured a heavy battery that covered the entrance to the channel, thus controlling the passage to the sheltered inner lagoon. The British planned to use the troops on Nereide, under her captain Nesbit Willoughby, to storm Île de la Passe and capture the battery. Willoughby would then use a local man serving on his ship named John Johnson (known in some texts as "the black pilot"), to steer through the channel and land troops near the town, distributing leaflets promising freedom and prosperity under British rule in an attempt to corrode the morale of the defenders. The first attack on Île de la Passe was launched on the evening of 10 August, with Staunch towing boats carrying over 400 soldiers, Royal Marines and volunteer sailors to the island under cover of darkness, guided by Nereide's pilot. During the night the pilot became lost; the boats were scattered in high winds and had not reassembled by dawn. To distract French attention from the drifting boats, Pym directed Captain Henry Lambert in Iphigenia to sail conspicuously off Port Napoleon, where the main body of the French squadron, led by Hamelin in Vénus, was based. Pym joined Lambert later in the day and the frigates subsequently returned to the waters of Grand Port by different routes, confusing French observers from the shore as to British intentions. By 13 August, the boats originally intended for the attack had still not been assembled and Pym decided that he could not risk waiting any longer without the French launching a counterattack. Launching his own boats at 8:00 pm, guided by the pilot and commanded by Pym's second in command, Lieutenant Norman, Pym's marines and sailors landed on the island in darkness under heavy fire from the defenders. Norman was killed in the initial exchange of fire, but his deputy, Lieutenant Watling, seized the island by storming the fortifications surrounding the battery. Seven British personnel were killed and 18 wounded in the battle, in which the storming party managed to seize intact French naval code books and took 80 prisoners. Willoughby was furious that Pym had assumed command of the operation without his permission and the officers exchanged angry letters, part of an ongoing disagreement between them that engendered mutual distrust. With Île de la Passe secure, Pym gave command of the blockade of Grand Port to Willoughby and returned to his station off Port Napoleon with Iphigenia. Willoughby used his independent position to raid the coastline, landing at Pointe du Diable on 17 August on the northern edge of Grand Port with 170 men and storming the fort there, destroying ten cannon and capturing another. 
Marching south towards the town of Grand Port itself, Willoughby's men fought off French counterattacks and distributed propaganda pamphlets at the farms and villages they passed. Willoughby re-embarked his troops in the evening but landed again the following day at Grande Rivière to observe the effects of his efforts. Burning a signal station, Willoughby advanced inland, but was checked by the arrival of 800 French reinforcements from Port Napoleon and returned to HMS Nereide. The brief expedition cost the British two men wounded and one missing, to French casualties of at least ten killed or wounded. Willoughby followed the attack on Grande Rivière with unopposed minor landings on 19 and 20 August. ### Duperré's arrival Willoughby's raiding was interrupted at 10:00 am on 20 August when five ships were sighted, rapidly approaching from the southeast. These ships were Guy-Victor Duperré's squadron of Bellone, Minerve, corvette Victor and prizes Windham and Ceylon returning from the Comoros Islands. Following a month of repairs on Anjouan, Duperré had sailed for Isle de France without encountering any opposition on his return passage, and was intending to enter Grand Port via the channel protected by Île de la Passe. Duperré was unaware of the British occupation of the island, and Willoughby intended to lure the French squadron into the channel by concealing the British presence off the harbour. Once there, Willoughby hoped to defeat them or damage them so severely that they would be unable to break out unaided, thus isolating Duperré's squadron from Hamelin's force in Port Napoleon and containing the French in separate harbours to prevent them from concentrating against the British blockade squadrons. Willoughby brought Nereide close to Île de la Passe to combine their fire and protect his boats, which were carrying 160 men back to Nereide from a raid near Grand Port that morning. Raising a French tricolour over Île de la Passe and on Nereide, Willoughby transmitted the French code captured on the island: "L'ennemi croise au Coin de Mire" and received an acknowledgement from Duperré. The use of these signals convinced Duperré, over the objections of Captain Pierre Bouvet on Minerve, that Nereide was Surcouf's privateer Charles, which was expected from France. The French squadron closed with the harbour during the morning, Victor entering the channel under Île de la Passe at 1:40 pm. As Victor passed Nereide and the fort Willoughby opened fire, Lieutenant Nicolas Morice surrendering the outnumbered corvette after the first volley. Willoughby sent boats to attempt to take possession of Victor, but they were unable to reach the vessel. Behind the corvette, Minerve and Ceylon pushed into the channel and signalled Morice to follow them, exchanging fire with the fort. As Morice raised his colours again and followed Minerve, a large explosion blasted out of Île de la Passe, where the false French flag had ignited on a brazier as it was lowered and set fire to a nearby stack of cartridges, which exploded in the close confines of the fort. Three men were killed and 12 badly burned, six cannon were dismounted and one discharged unexpectedly, killing a British sailor in a boat attempting to board Victor. With the fort out of action and a significant number of her crew scattered in small boats in the channel, Nereide alone was unable to block French entry to Grand Port. With Willoughby's ambush plan ruined, the scattered boats sought to rejoin Nereide, passing directly through the French squadron. 
Although several boats were in danger of being run down by the French ships and one even bumped alongside Minerve, all eventually rejoined Nereide safely. The opportunity to cause significant damage to the French in the narrow channel had been lost, with Bellone joining the squadron in passing through the channel with minimal resistance. In addition to British losses in the explosion at the fort, two men had been killed and one wounded on Nereide. French losses were more severe, Minerve suffered 23 casualties and Ceylon eight. With both sides recognising that further action was inevitable, Willoughby sent a boat to Sirius requesting additional assistance and Duperré sent a message overland with Lieutenant Morice, requesting support from Hamelin's squadron (Morice fell from his horse during the mission and was severely injured). Command of Victor passed to Henri Moisson. In the afternoon, Willoughby used mortars on Île de la Passe to shell the French squadron, forcing Duperré to retreat into the shallow harbour at Grand Port and Willoughby subsequently sent officers into Grand Port on 21 August under a flag of truce, demanding the release of Victor, which he insisted had surrendered and should thus be handed over to the blockade squadron as a prize. Duperré refused to consider the request. One French ship had failed to enter the channel off Grand Port: the captured East Indiaman Windham. Early on 21 August, her French commander attempted to shelter in Rivière Noire. Sirius spotted the merchant ship under the batteries there and sent two boats into the anchorage, stormed the ship and brought her out without a single casualty, despite the boarding party having forgotten to take any weapons with them and being only armed with wooden foot-stretchers wielded as clubs. ## Battle From prisoners captured on Windham, Pym learned of the nature and situation of Duperré's squadron and sent orders to Port Napoleon with Captain Lucius Curtis in the recently arrived HMS Magicienne for Iphigenia to join Sirius and Nereide off Grand Port. Sirius and Nereide met on the morning of 22 August, Willoughby welcoming Pym with signals describing an "enemy of inferior force". Although Duperré's squadron was technically weaker than the four British frigates combined, Willoughby's signal was misleading as the French had taken up a strong crescent shaped battleline in the bay and could cover the mouth of the channel through which the British ships could only pass one at a time. Duperré also anticipated the arrival of reinforcements from Port Napoleon under Governor Charles Decaen at any time and could call on the support of soldiers and gun batteries on shore. In addition, French launches had moved the buoys marking the channel through the coral reef to confuse any British advance. ### British attack On 22 August, at 2:40 pm, Pym led an attack on Duperré's squadron without waiting for Iphigenia and Magicienne, entering the channel that led to the anchorage at Grand Port. He was followed by Nereide, but Willoughby had refused to allow Pym to embark the harbour pilot: the only person in the British squadron who knew the passage through the reefs. Without guidance by an experienced pilot, Sirius was aground within minutes and could not be brought off until 8:30 am on 23 August. Nereide anchored nearby during the night to protect the flagship. 
At 10:00 am, Iphigenia and Magicienne arrived and at 2:40 pm, after a conference between the captains as to the best course of action, the force again attempted to negotiate the channel. Although the squadron was now guided by Nereide's pilot, Sirius again grounded at 3:00 pm and Magicienne 15 minutes later after over-correcting to avoid the reef that Sirius had struck. Nereide and Iphigenia continued the attack, Iphigenia engaging Minerve and Ceylon at close range and Nereide attacking Bellone. Long-range fire from Magicienne was also directed at Victor, which was firing on Nereide. Within minutes of the British attack, Ceylon surrendered and boats from Magicienne sought but failed to take possession of her. The French crew drove the captured East Indiaman on shore, joined shortly afterwards by Minerve, Bellone and later by Victor, so that by 6:30 pm the entire French force was grounded and all but Bellone prevented from firing their main broadsides by beached ships blocking their arc of fire. Bellone was ideally positioned to maintain her fire on Nereide from her beached position, and at 7:00 pm a cannon shot cut Nereide's stern anchor cable. The British frigate swung around, presenting her stern to Bellone and pulling both her broadsides away from the French squadron. Raked by Bellone and desperate to return fire, Willoughby had the bow anchor cable cut, bringing a portion of his ship's starboard broadside to bear on Bellone. At 8:00 pm, Duperré was seriously wounded in the cheek by shrapnel from a grape shot fired by Nereide; Ensign Vigoureux concealed his unconscious body under a signal flag and discreetly brought him below decks while Bouvet assumed command of the French squadron on board Bellone, placing Lieutenant Albin Roussin in charge of Minerve. Building an improvised bridge between the French ships and the shore, Bouvet increased the men and ammunition reaching Bellone and thus significantly increased her rate of fire. He also had the rail removed between the foredeck and the quarterdeck of Minerve, and had iron hooks nailed to the freeboard below the starboard gangway as to provide attachment points for additional guns, thus building a continuous second deck on his frigate where he constituted a complete second battery. By 10:00 pm Nereide was a wreck, receiving shot from several sides, with most of her guns dismounted and casualties mounting to over 200: the first lieutenant was dying, the second was severely wounded and Willoughby's left eye had been dislodged from its socket by a wooden splinter. Recognising her battered state, Bouvet then diverted fire from Nereide to concentrate on Magicienne. Refusing to surrender until all options had been exhausted, Willoughby dispatched boats to Sirius, asking Pym if he believed it would be practical to send boats to tow Nereide out of range. Pym replied that with the boats engaged in attempting to tow Sirius and Magicienne off the reef it was not possible to deploy them under fire to tow Nereide. Pym also suggested that Willoughby disembark his men and set fire to his ship in the hope that the flames would spread to the French ships clustered on shore. Willoughby refused this suggestion as it was not practical to disembark the dozens of wounded men aboard Nereide in the growing darkness and refused to personally abandon his men when Pym ordered him to transfer to Sirius. At 11:00 pm, Willoughby ordered a boat to row to Bellone and notify the French commander that he had surrendered. 
Willoughby's boat had been holed by shot and was unable to make the short journey. The message was instead conveyed by French prisoners from Nereide who had dived overboard and reached the shore during the night. Recalling the false flags used on 20 August, Bouvet resolved to wait until morning before accepting the surrender. ### Attempted withdrawal At 1:50 am on 24 August, Bellone ceased firing on the shattered Nereide. During the remaining hours of darkness, Pym continued his efforts to dislodge Sirius from the reef and sent orders to Lambert, whose Iphigenia had been blocked from firing on the French by Nereide and also prevented from pursuing the Minerve by a large reef blocking access to the beach. With Iphigenia now becalmed in the coastal waters, Pym instructed Lambert to begin warping his ship out of the harbour, using anchors attached to the capstan to drag the ship slowly through the shallow water. Magicienne, like Iphigenia, had been stranded out of range of the beached French ships and so had instead directed her fire against a battery erected on shore, which she had destroyed by 2:00 am. When daylight came, it showed a scene of great confusion, with Sirius and Magicienne grounded in the approaches to the harbour, the French ships "on shore in a heap" in the words of Pym, Iphigenia slowly pulling herself away from the French squadron and Nereide lying broken and battered under the guns of Bellone, a Union Flag nailed to her masthead. This flag prompted a fresh burst of cannon fire from Bouvet, and it was not until Willoughby ordered the mizenmast to be chopped down that the French acknowledged the surrender and ceased firing. At 7:00 am, Lambert notified Pym that he had cleared the reef separating Iphigenia from the French ships and suggested that if Pym sent reinforcements from Sirius he might be able to board and capture the entire French squadron. Pym refused permission, insisting that Lambert assist him in pulling Sirius off the reef instead. Although Lambert intended to subsequently attack the French alone, Pym forbade him and sent a direct order for Lambert to move out of range of the enemy. At 10:00 am, Iphigenia reached Sirius and together the ships began firing at French troops ashore, who were endeavouring to raise a gun battery within range of the frigates. Magicienne, irretrievably stuck on the reef, rapidly flooding and with her capstan smashed by French shot, now bore the brunt of long-range French fire from both Bellone and the shore until Pym ordered Curtis to abandon his ship, transferring his men aboard Iphigenia. At 7:30 pm Magicienne was set on fire, her magazines exploding at 11:00 pm. On the shoreline, Duperré had been unable to spare any men to take possession of Nereide until 3:00 pm. A party under Lieutenant Roussin, second in command on Victor and temporarily in command of Minerve, was sent but had orders to return once the ship had been disarmed: freeing the remaining French prisoners, Roussin spiked the guns to prevent their further use, administered basic medical care and returned to shore, recounting that over 100 men lay dead or dying aboard the British frigate. At 4:00 am on 25 August, the newly erected French gun battery opened fire on Sirius and Iphigenia, which returned fire as best they could. 
Accepting that Sirius was beyond repair, Pym removed all her personnel and military supplies, setting fire to the frigate at 9:00 am, shortly after Iphigenia had pulled beyond the range of the battery, using a cannon as an anchor after losing hers the previous day. French boats attempted to reach Sirius and capture her before she exploded, although they turned back when Pym launched his own boats to contest possession of the wreck. The frigate's remaining munitions exploded at 11:00 am. During the morning, Duperré sent an official boarding party aboard Nereide, who wet the decks to prevent any risk of fire from the ships burning in the harbour and removed 75 corpses from the frigate. ### French response When news of the arrival of Duperré's squadron reached Decaen at Port Napoleon, he immediately despatched fast couriers to Grand Port and ordered Hamelin's squadron, consisting of the frigates Vénus, Manche, Astrée and the brig Entreprenant, to make ready to sail in support of Duperré. Hamelin departed Port Napoleon at midnight on 21 August, intending to sail northeast and then south, down the island's eastern shore. On 23 August, Hamelin's squadron spotted and captured a British transport ship named Ranger, sent 24 days earlier from the Cape of Good Hope with 300 tons of food supplies and extensive naval stores for Rowley on Île Bourbon. On rounding the northern headlands of Isle de France, Hamelin found he could make no progress against the headwinds and reversed direction, passing the western shore of the island and arriving off Grand Port at 1:00 pm on 27 August. The two extra days Hamelin had spent rounding Isle de France saw activity from the British forces remaining at Grand Port. There had been no strong winds in the bay and Iphigenia was forced to resort to slowly warping towards the mouth of the channel in the hope of escaping the approaching French reinforcements. Boats had removed the crews of Sirius and Magicienne to Île de la Passe, where the fortifications had been strengthened, but supplies were running low and Magicienne's launch was sent to Île Bourbon to request urgent reinforcement and resupply from Rowley's remaining squadron. On the morning of 27 August, Lambert discovered the brig Entreprenant off the harbour mouth and three French sail approaching in the distance. Iphigenia was still 1.2 kilometres (3⁄4 mi) from Île de Passe at the edge of the lagoon and was low on shot and unable to manoeuvre in the calm weather without anchors. Recognising that resistance under such conditions against an overwhelming force was futile, Lambert negotiated with Hamelin, offering to surrender Île de la Passe if Iphigenia and the men on the island were given permission to sail to Île Bourbon unmolested. ### British surrender On the morning of 28 August, Lambert received a message from Hamelin, promising to release all the prisoners under conditions of parole within one month if Île de la Passe and Iphigenia were both surrendered without resistance. The message also threatened that if Lambert refused, the French would attack and overwhelm the badly outnumbered British force. Recognising that food supplies were low, reinforcements had not arrived and that his ammunition stores were almost empty, Lambert agreed to the terms. Lambert later received a message from Decaen proposing similar terms and notified the French governor that he had surrendered to Hamelin. 
Decaen was furious that Hamelin had agreed terms without consulting him, but eventually agreed to accept the terms of the surrender as well. The wounded were treated by French doctors at Grand Port and later repatriated, although the remainder of the prisoners were placed in a cramped and unpleasant prison at Port Napoleon from which, despite the terms of the surrender, they were not released until British forces captured the island in December.

Rowley first learned of the operations off Grand Port on 22 August, when Windham arrived off Saint Paul. Eager to support Pym's attack, Rowley immediately set sail in his frigate HMS Boadicea, with the transport Bombay following with two companies of the 86th Regiment of Foot to provide a garrison on any territory seized in the operation. The headwinds were strong and it was not until 29 August that Rowley arrived off Grand Port, having been notified of the situation there by Magicienne's launch the previous day. Sighting a cluster of frigates around Île de la Passe, Rowley closed with the island before turning sharply when Vénus and Manche raised their colours and gave chase. Rowley repeatedly feinted towards the French ships and then pulled away, hoping to draw them away from Grand Port so that Bombay might board the now unprotected Iphigenia and capture her. Bombay was thwarted by the reappearance of Astrée and Entreprenant and Rowley was chased by Vénus and Manche back to Saint Denis, anchoring there on 30 August. Rowley attempted a second time to rescue Iphigenia from Grand Port the following week, but by the time he returned Bellone and Minerve had been refloated and the French force was far too strong for Rowley's flagship to attack alone.

## Aftermath

The battle is noted as the most significant defeat for the Royal Navy during the Napoleonic Wars. Not only had four frigates been lost along with their entire crews, but 105 experienced sailors had been killed and 168 wounded in one of the bloodiest frigate encounters of the war. French losses were also heavy, with Duperré reporting 36 killed and 112 wounded in his squadron and among the soldiers firing from the shore. The loss of such a large proportion of his force placed Rowley at a significant disadvantage in September, as Hamelin's squadron, bolstered by the newly commissioned Iphigénie, now substantially outnumbered his own (the ruined Néréide was also attached to the French squadron, but the damage suffered was so severe that the ship never sailed again). Withdrawing to Île Bourbon, Rowley requested that reinforcements be diverted from other duties in the region to replace his lost ships and to break the French blockade of Île Bourbon, led by Bouvet. These newly arrived British frigates, cruising alone in unfamiliar waters, became targets for Hamelin, who twice forced the surrender of single frigates, only for Rowley to drive his ships away from their prize each time. On the second occasion, Rowley was able to chase and capture Hamelin and his flagship Vénus, bringing an end to his raiding career and to the activities of his squadron, which remained on Isle de France until they were all captured at the fall of the island in December by an invasion fleet under Vice-Admiral Albemarle Bertie. In France the action was greeted with celebration, and it became the only naval battle commemorated on the Arc de Triomphe. The British response was despondent, although all four captains were subsequently cleared and praised at their courts-martial inquiring into the loss of their ships.
The only criticism was of Willoughby, who was accused of giving a misleading signal on 22 August indicating that the French were of inferior force. Contemporary historian William James described the British reaction to the battle, writing that "the noble behaviour of her officers and crew threw such a halo of glory around the defeat at Grand Port, that, in public opinion at least, the loss of four frigates was scarcely considered a misfortune." He also noted that "No case of which we are aware more deeply affects the character of the Royal Navy than the defeat it sustained at Grand Port."

## In literature

The battle attracted the attention of authors from both Britain and France, featuring in the 1843 novel Georges by Alexandre Dumas, "Dead Reckoning" by C. Northcote Parkinson and the 1977 novel The Mauritius Command by Patrick O'Brian.

## Monuments

On 30 December 1899, a monument was erected at the harbour of Grand Port in memory of the British and French sailors who were killed in the engagement.

## Order of battle
California State Route 75
Highway in California
[ "Roads in San Diego County, California", "Southern California freeways", "State Scenic Highway System (California)", "State highways in California" ]
State Route 75 (SR 75) is a 13-mile (21 km) north-south state highway in San Diego County in the U.S. state of California. It is a loop route of Interstate 5 (I-5) that begins near Imperial Beach, heading west on Palm Avenue. The route continues north along the Silver Strand, a thin strip of land between the Pacific Ocean and San Diego Bay, through Silver Strand State Beach. SR 75 then passes through the city of Coronado as Orange Avenue and continues onto the San Diego–Coronado Bay Bridge, which traverses the bay, before joining back with I-5 near downtown San Diego. The Silver Strand Highway was constructed and open to the public by 1924. What would become SR 75 was added to the state highway system in 1933, and designated Legislative Route 199 in 1935. SR 75 was not officially designated until the 1964 state highway renumbering. The Coronado Bay Bridge opened in 1969, and provided a direct connection between San Diego and Coronado. Since then, various proposals have taken place to relieve commuter traffic between San Diego and Naval Air Station North Island that traverses the city of Coronado. However, none of these proposals have gained support, including an attempt in 2010. ## Route description SR 75 begins as Palm Avenue at I-5 in the Nestor neighborhood of San Diego, heading westbound from the Southland Plaza shopping center. The route travels between the communities of Palm City and Nestor before entering the city limits of Imperial Beach. There, SR 75 curves to the north, becoming Silver Strand Boulevard and crossing into Coronado. SR 75 continues onto the peninsula containing Coronado Island, separated from the mainland by San Diego Bay. The highway passes through the Silver Strand Training Complex and the South Bay Study Area before entering the Coronado Cays subdivision and paralleling Silver Strand State Beach. After this, SR 75 passes through the United States Naval Amphibious Base for a few miles before entering downtown Coronado. The highway becomes Orange Avenue and turns north-northeast as the main street through Coronado. SR 75 intersects SR 282 at the one-way couplet of Third and Fourth Streets; SR 282 continues west on Third Street and returns to SR 75 on Fourth Street, while SR 75 continues east on Fourth Street and heads west towards Orange Avenue on Third Street. The one-way couplet is brief, and SR 75 becomes a divided highway before crossing the Coronado Bridge. While on the bridge, SR 75 crosses into the city of San Diego again. Once on the mainland, SR 75 has a northbound exit to National Avenue and a southbound entrance from Cesar E. Chavez Parkway. Through traffic is directed onto I-5 south or north in Logan Heights, where SR 75 ends. SR 75 is eligible for the State Scenic Highway System. It is officially designated as a scenic route for nearly its entire length, from the Imperial Beach city limit to Avenida del Sol in Coronado, and the portion across the Coronado Bridge meaning that it is a substantial section of highway passing through a "memorable landscape" with no "visual intrusions", where the potential designation has gained popular favor with the community. SR 75 is also part of the National Highway System, a network of highways that are considered essential to the country's economy, defense, and mobility by the Federal Highway Administration. 
In 2013, SR 75 had an annual average daily traffic (AADT) of 66,000 on the Coronado Bridge (the highest AADT for the highway), and 16,000 between Rainbow Drive and 7th Street in Imperial Beach (the lowest AADT for the highway).

## History

### Construction

The intersection of Third Street and Orange Avenue dates back to at least 1890. The process of paving portions of Orange Avenue began in 1893, with an estimated cost of \$50,000; three miles (4.8 km) of sidewalks were also included. The plan was to make the avenue "one of the most beautiful in Southern California." From Palm City to Imperial Beach, the road was paved by 1920. The Silver Strand Highway opened in 1924 during a festival at the Tent City summer resort in Coronado, and went from Coronado to Palm City. By 1928, all streets in the city of Coronado had been paved, which was expected to encourage people to visit Tent City.

Plans to transfer the Silver Strand Highway to state maintenance were in place as early as November 1931, and were to take effect once Silver Strand State Park was completed and open. In 1933, the highway from the San Diego–Coronado Ferry to Route 2 (now I-5) was added to the state highway system, and was designated as Legislative Route 199 two years later. By 1935, Sign Route 75 was posted from U.S. Route 101 (US 101) in Palm City to the ferry landing. After a subsequent highway project around 1939, SR 75 passed through Tent City and, according to William Cecil, the city's public works director in 1998, "contributed to its demise."

The first contract for widening the highway between Coronado and Coronado Heights was awarded in 1944, as this part of the road was "now too narrow and dilapidated to meet traffic requirements." The State Highway Commission allocated \$25,000 to install traffic signals at the intersection of SR 75 and US 101 in March 1951. Plans to widen the road to four lanes were put on hold in July. By July 1952, it had been disclosed that some local businesses near Palm City had lodged opposition to the widening of the highway after \$500,000 had been allocated to the project. Following protests from local businessmen regarding the design of the median, the planned removal of access to intersecting streets, and the planned changes to street parking, Governor Earl Warren wrote to the San Diego Public Safety Committee, hoping to have the dispute resolved. In November, funds were allocated to acquire land for the construction in the 1953–1954 state budget. A year later, \$430,000 had been allocated to the widening project. A contract was given to the Daley Corporation to carry out the construction in 1955. The highway was to be widened to four lanes, and would add three pedestrian crossings. The completion of the widening project was announced on August 10, 1956. The final cost of the project was \$850,000, with money from the City of Coronado and the state.

### Designation and bridge construction

Discussion regarding a bridge dates back to 1926; however, the Navy opposed the plan over concerns that an enemy could destroy the bridge and trap ships in the harbor. In 1955, the California Senate approved \$200,000 to conduct a study regarding a possible vehicular tunnel from San Diego to Coronado. Later, in June 1961, an underwater tube along SR 75 was formally proposed; it would not have needed the approval of the residents of Coronado.
Interviews of commuters were planned in August, to determine the traffic patterns along SR 75. The survey took place on October 2 along Silver Strand Boulevard. The SR 75 designation was originally established in 1963 with two segments: from I-5 to the ferry across San Diego Bay from Coronado to downtown, and from SR 125 to I-5. In 1967, the Coronado Bridge was scheduled to be added to the route once it was completed, and the portion from Fourth Street to the ferry was deemed as temporary until the bridge opened. Construction began in February. Coronado residents largely opposed the bridge, but Governor Pat Brown "overrode their wishes" according to former city councilman Bob Odiorne, who also claimed that the opposition caused the city to lose opportunities to move the approaches to the bridge away from residential areas. Following attempts from Barbara Hutchinson, the vice president of the Kearny Mesa Town Council, to ask the Coronado and San Diego city councils to intervene in the construction, San Diego city attorney Edward Butler stated that the state had the ultimate authority to decide whether or not to build the bridge, and that the City of San Diego could not interfere. Before the bridge opened, in 1968, the changes originally proposed by the Legislature in 1967 were made to the law; the designation came into effect on February 21, 1969. The bridge eventually opened on August 3, 1969. By 1969, Palm Avenue was the primary commercial street in Imperial Beach, and was described by the San Diego Union as "a strip of large signs and businesses. It is not a 'downtown.'" Plans were under way to add an interchange at Silver Strand State Beach for the Coronado Cays development. In September, the City of Coronado added Orange Avenue south of Third Street as a truck route leading to the base. By May 1970, the part of SR 75 on the Coronado Bridge had been declared a scenic highway. President Richard Nixon and Mexican president Díaz Ordaz used Orange Avenue as a motorcade route on September 3, 1970, en route to the Hotel del Coronado. ### Proposals and renumbering Proposition N was proposed in 1974 to attempt to resolve concerns regarding traffic in Coronado. The plan was to build another highway along the northern and eastern shore of Coronado Island, to bypass the busy residential and commercial districts and provide easy access to the Silver Strand from the western end of the bridge. The proposition asked voters whether the City Council should "actively pursue" the matter. Critics contended that the highway would block the view of the San Diego Bay, and that the city would be unable to alter traffic patterns in the meantime. Coronado mayor Rolland McNeely opposed the proposal in early November 1974 as it would require approval from over thirty government agencies and would force the city to continue with building this road, although some declared it "impossible to build." The voters rejected this plan. The portion of SR 75 from Pomona Avenue in Coronado to Imperial Beach was also recommended to become a scenic highway in February 1974. Future improvements to the Imperial Beach – Coronado portion were cancelled in April. In 1976, the California State Legislature renumbered the portion from I-5 to SR 125 as SR 117, which later became SR 905. The change took effect at the beginning of 1977. The renumbering was to reduce confusion with the Coronado portion, according to Caltrans regional director Jacob Dekema; new signs were to be put into place shortly thereafter. 
The bridge and the resulting traffic continued to be a hotly debated issue in the early 1980s. A plan in 1981 to convert Fourth Street into an expressway leading to the naval station was strongly opposed by the public due to the required demolition of structures and a lack of evidence that the plan would succeed in reducing traffic; by this time, Third and Fourth streets had been converted into one-way streets between the bridge and the naval station. A major renovation of the bridge was scheduled for late 1992, which would include a movable barrier to prevent head-on collisions and necessary resurfacing of the roadway. Work was underway in January 1993 on the \$4 million project (about \$ in dollars), but it was behind the three-month schedule by 11 days due to rainfall and was expected to be completed by March. When the Coronado Bridge opened, a toll of 60 cents was charged to use the bridge. In 1980, the toll became \$1.20, charged only in the westbound direction towards Coronado. A seventh toll booth was to be constructed in September 1987. The toll dropped to \$1 in 1988. The bridge tolls ended at 10 p.m. on June 27, 2002, after the San Diego Association of Governments decided to stop collecting tolls; drivers paid a total of \$197 million throughout the years. The speed limit was decreased to 25 miles per hour (40 km/h) in October 2005 along Third and Fourth streets, after traffic increased by 20 percent following the removal of the toll. Traffic barriers along Third Street to block traffic from turning onto intersecting streets were removed in November 2004, following voter approval. The City of Coronado has attempted to have a tunnel built from the Coronado bridge to the San Diego Naval Base numerous times, and hired Ledford Enterprises to help with the lobbying process in 2002 and 2006. The city endorsed a proposed study in 2004 to determine possible alternatives to resolve the traffic issues, which included keeping the status quo. On June 8, 2010, Coronado voters decided against Proposition H, which would have advised the city to undergo further investigation into building the tunnel. This concluded ten years of studies and proposals by the city of Coronado to find a way to reduce traffic to the naval station during rush hour. Critics of the proposal did not believe that the tunnel would resolve the traffic issues on the northern part of SR 75 or on SR 282. Following this, the Coronado City Council voted to abolish the Tunnel Commission that had been formed to study the issue. Efforts were underway by Imperial Beach city officials to improve the reputation and economic standing of the Palm Avenue area in the first decade of the 21st century. The area was described by the San Diego Union-Tribune as a "hodgepodge of vacant land and aging apartment buildings and businesses, many in need of a coat of paint" in 2003. Residents hoped to revitalize the area, providing commerce right next to an entrance to the beach. City officials offered local business owners loans for necessary construction or rehabilitation in 2005. The Imperial Beach city council approved the redevelopment of the Palm Avenue corridor in 2008, following a study in 2003. A Palm Avenue Commercial Corridor Master Plan was endorsed in February 2009, in efforts to improve the commercial area. 
In September 2012, the Imperial Beach city council raised objections, citing safety concerns, over the Caltrans decision to increase the speed limit from 40 miles per hour (64 km/h) to 45 miles per hour (72 km/h) on the portion of the highway from Delaware Street to the western Imperial Beach city limit. The rest of the highway was to retain the 40 miles per hour (64 km/h) speed limit.

## Major intersections

## See also
Landing at Nadzab
WWII airborne landing of 1943
[ "1943 in Papua New Guinea", "Airborne operations of World War II", "Battles and operations of World War II involving Japan", "Battles and operations of World War II involving Papua New Guinea", "Battles of World War II involving Australia", "Battles of World War II involving the United States", "Conflicts in 1943", "Operation Cartwheel", "September 1943 events", "South West Pacific theatre of World War II", "World War II aerial operations and battles of the Pacific theatre" ]
The Landing at Nadzab was an airborne landing on 5 September 1943 during the New Guinea campaign of World War II in conjunction with the landing at Lae. The Nadzab action began with a parachute drop at Lae Nadzab Airport, combined with an overland force. The parachute drop was carried out by the US Army's 503rd Parachute Infantry Regiment and elements of the Australian Army's 2/4th Field Regiment into Nadzab, New Guinea in the Markham Valley, observed by General Douglas MacArthur, circling overhead in a B-17. The Australian 2/2nd Pioneer Battalion, 2/6th Field Company, and B Company, Papuan Infantry Battalion reached Nadzab after an overland and river trek that same day and began preparing the airfield. The first transport aircraft landed the next morning, but bad weather delayed the Allied build up. Over the next days, the 25th Infantry Brigade of the Australian 7th Division gradually arrived. An air crash at Jackson's Field ultimately caused half the Allied casualties of the battle. Once assembled at Nadzab, the 25th Infantry Brigade commenced its advance on Lae. On 11 September, it engaged the Japanese soldiers at Jensen's Plantation. After defeating them, it engaged and defeated a larger Japanese force at Heath's Plantation. During this skirmish, Private Richard Kelliher won the Victoria Cross, Australia's highest award for gallantry. Instead of fighting for Lae, the Japanese Army withdrew over the Saruwaged Range. This proved to be a gruelling test of endurance for the Japanese soldiers who had to struggle over the rugged mountains; in the end, the Japanese Army managed to withdraw its forces from Salamaua and Lae, though with extensive losses from exposure and starvation during the retreat. Troops of the 25th Infantry Brigade reached Lae shortly before those of the 9th Division that had been advancing on Lae from the opposite direction. The development of Nadzab was delayed by the need to upgrade the Markham Valley Road. After strenuous efforts in the face of wet weather, the road was opened on 15 December 1943. Nadzab then became the major Allied air base in New Guinea. ## Background ### Strategy #### Allied In July 1942, the United States Joint Chiefs of Staff approved a series of operations against the Japanese bastion at Rabaul, which blocked any Allied advance along the northern coast of New Guinea toward the Philippines or north toward the main Japanese naval base at Truk. In keeping with the overall Allied grand strategy of defeating Nazi Germany first, the immediate aim of these operations was not the defeat of Japan but merely the reduction of the threat posed by Japanese aircraft and warships based at Rabaul to air and sea communications between the United States and Australia. By agreement among the Allied nations, in March 1942 the Pacific theatre was divided into two separate commands, each with its own commander-in-chief. The South West Pacific Area, which included Australia, Indonesia, and the Philippines came under General Douglas MacArthur as supreme commander. Most of the remainder, known as the Pacific Ocean Areas, came under Admiral Chester W. Nimitz. There was no overall commander, and no authority capable of resolving competing claims for resources, setting priorities, or shifting resources from one command to the other. Such decisions had to be made on the basis of compromise, cooperation and consensus. Rabaul fell within MacArthur's area, but the initial operations in the southern Solomon Islands came under Nimitz. 
The Japanese reaction to Task One, the seizure of the southern part of the Solomon Islands, was more violent than anticipated and some months passed before the Guadalcanal Campaign was brought to a successful conclusion. Meanwhile, General MacArthur's forces fought off a series of Japanese offensives in Papua in the Kokoda Track campaign, the Battle of Milne Bay, the Battle of Buna–Gona, the Battle of Wau and the Battle of the Bismarck Sea. Following these victories, the initiative in the South West Pacific passed to the Allies and General Douglas MacArthur pressed ahead with his plans for Task Two.

At the Pacific Military Conference in Washington, D.C. in March 1943, the plans were reviewed by the Joint Chiefs of Staff. The chiefs were unable to supply all the requested resources, so the plans had to be scaled back, with the capture of Rabaul postponed to 1944. On 6 May 1943, MacArthur's General Headquarters (GHQ) in Brisbane issued Warning Instruction No. 2, officially informing subordinate commands of the plan, which divided the Task Two operations on the New Guinea axis into three parts:

1. Occupy Kiriwina and Woodlark Islands and establish air forces thereon.
2. Seize the Lae-Salamaua-Finschhafen-Madang area and establish air forces therein.
3. Occupy western New Britain, establishing air forces at Cape Gloucester, Arawe and Gasmata. Occupy or neutralise Talasea.

The second part was assigned to General Sir Thomas Blamey's New Guinea Force. As a result, "It became obvious that any military offensive in 1943 would have to be carried out mainly by the Australian Army, just as during the bitter campaigns of 1942."

#### Japanese

The Japanese maintained separate army and navy headquarters at Rabaul, which cooperated with each other but were responsible to different higher authorities. Naval forces came under the Southeast Area Fleet, commanded by Vice Admiral Jinichi Kusaka. Army forces came under General Hitoshi Imamura's Eighth Area Army, consisting of the Seventeenth Army in the Solomon Islands, Lieutenant General Hatazō Adachi's Eighteenth Army in New Guinea, and the 6th Air Division, based at Rabaul.

As a result of the Battle of the Bismarck Sea, the Japanese decided not to send any more convoys to Lae, but to land troops at Hansa Bay and Wewak and move them forward to Lae by barge or submarine. In the long run they hoped to complete a road over the Finisterre Range and thence to Lae through the Ramu and Markham Valleys. Imamura ordered Adachi to capture the Allied bases at Wau, Bena Bena and Mount Hagen. To support these operations, Imperial General Headquarters transferred the 7th Air Division to New Guinea. On 27 July 1943, Lieutenant General Kumaichi Teramoto's Fourth Air Army was assigned to Imamura's command to control the 6th and 7th Air Divisions, the 14th Air Brigade and some miscellaneous squadrons. By June, Adachi had three divisions in New Guinea: the 41st Division at Wewak and the 20th Division around Madang, both recently arrived from Palau, and the 51st Division in the Salamaua area, a total of about 80,000 men. Of these only the 51st Division was in contact with the enemy. Like Blamey, Adachi faced formidable difficulties of transportation and supply just to bring his troops into battle.

### Geography

The Markham River originates in the Finisterre Range and flows for 110 miles (180 km), emptying into the Huon Gulf near Lae.
The Markham Valley, which rises to an elevation of 1,210 feet (370 m), runs between the Finisterre Range to the north and the Bismarck Range to the south and varies from 6 to 12 miles (10 to 19 km) wide. The valley floor is largely composed of gravel and is generally infertile. Half of its area was covered by dense kangaroo grass 4–5 feet (1.2–1.5 m) high, but in parts where there had been a build-up of silt, Kunai grass grew from 6 to 8 feet (1.8 to 2.5 m) high. Rainfall is around 39 inches (1,000 mm) per annum. The Markham Valley was traversable by motor vehicles in the dry season, which ran from December to April, and therefore formed part of a natural highway between the Japanese bases at Lae and Madang. ### Planning and preparation At Blamey's Advanced Allied Land Forces Headquarters (Adv LHQ) in St Lucia, Queensland, the Deputy Chief of the General Staff, Major General Frank Berryman, headed the planning process. A model of the Lae-Salamaua area was constructed in a secure room at St Lucia, the windows were boarded up and two guards were posted on the door round the clock. On 16 May, Blamey held a conference with Berryman and Lieutenant General Sir Edmund Herring, the commander of I Corps, around the model at which the details of the operation were discussed. Blamey's operational concept was for a double envelopment of Lae, using "two of the finest divisions on the Allied side". Major General George Wootten's 9th Division would land east of Lae in a shore-to-shore operation and advance on Lae. Meanwhile, Major General George Alan Vasey's 7th Division, in a reprise of the Battle of Buna–Gona in 1942, would advance on Lae from the west by an overland route. Its primary role was to prevent reinforcement of the Japanese garrison at Lae by establishing itself in a blocking position across the Markham Valley. Its secondary task was to assist the 9th Division in the capture of Lae. The plan was generally known as Operation POSTERN, although this was actually the GHQ code name for Lae itself. Meanwhile, Major General Stanley Savige's 3rd Division in the Wau area and Major General Horace Fuller's US 41st Infantry Division around Morobe were ordered to advance on Salamaua so as to threaten it and draw Japanese forces away from Lae. The result was the arduous Salamaua Campaign, which was fought between June and September, and which at times looked like succeeding all too well, capturing Salamaua and forcing the Japanese back to Lae, thereby throwing Blamey's whole strategy into disarray. The POSTERN plan called for the 7th Division to move in transports to Port Moresby and in coastal shipping to the mouth of the Lakekamu River. It would travel up the river in barges to Bulldog, and in trucks over the Bulldog Road to Wau and Bulolo. From there it would march overland via the Watut and Wampit Valleys to the Markham River, cross the Markham River with the aid of paratroops, and secure an airfield site. There were a number of suitable airfield sites in the Markham Valley; Blamey selected Nadzab as the most promising. Vasey pronounced the plan "a dog's breakfast". There were a number of serious problems. It relied on the Bulldog Road being completed, which it was not, due to the rugged nature of the country to be traversed and shortages of equipment. Even if it was, the 7th Division would have been unlikely to make the operation target date. It had taken heavy casualties in the Battle of Buna–Gona and was seriously under-strength, with many men on leave or suffering from malaria. 
It would take time to concentrate it at its camp at Ravenshoe, Queensland on the Atherton Tableland. To bring it up to strength, the 1st Motor Brigade was disbanded in July to provide reinforcements. Reinforcements passed through the Jungle Warfare Training Centre at Canungra, Queensland, where they spent a month training under conditions closely resembling those in New Guinea. The delays in getting the overland supply route organised and the 7th Division itself ready meant that, in the initial stages of the operation at least, the 7th Division would have to be maintained by air. Vasey further proposed that the bulk of his forces avoid a tiring overland march by moving directly to Nadzab by air, which increased the importance of capturing Nadzab early. MacArthur agreed to make the 2nd Battalion, 503rd Parachute Infantry Regiment based at nearby Gordonvale, Queensland, available to New Guinea Force to capture Nadzab. He further authorised the regiment to conduct training with the 7th Division and a number of exercises were conducted. Colonel Kenneth H. Kinsler, the commander of the 503rd, eager to discuss the Battle of Crete with the 21st Infantry Brigade's Brigadier Ivan Dougherty, took the unusual step of parachuting into Ravenshoe. On 31 July, Vasey raised the prospect of utilising the entire regiment with Kinsler. Blamey took up the matter with MacArthur, who authorised it on 8 August. Blamey made the Australian Army transport available to ship the regiment from Cairns to Port Moresby, except for the 2nd Battalion and advance party, which moved by air as originally planned. The 7th Division was treated to a training film, "Loading the Douglas C-47", and the commander of the Advanced Echelon of Lieutenant General George Kenney's Fifth Air Force, Major General Ennis Whitehead, made five C-47 Dakota transports available to the 7th Division each day so they could practise loading and unloading. Whitehead also made a Boeing B-17 Flying Fortress available so Vasey could fly low over the target area on 7 August. Meanwhile, the 2/2nd Pioneer Battalion and 2/6th Field Company practiced crossing the Laloki River with folding boats. They flew to Tsili Tsili Airfield on 23 and 24 August. To give the paratroops some artillery support, Lieutenant Colonel Alan Blyth of the 2/4th Field Regiment proposed dropping some of its eight short 25-pounders by parachute. A call went out for volunteers and four officers and 30 other ranks were selected. On 30 August, Vasey watched them carry out a practice jump at Rogers Airfield. This turned out to be the easy part. Brand new guns were received from the 10th Advanced Ordnance Depot at Port Moresby on 23 August. Two were handed over for training while, as a precaution, the remaining six were sent the 2/117th Field Workshops for inspection and checking. All six were condemned, owing to a number of serious defects in assembly and manufacture. On 30 August, the gunners received orders to move out the next day, so the 2/51st Light Aid Detachment cannibalised six guns to produce two working guns, which were proofed by firing 20 rounds per gun. Only one was ready in time to leave with the gunners so the other followed on a special flight. Eight of the 2/4th Field Regiment's Mark II 25-pounders were also condemned owing to the presence of filings in the buffer system. Vasey was less than impressed. 
Vasey was concerned about the Japanese strength in the Lae area, which his staff estimated at 6,400, in addition to the 7,000 that Herring's I Corps staff estimated were in the Salamaua area. However, a more immediate danger was posed by the Japanese Fourth Air Army at Wewak. Photographs taken by Allied reconnaissance planes showed 199 Japanese aircraft on the four fields there on 13 August. On 17 August, Whitehead's heavy and medium bombers, escorted by fighters, bombed Wewak. Taking the Japanese by surprise, they destroyed around 100 Japanese aircraft on the ground. In September, the Japanese Army air forces had at their disposal only 60 or 70 operational aircraft to oppose the Allied air forces in New Guinea, although both the 6th and 7th Air Divisions were in the area.

On the south bank of the Markham River lay Markham Point, where the Japanese maintained a force of about 200 men on commanding ground. Part of the 24th Infantry Battalion was ordered to capture the position. The attack on the morning of 4 September went wrong from the start, with two scouts being wounded by a land mine. The force fought its way into the Japanese position but took heavy casualties and was forced to withdraw. Twelve Australians were killed and six were wounded in the attack. It was then decided to merely contain the Japanese force at Markham Point, which was subjected to mortar fire and an airstrike.

## Battle

### Assault

Transport aircraft were controlled by the 54th Troop Carrier Wing, which was commanded by Colonel Paul H. Prentiss, with his headquarters at Port Moresby. He had two groups under his command: the 374th Troop Carrier Group at Ward's Field and the 375th Troop Carrier Group at Dobodura, plus the 65th and 66th Troop Carrier Squadrons of the 403rd Troop Carrier Group at Jackson's Field. In addition, Prentiss could draw on the 317th Troop Carrier Group at Archerfield Airport and RAAF Base Townsville, although it was not under his command. Postponing the operation from August to September 1943 allowed for the arrival of the 433rd Troop Carrier Group from the United States. Each squadron was equipped with 13 C-47 aircraft, and each group consisted of four squadrons, for a total of 52 aircraft per group.

The actual date was chosen by General Kenney based on the advice of his two weather-forecasting teams, one Australian and one American. Ideally, Z-Day would be clear from Port Moresby to Nadzab but foggy over New Britain, thereby preventing the Japanese air forces at Rabaul from intervening. Forecasting the weather days in advance with such precision was difficult enough in peacetime, but more so in wartime, when many of the areas from which the weather patterns developed were occupied by the enemy and data from them was consequently denied to the forecasters. When the two teams differed over the best possible date, Kenney "split the difference between the two forecasts and told General MacArthur we would be ready to go on the morning of the 4th for the amphibious movement of the 9th Division to Hopoi Beach and about nine o’clock on the morning of the 5th we would be ready to fly the 503rd Parachute Regiment to Nadzab."

Z-Day, 5 September 1943, dawned inauspiciously with bad weather. Fog and rain shrouded both the departure airfields, Jackson's and Ward's, but, as the forecasters had predicted, by 0730 the fog began to dissipate. The first C-47 took off at 0820. The formation of 79 C-47s, each carrying 19 or 20 paratroops, was divided into three flights.
The first, consisting of 24 C-47s from the 403rd Troop Carrier Group from Jackson's, carried 1st Battalion, 503rd Parachute Infantry Regiment. The second, of 31 C-47s from the 375th Troop Carrier Group from Ward's, carried the 2nd Battalion, 503rd Parachute Infantry Regiment. The third, consisting of 24 C-47s of 317th Troop Carrier Group, from Jackson's, carried the 3rd Battalion, 503rd Parachute Infantry Regiment. Each battalion had its own drop zone. The transports were escorted by 48 P-38 Lightning fighters from the 35th and 475th Fighter Groups, 12 P-39 Airacobras from the 36th Fighter Squadron, 8th Fighter Group and 48 P-47 Thunderbolts from the 348th Fighter Group. When Kenney informed MacArthur that he planned to observe the operation from a B-17, MacArthur reminded Kenney of his orders to keep out of combat, orders that Brigadier General Kenneth Walker had disobeyed at the cost of his life. Kenney went over the reasons why he thought he should go, ending with "They were my kids and I was going to see them do their stuff." MacArthur replied "You’re right, George, we’ll both go. They’re my kids, too." Three hundred and two aircraft from eight different airfields in the Moresby and Dobodura areas, made a rendezvous over Tsili Tsili at 10:07, flying through clouds, passes in the mountains, and over the top. "Not a single squadron," wrote General Kenney, "did any circling or stalling around but all slid into place like clockwork and proceeded on the final flight down the Watut Valley, turned to the right down the Markham, and went directly to the target." Leading the formation were 48 B-25s from the 38th and 345th Bombardment Groups whose job was to "sanitise" the drop zones by dropping their loads of sixty 20-pound (9.1 kg) fragmentation bombs and strafing with the eight .50-calibre machine guns mounted in their noses. They were followed by seven A-20s of the 3rd Bombardment Group (Light). Each carried four M10 smoke tanks mounted under the wings. The smoke tanks were each filled with 19 US gallons (72 L) of the smoke agent FS. In two groups of two and one of three flying at 250 feet (76 m) at 225 mph (362 km/h), they laid three smoke curtains adjacent to the three drop zones. The lead aircraft discharged two tanks, waited four seconds, then discharged the other two. The following aircraft went through the same procedure, creating a slight overlap to insure a continuous screen. Conditions were favourable, while the 85% humidity kept the screens effective for five minutes and stopped their dispersal for ten. Next came the C-47s, flying at 400 to 500 feet (122 to 152 m) at 100 to 105 mph (161 to 169 km/h). Dropping commenced at 10:22. Each aircraft dropped all its men in ten seconds and the whole regiment was unloaded in four and a half minutes. Following the transports came five B-17s with their racks loaded with 300 lb (140 kg) packages with parachutes, to be dropped to the paratroopers on call by panel signals as they needed them. This mobile supply unit stayed for much of the day, eventually dropping 15 tons of supplies. A group of 24 B-24s and four B-17s, which left the column just before the junction of the Watut and the Markham attacked the Japanese defensive position at Heath's Plantation, about halfway between Nadzab and Lae. Five B-25 weather aircraft were used along the route and over the passes, to keep the units informed on weather to be encountered during their flights to the rendezvous. Generals MacArthur, Kenney, and Vasey observed the operation, from separate B-17s. 
Later, MacArthur received the Air Medal for having "personally led the American paratroopers" and "skillfully directed this historic operation". During the operation, including the bombing of Heath's, a total of 92 long tons (93 t) of high-explosive bombs and 32 long tons (33 t) of fragmentation bombs were dropped, and 42,580 rounds of .50 calibre and 5,180 rounds of .30 calibre ammunition were expended. No air opposition was encountered, and only one C-47 failed to make the drop. Its cargo door blew off during the flight, damaging its elevator. It safely returned to base. Three paratroopers were killed in the drop; two fell to their deaths when their parachutes malfunctioned, while another landed in a tree and then fell some 66 feet (20 m) to the ground. There were 33 minor injuries caused by rough landings. The three battalions met no opposition on the ground and formed up in their assembly areas. This took some time due to the tropical heat and the high grass.

Five C-47s of the 375th Troop Carrier Group carrying the gunners of the 2/4th Field Regiment took off from Ward's Airfield after the main force and landed at Tsili Tsili. After an hour on the ground, they set out for Nadzab. Most jumped from the first two aircraft. The next three aircraft dropped equipment, including the dismantled guns. The "pushers out" followed when the aircraft made a second pass over the drop zone. One Australian injured his shoulder in the drop. The gunners then had to locate and assemble their guns in the tall grass. Enough parts were found to assemble one gun and have it ready for firing within two and a half hours of dropping, although to maintain surprise they did not carry out registration fire until morning. It took three days to find the missing parts and assemble the other gun. At 1515, two B-17s dropped 192 boxes of ammunition. Their dropping was accurate, but some boxes of ammunition tore away from their parachutes.

### Follow-up

Meanwhile, a force under Lieutenant Colonel J. T. Lang, consisting of the 2/2nd Pioneer Battalion, 2/6th Field Company, and detachments from the 7th Division Signals, 2/5th Field Ambulance and ANGAU, with 760 native carriers, set out from Tsili Tsili on 2 September. Most of the force moved overland, reaching Kirkland's Crossing on 4 September, where it rendezvoused with B Company, Papuan Infantry Battalion. That night, a party of engineers and pioneers set out from Tsili Tsili in 20 small craft, sailed and paddled down the Watut and Markham Rivers to join Lang's force at Kirkland's Crossing. The small river-borne task force included ten British 5-ton folding assault boats and Hoehn military folboats; it met up with 2/6th Independent Company commandos who had reconnoitred the proposed crossing area with eight of these folboats the day before. While neither river was deep, both were fast flowing, with shoals and hidden snags. Three boats were lost with their equipment and one man drowned.

On the morning of 5 September, Lang's force was treated to the sight of the air force passing overhead. At this point, the Markham River formed three arms, separated by broad sand bars. Two were fordable but the other was deep and flowing at 5 knots (9.3 km/h; 5.8 mph). Using the folding boats and local timber, they constructed a pontoon bridge, allowing the whole force to cross the river safely with all their equipment. That evening, they reached the Americans' position. The next day they went to work on the airstrip with hand tools.
Trees were felled, potholes filled in and a windsock erected. Fourteen gliders were supposed to fly in three light tractors, three mowers, a wheeled rake and other engineering equipment from Dobodura. Because the lack of opposition made immediate resupply non-urgent, and because he had doubts about the proficiency of the glider pilots, whom he knew had undergone only minimal training, General Blamey decided that the glider operation was not worth the risk to the glider pilots or their passengers and cancelled it, substituting instead the afternoon supply run by specially modified B-17s. Lacking mowers, the Kunai grass was cut by hand by the pioneers, sappers, paratroops and native civilians and burned, causing the destruction of some stores and equipment that had been lost in the long grass and "a swirl of black dust". By 11:00 on 6 September, the 1,500 feet (460 m) strip – which had not been used for over a year – had been extended to 3,300 feet (1,000 m). The first plane to land was an L-4 Piper Cub at 0940 6 September, bringing with it Colonel Murray C. Woodbury, the commander of the U.S. Army's 871st Airborne Engineer Aviation Battalion. Three transports followed, nearly running down some of the throng working on the strip. Another 40 aircraft followed in the afternoon, many containing American and Australian engineers. The 871st followed the next day with its small air-portable bulldozers and graders. They located a site for a new airstrip, which became known as No. 1, the existing one becoming No. 2. The site proved to be an excellent one; an old, dry riverbed with soil largely composed of gravel. A gravel base and steel plank was laid to accommodate the fighters based at Tsili Tsili that were in danger of bogging down when the weather deteriorated. By the end of October there were four airstrips at Nadzab, one of which was 6,000 feet (1,800 m) long and sealed with bitumen. While engineers and anti-aircraft gunners arrived from Tsili Tsili, no infantry arrived from Port Moresby on 6 September because of bad flying weather over the Owen Stanley Range, although the 2/25th Infantry Battalion was flown to Tsili Tsili. On 7 September, reveille was sounded for the 2/33rd Infantry Battalion at 03:30 and the unit boarded trucks of the 158th General Transport Company that took it to marshalling areas near the airfields in preparation for the movement to Nadzab. At 04:20, B-24 Liberator 42-40682 "Pride of the Cornhuskers" of the 43rd Bombardment Group piloted by 2nd Lieutenant Howard Wood set out from Jackson's Airfield on a reconnaissance sortie to Rabaul, with a full load of 2,800 imperial gallons (13,000 L) of fuel and four 500 lb (230 kg) bombs. It clipped a tree at the end of the runway, crashed into two other trees and exploded, killing all eleven crewmen on board instantly and spraying burning fuel over a large area. Five of the 158th General Transport Company's trucks containing men of the 2/33rd Infantry Battalion were hit and burst into flames. Every man in those trucks was killed or injured; 15 were killed outright, 44 died of their wounds and 92 were injured but survived. Despite the disaster, the 2/33rd Infantry Battalion flew out to Tsili Tsili as scheduled. Due to the unpredictable weather, aircraft continued to arrive at Nadzab sporadically. Only the 2/25th Infantry Battalion and part of the 2/33rd had reached Nadzab by the morning of 8 September when Vasey ordered the commander of the 25th Infantry Brigade, Brigadier Ken Eather, to initiate the advance on Lae. 
That day there were 112 landings at Nadzab. On 9 September, as the advance began, the rest of the 2/33rd Infantry Battalion reached Nadzab from Tsili Tsili, but while there were 116 landings at Nadzab, bad weather prevented the 2/31st Infantry Battalion from leaving Port Moresby. Finally, on 12 September, after three non-flying days, the 2/31st Infantry Battalion reached Nadzab on some of the 130 landings on the two strips at Nadzab that day.

On 13 September, a platoon of the 2/25th Infantry Battalion came under very heavy fire from a concealed Japanese machine gun near Heath's Plantation that wounded a number of Australians, including Corporal W. H. (Billy) Richards, and halted the platoon's advance. Private Richard Kelliher suddenly, on his own initiative, dashed toward the post and hurled two grenades at it, which killed some of the Japanese defenders but not all. He returned to his section, seized a Bren gun, dashed back to the enemy post and silenced it. He then asked permission to go out again to rescue the wounded Richards, which he accomplished successfully under heavy fire from another enemy position. Kelliher was awarded the Victoria Cross.

North of the main advance, a patrol from Lieutenant Colonel John J. Tolson's 3rd Battalion, 503rd Parachute Infantry Regiment, encountered a force of 200 Japanese crossing the Bumbu River on 15 September. The Americans engaged the Japanese force and reported inflicting heavy losses. The arrival that day of the first units of Brigadier Ivan Dougherty's 21st Infantry Brigade at Nadzab at last allowed the paratroopers to be relieved. By this time, the 9th Division was about 1.5 miles (2.4 km) east of Lae, while the 7th Division was 7 miles (11 km) away and "it appeared an odds-on bet that the 9th would reach Lae first".

The 7th Division resumed its advance at dawn on 16 September. The last ten Japanese troops facing the 2/33rd Infantry Battalion were killed and the 2/25th Infantry Battalion passed through its position and headed for Lae. As they moved down the Markham Valley Road, they occasionally encountered sick Japanese soldiers who held up the column momentarily. Brigadier Eather came up in his jeep and started urging the diggers to hurry up. They were unimpressed. Eather, armed with a pistol, then acted as leading scout, with his troops following in a column of route behind him. The column entered Lae unopposed by the Japanese, but aircraft of the Fifth Air Force strafed the 2/33rd Infantry Battalion and dropped parachute fragmentation bombs, wounding two men. Whitehead soon received a message sent in the clear from Vasey that read: "Only the Fifth Air Force bombers are preventing me from entering Lae." By early afternoon, the 2/31st Infantry Battalion reached the Lae airfield where it killed 15 Japanese soldiers and captured one. The 25th Infantry Brigade then came under fire from the 9th Division's 25-pounders, wounding one soldier. Vasey and Eather tried every available means to inform Wootten of the situation. A message eventually reached him through RAAF channels at 14:25 and the artillery was silenced.

### Japanese withdrawal

On 8 September, Adachi ordered Nakano to abandon Salamaua and fall back on Lae. Nakano had already evacuated his hospital patients and artillery to Lae. On 11 September, his main body began to withdraw. By this time, it was clear that Blamey intended to cut off and destroy the 51st Division.
After discussing the matter with Imperial General Headquarters in Tokyo, Imamura and Adachi called off their plans to capture Bena Bena and Mount Hagen and instructed Nakano and Shoge to move overland to the north coast of the Huon Peninsula while the 20th Division moved from Madang to Finschhafen, sending one regiment down the Ramu valley to assist the 51st Division. The Salamaua garrison assembled at Lae on 14 September, and the Japanese evacuated the town over the next few days. It was a retreating band that contacted the 3rd Battalion, 503rd Parachute Infantry Regiment. The Japanese hurriedly altered their route before the Australians could intercept them. Crossing the Saruwaged Range proved to be a gruelling test of endurance for the Japanese soldiers. They started out with ten days' rations but this was exhausted by the time they reached Mount Salawaket. The 51st Division had already abandoned most of its heavy equipment; now, many soldiers threw away their rifles. "The Sarawaged crossing", wrote Lieutenant General Kane Yoshihara, "took far longer than had been expected, and its difficulties were beyond discussion. Near the mountain summits the cold was intense and sleep was quite impossible all the cold night; they could only doze beside the fire. Squalls came, the ice spread and they advanced through snow under this tropical sky. Gradually the road they were climbing became a descending slope, but the inclination was so steep that if they missed their footing they would fall thousands and thousands of feet – and how many men lost their lives like that!" In the end, the Japanese Army could take pride in conducting a creditable defence in the face of an impossible tactical situation. "Fortune and Nature, however, favoured a valiant defender despite the equally valiant striving of the attackers." ## Aftermath ### Casualties The 503rd Parachute Infantry lost three men killed and 33 injured in the jump. Another eight were killed and 12 wounded in action against the Japanese, and 26 were evacuated sick. The 2/5th Field Ambulance treated 55 jump casualties on 7 September. Between 5 and 19 September, the 7th Division reported 38 killed and 104 wounded, while another 138 were evacuated sick. To this must be added the 11 Americans and 59 Australians killed and 92 Australians injured in the air crash at Jackson's Airfield. Thus, 119 Allied servicemen were killed, 241 wounded or injured, and 166 evacuated sick. Japanese casualties were estimated at 2,200, but it is impossible to apportion them between the 7th and 9th Divisions. ### Base development The development of Nadzab depended on heavy construction equipment which had to be landed at Lae and moved over the Markham Valley Road. The job of improving the road was assigned to the 842nd Engineer Aviation Battalion, which arrived at Lae on 20 September but after a few days' work it was ordered to relieve the 871st Airborne Aviation Battalion at Nadzab. The 842nd reached Nadzab on 4 October but a combination of unseasonable rainfall and heavy military traffic destroyed the road surface and closed the road, forcing Nadzab to be supplied from Lae by air. The 842nd then had to resume work on the road, this time from the Nadzab end. Heavy rain was experienced on 46 of the next 60 days. The road was reopened on 15 December, allowing the 836th, 839th, 868th and 1881st Engineer Aviation Battalions and No. 62 Works Wing RAAF to move to Nadzab to work on the development of the airbase. The airbase would eventually consist of four all-weather airfields. 
No 1 had a 6,000 feet (1,800 m) by 100-foot (30 m) runway surfaced with Marsden Matting and a 7,000 feet (2,100 m) by 100-foot (30 m) runway surfaced with bitumen. No. 2 had a 4,000 feet (1,200 m) by 100-foot (30 m) runway partially surfaced with bitumen. No. 3 had a 7,000 feet (2,100 m) by 100-foot (30 m) runway surfaced with bitumen in the centre with 1,000 feet (300 m) of Marsden mat at either end. No. 4, an RAAF airfield named Newton after Flight Lieutenant William Ellis Newton, had two parallel 6,000 feet (1,800 m) by 100-foot (30 m) runways surfaced with bitumen. Nadzab became the Allied Air Forces' main base in New Guinea. ### Outcome General Blamey declared the capture of Lae and Salamaua to be "a signal step on the road to Victory". Tolson described the 503rd Parachute Infantry Regiment's operation at Nadzab as "probably the classic text-book airborne operation of World War II". Coming after the impressive but flawed performance of the airborne arm in the Allied invasion of Sicily, Nadzab influenced thinking about the value of airborne operations. However, the impact was far greater than anyone on the Allied side realised, and the ramifications went far beyond New Guinea. Imperial General Headquarters had regarded the defeats in the Guadalcanal Campaign and Battle of Buna–Gona as setbacks only, and had continued to plan offensives in the South West Pacific. Now it concluded that the Japanese position was over-extended. A new defensive line was drawn running through Western New Guinea, the Caroline Islands and the Mariana Islands. Henceforth, positions beyond that line would be held as an outpost line. General Imamura was now charged not with winning a decisive victory, but only with holding on as long as possible so as to delay the Allied advance.
French battleship Jauréguiberry
Pre-dreadnought battleship constructed for the French Navy
[ "1893 ships", "Battleships of the French Navy", "World War I battleships of France" ]
Jauréguiberry was a pre-dreadnought battleship constructed for the French Navy (French: Marine Nationale) in the 1890s. Built in response to a naval expansion program of the British Royal Navy, she was one of a group of five roughly similar battleships, including Masséna, Bouvet, Carnot, and Charles Martel. Jauréguiberry was armed with a mixed battery of 305 mm (12 in), 274 mm (10.8 in) and 138 mm (5.4 in) guns. Constraints on displacement imposed by the French naval command produced a series of ships that were significantly inferior to their British counterparts, suffering from poor stability and a mixed armament that was difficult to control in combat conditions.

In peacetime the ship participated in routine training exercises and cruises in the Mediterranean Sea, primarily as part of the Mediterranean Squadron. The ship was involved in several accidents, including a boiler explosion and an accidental torpedo detonation that delayed her entry into service in 1897. Two more torpedo explosions occurred in 1902 and 1905, and she ran aground during a visit to Portsmouth in August 1905. By 1907, she had been transferred to the Reserve Division, although she continued to participate in maneuvers and other peacetime activities. Following the outbreak of World War I in July 1914, Jauréguiberry escorted troop convoys from North Africa and India to France. She supported French troops during the Gallipoli Campaign, including during the landing at Cape Helles in April 1915, before she became guardship at Port Said from 1916 until the end of the war. Upon her return to France in 1919 she became an accommodation hulk until 1932. The ship was sold for scrap in 1934.

## Background and design

In 1889, the British Parliament passed the Naval Defence Act, which resulted in the construction of the eight Royal Sovereign-class battleships; this major expansion of naval power led the French government to respond with the Statut Naval (Naval Law) of 1890. The law called for twenty-four "cuirassés d'escadre" (squadron battleships) and a host of other vessels, including coastal-defense battleships, cruisers, and torpedo boats. The first stage of the program was to be a group of four squadron battleships built to different designs, but meeting the same basic requirements, including armor, armament, and displacement. The naval high command issued the basic characteristics on 24 December 1889: displacement should not exceed 14,000 metric tons (13,779 long tons), the main battery was to consist of 34 cm (13.4 in) and 27 cm (10.6 in) guns, the belt armor should be 45 cm (17.7 in), and the ships should maintain a top speed of 17 knots (31 km/h; 20 mph). The secondary armament was to be either 14 cm (5.5 in) or 16 cm (6.3 in) caliber, with as many guns fitted as space would allow.

The basic design for the ships was based on the previous battleship Brennus, but instead of mounting the main battery all on the centerline, the ships used the lozenge arrangement of the earlier vessel Magenta, which moved two of the main battery guns to single turrets on the wings. Although the navy had stipulated that displacement could be up to 14,000 metric tons, political considerations, namely parliamentary objections to increases in naval expenditures, led the designers to limit displacement to around 12,000 metric tons (11,810 long tons). Five naval architects submitted proposals to the competition.
The design for Jauréguiberry was prepared by Amable Lagane, the director of naval construction at the Forges et Chantiers de la Méditerranée shipyard in La Seyne-sur-Mer. Lagane had previously supervised the construction of the Magenta-class ironclad Marceau, which influenced his design for Jauréguiberry. Though the program called for four ships to be built in the first year, five were ultimately ordered: Jauréguiberry, Charles Martel, Masséna, Carnot, and Bouvet. Jauréguiberry used a very similar hull form to Marceau's, and as a result, was shorter and wider than the other vessels. The design for Jauréguiberry was also influenced by the Chilean battleship Capitán Prat, then under construction in France (and which also had been designed by Lagane). A small vessel, Capitán Prat had adopted twin-gun turrets for her secondary battery to save space that would have been taken up by traditional casemate mountings. Lagane incorporated that solution in Jauréguiberry, though she was the only French battleship of the program to use that arrangement owing to fears that the rate of fire would be reduced and that the turrets would be more vulnerable to being disabled by a single lucky hit. She was the first French battleship to use electric motors to operate her main-battery turrets. She and her half-sisters were disappointments in service; they generally suffered from stability problems, and Louis-Émile Bertin, the Director of Naval Construction in the late 1890s, referred to the ships as "chavirables" (prone to capsizing). All five of the vessels compared poorly to their British counterparts, particularly their contemporaries of the Majestic class. The ships suffered from a lack of uniformity of equipment, which made them hard to maintain in service, and their mixed gun batteries comprising several calibers made gunnery in combat conditions difficult, since the splashes of relatively similarly sized shells were hard to differentiate and thus made it difficult to calculate corrections to hit the target. Many of the problems that plagued the ships in service were a result of the limitation on their displacement, particularly their stability and seakeeping. ### General characteristics and machinery Jauréguiberry was 111.9 meters (367 ft 2 in) long overall. She had a maximum beam of 23 meters (75 ft 6 in) and a draft of 8.45 meters (27 ft 9 in). She displaced 11,818 metric tons (11,631 long tons) at normal load and 12,229 metric tons (12,036 long tons) at full load. She was fitted with two heavy military masts with fighting tops. In 1905 her captain described her as an excellent sea-boat and a good fighting ship, although her secondary armament was too light. He also said that she was stable and well laid-out with good living conditions. She had a crew of 631 officers and enlisted sailors. Jauréguiberry had two vertical triple-expansion steam engines, also built by Forges et Chantiers de la Méditerranée, which were designed to give the ship a speed of 17.5 knots (32.4 km/h; 20.1 mph). On trials they developed 14,441 indicated horsepower (10,769 kW) and drove the ship to a maximum speed of 17.71 knots (32.80 km/h; 20.38 mph). Each engine drove a 5.7-meter (18 ft 8 in) propeller. Twenty-four Lagraffel d'Allest water-tube boilers provided steam for the engines at a pressure of 15 kg/cm<sup>2</sup> (1,471 kPa; 213 psi). The boilers were distributed between six boiler rooms and were ducted into a pair of closely spaced funnels. 
She normally carried 750 metric tons (738 long tons) of coal, but could carry a maximum of 1,080 metric tons (1,063 long tons). This gave her a radius of action of 3,920 nautical miles (7,260 km; 4,510 mi) at 10 knots (19 km/h; 12 mph). ### Armament Jauréguiberry's main armament consisted of two 45-caliber Canon de 305 mm (12 in) Modèle 1887 guns in two single-gun turrets, one each fore and aft of the superstructure. A pair of 45-caliber Canon de 274 mm (10.8 in) Modèle 1887 guns were mounted in single-gun wing turrets, one amidships on each side, sponsoned out over the tumblehome of the ship's sides. Each 305 mm turret had an arc of fire of 250°. The 305 mm guns fired 292-kilogram (644 lb) cast iron (CI) projectiles, or heavier 340-kilogram (750 lb) armor-piercing (AP) and semi-armor-piercing (SAP) shells at a muzzle velocity of 780 to 815 meters per second (2,560 to 2,670 ft/s). The 274 mm guns were also supplied with a mix of CI, AP, and SAP shells, with the same muzzle velocity as the larger guns. The ship's offensive armament was completed by a secondary battery of eight 45-caliber Canon de 138.6 mm (5.5 in) Modèle 1891 guns mounted in manually operated twin-gun turrets. The turrets were placed at the corners of the superstructure with 160° arcs of fire. They fired 30 kg (66 lb) CI or 35 kg (77 lb) AP or SAP shells at a muzzle velocity of 730 to 770 meters per second (2,400 to 2,500 ft/s). Defense against torpedo boats was provided by a variety of light-caliber weapons. Sources disagree on the number and types, possibly indicating changes over the ship's lifetime. All sources agree on four 65-millimeter (2.6 in) 50-caliber guns. These fired a 4-kilogram (8.8 lb) shell at a muzzle velocity of 715 meters per second (2,350 ft/s). Gibbons and Gardiner agree on twelve (later eighteen) 47 mm (1.9 in) 40-caliber Canon de 47 mm Modèle 1885 Hotchkiss guns, mounted in the fighting tops and on the superstructure, although d'Ausson lists fourteen. They fired a 1.49-kilogram (3.3 lb) projectile at 610 meters per second (2,000 ft/s) to a maximum range of 4,000 meters (4,400 yd). Their theoretical maximum rate of fire was fifteen rounds per minute, but only seven rounds per minute sustained. Gibbons and Gardiner agree that eight 37 mm (1.5 in) Hotchkiss 5-barrel revolving guns were mounted on the fore and aft superstructures, although none are listed by d'Ausson. The ship was initially fitted with 450-millimeter (17.7 in) torpedo tubes, though sources disagree on the number. Gardiner states that she had two submerged tubes and two above-water tubes, but d'Ausson states that she had six tubes: two above-water tubes each in the bow and stern, and one underwater tube on each broadside. The above-water tubes were removed during a refit in 1906. The M1892 torpedoes carried a 75 kg (165 lb) warhead, and could be set to run at 27.5 knots (50.9 km/h; 31.6 mph) or 32.5 knots (60.2 km/h; 37.4 mph), reaching targets at 1,000 m (3,300 ft) or 800 m (2,600 ft), respectively. ### Armor Jauréguiberry had a total of 3,960 metric tons (3,897 long tons) of nickel-steel armor, equal to 33.5% of her normal displacement. Her waterline belt ranged from 160 to 400 mm (6.3 to 15.7 in) in thickness. Above the belt was a 100 mm (3.9 in) thick strake of side armor that created a highly divided cofferdam. Around the above-water torpedo tubes, the upper strake increased to 170 mm (6.7 in). The 90-millimeter (3.5 in) armored deck rested on the top of the waterline belt. 
Her 305 mm gun turrets were protected by 370 mm (15 in) of armor on the sides and faces while her 274 mm turrets had 280 mm (11 in) of armor. The ship's secondary turrets were protected by 100 millimeters (3.9 in) of armor. The walls of her conning tower were 250 mm (9.8 in) thick. ## Service Jauréguiberry was ordered on 8 April 1891 and laid down on 23 April at Forges et Chantiers de la Méditerranée in La Seyne-sur-Mer. She was launched on 27 October 1893 and was complete enough to begin her sea trials on 30 January 1896. A tube in one of her boilers burst on 10 June during a 24-hour engine trial, killing six and wounding three. Two months later she suffered an accident while testing her main armament. She was finally commissioned on 16 February 1897, although the explosion of a torpedo's air chamber on 30 March delayed her assignment to the Mediterranean Squadron until 17 May. During this period, she was fitted with a new electric order-transmission system that relayed instructions from the ship's fire-control center to the guns, a marked improvement over the voice tubes that were in standard use in the world's navies at the time. Immediately on entering service, she and her half-sisters Charles Martel and Carnot were sent to join the International Squadron that had been assembled beginning in February. The multinational force also included ships of the Austro-Hungarian Navy, the Imperial German Navy, the Italian Regia Marina, the Imperial Russian Navy, and the British Royal Navy, and it was sent to intervene in the 1897–1898 Greek uprising on Crete against rule by the Ottoman Empire. Throughout the ship's peacetime career, she was occupied with routine training exercises, which included gunnery training, combined maneuvers with torpedo boats and submarines, and practice attacks on coastal fortifications. One of the largest of these exercises was conducted between March and July 1900, and involved the Mediterranean Squadron and the Northern Squadron. On 6 March, Jauréguiberry joined the battleships Brennus, Gaulois, Charlemagne, Charles Martel, and Bouvet and four protected cruisers for maneuvers off Golfe-Juan, including night-firing training. Over the course of April, the ships visited numerous French ports along the Mediterranean coast, and on 31 May the fleet steamed to Corsica for a visit that lasted until 8 June. After completing its own exercises in the Mediterranean, the Mediterranean Squadron rendezvoused with the Northern Squadron off Lisbon, Portugal, in late June before proceeding to Quiberon Bay for joint maneuvers in July. The maneuvers concluded with a naval review in Cherbourg on 19 July for President Émile Loubet. On 1 August, the Mediterranean Squadron departed for Toulon, arriving on 14 August. On 20 January 1902 the air chamber of another torpedo exploded, killing one sailor and wounding three. In September she transported the Minister of the Navy to Bizerte. By this time, the ship had been assigned to the 2nd Battle Division of the Mediterranean Squadron, along with Bouvet and the new battleship Iéna, the latter becoming the divisional flagship. In October, Jauréguiberry and the rest of the Mediterranean Squadron battleships steamed to Palma de Mallorca, and on the return to Toulon they conducted training exercises. Jauréguiberry was transferred to the Northern Squadron in 1904, her place in the Mediterranean Squadron being taken by the new battleship Suffren. Jauréguiberry arrived at Brest on 25 March. 
She was lightly damaged when she touched a rock while entering Brest in fog on 18 July and, in another incident, her steering compartment was flooded when a torpedo air chamber burst between her screws during a torpedo-launching exercise on 18 May 1905. While visiting Portsmouth on 14 August, Jauréguiberry ran aground for a short time in the outer harbor. She returned to the Mediterranean Squadron in February 1907 where she was assigned to the Reserve Division, and the following year was reassigned to the 3rd Division. On 13 January 1908, she joined the battleships République, Patrie, Gaulois, Charlemagne, Saint Louis, and Masséna for a cruise in the Mediterranean, first to Golfe-Juan and then to Villefranche-sur-Mer, where the squadron stayed for a month. In 1909, the 3rd and 4th Divisions were reformed into the 2nd Independent Squadron and transferred to the Atlantic in 1910. Beginning on 29 September 1910 her boiler tubes were renewed in a four-month refit at Cherbourg. On 4 September 1911, she participated in a naval review off Toulon. In October 1912 the Squadron was reassigned to the Mediterranean Squadron and a year later, in October 1913, Jauréguiberry was transferred to the Training Division. During this period, she was fitted with an experimental fire-control system as part of a series of tests before it was installed in the new Courbet-class dreadnought battleships. She became the flagship of the Special Division in April 1914; in August, the commander of the division was Contre-amiral (Rear Admiral) Darrieus. At that time, the division also included the battleship Charlemagne and the cruisers Pothuau and D'Entrecasteaux. ### World War I Following the outbreak of World War I in July 1914, France announced general mobilization on 1 August. The next day, Admiral Augustin Boué de Lapeyrère ordered the entire French fleet to begin raising steam at 22:15 so the ships could sortie early the next day. The bulk of the fleet, including the Division de complément, was sent to French North Africa, where they escorted the vital troop convoys carrying elements of the French Army from North Africa back to France to counter the expected German invasion. The French fleet was tasked with guarding against a possible attack by the German battlecruiser Goeben, which instead fled to the Ottoman Empire. As part of her mission, Jauréguiberry was sent to Oran, French Algeria on 4 August, in company with Bouvet, Suffren, and Gaulois. She also escorted a convoy of Indian troops passing through the Mediterranean in September. Beginning in December, Jauréguiberry was stationed at Bizerte, remaining there until February 1915 when she sailed to Port Said to become flagship of the Syrian Division, commanded by Admiral Louis Dartige du Fournet. At that time, the division included Saint Louis, the coast defence battleship Henri IV, and D'Entrecasteaux. On 25 March, Jauréguiberry departed Port Said for the Dardanelles, where the French and British fleets were attempting to break through the Ottoman defenses guarding the straits. An earlier Anglo-French attack on 18 March had cost the French fleet the battleship Bouvet, and two other battleships—Suffren and Gaulois—had been badly damaged and forced to withdraw. To make good his losses, Admiral Émile Guépratte requested that Jauréguiberry and Saint Louis be transferred to his command. On 1 April, Guépratte transferred his flag from Charlemagne to Jauréguiberry. 
By late May, the French squadron had been restored to effective strength, and included the battleships Saint Louis, Charlemagne, Patrie, Suffren, and Henri IV. The formation was designated the 3rd Battle Division. Jauréguiberry provided gunfire support to the troops during the Landing at Cape Helles on 25 April, during which the French forces made a diversionary landing on the Asian side of the straits. During the operation, Jauréguiberry and the other French ships kept the Ottoman guns on that side of the strait largely suppressed, and prevented them from interfering with the main landing at Cape Helles. She continued operations in the area until 26 May, including supporting the Allied attack during the Second Battle of Krithia on 6 May. She was lightly damaged by Turkish artillery on 30 April and 5 May, but continued to fire her guns as needed. Jauréguiberry was recalled to Port Said on 19 July and bombarded Ottoman-controlled Haifa on 13 August. She resumed her role as flagship of the Syrian Division on 19 August. The ship participated in the occupation of Ile Rouad on 1 September and other missions off the Syrian coast until she was transferred to Ismailia in January 1916 to assist in the defense of the Suez Canal, although she returned to Port Said shortly afterward. Jauréguiberry was refitted at Malta between 25 November and 26 December 1916, thereafter returning to Port Said. She landed some of her guns to help defend the canal in 1917 and was reduced to reserve in 1918. The ship arrived at Toulon on 6 March 1919 where she was decommissioned and transferred to the Engineer's Training School on 30 March for use as an accommodation hulk. She was struck from the Navy List on 20 June 1920, but remained assigned to the Engineer's School until 1932. Jauréguiberry was sold for scrap on 23 June 1934 for the price of 1,147,000 francs.
208,430
Enceladus
1,173,702,339
Natural satellite orbiting Saturn
[ "Articles containing video clips", "Astronomical objects discovered in 1789", "Discoveries by William Herschel", "Enceladus", "Moons of Saturn", "Moons with a prograde orbit" ]
Enceladus is the sixth-largest moon of Saturn (19th largest in the Solar System). It is about 500 kilometers (310 miles) in diameter, about a tenth of that of Saturn's largest moon, Titan. It is mostly covered by fresh, clean ice, making it one of the most reflective bodies of the Solar System. Consequently, its surface temperature at noon reaches only −198 °C (75.1 K; −324.4 °F), far colder than a light-absorbing body would be. Despite its small size, Enceladus has a wide range of surface features, ranging from old, heavily cratered regions to young, tectonically deformed terrain. Enceladus was discovered on August 28, 1789, by William Herschel, but little was known about it until the two Voyager spacecraft, Voyager 1 and Voyager 2, flew by Saturn in 1980 and 1981. In 2005, the spacecraft Cassini started multiple close flybys of Enceladus, revealing its surface and environment in greater detail. In particular, Cassini discovered water-rich plumes venting from the south polar region. Cryovolcanoes near the south pole shoot geyser-like jets of water vapor, molecular hydrogen, other volatiles, and solid material, including sodium chloride crystals and ice particles, into space, totaling about 200 kilograms (440 pounds) per second. More than 100 geysers have been identified. Some of the water vapor falls back as "snow"; the rest escapes and supplies most of the material making up Saturn's E ring. According to NASA scientists, the plumes are similar in composition to comets. In 2014, NASA reported that Cassini had found evidence for a large south polar subsurface ocean of liquid water with a thickness of around 10 km (6 mi). The existence of Enceladus' subsurface ocean has since been mathematically modelled and replicated. These geyser observations, along with the finding of escaping internal heat and very few (if any) impact craters in the south polar region, show that Enceladus is currently geologically active. Like many other satellites in the extensive systems of the giant planets, Enceladus is trapped in an orbital resonance. Its resonance with Dione excites its orbital eccentricity, which is damped by tidal forces, tidally heating its interior and driving the geological activity. Cassini performed chemical analysis of Enceladus's plumes, finding evidence for hydrothermal activity, possibly driving complex chemistry. Ongoing research on Cassini data suggests that Enceladus's hydrothermal environment could be habitable to some of Earth's hydrothermal vent microorganisms, and that methane found in the plumes could be produced by such organisms. ## History ### Discovery Enceladus was discovered by William Herschel on August 28, 1789, during the first use of his new 1.2 m (47 in) 40-foot telescope, then the largest in the world, at Observatory House in Slough, England. Its faint apparent magnitude (H<sub>V</sub> = +11.7) and its proximity to the much brighter Saturn and Saturn's rings make Enceladus difficult to observe from Earth with smaller telescopes. Like many satellites of Saturn discovered prior to the Space Age, Enceladus was first observed during a Saturnian equinox, when Earth is within the ring plane. At such times, the reduction in glare from the rings makes the moons easier to observe. Prior to the Voyager missions the view of Enceladus improved little from the dot first observed by Herschel. Only its orbital characteristics were known, with estimations of its mass, density and albedo. ### Naming Enceladus is named after the giant Enceladus of Greek mythology. 
The name, like the names of each of the first seven satellites of Saturn to be discovered, was suggested by William Herschel's son John Herschel in his 1847 publication Results of Astronomical Observations made at the Cape of Good Hope. He chose these names because Saturn, known in Greek mythology as Cronus, was the leader of the Titans. Geological features on Enceladus are named by the International Astronomical Union (IAU) after characters and places from Richard Francis Burton's 1885 translation of The Book of One Thousand and One Nights. Impact craters are named after characters, whereas other feature types, such as fossae (long, narrow depressions), dorsa (ridges), planitiae (plains), sulci (long parallel grooves), and rupes (cliffs), are named after places. The IAU has officially named 85 features on Enceladus, most recently Samaria Rupes, formerly called Samaria Fossa. ## Shape and size Enceladus is a relatively small satellite composed of ice and rock. It is a scalene ellipsoid in shape; its diameters, calculated from images taken by Cassini's ISS (Imaging Science Subsystem) instrument, are 513 km between the sub- and anti-Saturnian poles, 503 km between the leading and trailing hemispheres, and 497 km between the north and south poles. Enceladus is only one-seventh the diameter of Earth's Moon. It ranks sixth in both mass and size among the satellites of Saturn, after Titan (5,150 km), Rhea (1,530 km), Iapetus (1,440 km), Dione (1,120 km) and Tethys (1,050 km). ## Orbit and rotation Enceladus is one of the major inner satellites of Saturn along with Dione, Tethys, and Mimas. It orbits at 238,000 km (148,000 mi) from Saturn's center and 180,000 km (110,000 mi) from its cloud tops, between the orbits of Mimas and Tethys. It orbits Saturn every 32.9 hours, fast enough for its motion to be observed over a single night. Enceladus is currently in a 2:1 mean-motion orbital resonance with Dione, completing two orbits around Saturn for every one orbit completed by Dione. This resonance maintains Enceladus's orbital eccentricity (0.0047), which is known as a forced eccentricity. This non-zero eccentricity results in tidal deformation of Enceladus. The dissipated heat resulting from this deformation is the main heating source for Enceladus's geologic activity. Enceladus orbits within the densest part of Saturn's E ring, the outermost of its major rings, and is the main source of the ring's material. Like most of Saturn's larger satellites, Enceladus rotates synchronously with its orbital period, keeping one face pointed toward Saturn. Unlike Earth's Moon, Enceladus does not appear to librate more than 1.5° about its spin axis. However, analysis of the shape of Enceladus suggests that at some point it was in a 1:4 forced secondary spin–orbit libration. This libration could have provided Enceladus with an additional heat source. ### Source of the E ring Plumes from Enceladus, which are similar in composition to comets, have been shown to be the source of the material in Saturn's E ring. The E ring is the widest and outermost ring of Saturn (except for the tenuous Phoebe ring). It is an extremely wide but diffuse disk of microscopic icy or dusty material distributed between the orbits of Mimas and Titan. Mathematical models show that the E ring is unstable, with a lifespan between 10,000 and 1,000,000 years; therefore, particles composing it must be constantly replenished. Enceladus orbits inside the ring, at its narrowest but highest density point. 
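The 2:1 relationship with Dione and Enceladus's ability to feed the E ring can both be illustrated with a little arithmetic. The sketch below is a minimal check rather than part of the encyclopedic text: it uses the 32.9-hour period quoted above, an approximate orbital period for Dione (about 65.7 hours, an assumed literature value), and a mass derived from the radius and bulk density quoted elsewhere in this article (roughly 252 km and 1.61 g/cm³), and compares the resulting escape velocity with the plume speeds reported in the south polar plume discussion below.

```python
import math

# Orbital periods in hours. Enceladus's 32.9 h is quoted in the text;
# Dione's ~65.7 h is an approximate literature value assumed here.
period_enceladus_h = 32.9
period_dione_h = 65.7
print(f"Dione / Enceladus period ratio: {period_dione_h / period_enceladus_h:.2f} (close to 2:1)")

# Escape velocity from a mean radius of ~252 km and a bulk density of
# 1.61 g/cm^3, both quoted elsewhere in the article.
G = 6.674e-11                      # gravitational constant, m^3 kg^-1 s^-2
radius_m = 252e3
density_kg_m3 = 1610
mass_kg = density_kg_m3 * (4 / 3) * math.pi * radius_m ** 3
v_escape = math.sqrt(2 * G * mass_kg / radius_m)
print(f"Mass ~ {mass_kg:.2e} kg, escape velocity ~ {v_escape:.0f} m/s")

# The plume's reported bulk speed (~1.25 km/s, see the south polar plume
# section) is several times the escape velocity, so part of the ejected
# material can leave Enceladus and feed the E ring.
print(f"Plume bulk speed / escape velocity ~ {1250 / v_escape:.1f}")
```

With these inputs the escape velocity comes out near 240 m/s, roughly a fifth of the reported plume bulk speed, which is consistent with the statement above that some of the vented material escapes and replenishes the ring while heavier grains fall back to the surface.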
In the 1980s, some scientists suspected that Enceladus might be the main source of particles for the ring. This hypothesis was confirmed by Cassini's first two close flybys in 2005. The Cosmic Dust Analyzer (CDA) "detected a large increase in the number of particles near Enceladus", confirming it as the primary source for the E ring. Analysis of the CDA and INMS data suggests that the gas cloud Cassini flew through during the July encounter, and observed from a distance with its magnetometer and UVIS, was actually a water-rich cryovolcanic plume, originating from vents near the south pole. Visual confirmation of venting came in November 2005, when ISS imaged geyser-like jets of icy particles rising from Enceladus's south polar region. (Although the plume was imaged before, in January and February 2005, additional studies of the camera's response at high phase angles, when the Sun is almost behind Enceladus, and comparison with equivalent high-phase-angle images taken of other Saturnian satellites, were required before this could be confirmed.) ## Geology ### Surface features Voyager 2 was the first spacecraft to observe Enceladus's surface in detail, in August 1981. Examination of the resulting highest-resolution imagery revealed at least five different types of terrain, including several regions of cratered terrain, regions of smooth (young) terrain, and lanes of ridged terrain often bordering the smooth areas. Extensive linear cracks and scarps were observed. Given the relative lack of craters on the smooth plains, these regions are probably less than a few hundred million years old. Accordingly, Enceladus must have been recently active with "water volcanism" or other processes that renew the surface. The fresh, clean ice that dominates its surface makes Enceladus the most reflective body in the Solar System, with a visual geometric albedo of 1.38 and bolometric Bond albedo of 0.81±0.04. Because it reflects so much sunlight, its surface only reaches a mean noon temperature of −198 °C (−324 °F), somewhat colder than other Saturnian satellites. Observations during three flybys on February 17, March 9, and July 14, 2005, revealed Enceladus's surface features in much greater detail than the Voyager 2 observations. The smooth plains, which Voyager 2 had observed, resolved into relatively crater-free regions filled with numerous small ridges and scarps. Numerous fractures were found within the older, cratered terrain, suggesting that the surface has been subjected to extensive deformation since the craters were formed. Some areas contain no craters, indicating major resurfacing events in the geologically recent past. There are fissures, plains, corrugated terrain and other crustal deformations. Several additional regions of young terrain were discovered in areas not well-imaged by either Voyager spacecraft, such as the bizarre terrain near the south pole. All of this indicates that at least part of Enceladus's interior is liquid today, even though it should have been frozen long ago. #### Impact craters Impact cratering is a common occurrence on many Solar System bodies. Much of Enceladus's surface is covered with craters at various densities and levels of degradation. The subdivision of cratered terrains on the basis of crater density (and thus surface age) suggests that Enceladus has been resurfaced in multiple stages. Cassini observations provided a much closer look at the crater distribution and size, showing that many of Enceladus's craters are heavily degraded through viscous relaxation and fracturing. 
Viscous relaxation allows gravity, over geologic time scales, to deform craters and other topographic features formed in water ice, reducing the amount of topography over time. The rate at which this occurs is dependent on the temperature of the ice: warmer ice is easier to deform than colder, stiffer ice. Viscously relaxed craters tend to have domed floors, or are recognized as craters only by a raised, circular rim. Dunyazad crater is a prime example of a viscously relaxed crater on Enceladus, with a prominent domed floor. #### Tectonic features Voyager 2 found several types of tectonic features on Enceladus, including troughs, scarps, and belts of grooves and ridges. Results from Cassini suggest that tectonics is the dominant mode of deformation on Enceladus, including rifts, one of the more dramatic types of tectonic features that were noted. These canyons can be up to 200 km long, 5–10 km wide, and 1 km deep. Such features are geologically young, because they cut across other tectonic features and have sharp topographic relief with prominent outcrops along the cliff faces. Evidence of tectonics on Enceladus is also derived from grooved terrain, consisting of lanes of curvilinear grooves and ridges. These bands, first discovered by Voyager 2, often separate smooth plains from cratered regions. Grooved terrains such as the Samarkand Sulci are reminiscent of grooved terrain on Ganymede. Unlike those seen on Ganymede, grooved topography on Enceladus is generally more complex. Rather than parallel sets of grooves, these lanes often appear as bands of crudely aligned, chevron-shaped features. In other areas, these bands bow upwards with fractures and ridges running the length of the feature. Cassini observations of the Samarkand Sulci have revealed dark spots (125 and 750 m wide) located parallel to the narrow fractures. Currently, these spots are interpreted as collapse pits within these ridged plain belts. In addition to deep fractures and grooved lanes, Enceladus has several other types of tectonic terrain. Many of these fractures are found in bands cutting across cratered terrain. These fractures probably propagate down only a few hundred meters into the crust. Many have probably been influenced during their formation by the weakened regolith produced by impact craters, often changing the strike of the propagating fracture. Another example of tectonic features on Enceladus are the linear grooves first found by Voyager 2 and seen at a much higher resolution by Cassini. These linear grooves can be seen cutting across other terrain types, like the groove and ridge belts. Like the deep rifts, they are among the youngest features on Enceladus. However, some linear grooves have been softened like the craters nearby, suggesting that they are older. Ridges have also been observed on Enceladus, though not nearly to the extent as those seen on Europa. These ridges are relatively limited in extent and are up to one kilometer tall. One-kilometer high domes have also been observed. Given the level of resurfacing found on Enceladus, it is clear that tectonic movement has been an important driver of geology for much of its history. #### Smooth plains Two regions of smooth plains were observed by Voyager 2. They generally have low relief and have far fewer craters than in the cratered terrains, indicating a relatively young surface age. In one of the smooth plain regions, Sarandib Planitia, no impact craters were visible down to the limit of resolution. 
Another region of smooth plains to the southwest of Sarandib is criss-crossed by several troughs and scarps. Cassini has since viewed these smooth plains regions, like Sarandib Planitia and Diyar Planitia at much higher resolution. Cassini images show these regions filled with low-relief ridges and fractures, probably caused by shear deformation. The high-resolution images of Sarandib Planitia revealed a number of small impact craters, which allow for an estimate of the surface age, either 170 million years or 3.7 billion years, depending on assumed impactor population. The expanded surface coverage provided by Cassini has allowed for the identification of additional regions of smooth plains, particularly on Enceladus's leading hemisphere (the side of Enceladus that faces the direction of motion as it orbits Saturn). Rather than being covered in low-relief ridges, this region is covered in numerous criss-crossing sets of troughs and ridges, similar to the deformation seen in the south polar region. This area is on the opposite side of Enceladus from Sarandib and Diyar Planitiae, suggesting that the placement of these regions is influenced by Saturn's tides on Enceladus. #### South polar region Images taken by Cassini during the flyby on July 14, 2005, revealed a distinctive, tectonically deformed region surrounding Enceladus's south pole. This area, reaching as far north as 60° south latitude, is covered in tectonic fractures and ridges. The area has few sizable impact craters, suggesting that it is the youngest surface on Enceladus and on any of the mid-sized icy satellites. Modeling of the cratering rate suggests that some regions of the south polar terrain are possibly as young as 500,000 years or less. Near the center of this terrain are four fractures bounded by ridges, unofficially called "tiger stripes". They appear to be the youngest features in this region and are surrounded by mint-green-colored (in false color, UV–green–near IR images), coarse-grained water ice, seen elsewhere on the surface within outcrops and fracture walls. Here the "blue" ice is on a flat surface, indicating that the region is young enough not to have been coated by fine-grained water ice from the E ring. Results from the visual and infrared spectrometer (VIMS) instrument suggest that the green-colored material surrounding the tiger stripes is chemically distinct from the rest of the surface of Enceladus. VIMS detected crystalline water ice in the stripes, suggesting that they are quite young (likely less than 1,000 years old) or the surface ice has been thermally altered in the recent past. VIMS also detected simple organic (carbon-containing) compounds in the tiger stripes, chemistry not found anywhere else on Enceladus thus far. One of these areas of "blue" ice in the south polar region was observed at high resolution during the July 14, 2005, flyby, revealing an area of extreme tectonic deformation and blocky terrain, with some areas covered in boulders 10–100 m across. The boundary of the south polar region is marked by a pattern of parallel, Y- and V-shaped ridges and valleys. The shape, orientation, and location of these features suggest they are caused by changes in the overall shape of Enceladus. As of 2006 there were two theories for what could cause such a shift in shape: the orbit of Enceladus may have migrated inward, leading to an increase in Enceladus's rotation rate. 
Such a shift would lead to a more oblate shape; or a rising mass of warm, low-density material in Enceladus's interior may have led to a shift in the position of the current south polar terrain from Enceladus's southern mid-latitudes to its south pole. Consequently, the moon's ellipsoid shape would have adjusted to match the new orientation. One problem with the polar flattening hypothesis is that both polar regions should have similar tectonic deformation histories. However, the north polar region is densely cratered, and has a much older surface age than the south pole. Thickness variations in Enceladus's lithosphere are one explanation for this discrepancy. Variations in lithospheric thickness are supported by the correlation between the Y-shaped discontinuities and the V-shaped cusps along the south polar terrain margin and the relative surface age of the adjacent non-south polar terrain regions. The Y-shaped discontinuities, and the north–south trending tension fractures into which they lead, are correlated with younger terrain with presumably thinner lithospheres. The V-shaped cusps are adjacent to older, more heavily cratered terrains. #### South polar plumes Following Voyager's encounters with Enceladus in the early 1980s, scientists postulated it to be geologically active based on its young, reflective surface and location near the core of the E ring. Based on the connection between Enceladus and the E ring, scientists suspected that Enceladus was the source of material in the E ring, perhaps through venting of water vapor. The first Cassini sighting of a plume of icy particles above Enceladus's south pole came from the Imaging Science Subsystem (ISS) images taken in January and February 2005, though the possibility of a camera artifact delayed an official announcement. Data from the magnetometer instrument during the February 17, 2005, encounter provided evidence for an atmosphere. The magnetometer observed a deflection or "draping" of the magnetic field, consistent with local ionization of neutral gas. During the two following encounters, the magnetometer team determined that gases in Enceladus's atmosphere are concentrated over the south polar region, with atmospheric density away from the pole being much lower. Unlike the magnetometer, the Ultraviolet Imaging Spectrograph failed to detect an atmosphere above Enceladus during the February encounter when it looked over the equatorial region, but did detect water vapor during an occultation over the south polar region during the July encounter. Cassini flew through this gas cloud on a few encounters, allowing instruments such as the ion and neutral mass spectrometer (INMS) and the cosmic dust analyzer (CDA) to directly sample the plume. (See 'Composition' section.) The November 2005 images showed the plume's fine structure, revealing numerous jets (perhaps issuing from numerous distinct vents) within a larger, faint component extending out nearly 500 km (310 mi) from the surface. The particles have a bulk velocity of 1.25 ± 0.1 kilometers per second (2,800 ± 220 miles per hour), and a maximum velocity of 3.40 km/s (7,600 mph). Cassini's UVIS later observed gas jets coinciding with the dust jets seen by ISS during a non-targeted encounter with Enceladus in October 2007. The combined analysis of imaging, mass spectrometry, and magnetospheric data suggests that the observed south polar plume emanates from pressurized subsurface chambers, similar to Earth's geysers or fumaroles. 
Fumaroles are probably the closer analogy, since periodic or episodic emission is an inherent property of geysers. The plumes of Enceladus were observed to be continuous to within a factor of a few. The mechanism that drives and sustains the eruptions is thought to be tidal heating. The intensity of the eruption of the south polar jets varies significantly as a function of the position of Enceladus in its orbit. The plumes are about four times brighter when Enceladus is at apoapsis (the point in its orbit most distant from Saturn) than when it is at periapsis. This is consistent with geophysical calculations which predict the south polar fissures are under compression near periapsis, pushing them shut, and under tension near apoapsis, pulling them open. Much of the plume activity consists of broad curtain-like eruptions. Optical illusions from a combination of viewing direction and local fracture geometry previously made the plumes look like discrete jets. The extent to which cryovolcanism really occurs is a subject of some debate. At Enceladus, it appears that cryovolcanism occurs because water-filled cracks are periodically exposed to vacuum, the cracks being opened and closed by tidal stresses. ### Internal structure Before the Cassini mission, little was known about the interior of Enceladus. However, flybys by Cassini provided information for models of Enceladus's interior, including a better determination of the mass and shape, high-resolution observations of the surface, and new insights on the interior. Initial mass estimates from the Voyager program missions suggested that Enceladus was composed almost entirely of water ice. However, based on the effects of Enceladus's gravity on Cassini, its mass was determined to be much higher than previously thought, yielding a density of 1.61 g/cm<sup>3</sup>. This density is higher than those of Saturn's other mid-sized icy satellites, indicating that Enceladus contains a greater percentage of silicates and iron. Castillo et al. (2005) suggested that Iapetus and the other icy satellites of Saturn formed relatively quickly after the formation of the Saturnian subnebula, and thus were rich in short-lived radionuclides. These radionuclides, like aluminium-26 and iron-60, have short half-lives and would produce interior heating relatively quickly. Without the short-lived variety, Enceladus's complement of long-lived radionuclides would not have been enough to prevent rapid freezing of the interior, even with Enceladus's comparatively high rock–mass fraction, given its small size. Given Enceladus's relatively high rock–mass fraction, the proposed enhancement in <sup>26</sup>Al and <sup>60</sup>Fe would result in a differentiated body, with an icy mantle and a rocky core. Subsequent radioactive and tidal heating would raise the temperature of the core to 1,000 K, enough to melt the inner mantle. For Enceladus to still be active, part of the core must have also melted, forming magma chambers that would flex under the strain of Saturn's tides. Tidal heating, such as from the resonance with Dione or from libration, would then have sustained these hot spots in the core and would power the current geological activity. In addition to its mass and modeled geochemistry, researchers have also examined Enceladus's shape to determine if it is differentiated. Porco et al. 
(2006) used limb measurements to determine that its shape, assuming hydrostatic equilibrium, is consistent with an undifferentiated interior, in contradiction to the geological and geochemical evidence. However, the current shape also supports the possibility that Enceladus is not in hydrostatic equilibrium, and may have rotated faster at some point in the recent past (with a differentiated interior). Gravity measurements by Cassini show that the density of the core is low, indicating that the core contains water in addition to silicates. #### Subsurface water ocean Evidence of liquid water on Enceladus began to accumulate in 2005, when scientists observed plumes containing water vapor spewing from its south polar surface, with jets moving 250 kg of water vapor every second at up to 2,189 km/h (1,360 mph) into space. Soon after, in 2006 it was determined that Enceladus's plumes are the source of Saturn's E Ring. The sources of salty particles are uniformly distributed along the tiger stripes, whereas sources of "fresh" particles are closely related to the high-speed gas jets. The "salty" particles are heavier and mostly fall back to the surface, whereas the fast "fresh" particles escape to the E ring, explaining its salt-poor composition of 0.5–2% of sodium salts by mass. Gravimetric data from Cassini's December 2010 flybys showed that Enceladus likely has a liquid water ocean beneath its frozen surface, but at the time it was thought the subsurface ocean was limited to the south pole. The top of the ocean probably lies beneath an ice shelf 30 to 40 kilometers (19 to 25 mi) thick. The ocean may be 10 kilometers (6.2 mi) deep at the south pole. Measurements of Enceladus's "wobble" as it orbits Saturn—called libration—suggest that the entire icy crust is detached from the rocky core and therefore that a global ocean is present beneath the surface. The amount of libration (0.120° ± 0.014°) implies that this global ocean is about 26 to 31 kilometers (16 to 19 miles) deep. For comparison, Earth's ocean has an average depth of 3.7 kilometers. #### Composition The Cassini spacecraft flew through the southern plumes on several occasions to sample and analyze their composition. As of 2019, the data gathered is still being analyzed and interpreted. The plumes' salty composition (-Na, -Cl, -CO<sub>3</sub>) indicates that the source is a salty subsurface ocean. The INMS instrument detected mostly water vapor, as well as traces of molecular nitrogen, carbon dioxide, and simple hydrocarbons such as methane, propane, acetylene and formaldehyde. The plumes' composition, as measured by the INMS, is similar to that seen at most comets. Cassini also found traces of simple organic compounds in some dust grains, as well as larger organics such as benzene (C<sub>6</sub>H<sub>6</sub>), and complex macromolecular organics as large as 200 atomic mass units and at least 15 carbon atoms in size. The mass spectrometer detected molecular hydrogen (H<sub>2</sub>), which was in "thermodynamic disequilibrium" with the other components, and found traces of ammonia (NH<sub>3</sub>). A model suggests that Enceladus's salty ocean (-Na, -Cl, -CO<sub>3</sub>) has an alkaline pH of 11 to 12. The high pH is interpreted to be a consequence of serpentinization of chondritic rock that leads to the generation of H<sub>2</sub>, a geochemical source of energy that could support both abiotic and biological synthesis of organic molecules such as those that have been detected in Enceladus's plumes. 
In 2019, further analysis was done of the spectral characteristics of ice grains in Enceladus's erupting plumes. The study found that nitrogen-bearing and oxygen-bearing amines were likely present, with significant implications for the availability of amino acids in the internal ocean. The researchers suggested that the compounds on Enceladus could be precursors for "biologically relevant organic compounds". ### Possible heat sources During the flyby of July 14, 2005, the Composite Infrared Spectrometer (CIRS) found a warm region near the south pole. Temperatures in this region ranged from 85 to 90 K, with small areas showing temperatures as high as 157 K (−116 °C), much too warm to be explained by solar heating, indicating that parts of the south polar region are heated from the interior of Enceladus. The presence of a subsurface ocean under the south polar region is now accepted, but it cannot explain the source of the heat, with an estimated heat flux of 200 mW/m<sup>2</sup>, which is about 10 times higher than that from radiogenic heating alone. Several explanations for the observed elevated temperatures and the resulting plumes have been proposed, including venting from a subsurface reservoir of liquid water, sublimation of ice, decompression and dissociation of clathrates, and shear heating, but a complete explanation of all the heat sources causing the observed thermal power output of Enceladus has not yet been settled. Heating in Enceladus has occurred through various mechanisms ever since its formation. Radioactive decay in its core may have initially heated it, giving it a warm core and a subsurface ocean, which is now kept above freezing through unidentified mechanisms. Geophysical models indicate that tidal heating is a main heat source, perhaps aided by radioactive decay and some heat-producing chemical reactions. A 2007 study predicted that the internal heat of Enceladus, if generated by tidal forces, could be no greater than 1.1 gigawatts, but 16 months of data on the south polar terrain from Cassini's infrared spectrometer indicate that the internally generated heat output is about 4.7 gigawatts, and suggest that it is in thermal equilibrium. The observed power output of 4.7 gigawatts is challenging to explain from tidal heating alone, so the main source of heat remains a mystery. Most scientists think the observed heat flux of Enceladus is not enough to maintain the subsurface ocean, and therefore any subsurface ocean must be a remnant of a period of higher eccentricity and tidal heating, or the heat is produced through another mechanism. #### Tidal heating Tidal heating occurs through tidal friction processes: orbital and rotational energy are dissipated as heat in the crust of an object. In addition, to the extent that tides produce heat along fractures, libration may affect the magnitude and distribution of such tidal shear heating. Tidal dissipation of Enceladus's ice crust is significant because Enceladus has a subsurface ocean. A computer simulation that used data from Cassini was published in November 2017, and it indicates that friction heat from the sliding rock fragments within the permeable and fragmented core of Enceladus could keep its underground ocean warm for up to billions of years. It is thought that if Enceladus had a more eccentric orbit in the past, the enhanced tidal forces could be sufficient to maintain a subsurface ocean, such that a periodic enhancement in eccentricity could maintain a subsurface ocean that periodically changes in size. 
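As a rough consistency check on the heat figures quoted above (a flux of about 200 mW/m² and a total south polar output of about 4.7 gigawatts), the sketch below spreads that power over a polar cap. The cap extent, poleward of roughly 70° S, is an assumption chosen purely for illustration; the radius of about 252 km is taken from the size figures earlier in the article.

```python
import math

# Rough consistency check: spread the reported ~4.7 GW of south polar heat
# output over a polar cap and compare with the quoted ~200 mW/m^2 flux.
# The cap extent (poleward of about 70 deg S) is an assumption for
# illustration; the radius comes from the size figures earlier in the article.
radius_m = 252e3
power_w = 4.7e9
cap_colatitude_deg = 20                     # from the pole out to 70 deg S

cap_area_m2 = 2 * math.pi * radius_m ** 2 * (1 - math.cos(math.radians(cap_colatitude_deg)))
flux_mw_m2 = power_w / cap_area_m2 * 1000
print(f"Cap area ~ {cap_area_m2:.2e} m^2, mean flux ~ {flux_mw_m2:.0f} mW/m^2")
```

This toy estimate gives a mean flux of roughly 195 mW/m², of the same order as the quoted value; a wider cap (out to 60° S, the mapped extent of the south polar terrain) gives a proportionally lower mean flux, so the comparison should be read only as an order-of-magnitude check.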
A 2016 analysis claimed that "a model of the tiger stripes as tidally flexed slots that puncture the ice shell can simultaneously explain the persistence of the eruptions through the tidal cycle, the phase lag, and the total power output of the tiger stripe terrain, while suggesting that eruptions are maintained over geological timescales." Previous models suggest that resonant perturbations of Dione could provide the necessary periodic eccentricity changes to maintain the subsurface ocean of Enceladus, if the ocean contains a substantial amount of ammonia. The surface of Enceladus indicates that the entire moon has experienced periods of enhanced heat flux in the past. #### Radioactive heating The "hot start" model of heating suggests Enceladus began as ice and rock that contained rapidly decaying short-lived radioactive isotopes of aluminium, iron and manganese. Enormous amounts of heat were then produced as these isotopes decayed for about 7 million years, resulting in the consolidation of rocky material at the core surrounded by a shell of ice. Although the heat from radioactivity would decrease over time, the combination of radioactivity and tidal forces from Saturn's gravitational tug could prevent the subsurface ocean from freezing. The present-day radiogenic heating rate is 3.2 × 10<sup>15</sup> ergs/s (or 0.32 gigawatts), assuming Enceladus has a composition of ice, iron and silicate materials. Heating from long-lived radioactive isotopes uranium-238, uranium-235, thorium-232 and potassium-40 inside Enceladus would add 0.3 gigawatts to the observed heat flux. The presence of Enceladus's regionally thick subsurface ocean suggests a heat flux \~10 times higher than that from radiogenic heating in the silicate core. #### Chemical factors Because no ammonia, which could act as an antifreeze, was initially found in the vented material by INMS or UVIS, it was thought such a heated, pressurized chamber would consist of nearly pure liquid water with a temperature of at least 270 K (−3 °C), because pure water requires more energy to melt. In July 2009 it was announced that traces of ammonia had been found in the plumes during flybys in July and October 2008. Reducing the freezing point of water with ammonia would also allow for outgassing and higher gas pressure, and less heat required to power the water plumes. The subsurface layer heating the surface water ice could be an ammonia–water slurry at temperatures as low as 170 K (−103 °C), and thus less energy is required to produce the plume activity. However, the observed 4.7-gigawatt heat output is enough to power the cryovolcanism without the presence of ammonia. ## Origin ### Mimas–Enceladus paradox Mimas, the innermost of the round moons of Saturn and directly interior to Enceladus, is a geologically dead body, even though it should experience stronger tidal forces than Enceladus. This apparent paradox can be explained in part by temperature-dependent properties of water ice (the main constituent of the interiors of Mimas and Enceladus). The tidal heating per unit mass is given by the formula $q_{tid}=\frac{63\rho n^{5} r^{4} e^{2}}{38\mu Q},$ where ρ is the (mass) density of the satellite, n is its mean orbital motion, r is the satellite's radius, e is the orbital eccentricity of the satellite, μ is the shear modulus and Q is the dimensionless dissipation factor. For a same-temperature approximation, the expected value of q<sub>tid</sub> for Mimas is about 40 times that of Enceladus. However, the material parameters μ and Q are temperature dependent. 
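To make the same-temperature comparison above concrete, the following sketch evaluates the ratio of q<sub>tid</sub> for Mimas and Enceladus from the formula just given, holding μQ fixed. The densities (1.15 and 1.61 g/cm³) and Enceladus's eccentricity (0.0047) appear in the surrounding text; Mimas's eccentricity (~0.0196), the radii (~198 km and ~252 km), and the orbital periods (~22.6 h and 32.9 h) are approximate values assumed here for illustration.

```python
import math

def q_tid_relative(rho, period_h, radius_m, ecc):
    """Tidal heating per unit mass from the formula in the text,
    q_tid = 63 * rho * n**5 * r**4 * e**2 / (38 * mu * Q),
    with the temperature-dependent factor mu*Q set to 1 so that only
    the ratio between two bodies is meaningful."""
    n = 2 * math.pi / (period_h * 3600)     # mean orbital motion, rad/s
    return 63 * rho * n ** 5 * radius_m ** 4 * ecc ** 2 / 38

# Densities and Enceladus's eccentricity are given in the surrounding text;
# the radii, periods and Mimas's eccentricity are assumed approximate values.
mimas = q_tid_relative(rho=1150, period_h=22.6, radius_m=198e3, ecc=0.0196)
enceladus = q_tid_relative(rho=1610, period_h=32.9, radius_m=252e3, ecc=0.0047)
print(f"q_tid(Mimas) / q_tid(Enceladus) ~ {mimas / enceladus:.0f}")
```

With these inputs the ratio comes out around 30, the same order as the roughly 40-fold figure quoted above; the exact number is sensitive to the adopted eccentricities and radii, and the point of the paradox is precisely that the real μQ values of the two moons differ strongly with temperature.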
At high temperatures (close to the melting point), μ and Q are low, so tidal heating is high. Modeling suggests that for Enceladus, both a 'basic' low-energy thermal state with little internal temperature gradient, and an 'excited' high-energy thermal state with a significant temperature gradient, and consequent convection (endogenic geologic activity), once established, would be stable. For Mimas, only a low-energy state is expected to be stable, despite its being closer to Saturn. So the model predicts a low-internal-temperature state for Mimas (values of μ and Q are high) but a possible higher-temperature state for Enceladus (values of μ and Q are low). Additional historical information is needed to explain how Enceladus first entered the high-energy state (e.g. more radiogenic heating or a more eccentric orbit in the past). The significantly higher density of Enceladus relative to Mimas (1.61 vs. 1.15 g/cm<sup>3</sup>), implying a larger content of rock and more radiogenic heating in its early history, has also been cited as an important factor in resolving the Mimas paradox. It has been suggested that for an icy satellite the size of Mimas or Enceladus to enter an 'excited state' of tidal heating and convection, it would need to enter an orbital resonance before it lost too much of its primordial internal heat. Because Mimas, being smaller, would cool more rapidly than Enceladus, its window of opportunity for initiating orbital resonance-driven convection would have been considerably shorter. ### Proto-Enceladus hypothesis Enceladus is losing mass at a rate of 200 kg/second. If mass loss at this rate continued for 4.5 Gyr, the satellite would have lost approximately 30% of its initial mass. A similar value is obtained by assuming that the initial densities of Enceladus and Mimas were equal. This suggests that tectonics in the south polar region is probably mainly related to subsidence and associated subduction caused by the process of mass loss. ### Date of formation In 2016, a study of how the orbits of Saturn's moons should have changed due to tidal effects suggested that all of Saturn's satellites inward of Titan, including Enceladus (whose geologic activity was used to derive the strength of tidal effects on Saturn's satellites), may have formed as little as 100 million years ago. A later study from 2019 estimated that the ocean is around one billion years old. ### Potential habitability Enceladus ejects plumes of salty water laced with grains of silica-rich sand, nitrogen (in ammonia), and organic molecules, including trace amounts of simple hydrocarbons such as methane (CH<sub>4</sub>), propane (C<sub>3</sub>H<sub>8</sub>), acetylene (C<sub>2</sub>H<sub>2</sub>) and formaldehyde (CH<sub>2</sub>O), which are carbon-bearing molecules. This indicates that hydrothermal activity—an energy source—may be at work in Enceladus's subsurface ocean. Models indicate that the large rocky core is porous, allowing water to flow through it, transferring heat and chemicals. This has been confirmed by observations and other research. Molecular hydrogen (H<sub>2</sub>), a geochemical source of energy that can be metabolized by methanogen microbes to provide energy for life, could be present if, as models suggest, Enceladus's salty ocean has an alkaline pH from serpentinization of chondritic rock. 
The presence of an internal global salty ocean with an aquatic environment supported by global ocean circulation patterns, with an energy source and complex organic compounds in contact with Enceladus's rocky core, may advance the study of astrobiology and of potentially habitable environments for microbial extraterrestrial life. Geochemical modeling results concerning then-undetected phosphorus indicated that the moon meets potential abiogenesis requirements. Phosphates have since been detected in a cryovolcanic plume sampled by Cassini, as reported in a paper in the June 14, 2023 issue of Nature entitled "Detection of Phosphates Originating From Enceladus's Ocean". The presence of a wide range of organic compounds and ammonia indicates that their source may be similar to the water/rock reactions known to occur on Earth, which are known to support life. Therefore, several robotic missions have been proposed to further explore Enceladus and assess its habitability. Some of the proposed missions are: Journey to Enceladus and Titan (JET), Enceladus Explorer (En-Ex), Enceladus Life Finder (ELF), Life Investigation For Enceladus (LIFE), and Enceladus Life Signatures and Habitability (ELSAH). In June 2023, astronomers reported that the presence of phosphates on Enceladus had been detected, completing the discovery of all the basic chemical ingredients for life on the moon. ### Hydrothermal vents On April 13, 2017, NASA announced the discovery of possible hydrothermal activity on Enceladus's sub-surface ocean floor. In 2015, the Cassini probe made a close fly-by of Enceladus's south pole, flying within 48.3 km (30.0 mi) of the surface and through a plume in the process. A mass spectrometer on the craft detected molecular hydrogen (H<sub>2</sub>) from the plume, and after months of analysis, the conclusion was made that the hydrogen was most likely the result of hydrothermal activity beneath the surface. It has been speculated that such activity could be a potential oasis of habitability. The presence of ample hydrogen in Enceladus's ocean means that microbes – if any exist there – could use it to obtain energy by combining the hydrogen with carbon dioxide dissolved in the water. The chemical reaction is known as "methanogenesis" because it produces methane as a byproduct, and is at the root of the tree of life on Earth, the birthplace of all life that is known to exist. ## Exploration ### Voyager missions The two Voyager spacecraft obtained the first close-up images of Enceladus. Voyager 1 was the first to fly past Enceladus, at a distance of 202,000 km on November 12, 1980. Images acquired from this distance had very poor spatial resolution, but revealed a highly reflective surface devoid of impact craters, indicating a youthful surface. Voyager 1 also confirmed that Enceladus was embedded in the densest part of Saturn's diffuse E ring. Combined with the apparent youthful appearance of the surface, Voyager scientists suggested that the E ring consisted of particles vented from Enceladus's surface. Voyager 2 passed closer to Enceladus (87,010 km) on August 26, 1981, allowing higher-resolution images to be obtained. These images showed a young surface. They also revealed regions with vastly different surface ages, with a heavily cratered mid- to high-northern latitude region, and a lightly cratered region closer to the equator. 
This geologic diversity contrasts with the ancient, heavily cratered surface of Mimas, another moon of Saturn slightly smaller than Enceladus. The geologically youthful terrains came as a great surprise to the scientific community, because no theory was then able to predict that such a small (and cold, compared to Jupiter's highly active moon Io) celestial body could bear signs of such activity. ### Cassini The answers to many remaining mysteries of Enceladus had to wait until the arrival of the Cassini spacecraft on July 1, 2004, when it entered orbit around Saturn. Given the results from the Voyager 2 images, Enceladus was considered a priority target by the Cassini mission planners, and several targeted flybys within 1,500 km of the surface were planned, as well as numerous "non-targeted" opportunities within 100,000 km of Enceladus. The flybys have yielded significant information concerning Enceladus's surface, as well as the discovery of water vapor with traces of simple hydrocarbons venting from the geologically active south polar region. These discoveries prompted the adjustment of Cassini's flight plan to allow closer flybys of Enceladus, including an encounter in March 2008 that took it to within 48 km of the surface. Cassini's extended mission included seven close flybys of Enceladus between July 2008 and July 2010, including two passes at only 50 km in the latter half of 2008. Cassini performed a flyby on October 28, 2015, passing as close as 49 km (30 mi) and through a plume. Confirmation of molecular hydrogen (H<sub>2</sub>) would be an independent line of evidence that hydrothermal activity is taking place on the Enceladus seafloor, strengthening the case for its habitability. Cassini has provided strong evidence that Enceladus has an ocean with an energy source, nutrients and organic molecules, making Enceladus one of the best places for the study of potentially habitable environments for extraterrestrial life. ### Proposed mission concepts The discoveries Cassini made at Enceladus have prompted studies into follow-up mission concepts, including a probe flyby (Journey to Enceladus and Titan, or JET) to analyze plume contents in situ, a lander by the German Aerospace Center to study the habitability potential of its subsurface ocean (Enceladus Explorer), and two astrobiology-oriented mission concepts (the Enceladus Life Finder and Life Investigation For Enceladus (LIFE)). The European Space Agency (ESA) was assessing concepts in 2008 to send a probe to Enceladus in a mission to be combined with studies of Titan: the Titan Saturn System Mission (TSSM). TSSM was a joint NASA/ESA flagship-class proposal for exploration of Saturn's moons, with a focus on Enceladus, and it was competing against the Europa Jupiter System Mission (EJSM) proposal for funding. In February 2009, it was announced that NASA/ESA had given the EJSM mission priority ahead of TSSM, although TSSM would continue to be studied and evaluated. In November 2017, Russian billionaire Yuri Milner expressed interest in funding a "low-cost, privately funded mission to Enceladus which can be launched relatively soon." In September 2018, NASA and the Breakthrough Initiatives, founded by Milner, signed a cooperation agreement for the mission's initial concept phase. The spacecraft would be low-cost and low-mass, and would be launched at high speed on an affordable rocket. The spacecraft would be directed to perform a single flyby through Enceladus's plumes in order to sample and analyze their content for biosignatures.
NASA provided scientific and technical expertise through various reviews, from March 2019 to December 2019. In 2022, the Planetary Science Decadal Survey by the National Academy of Sciences recommended that NASA prioritize its newest probe concept, the Enceladus Orbilander, as a Flagship-class mission, alongside its newest concepts for a Mars sample-return mission and the Uranus Orbiter and Probe. The Enceladus Orbilander would be launched on a similarly affordable rocket, but would cost about \$5 billion, and be designed to endure eighteen months in orbit inspecting Enceladus' plumes before landing and spending two Earth years conducting surface astrobiology research. ## See also - Enceladus in fiction - List of extraterrestrial volcanoes - List of geological features on Enceladus - List of natural satellites
11,759
McDonnell Douglas F-4 Phantom II
1,173,572,118
Fighter aircraft family
[ "1950s United States fighter aircraft", "Aircraft first flown in 1958", "Articles containing video clips", "Carrier-based aircraft", "Low-wing aircraft", "McDonnell Douglas F-4 Phantom II", "McDonnell aircraft", "Twinjets" ]
The McDonnell Douglas F-4 Phantom II is an American tandem two-seat, twin-engine, all-weather, long-range supersonic jet interceptor and fighter-bomber originally developed by McDonnell Aircraft for the United States Navy. Proving highly adaptable, it entered service with the Navy in 1961 before it was adopted by the United States Marine Corps and the United States Air Force, and by the mid-1960s it had become a major part of their air arms. Phantom production ran from 1958 to 1981 with a total of 5,195 aircraft built, making it the most produced American supersonic military aircraft in history, and cementing its position as a signature combat aircraft of the Cold War. The Phantom is a large fighter with a top speed of over Mach 2.2. It can carry more than 18,000 pounds (8,400 kg) of weapons on nine external hardpoints, including air-to-air missiles, air-to-ground missiles, and various bombs. The F-4, like other interceptors of its time, was initially designed without an internal cannon. Later models incorporated an M61 Vulcan rotary cannon. Beginning in 1959, it set 15 world records for in-flight performance, including an absolute speed record and an absolute altitude record. The F-4 was used extensively during the Vietnam War. It served as the principal air superiority fighter for the U.S. Air Force, Navy, and Marine Corps and became important in the ground-attack and aerial reconnaissance roles late in the war. During the Vietnam War, one U.S. Air Force pilot, two weapon systems officers (WSOs), one U.S. Navy pilot and one radar intercept officer (RIO) became aces by achieving five aerial kills against enemy fighter aircraft. The F-4 continued to form a major part of U.S. military air power throughout the 1970s and 1980s, being gradually replaced by more modern aircraft such as the F-15 Eagle and F-16 Fighting Falcon in the U.S. Air Force, the F-14 Tomcat in the U.S. Navy, and the F/A-18 Hornet in the U.S. Navy and U.S. Marine Corps. The F-4 Phantom II remained in use by the U.S. in the reconnaissance and Wild Weasel (Suppression of Enemy Air Defenses) roles in the 1991 Gulf War, finally leaving service in 1996. It was also the only aircraft used by both U.S. flight demonstration teams: the United States Air Force Thunderbirds (F-4E) and the United States Navy Blue Angels (F-4J). The F-4 was also operated by the armed forces of 11 other nations. Israeli Phantoms saw extensive combat in several Arab–Israeli conflicts, while Iran used its large fleet of Phantoms, acquired before the fall of the Shah, in the Iran–Iraq War. As of 2021, 63 years after its first flight, the F-4 remains in active service with the air forces of Iran, South Korea, Greece, and Turkey. The aircraft has most recently been in service against the Islamic State group in the Middle East. ## Development ### Origins In 1952, McDonnell's Chief of Aerodynamics, Dave Lewis, was appointed by CEO Jim McDonnell to be the company's preliminary design manager. With no new aircraft competitions on the horizon, internal studies concluded the Navy had the greatest need for a new and different aircraft type: an attack fighter. In 1953, McDonnell Aircraft began work on revising its F3H Demon naval fighter, seeking expanded capabilities and better performance. The company developed several projects, including a variant powered by a Wright J67 engine, and variants powered by two Wright J65 engines, or two General Electric J79 engines. The J79-powered version promised a top speed of Mach 1.97. 
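Mach numbers such as the Mach 1.97 quoted for the proposed J79-powered variant, or the Mach 2.2+ quoted for the production Phantom, only map to a ground speed once an altitude is assumed. The sketch below is illustrative only; it assumes the standard-atmosphere speed of sound in the stratosphere (about 295 m/s above roughly 11 km), which is not stated in the article.

```python
# Rough conversion of a Mach number to speed in km/h, assuming the ICAO standard
# atmosphere speed of sound in the stratosphere (~295 m/s above ~11 km altitude).
SPEED_OF_SOUND_HIGH_ALT_MS = 295.0  # assumption: ISA value between ~11 and ~20 km

def mach_to_kmh(mach: float, a_ms: float = SPEED_OF_SOUND_HIGH_ALT_MS) -> float:
    """Convert a Mach number to km/h for the assumed speed of sound."""
    return mach * a_ms * 3.6

print(round(mach_to_kmh(1.97)))  # ~2,092 km/h, the promised Super Demon top speed
print(round(mach_to_kmh(2.23)))  # ~2,368 km/h, the production F-4's quoted top speed
```

At lower altitudes the speed of sound is higher, so the same Mach number corresponds to a faster true airspeed; the figures above are simply a high-altitude reference point.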
On 19 September 1953, McDonnell approached the United States Navy with a proposal for the "Super Demon". Uniquely, the aircraft was to be modular, as it could be fitted with one- or two-seat noses for different missions, with different nose cones to accommodate radar, photo cameras, four 20 mm (.79 in) cannon, or 56 FFAR unguided rockets in addition to the nine hardpoints under the wings and the fuselage. The Navy was sufficiently interested to order a full-scale mock-up of the F3H-G/H, but felt that the upcoming Grumman XF9F-9 and Vought XF8U-1 already satisfied the need for a supersonic fighter. The McDonnell design was therefore reworked into an all-weather fighter-bomber with 11 external hardpoints for weapons and on 18 October 1954, the company received a letter of intent for two YAH-1 prototypes. Then on 26 May 1955, four Navy officers arrived at the McDonnell offices and, within an hour, presented the company with an entirely new set of requirements. Because the Navy already had the Douglas A-4 Skyhawk for ground attack and F-8 Crusader for dogfighting, the project now had to fulfill the need for an all-weather fleet defense interceptor. A second crewman was added to operate the powerful radar; designers believed that air combat in the next war would overload solo pilots with information. ### XF4H-1 prototype The XF4H-1 was designed to carry four semi-recessed AAM-N-6 Sparrow III radar-guided missiles, and to be powered by two J79-GE-8 engines. As in the McDonnell F-101 Voodoo, the engines sat low in the fuselage to maximize internal fuel capacity and ingested air through fixed geometry intakes. The thin-section wing had a leading edge sweep of 45° and was equipped with blown flaps for better low-speed handling. Wind tunnel testing had revealed lateral instability, requiring the addition of 5° dihedral to the wings. To avoid redesigning the titanium central section of the aircraft, McDonnell engineers angled up only the outer portions of the wings by 12°, which averaged to the required 5° over the entire wingspan. The wings also received the distinctive "dogtooth" for improved control at high angles of attack. The all-moving tailplane was given 23° of anhedral to improve control at high angles of attack, while still keeping the tailplane clear of the engine exhaust. In addition, air intakes were equipped with one fixed ramp and one variable geometry ramp with angle scheduled to give maximum pressure recovery between Mach 1.4 and Mach 2.2. Airflow matching between the inlet and engine was achieved by bypassing the engine as secondary air into the exhaust nozzle. All-weather intercept capability was achieved with the AN/APQ-50 radar. To meet requirements for carrier operations, the landing gear was designed to withstand landings with a maximum sink rate of 23 ft/s (7 m/s), while the nose strut could extend by 20 in (51 cm) to increase angle of attack on the catapult portion of a takeoff. On 25 July 1955, the Navy ordered two XF4H-1 test aircraft and five YF4H-1 pre-production examples. The Phantom made its maiden flight on 27 May 1958 with Robert C. Little at the controls. A hydraulic problem precluded the retraction of the landing gear, but subsequent flights went more smoothly. Early testing resulted in redesign of the air intakes, including the distinctive addition of 12,500 holes to "bleed off" the slow-moving boundary layer air from the surface of each intake ramp. Series production aircraft also featured splitter plates to divert the boundary layer away from the engine intakes. 
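The outer-panel dihedral fix described above amounts to a span-weighted average. The sketch below is illustrative only: it assumes the effective dihedral can be approximated as each panel's dihedral weighted by its share of the span, and the 40% outer-panel fraction is a hypothetical figure chosen for the example rather than the F-4's actual geometry.

```python
# Illustrative sketch of a span-weighted average dihedral for a two-panel wing.
# The 40% outer-panel span fraction below is a hypothetical value for the example;
# the real F-4 panel breakdown is not given in the text.

def average_dihedral(panels):
    """panels: iterable of (span_fraction, dihedral_deg); fractions should sum to 1."""
    return sum(fraction * dihedral for fraction, dihedral in panels)

f4_like_wing = [
    (0.60, 0.0),   # inner wing section kept flat to avoid redesigning the titanium center section
    (0.40, 12.0),  # outer panels angled up 12 degrees
]

print(f"{average_dihedral(f4_like_wing):.1f} deg")  # ~4.8 deg, close to the required 5 deg
```

On this reading, any outer-panel share near 5/12 of the span (roughly 42%) reproduces the 5° average quoted above.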
The aircraft was soon in competition with the XF8U-3 Crusader III. Due to cockpit workload, the Navy wanted a two-seat aircraft, and on 17 December 1958 the F4H was declared the winner. Delays with the J79-GE-8 engines meant that the first production aircraft were fitted with J79-GE-2 and −2A engines, each having 16,100 lbf (71.6 kN) of afterburning thrust. In 1959, the Phantom began carrier suitability trials, with the first complete launch-recovery cycle performed on 15 February 1960 from Independence. There were proposals to name the F4H "Satan" and "Mithras". In the end, the aircraft was given the less controversial name "Phantom II", the first "Phantom" being another McDonnell jet fighter, the FH-1 Phantom. The Phantom II was briefly given the designation F-110A and named "Spectre" by the USAF, but these were not officially used, and under the Tri-Service aircraft designation system, adopted in September 1962, it became the F-4. ### Production Early in production, the radar was upgraded to the Westinghouse AN/APQ-72, an AN/APQ-50 with a larger radar antenna, necessitating the bulbous nose, and the canopy was reworked to improve visibility and make the rear cockpit less claustrophobic. During its career the Phantom underwent many changes, with numerous variants developed. The USN operated the F4H-1 (re-designated F-4A in 1962) with J79-GE-2 and -2A engines of 16,100 lbf (71.62 kN) thrust, with later builds receiving -8 engines. A total of 45 F-4As were built; none saw combat, and most ended up as test or training aircraft. The USN and USMC received the first definitive Phantom, the F-4B, which was equipped with the Westinghouse APQ-72 radar (pulse only), a Texas Instruments AAA-4 infrared search and track pod under the nose, and an AN/AJB-3 bombing system, and was powered by J79-GE-8, -8A and -8B engines of 10,900 lbf (48.5 kN) dry and 16,950 lbf (75.4 kN) afterburning (reheat) thrust; the first flight took place on 25 March 1961. A total of 649 F-4Bs were built, with deliveries beginning in 1961 and the VF-121 Pacemakers receiving the first examples at NAS Miramar. The USAF received Phantoms as the result of Defense Secretary Robert McNamara's push to create a unified fighter for all branches of the US military. After an F-4B won the "Operation Highspeed" fly-off against the Convair F-106 Delta Dart, the USAF borrowed two Naval F-4Bs, temporarily designating them F-110A in January 1962, and developed requirements for its own version. Unlike the US Navy's focus on air-to-air interception in the Fleet Air Defense (FAD) mission, the USAF emphasized both an air-to-air and an air-to-ground fighter-bomber role. With McNamara's unification of designations on 18 September 1962, the Phantom became the F-4, with the naval version designated F-4B and the USAF version F-4C. The first Air Force Phantom flew on 27 May 1963, exceeding Mach 2 on its maiden flight. The F-4J improved both air-to-air and ground-attack capability; deliveries began in 1966 and ended in 1972 with 522 built. It was equipped with J79-GE-10 engines with 17,844 lbf (79.374 kN) thrust, the Westinghouse AN/AWG-10 Fire Control System (making the F-4J the first fighter in the world with operational look-down/shoot-down capability), a new integrated missile control system and the AN/AJB-7 bombing system for expanded ground attack capability. The F-4N program (F-4Bs updated with smokeless engines and F-4J aerodynamic improvements) started in 1972 under a U.S. Navy-initiated refurbishment effort called "Project Bee Line", with 228 converted by 1978.
The F-4S model resulted from the refurbishment of 265 F-4Js with J79-GE-17 smokeless engines of 17,900 lbf (79.6 kN), AWG-10B radar with digitized circuitry for improved performance and reliability, the Honeywell AN/AVG-8 Visual Target Acquisition Set or VTAS (the world's first operational helmet sighting system), classified avionics improvements, airframe reinforcement and leading edge slats for enhanced maneuvering. The USMC also operated the RF-4B reconnaissance variant, fitted with cameras, of which 46 were built; the RF-4B flew alone and unarmed, with a requirement to fly straight and level at 5,000 feet while taking photographs. Because this profile left no room for evasive maneuvers, crews relied on the shortcomings of enemy anti-aircraft defenses to survive. Phantom II production ended in the United States in 1979 after 5,195 had been built (5,057 by McDonnell Douglas and 138 in Japan by Mitsubishi). Of these, 2,874 went to the USAF, 1,264 to the Navy and Marine Corps, and the rest to foreign customers. The last U.S.-built F-4 went to South Korea, while the last F-4 built was an F-4EJ built by Mitsubishi Heavy Industries in Japan and delivered on 20 May 1981. As of 2008, 631 Phantoms were in service worldwide, while Phantoms remained in use as target drones (specifically QF-4Cs) operated by the U.S. military until 21 December 2016, when the Air Force officially ended use of the type. ### World records To show off their new fighter, the Navy led a series of record-breaking flights early in Phantom development. All in all, the Phantom set 16 world records, and five of the speed records remained unbeaten until the F-15 Eagle appeared in 1975. - Operation Top Flight: On 6 December 1959, the second XF4H-1 performed a zoom climb to a world record 98,557 ft (30,040 m). Commander Lawrence E. Flint Jr., USN, accelerated his aircraft to Mach 2.5 (2,660 km/h; 1,650 mph) at 47,000 ft (14,330 m) and climbed to 90,000 ft (27,430 m) at a 45° angle. He then shut down the engines and glided to the peak altitude. As the aircraft fell through 70,000 ft (21,300 m), Flint restarted the engines and resumed normal flight. - On 5 September 1960, an F4H-1 averaged 1,216.78 mph (1,958.16 km/h) over a 500 km (311 mi) closed-circuit course. - On 25 September 1960, an F4H-1F averaged 1,390.24 mph (2,237.37 km/h) over a 100 km (62.1 mi) closed-circuit course, FAI Record File Number 8898. - Operation LANA: To celebrate the 50th anniversary of Naval aviation (L is the Roman numeral for 50 and ANA stood for Anniversary of Naval Aviation), on 24 May 1961 Phantoms flew across the continental United States in under three hours, with several tanker refuelings en route. The fastest of the aircraft averaged 869.74 mph (1,400.28 km/h) and completed the trip in 2 hours 47 minutes, earning the pilot (and future NASA astronaut) Lieutenant Richard Gordon, USN, and RIO Lieutenant Bobbie Young, USN, the 1961 Bendix Trophy. - Operation Sageburner: On 28 August 1961, an F4H-1F Phantom II averaged 1,452.777 kilometers per hour (902.714 miles per hour) over a 3 mi (4.82 km) course, flying below 125 feet (38.1 m) at all times. Commander J.L. Felsman, USN, was killed during the first attempt at this record on 18 May 1961 when his aircraft disintegrated in the air after pitch damper failure. - Operation Skyburner: On 22 November 1961, a modified Phantom with water injection, piloted by Lt. Col. Robert B. Robinson, set an absolute world record average speed of 1,606.342 mph (2,585.086 km/h) over a 20-mile (32.2 km) two-way straight course.
- On 5 December 1961, another Phantom set a sustained altitude record of 66,443.8 feet (20,252 m). - Project High Jump: A series of time-to-altitude records was set in early 1962: 34.523 seconds to 3,000 m (9,840 ft), 48.787 seconds to 6,000 m (19,700 ft), 61.629 seconds to 9,000 m (29,500 ft), 77.156 seconds to 12,000 m (39,400 ft), 114.548 seconds to 15,000 m (49,200 ft), 178.5 s to 20,000 m (65,600 ft), 230.44 s to 25,000 m (82,000 ft), and 371.43 s to 30,000 m (98,400 ft). All High Jump records were set by F4H-1 production number 108 (Bureau Number 148423). Two of the records were set by LCdr John Young, later a distinguished NASA astronaut. ## Design ### Overview The F-4 Phantom is a tandem-seat fighter-bomber designed as a carrier-based interceptor to fill the U.S. Navy's fleet defense fighter role. Innovations in the F-4 included an advanced pulse-Doppler radar and extensive use of titanium in its airframe. Despite imposing dimensions and a maximum takeoff weight of over 60,000 lb (27,000 kg), the F-4 has a top speed of Mach 2.23 and an initial climb rate of over 41,000 ft/min (210 m/s). The F-4's nine external hardpoints can carry up to 18,650 pounds (8,480 kg) of weapons, including air-to-air and air-to-surface missiles, and unguided, guided, and thermonuclear weapons. Like other interceptors of its day, the F-4 was designed without an internal cannon. The baseline performance of a Mach 2-class fighter with long range and a bomber-sized payload would be the template for the next generation of large and light/middle-weight fighters optimized for daylight air combat. ### Flight characteristics "Speed is life" was the F-4 pilots' slogan, as the Phantom's greatest advantage in air combat was acceleration and thrust, which permitted a skilled pilot to engage and disengage from the fight at will. MiGs usually could outturn the F-4 because of the high drag on the Phantom's airframe. As a massive fighter aircraft designed to fire radar-guided missiles from beyond visual range, the F-4 lacked the agility of its Soviet opponents and was subject to adverse yaw during hard maneuvering. Although the F-4 was subject to irrecoverable spins during aileron rolls, pilots reported the aircraft to be very responsive and easy to fly on the edge of its performance envelope. In 1972, the F-4E model was upgraded with leading edge slats on the wing, greatly improving high-angle-of-attack maneuverability at the expense of top speed. Compared to earlier engines, the J79 had a reduced time lag between the pilot advancing the throttle from idle and the engine producing maximum thrust. While landing on USS Midway (CV-41), John Chesire's tailhook missed the arresting gear after he mistakenly reduced thrust to idle. He then slammed the throttle to full afterburner; the engine's response was quick enough to return to full thrust, and he was able to get the Phantom airborne again successfully (a bolter). The J79 produced noticeable amounts of black smoke (at mid-throttle/cruise settings), a severe disadvantage in that it made it easier for the enemy to spot the aircraft. Two decades after the aircraft entered service, this was solved on the F-4S, which was fitted with the −10A engine variant with a smokeless combustor. The lack of an internal gun "was the biggest mistake on the F-4", Chesire said; "Bullets are cheap and tend to go where you aim them. I needed a gun, and I really wished I had one." Marine Corps General John R.
Dailey recalled that "everyone in RF-4s wished they had a gun on the aircraft." For a brief period, doctrine held that turning combat would be impossible at supersonic speeds, and little effort was made to teach pilots air combat maneuvering. In reality, engagements quickly became subsonic, as pilots would slow down in an effort to get behind their adversaries. Furthermore, the relatively new heat-seeking and radar-guided missiles of the time were frequently reported as unreliable, and pilots had to fire multiple missiles just to hit one enemy fighter. To compound the problem, rules of engagement in Vietnam precluded long-range missile attacks in most instances, as visual identification was normally required. Many pilots found themselves on the tail of an enemy aircraft, but too close to fire short-range Falcons or Sidewinders. Although by 1965 USAF F-4Cs began carrying SUU-16 external gunpods containing a 20 mm (.79 in) M61A1 Vulcan Gatling cannon, USAF cockpits were not equipped with lead-computing gunsights until the introduction of the SUU-23, virtually assuring a miss in a maneuvering fight. Some Marine Corps aircraft carried two pods for strafing. In addition to the loss of performance due to drag, combat showed the externally mounted cannon to be inaccurate unless frequently boresighted, yet far more cost-effective than missiles. The lack of a cannon was finally addressed by adding an internally mounted 20 mm (.79 in) M61A1 Vulcan on the F-4E. ### Costs Note: Original amounts were in 1965 U.S. dollars. The figures in these tables have been adjusted for inflation to the current year. ## Operational history ### United States Air Force In USAF service, the F-4 was initially designated the F-110A prior to the introduction of the 1962 United States Tri-Service aircraft designation system. The USAF quickly embraced the design and became the largest Phantom user. The first USAF Phantoms in Vietnam were F-4Cs from the 43rd Tactical Fighter Squadron, which arrived in December 1964. Unlike the U.S. Navy and U.S. Marine Corps, which flew the Phantom with a Naval Aviator (pilot) in the front seat and a naval flight officer as a radar intercept officer (RIO) in the back seat, the USAF initially flew its Phantoms with a rated Air Force pilot in both the front and back seats. Pilots usually did not like flying in the back seat; while the GIB, or "guy in back", could fly and ostensibly land the aircraft, he had fewer flight instruments and a very restricted forward view. The Air Force later assigned a rated Air Force navigator qualified as a weapon/targeting systems officer (later designated as weapon systems officer or WSO) to the rear seat instead of another pilot. On 10 July 1965, F-4Cs of the 45th Tactical Fighter Squadron, 15th TFW, on temporary assignment in Ubon, Thailand, scored the USAF's first victories against North Vietnamese MiG-17s using AIM-9 Sidewinder air-to-air missiles. On 26 April 1966, an F-4C from the 480th Tactical Fighter Squadron scored the first aerial victory by a U.S. aircrew over a North Vietnamese MiG-21 "Fishbed". On 24 July 1965, another Phantom from the 45th Tactical Fighter Squadron became the first American aircraft to be downed by an enemy SAM, and on 5 October 1966 an 8th Tactical Fighter Wing F-4C became the first U.S. jet lost to an air-to-air missile, fired by a MiG-21. Early aircraft suffered from leaks in wing fuel tanks that required re-sealing after each flight, and 85 aircraft were found to have cracks in outer wing ribs and stringers.
There were also problems with aileron control cylinders, electrical connectors, and engine compartment fires. Reconnaissance RF-4Cs made their debut in Vietnam on 30 October 1965, flying hazardous post-strike reconnaissance missions. The USAF Thunderbirds used the F-4E from the 1969 season until 1974. Although the F-4C was essentially identical to the Navy/Marine Corps F-4B in flight performance and carried the AIM-9 Sidewinder missiles, USAF-tailored F-4Ds initially arrived in June 1967 equipped with AIM-4 Falcons. However, the Falcon, like its predecessors, was designed to shoot down heavy bombers flying straight and level. Its reliability proved no better than that of other missiles, and its complex firing sequence and limited seeker-head cooling time made it virtually useless in combat against agile fighters. The F-4Ds reverted to using Sidewinders under the "Rivet Haste" program in early 1968, and by 1972 the AIM-7E-2 "Dogfight Sparrow" had become the preferred missile for USAF pilots. Like other Vietnam War Phantoms, the F-4Ds were urgently fitted with radar warning receivers to detect the Soviet-built S-75 Dvina SAMs. From the initial deployment of the F-4C to Southeast Asia, USAF Phantoms performed both air superiority and ground attack roles, supporting not only ground troops in South Vietnam but also conducting bombing sorties in Laos and North Vietnam. As the F-105 force underwent severe attrition between 1965 and 1968, the bombing role of the F-4 proportionately increased, until after November 1970 (when the last F-105D was withdrawn from combat) it became the primary USAF tactical ordnance delivery system. In October 1972, the first squadron of EF-4C Wild Weasel aircraft deployed to Thailand on temporary duty. The "E" prefix was later dropped and the aircraft was simply known as the F-4C Wild Weasel. Sixteen squadrons of Phantoms were permanently deployed between 1965 and 1973, and 17 others deployed on temporary combat assignments. Peak numbers of combat F-4s occurred in 1972, when 353 were based in Thailand. A total of 445 Air Force Phantom fighter-bombers were lost: 370 in combat (33 to MiGs, 30 to SAMs, and 307 to AAA), 193 of them over North Vietnam. The RF-4C was operated by four squadrons; of its 83 losses, 72 were in combat (seven to SAMs and 65 to AAA), including 38 over North Vietnam. By war's end, the U.S. Air Force had lost a total of 528 F-4 and RF-4C Phantoms. When combined with U.S. Navy and Marine Corps losses of 233 Phantoms, 761 F-4/RF-4 Phantoms were lost in the Vietnam War. On 28 August 1972, Captain Steve Ritchie became the first USAF ace of the war. On 9 September 1972, WSO Capt Charles B. DeBellevue became the highest-scoring American ace of the war with six victories, and WSO Capt Jeffrey Feinstein became the last USAF ace of the war on 13 October 1972. Upon return to the United States, DeBellevue and Feinstein were assigned to undergraduate pilot training (Feinstein was given a vision waiver) and requalified as USAF pilots in the F-4. USAF F-4C/D/E crews claimed 107.5 MiG kills in Southeast Asia (50 by Sparrow, 31 by Sidewinder, five by Falcon, 15.5 by gun, and six by other means). On 31 January 1972, the 170th Tactical Fighter Squadron/183d Tactical Fighter Group of the Illinois Air National Guard became the first Air National Guard unit to transition to Phantoms from Republic F-84F Thunderstreaks, which had been found to have corrosion problems.
Phantoms would eventually equip numerous tactical fighter and tactical reconnaissance units in the USAF active, National Guard, and reserve. On 2 June 1972, a Phantom flying at supersonic speed shot down a MiG-19 over Thud Ridge in Vietnam with its cannon. At a recorded speed of Mach 1.2, Major Phil Handley's shoot down was the first and only recorded gun kill while flying at supersonic speeds. On 15 August 1990, 24 F-4G Wild Weasel Vs and six RF-4Cs were deployed to Shaikh Isa AB, Bahrain, for Operation Desert Storm. The F-4G was the only aircraft in the USAF inventory equipped for the Suppression of Enemy Air Defenses (SEAD) role, and was needed to protect coalition aircraft from Iraq's extensive air defense system. The RF-4C was the only aircraft equipped with the ultra-long-range KS-127 LOROP (long-range oblique photography) camera, and was used for a variety of reconnaissance missions. In spite of flying almost daily missions, only one RF-4C was lost in a fatal accident before the start of hostilities. One F-4G was lost when enemy fire damaged the fuel tanks and the aircraft ran out of fuel near a friendly airbase. The last USAF Phantoms, F-4G Wild Weasel Vs from 561st Fighter Squadron, were retired on 26 March 1996. The last operational flight of the F-4G Wild Weasel was from the 190th Fighter Squadron, Idaho Air National Guard, in April 1996. The last operational USAF/ANG F-4 to land was flown by Maj Mike Webb and Maj Gary Leeder of the Idaho ANG. Like the Navy, the Air Force has operated QF-4 target drones, serving with the 82d Aerial Targets Squadron at Tyndall Air Force Base, Florida, and Holloman Air Force Base, New Mexico. It was expected that the F-4 would remain in the target role with the 82d ATRS until at least 2015, when they would be replaced by early versions of the F-16 Fighting Falcon converted to a QF-16 configuration. Several QF-4s also retain capability as manned aircraft and are maintained in historical color schemes, being displayed as part of Air Combat Command's Heritage Flight at air shows, base open houses, and other events while serving as non-expendable target aircraft during the week. On 19 November 2013, BAE Systems delivered the last QF-4 aerial target to the Air Force. The example had been in storage for over 20 years before being converted. Over 16 years, BAE had converted 314 F-4 and RF-4 Phantom IIs into QF-4s and QRF-4s, with each aircraft taking six months to adapt. As of December 2013, QF-4 and QRF-4 aircraft had flown over 16,000 manned and 600 unmanned training sorties, with 250 unmanned aircraft being shot down in firing exercises. The remaining QF-4s and QRF-4s held their training role until the first of 126 QF-16s were delivered by Boeing. The final flight of an Air Force QF-4 from Tyndall AFB took place on 27 May 2015 to Holloman AFB. After Tyndall AFB ceased operations, the 53d Weapons Evaluation Group at Holloman became the fleet of 22 QF-4s' last remaining operator. The base continued using them to fly manned test and unmanned live fire test support and Foreign Military Sales testing, with the final unmanned flight taking place in August 2016. The type was officially retired from US military service with a four–ship flight at Holloman during an event on 21 December 2016. The remaining QF-4s were to be demilitarized after 1 January 2017. ### United States Navy On 30 December 1960, the VF-121 "Pacemakers" at NAS Miramar became the first Phantom operator with its F4H-1Fs (F-4As). 
The VF-74 "Be-devilers" at NAS Oceana became the first deployable Phantom squadron when it received its F4H-1s (F-4Bs) on 8 July 1961. The squadron completed carrier qualifications in October 1961 and Phantom's first full carrier deployment between August 1962 and March 1963 aboard Forrestal. The second deployable U.S. Atlantic Fleet squadron to receive F-4Bs was the VF-102 "Diamondbacks", who promptly took their new aircraft on the shakedown cruise of Enterprise. The first deployable U.S. Pacific Fleet squadron to receive the F-4B was the VF-114 "Aardvarks", which participated in the September 1962 cruise aboard USS Kitty Hawk. By the time of the Tonkin Gulf incident, 13 of 31 deployable navy squadrons were armed with the type. F-4Bs from Constellation made the first Phantom combat sortie of the Vietnam War on 5 August 1964, flying bomber escort in Operation Pierce Arrow. Navy fighter pilots were unused to flying with a non-pilot RIO, but learned from air combat in Vietnam the benefits of the GiB "guy in back" or "voice in the luggage compartment" helping with the workload. The first Phantom air-to-air victory of the war took place on 9 April 1965 when an F-4B from VF-96 "Fighting Falcons" piloted by Lieutenant (junior grade) Terence M. Murphy and his RIO, Ensign Ronald Fegan, shot down a Chinese MiG-17 "Fresco". The Phantom was then shot down, probably by an AIM-7 Sparrow from one of its wingmen. There continues to be controversy over whether the Phantom was shot down by MiG guns or, as enemy reports later indicated, an AIM-7 Sparrow III from one of Murphy's and Fegan's wingmen. On 17 June 1965, an F-4B from VF-21 "Freelancers" piloted by Commander Louis Page and Lieutenant John C. Smith shot down the first North Vietnamese MiG of the war. On 10 May 1972, Lieutenant Randy "Duke" Cunningham and Lieutenant (junior grade) William P. Driscoll flying an F-4J, call sign "Showtime 100", shot down three MiG-17s to become the first American flying aces of the war. Their fifth victory was believed at the time to be over a mysterious North Vietnamese ace, Colonel Nguyen Toon, now considered mythical. On the return flight, the Phantom was damaged by an enemy surface-to-air missile. To avoid being captured, Cunningham and Driscoll flew their burning aircraft using only the rudder and afterburner (the damage to the aircraft rendered conventional control nearly impossible), until they could eject over water. During the war, U.S. Navy F-4 Phantom squadrons participated in 84 combat tours with F-4Bs, F-4Js, and F-4Ns. The Navy claimed 40 air-to-air victories at a cost of 73 Phantoms lost in combat (seven to enemy aircraft, 13 to SAMs, and 53 to AAA). An additional 54 Phantoms were lost in mishaps. In 1984, all Navy F-4Ns were retired from Fleet service in deployable USN squadrons and by 1987 the last F-4Ss were retired from deployable USN squadrons. On 25 March 1986, an F-4S belonging to the VF-151 "Vigilantes," became the last active duty U.S. Navy Phantom to launch from an aircraft carrier, in this case, Midway. On 18 October 1986, an F-4S from the VF-202 "Superheats", a Naval Reserve fighter squadron, made the last-ever Phantom carrier landing while operating aboard America. In 1987, the last of the Naval Reserve-operated F-4S aircraft were replaced by F-14As. The last Phantoms in service with the Navy were QF-4N and QF-4S target drones operated by the Naval Air Warfare Center at NAS Point Mugu, California. These airframes were subsequently retired in 2004. 
### United States Marine Corps The Marine Corps received its first F-4Bs in June 1962, with the "Black Knights" of VMFA-314 at Marine Corps Air Station El Toro, California, becoming the first operational squadron. Marine Phantoms from VMFA-531 "Grey Ghosts" were assigned to Da Nang airbase on South Vietnam's northeast coast on 10 May 1965, initially tasked with providing air defense for the USMC. They soon began close air support (CAS) missions, and VMFA-314 'Black Knights', VMFA-232 'Red Devils', VMFA-323 'Death Rattlers', and VMFA-542 'Bengals' soon arrived at the primitive airfield. Marine F-4 pilots claimed three enemy MiGs (two while on exchange duty with the USAF) at the cost of 75 aircraft lost in combat, mostly to ground fire, and four in accidents. The VMCJ-1 Golden Hawks (later VMAQ-1 and VMAQ-4, which had the old RM tailcode) flew the first photo reconnaissance mission with an RF-4B variant on 3 November 1966 from Da Nang AB, South Vietnam, and remained there until 1970 with no RF-4B losses and only one aircraft damaged by anti-aircraft artillery (AAA) fire. VMCJ-2 and VMCJ-3 (now VMAQ-3) provided aircraft for VMCJ-1 in Da Nang, and VMFP-3 was formed in 1975 at MCAS El Toro, CA, consolidating all USMC RF-4Bs in one unit that became known as "The Eyes of the Corps." VMFP-3 was disestablished in August 1990 after the Advanced Tactical Airborne Reconnaissance System was introduced for the F/A-18D Hornet. The F-4 continued to equip fighter-attack squadrons in both active and reserve Marine Corps units throughout the 1960s, 1970s and 1980s and into the early 1990s. In the early 1980s, these squadrons began to transition to the F/A-18 Hornet, starting with the same squadron that introduced the F-4 to the Marine Corps, VMFA-314 at MCAS El Toro, California. On 18 January 1992, the last Marine Corps Phantom, an F-4S in the Marine Corps Reserve, was retired by the "Cowboys" of VMFA-112 at NAS Dallas, Texas, after which the squadron was re-equipped with F/A-18 Hornets. ### Aerial combat in the Vietnam War The USAF and the US Navy had high expectations of the F-4 Phantom, assuming that its massive firepower, the best available on-board radar, and superior speed and acceleration, coupled with new tactics, would provide Phantoms with an advantage over the MiGs. However, in confrontations with the lighter MiG-21, F-4s did not always succeed and began to suffer losses. Over the course of the air war in Vietnam, between 3 April 1965 and 8 January 1973, each side would ultimately claim favorable kill ratios. During the war, U.S. Navy F-4 Phantoms claimed 40 air-to-air victories at a loss of seven Phantoms to enemy aircraft. USMC F-4 pilots claimed three enemy MiGs at the cost of one aircraft in air combat. USAF F-4 Phantom crews scored 107.5 MiG kills (including 33.5 MiG-17s, eight MiG-19s and 66 MiG-21s) at a cost of 33 Phantoms in air combat. F-4 pilots were credited with a total of 150.5 MiG kills at a cost of 42 Phantoms in air combat. According to the VPAF, 103 F-4 Phantoms were shot down by MiG-21s at a cost of 54 MiG-21s downed by F-4s. During the war, the VPAF lost 131 MiGs in air combat (63 MiG-17s, eight MiG-19s and 60 MiG-21s), of which one half were downed by F-4s. From 1966 to November 1968, in 46 air battles conducted over North Vietnam between F-4s and MiG-21s, the VPAF claimed 27 F-4s shot down by MiG-21s at a cost of 20 MiG-21s. In 1970, one F-4 Phantom was shot down by a MiG-21.
The struggle culminated on 10 May 1972, with VPAF aircraft completing 64 sorties, resulting in 15 air battles. The VPAF claimed seven F-4s were shot down, while U.S. confirmed five F-4s were lost. The Phantoms, in turn, managed to destroy two MiG-21s, three MiG-17s, and one MiG-19. On 11 May, two MiG-21s, which played the role of "bait", brought the four F-4s to two MiG-21s circling at low altitude. The MiGs quickly engaged and shot down two F-4s. On 18 May, Vietnamese aircraft made 26 sorties in eight air engagements, which cost 4 F-4 Phantoms; Vietnamese fighters on that day did not suffer losses. ### Non-U.S. users The Phantom has served with the air forces of many countries, including Australia, Egypt, Germany, United Kingdom, Greece, Iran, Israel, Japan, Spain, South Korea and Turkey. #### Australia The Royal Australian Air Force (RAAF) leased 24 USAF F-4Es from 1970 to 1973 while waiting for their order for the General Dynamics F-111C to be delivered. They were so well-liked that the RAAF considered retaining the aircraft after the F-111Cs were delivered. They were operated from RAAF Amberley by No. 1 Squadron and No. 6 Squadron. #### Egypt In 1979, the Egyptian Air Force purchased 35 former USAF F-4Es along with a number of Sparrow, Sidewinder, and Maverick missiles from the U.S. for \$594 million as part of the "Peace Pharaoh" program. An additional seven surplus USAF aircraft were purchased in 1988. Three attrition replacements had been received by the end of the 1990s. Egyptian F-4Es were retired in 2020, with their former base at Cairo West Air Base being reconfigured for the operation of F-16C/D Fighting Falcons. #### Germany The German Air Force (Luftwaffe) initially ordered the reconnaissance RF-4E in 1969, receiving a total of 88 aircraft from January 1971. In 1973, under the "Peace Rhine" program, the Luftwaffe purchased 175 units of the F-4F. The “F” variant was a more agile version of the “E”, due to its lower weight and slatted wings. However this was achieved at the expense of reduced fuel capacity, and the elimination of AIM-7 Sparrow capability. These purchases made Germany the largest export customer for the Phantom. In 1975, Germany also received 10 F-4Es for training in the U.S. In the late 1990s, these were withdrawn from service after being replaced by F-4Fs. In 1982, the initially unarmed RF-4Es were given a secondary ground attack capability; these aircraft were retired in 1994. The F-4F was upgraded in the mid-1980s. Germany also initiated the Improved Combat Efficiency (ICE) program in 1983. The 110 ICE-upgraded F-4Fs entered service in 1992, and were expected to remain in service until 2012. All the remaining Luftwaffe Phantoms were based at Wittmund with Jagdgeschwader 71 (fighter wing 71) in Northern Germany and WTD61 at Manching. A total of 24 German F-4F Phantom IIs were operated by the 49th Tactical Fighter Wing of the USAF at Holloman AFB to train Luftwaffe crews until December 2004. Phantoms were deployed to NATO states under the Baltic Air Policing starting in 2005, 2008, 2009, 2011 and 2012. The German Air Force retired its last F-4Fs on 29 June 2013. German F-4Fs flew 279,000 hours from entering service on 31 August 1973 until retirement. #### Greece In 1971, the Hellenic Air Force ordered brand new F-4E Phantoms, with deliveries starting in 1974. In the early 1990s, the Hellenic AF acquired surplus RF-4Es and F-4Es from the Luftwaffe and U.S. ANG. 
Following the success of the German ICE program, on 11 August 1997, a contract was signed between DASA of Germany and Hellenic Aerospace Industry for the upgrade of 39 aircraft to the very similar "Peace Icarus 2000" standard. On 5 May 2017, the Hellenic Air Force officially retired the RF-4E Phantom II during a public ceremony. #### Iran In the 1960s and 1970s when the U.S. and Iran were on friendly terms, the U.S. delivered 225 F-4D, F-4E, and RF-4E Phantoms to Iran, making it the second largest export customer. The Imperial Iranian Air Force saw at least one engagement, resulting in a loss, after an RF-4C was rammed by a Soviet MiG-21 during Project Dark Gene, an ELINT operation during the Cold War. The Islamic Republic of Iran Air Force Phantoms saw heavy action in the Iran–Iraq War in the 1980s and were kept operational by overhaul and servicing from Iran's aerospace industry. Notable operations of Iranian F-4s during the war included Operation Scorch Sword, an attack by two F-4s against the Iraqi Osirak nuclear reactor site near Baghdad on 30 September 1980, and the attack on H3, a 4 April 1981 strike by eight Iranian F-4s against the H-3 complex of air bases in the far west of Iraq, which resulted in many Iraqi aircraft being destroyed or damaged for no Iranian losses. On 5 June 1984, two Saudi Arabian fighter pilots shot down two Iranian F-4 fighters. The Royal Saudi Air Force pilots were flying American-built F-15s and fired air-to-air missiles to bring down the Iranian planes. The Saudi fighter pilots had Boeing KC-135 Stratotanker planes and Boeing E-3 Sentry AWACS surveillance planes assist in the encounter. The aerial fight occurred in Saudi airspace over the Persian Gulf near the Saudi island Al Arabiyah, about 60 miles northeast of Jubail. Iranian F-4s were in use as of late 2014; the aircraft reportedly conducted air strikes on ISIS targets in the eastern Iraqi province of Diyala. #### Israel The Israeli Air Force acquired between 212 and 222 newly built and ex-USAF aircraft, and modified several as one-off special reconnaissance variants. The first F-4Es, nicknamed "Kurnass" (Sledgehammer), and RF-4Es, nicknamed "Orev" (Raven), were delivered in 1969 under the "Peace Echo I" program. Additional Phantoms arrived during the 1970s under "Peace Echo II" through "Peace Echo V" and "Nickel Grass" programs. Israeli Phantoms saw extensive combat during Arab–Israeli conflicts, first seeing action during the War of Attrition. In the 1980s, Israel began the "Kurnass 2000" modernization program which significantly updated avionics. The last Israeli F-4s were retired in 2004. #### Japan From 1968, the Japan Air Self-Defense Force (JASDF) purchased a total of 140 F-4EJ Phantoms without aerial refueling, AGM-12 Bullpup missile system, nuclear control system or ground attack capabilities. Mitsubishi built 138 under license in Japan and 14 unarmed reconnaissance RF-4Es were imported. One of the aircraft (17-8440) was the last of the 5,195 F-4 Phantoms to be produced. It was manufactured by Mitsubishi Heavy Industries on 21 May 1981. "The Final Phantom" served with 306th Tactical Fighter Squadron and later transferred to the 301st Tactical Fighter Squadron. Of these, 96 F-4EJs were modified to the F-4EJ Kai (改, modified) standard. 15 F-4EJ and F-4EJ Kai were converted to reconnaissance aircraft designated RF-4EJ. Japan had a fleet of 90 F-4s in service in 2007. After studying several replacement fighters the F-35A Lightning II was chosen in 2011. 
The 302nd Tactical Fighter Squadron became the first JASDF F-35 squadron at Misawa Air Base when it converted from the F-4EJ Kai on 29 March 2019. The JASDF's sole aerial reconnaissance unit, the 501st Tactical Reconnaissance Squadron, retired its RF-4Es and RF-4EJs on 9 March 2020, and the unit itself was dissolved on 26 March. The 301st Tactical Fighter Squadron then became the sole user of the F-4EJ in the Air Defense Command, with retirement originally scheduled for 2021 along with the unit's transition to the F-35A. However, on 20 November 2020, the 301st Tactical Fighter Squadron announced the earlier retirement of its remaining F-4EJs, concluding the Phantom's long-running career in the JASDF Air Defense Command. Although retirement had been announced, the 301st TFS continued operations until 10 December 2020, with the squadron's Phantoms being decommissioned on 14 December. Two F-4EJs and an F-4EJ Kai continued to be operated by the Air Development and Test Wing in Gifu Prefecture until their retirement on 17 March 2021, marking the end of Phantom operations in Japan. #### South Korea The Republic of Korea Air Force purchased its first batch of secondhand USAF F-4D Phantoms in 1968 under the "Peace Spectator" program. The F-4Ds continued to be delivered until 1988. The "Peace Pheasant II" program also provided new-built and former USAF F-4Es. #### Spain The Spanish Air Force acquired its first batch of ex-USAF F-4C Phantoms in 1971 under the "Peace Alfa" program. Designated C.12, the aircraft were retired in 1989. At the same time, the air arm received a number of ex-USAF RF-4Cs, designated CR.12. In 1995–1996, these aircraft received extensive avionics upgrades. Spain retired its RF-4s in 2002. #### Turkey The Turkish Air Force (TAF) received 40 F-4Es in 1974, with a further 32 F-4Es and 8 RF-4Es in 1977–78 under the "Peace Diamond III" program, followed by 40 ex-USAF aircraft in "Peace Diamond IV" in 1987, and a further 40 ex-U.S. Air National Guard aircraft in 1991. A further 32 RF-4Es were transferred to Turkey after being retired by the Luftwaffe between 1992 and 1994. In 1995, Israel Aerospace Industries (IAI) implemented an upgrade similar to Kurnass 2000 on 54 Turkish F-4Es, which were dubbed the F-4E 2020 Terminator. Turkish F-4s, and more modern F-16s, have been used to strike Kurdish PKK bases in ongoing military operations in Northern Iraq. On 22 June 2012, a Turkish RF-4E was shot down by Syrian air defenses while on a reconnaissance flight near the Turkish-Syrian border. Turkey stated that the reconnaissance aircraft was in international airspace when it was shot down, while Syrian authorities stated it was inside Syrian airspace. Turkish F-4s remained in use as of 2020, and Turkey plans to fly them until at least 2030. On 24 February 2015, two RF-4Es crashed in the Malatya region in the southeast of Turkey, under circumstances that remain unclear, killing both two-man crews. On 5 March 2015, an F-4E 2020 crashed in central Anatolia, killing both crew members. After these accidents, the TAF withdrew the RF-4E from active service. Turkey was reported to have used F-4 jets to attack PKK separatists and the ISIS capital on 19 September 2015. The Turkish Air Force reportedly used F-4E 2020s during the more recent third phase of the PKK conflict, on heavy bombardment missions into Iraq on 15 November 2015, 12 January 2016, and 12 March 2016. #### United Kingdom The United Kingdom bought versions based on the U.S.
Navy's F-4J for use with the Royal Air Force and the Royal Navy's Fleet Air Arm. The UK was the only country outside the United States to operate the Phantom at sea, operating them from HMS Ark Royal. The main differences were the use of British Rolls-Royce Spey engines and British-made avionics. The RN and RAF versions were given the designations F-4K and F-4M respectively, and entered service with the British military aircraft designations Phantom FG.1 (fighter/ground attack) and Phantom FGR.2 (fighter/ground attack/reconnaissance). Initially, the FGR.2 was used in the ground attack and reconnaissance role, primarily with RAF Germany, while 43 Squadron was formed in the air defense role using the FG.1s that had been intended for the Fleet Air Arm for use aboard HMS Eagle. The superiority of the Phantom over the English Electric Lightning in terms of both range and weapons system capability, combined with the successful introduction of the SEPECAT Jaguar, meant that, during the mid-1970s, most of the ground attack Phantoms in Germany were redeployed to the UK to replace air defense Lightning squadrons. A second RAF squadron, 111 Squadron, was formed on the FG.1 in 1979 after the disbandment of 892 NAS. In 1982, during the Falklands War, three Phantom FGR.2s of No. 29 Squadron were on active Quick Reaction Alert duty on Ascension Island to protect the base from air attack. After the Falklands War, 15 upgraded ex-USN F-4Js, known as the F-4J(UK), entered RAF service to compensate for one interceptor squadron redeployed to the Falklands. Around 15 RAF squadrons received various marks of Phantom, many of them based in Germany. The first to be equipped was No. 228 Operational Conversion Unit at RAF Coningsby in August 1968. One noteworthy operator was No. 43 Squadron, where Phantom FG.1s remained the squadron's equipment for 20 years, arriving in September 1969 and departing in July 1989. During this period the squadron was based at Leuchars. The interceptor Phantoms were replaced by the Panavia Tornado F3 from the late 1980s onwards. The Phantom had originally been expected to serve until 2003, but this was brought forward to 1992 due to restructuring of the British Armed Forces, and the last combat British Phantoms were retired in October 1992 when No. 74(F) Squadron was disbanded. Phantom FG.1 XT597, retired on 28 January 1994, was the last British Phantom; it had been used as a test jet by the Aeroplane and Armament Experimental Establishment for its whole service life. ### Civilian use Sandia National Laboratories expended an F-4 mounted on a "rocket sled" in a crash test to record the results of an aircraft impacting a reinforced concrete structure, such as a nuclear power plant. One aircraft, an F-4D (civilian registration NX749CF), is operated by the Massachusetts-based non-profit organization Collings Foundation as a "living history" exhibit. Funds to maintain and operate the aircraft, which is based in Houston, Texas, are raised through donations and sponsorships from public and commercial parties. After finding the Lockheed F-104 Starfighter inadequate, NASA used the F-4 to photograph and film Titan II missiles after launch from Cape Canaveral during the 1960s. Retired U.S. Air Force colonel Jack Petry described how he put his F-4 into a Mach 1.2 dive synchronized to the launch countdown, then "walked the (rocket's) contrail". Petry's Phantom stayed with the Titan for 90 seconds, reaching 68,000 feet, then broke away as the missile continued into space.
NASA's Dryden Flight Research Center acquired an F-4A on 3 December 1965. It made 55 flights in support of short programs, chase on X-15 missions and lifting body flights. The F-4 also supported a biomedical monitoring program involving 1,000 flights by NASA Flight Research Center aerospace research pilots and students of the USAF Aerospace Research Pilot School flying high-performance aircraft. The pilots were instrumented to record accurate and reliable data of electrocardiogram, respiration rate, and normal acceleration. In 1967, the Phantom supported a brief military-inspired program to determine whether an airplane's sonic boom could be directed and whether it could be used as a weapon of sorts, or at least an annoyance. NASA also flew an F-4C in a spanwise blowing study from 1983 to 1985, after which it was returned. ## Variants F-4A, B, J, N and S Variants for the U.S. Navy and the U.S. Marine Corps. F-4B was upgraded to F-4N, and F-4J was upgraded to F-4S. F-110 (original USAF designation for F-4C), F-4C, D and E Variants for the U.S. Air Force. F-4E introduced an internal M61 Vulcan cannon. The F-4D and E were the most numerously produced, widely exported, and also extensively used under the Semi Automatic Ground Environment (SAGE) U.S. air defense system. F-4G Wild Weasel V A dedicated SEAD variant for the U.S. Air Force with updated radar and avionics, converted from F-4E. The designation F-4G was applied earlier to an entirely different U.S. Navy Phantom. F-4K and M Variants for the Royal Navy and Royal Air Force, respectively, re-engined with Rolls-Royce Spey turbofan engines. F-4EJ and RF-4EJ Simplified F-4E exported to and license-built in Japan. Some modified for reconnaissance role, carrying photographic and/or electronic reconnaissance pods and designated RF-4EJ. F-4F Simplified F-4E exported to Germany. QRF-4C, QF-4B, E, G, N and S Retired aircraft converted into remote-controlled target drones used for weapons and defensive systems research by USAF and USN / USMC. RF-4B, C, and E Tactical reconnaissance variants. 
## Operators ### Operators Greece - Hellenic Air Force – 18 F-4E AUPs in service - Andravida Air Base, Elis - 338th Fighter-Bomber Squadron Iran - Islamic Republic of Iran Air Force – 62 F-4D, F-4E, and RF-4Es in service - Bandar Abbas Air Base, Hormozgan Province - 91st Tactical Fighter Squadron (F-4E) - Bushehr Air Base, Bushehr Province - 61st Tactical Fighter Squadron (F-4E) - Chabahar Konarak Air Base, Sistan and Baluchestan Province - 101st Tactical Fighter Squadron (F-4D) - Hamadan Air Base, Hamadan Province - 31st Tactical Reconnaissance Squadron (RF-4E) - 31st Tactical Fighter Squadron (F-4E) South Korea - Republic of Korea Air Force – 27 F-4Es in service - Suwon Air Base, Gyeonggi Province - 153rd Fighter Squadron Turkey - Turkish Air Force – 54 F-4E 2020 Terminators in service - Eskişehir Air Base, Eskişehir Province - 111 Filo ### Former operators Australia - Royal Australian Air Force (F-4E 1970 to 1973) Egypt - Egyptian Air Force (F-4E 1977 to 2020) Germany - German Air Force (RF-4E 1971 to 1994; F-4F 1973 to 2013; F-4E 1978 to 1992) Greece - Hellenic Air Force (RF-4E 1978 to 2017) Pahlavi Iran - Imperial Iranian Air Force (F-4D 1968 to 1979; F-4E 1971 to 1979; RF-4E 1971 to 1979) ; - Israeli Air Force (F-4E 1969 to 2004; RF-4C 1970 to 1971; RF-4E 1971 to 2004) Japan - Japan Air Self-Defense Force (F-4EJ 1971 to 2021; RF-4E 1974 to 2020; RF-4EJ 1992 to 2020) South Korea - Republic of Korea Air Force (F-4D 1969 to 2010; RF-4C 1989 to 2014) Spain - Spanish Air Force (F-4C 1971 to 1990; RF-4C 1978 to 2002) Turkey - Turkish Air Force (RF-4E 1980 to 2015) United Kingdom - Aeroplane and Armament Experimental Establishment (F-4K 1970 to 1994) - Fleet Air Arm (F-4K 1968 to 1978) - Royal Air Force (F-4M 1968 to 1992; F-4K 1969 to 1990; F-4J(UK) 1984 to 1991) United States - NASA (F-4A 1965 to 1967; F-4C 1983 to 1985) - United States Air Force (F-4B 1963 to 1964; F-4C 1964 to 1989; RF-4C 1964 to 1995; F-4D 1965 to 1992; F-4E 1967 to 1991; F-4G 1978 to 1996; QF-4 1996 to 2016) - United States Marine Corps (F-4B 1962 to 1979; RF-4B 1965 to 1990; F-4J 1967 to 1984; F-4N 1973 to 1985; F-4S 1978 to 1992) - United States Navy (F-4A 1960 to 1968; F-4B 1961 to 1974; F-4J 1966 to 1982; F-4N 1973 to 1984; F-4S 1979 to 1987; QF-4 1983 to 2004) ## Culture ### Nicknames The Phantom gathered a number of nicknames during its career. Some of these names included "Snoopy", "Rhino", "Double Ugly", "Old Smokey", the "Flying Anvil", "Flying Footlocker", "Flying Brick", "Lead Sled", the "Big Iron Sled", and the "St. Louis Slugger". In recognition of its record of downing large numbers of Soviet-built MiGs, it was called the "World's Leading Distributor of MiG Parts". As a reflection of excellent performance in spite of its bulk, the F-4 was dubbed "the triumph of thrust over aerodynamics." German Luftwaffe crews called their F-4s the Eisenschwein ("Iron Pig"), Fliegender Ziegelstein ("Flying Brick") and Luftverteidigungsdiesel ("Air Defense Diesel"). In the RAF it was most commonly referred to as “The Toom” (not tomb). ### Reputation Imitating the spelling of the aircraft's name, McDonnell issued a series of patches. Pilots became "Phantom Phlyers", backseaters became "Phantom Pherrets", fans of the F-4 "Phantom Phanatics", and call it the "Phabulous Phantom". Ground crewmen who worked on the aircraft are known as "Phantom Phixers". Several active websites are devoted to sharing information on the F-4, and the aircraft is grudgingly admired as brutally effective by those who have flown it. Colonel (Ret.) 
Chuck DeBellevue reminisced, "The F-4 Phantom was the last plane that looked like it was made to kill somebody. It was a beast. It could go through a flock of birds and kick out barbeque from the back." It had "a reputation of being a clumsy bruiser reliant on brute engine power and obsolete weapons technology." ### The Spook The aircraft's emblem is a whimsical cartoon ghost called "The Spook", which was created by McDonnell Douglas technical artist Anthony "Tony" Wong for shoulder patches. The name "Spook" was coined by the crews of either the 12th Tactical Fighter Wing or the 4453rd Combat Crew Training Wing at MacDill AFB. The figure is ubiquitous, appearing on many items associated with the F-4. The Spook has followed the Phantom around the world, adopting local fashions; for example, the British adaptation of the U.S. "Phantom Man" is a Spook that sometimes wears a bowler hat and smokes a pipe. ## Aircraft on display As a result of its extensive number of operators and the large number of aircraft produced, there are many F-4 Phantom IIs of numerous variants on display worldwide. ## Notable accidents - On 6 June 1971, Hughes Airwest Flight 706, a McDonnell Douglas DC-9-31, collided in mid-air with a United States Marine Corps F-4B Phantom above the San Gabriel Mountains while en route from Los Angeles International Airport to Salt Lake City. All 49 on board the DC-9 were killed, as was the pilot of the F-4B, who was unable to eject and died when the aircraft crashed shortly afterwards. The F-4B's Radar Intercept Officer successfully ejected and parachuted to safety, becoming the sole survivor of the incident. - On 9 August 1974, a Royal Air Force Phantom FGR2 was involved in a fatal collision with a civilian PA-25-235 Pawnee crop-sprayer over Norfolk, England (Aircraft Accident Report 975). - On 21 March 1987, Captain Dean Paul Martin, a pilot in the 163d Tactical Fighter Group of the California Air National Guard and son of entertainer Dean Martin, crashed his F-4C into San Gorgonio Mountain, California, shortly after departure from March Air Force Base. Both Martin and his weapon systems officer (WSO), Captain Ramon Ortiz, were killed. - On 30 January 2023, a Greek Air Force F-4E Phantom II crashed into the Ionian Sea. The aircraft was conducting a training exercise when it crashed 46 km south of Andravida Air Base. The pilot, Captain Efstathios Tsitlakidis, and the co-pilot, First Lieutenant Marios Michael Touroutsikas, were killed in the crash. ## Specifications (F-4E) ## See also
38,820,233
Marvel Science Stories
1,169,989,892
American pulp science fiction magazine
[ "Defunct science fiction magazines published in the United States", "Fantasy fiction magazines", "Magazines disestablished in 1941", "Magazines disestablished in 1952", "Magazines established in 1938", "Magazines reestablished in 1950", "Pulp magazines", "Science fiction magazines established in the 1930s" ]
Marvel Science Stories was an American pulp magazine that ran for a total of fifteen issues in two separate runs, both edited by Robert O. Erisman. The first run was published by Postal Publications and then Western Publishing, and the second run by Stadium Publishing; all of these companies were owned by Abraham and Martin Goodman. The first issue was dated August 1938, and carried stories with more sexual content than was usual for the genre, including several stories by Henry Kuttner, under his own name and also under pseudonyms. Reaction was generally negative, with one reader referring to Kuttner's story "The Time Trap" as "trash". This was the first of several titles featuring the word "Marvel", and Marvel Comics came from the same stable in the following year. The magazine was canceled after the April 1941 issue, but when a boom in science fiction magazines began in 1950, the publishers revived it. The first issue of the new series was dated November 1950; a further six issues appeared, the last dated May 1952. In addition to Kuttner, contributors to the first run included Arthur J. Burks and Jack Williamson; the second run published stories by Arthur C. Clarke, Isaac Asimov, Jack Vance, and L. Sprague de Camp, among others. In the opinion of science fiction historian Joseph Marchesani, the quality of the second incarnation of the magazine was superior to the first, but it was unable to compete with the new higher-quality magazines that had appeared in the interim. ## Publication history Although science fiction (sf) had been published before the 1920s, it did not begin to coalesce into a separately marketed genre until the appearance in 1926 of Amazing Stories, a pulp magazine published by Hugo Gernsback. After 1931, when Miracle Science and Fantasy Stories was launched, no new sf magazines appeared for several years. In 1936 Abraham and Martin Goodman, two brothers who owned a publishing company with multiple imprints, launched Ka-Zar, an imitation Tarzan magazine with some borderline sf content. It lasted for three issues, with the last issue dated January 1937. In addition to this marginal sf magazine, in 1937 the Goodmans began publishing several "weird-menace" pulps. These were a genre of pulp magazine known for incorporating "sex and sadism", with storylines that placed women in danger, usually because of a threat that appeared to be supernatural but was ultimately revealed to be the work of a human villain. The Goodmans' titles were Detective Short Stories, launched in August 1937, and Mystery Tales, which published its first issue in March 1938. These were followed by Marvel Science Stories, edited by Robert O. Erisman, which was not intended to be a weird-menace pulp but rather an sf magazine. The influence of the "sex and sadism" side of the Goodmans' portfolio of magazines was apparent, however: authors were sometimes asked to add more sex to their stories than was usual in sf at the time. This was the first time that the word "Marvel" was used in the title of a Goodman publication. It went on to be used in other titles, notably Marvel Comics in the following year. The word may have been chosen to appeal to advertisers Marvel Home Utilities and Marvel Mystery Oil, or it may have been that Martin Goodman liked the name because it was similar to his own. The first issue, dated August 1938, appeared on newsstands in May of that year. It contained "Survival" by Arthur J. Burks as the lead novel; this was well received by readers and did not contain any sexual content.
The first couple of issues contained several stories that did little to offend readers, but they also contained two stories by Henry Kuttner, who was selling regularly to the Goodmans' other publications. Erisman and the Goodmans had asked Kuttner to spice up his submissions to Marvel Science Stories. He obliged with "Avengers of Space" in the first issue, which included "scenes of aliens lusting after unclothed Earth women", in the words of sf historian Mike Ashley, and "The Time Trap" in the second issue. Reader reaction was strongly negative: a typical letter, from William Hamling, later to become a publisher and editor of science fiction magazines in his own right, commented: "I was just about to write you a letter of complete congratulations when my eyes fell upon Kuttner's 'The Time Trap'. All I can say is: PLEASE, in the future, dislodge such trash from your magazine". In addition to these two stories published under Kuttner's name, there were two more stories in the same two issues by him (under pseudonyms) which were equally offensive to readers such as Hamling. After five issues, the title was changed to Marvel Tales; at the same time, the number of stories advertised as "passionate" or containing "sin-lost" or "lust-crazed" characters sharply increased. Though some stories contained little to match the titillating blurbs, others did, with "women entrapped, burned and otherwise maltreated, and whips cracking into use with uninventive frequency", according to sf historian Joseph Marchesani. While women with large breasts often appeared on pulp magazine covers, Marvel's content was unusually explicit. Isaac Asimov wrote in 1969 of how in "1938-39 ... for some half a dozen issues or so, a magazine I won't name" published "spicy" stories about "the hot passion of alien monsters for Earthwomen. Clothes were always getting ripped off and breasts were described in a variety of elliptical phrases" for its "few readers" before "the magazine died a deserved death". The magazine ceased publication with the April 1941 issue, but in 1950 the Goodmans saw an opportunity to revive it when a new boom in science fiction magazines got under way. Erisman was still working for the Goodmans, and was listed as editorial director of the new version of the magazine, but much of the editorial work was done by Daniel Keyes, who was credited as "Editorial Associate" on the 1951 issues. The first issue of the new incarnation of Marvel Science Stories was dated November 1950. After two issues Erisman switched the magazine to a digest format, but the final issue, dated May 1952, was once again a pulp. The post-war issues contained stories by well-known writers, including Arthur C. Clarke, Asimov, Richard Matheson, William Tenn, Jack Vance, and Lester del Rey, but the stories were of only average quality. In Marchesani's opinion, Erisman and Keyes were able to improve on the material published in the pre-war Marvel Tales, but the field had grown more sophisticated since those days, and the writers who sold to Marvel Tales were now publishing their best work elsewhere. William Knoles's 1960 Playboy article on the pulp era, "Girls for the Slime God", was, Asimov said, mostly based on Marvel. ## Bibliographic details There were nine issues in the first sequence, in one volume of six numbers and a second volume of three numbers. All issues in the first run were in pulp format and were priced at 15 cents. The first four issues were 128 pages; the next five were 112 pages.
The title was Marvel Science Stories for five issues, then Marvel Tales for two issues, and then Marvel Stories for the last two issues of the first run. The publisher for the first series was listed as Postal Publications of Chicago for the first four issues, and as Western Publishing of New York and Chicago for the remaining five; in both cases the owners were Martin and Abraham Goodman. The intended schedule was bimonthly, but this was never achieved. The editor was Robert O. Erisman. The second incarnation of the magazine lasted for six issues on a more regular quarterly schedule, starting in November 1950. The price was 25 cents and the page count was 128 pages for all six issues; the first two issues and last issue of this sequence were in pulp format, and the three from May 1951 to November 1951 were in digest format. The title returned to Marvel Science Stories for the first three issues of this series, and changed to Marvel Science Fiction for the last three issues. The publisher was listed as Stadium Publishing of New York; as with the first series, Martin and Abraham Goodman were the owners. There was a British reprint of the February 1951 issue, published by Thorpe & Porter and dated May 1951. Science fiction bibliographer Brad Day lists five other British reprints of the second series of Marvel Science Stories, but no copies are recorded by more recent bibliographers. In 1977 the Goodmans launched a digest science fiction magazine titled Skyworlds, which has been described by Mike Ashley as "without any shadow of a doubt, the worst" of the 1970s crop of science fiction magazines; the fiction it contained was almost entirely reprinted from the second series of Marvel Science Stories.
8,389
Major depressive disorder
1,171,777,421
Mental disorder involving persistent low mood, low self-esteem, and loss of interest
[ "Major depressive disorder", "Mood disorders", "Wikipedia medicine articles ready to translate", "Wikipedia neurology articles ready to translate" ]
Major depressive disorder (MDD), also known as clinical depression, is a mental disorder characterized by at least two weeks of pervasive low mood, low self-esteem, and loss of interest or pleasure in normally enjoyable activities. Introduced by a group of US clinicians in the mid-1970s, the term was adopted by the American Psychiatric Association for this symptom cluster under mood disorders in the 1980 version of the Diagnostic and Statistical Manual of Mental Disorders (DSM-III), and has become widely used since. The diagnosis of major depressive disorder is based on the person's reported experiences, behavior reported by relatives or friends, and a mental status examination. There is no laboratory test for the disorder, but testing may be done to rule out physical conditions that can cause similar symptoms. The most common time of onset is in a person's 20s, with females affected about twice as often as males. The course of the disorder varies widely, from one episode lasting months to a lifelong disorder with recurrent major depressive episodes. Those with major depressive disorder are typically treated with psychotherapy and antidepressant medication. Medication appears to be effective, but the effect may be significant only in the most severely depressed. Hospitalization (which may be involuntary) may be necessary in cases with associated self-neglect or a significant risk of harm to self or others. Electroconvulsive therapy (ECT) may be considered if other measures are not effective. Major depressive disorder is believed to be caused by a combination of genetic, environmental, and psychological factors, with about 40% of the risk being genetic. Risk factors include a family history of the condition, major life changes, certain medications, chronic health problems, and substance use disorders. It can negatively affect a person's personal life, work life, or education, and cause issues with a person's sleeping habits, eating habits, and general health. Major depressive disorder affected approximately 163 million people (2% of the world's population) in 2017. The percentage of people who are affected at one point in their life varies from 7% in Japan to 21% in France. Lifetime rates are higher in the developed world (15%) compared to the developing world (11%). The disorder causes the second-most years lived with disability, after lower back pain. ## Symptoms and signs Major depression significantly affects a person's family and personal relationships, work or school life, sleeping and eating habits, and general health. A person having a major depressive episode usually exhibits a low mood, which pervades all aspects of life, and an inability to experience pleasure in previously enjoyable activities. Depressed people may be preoccupied with—or ruminate over—thoughts and feelings of worthlessness, inappropriate guilt or regret, helplessness or hopelessness. Other symptoms of depression include poor concentration and memory, withdrawal from social situations and activities, reduced sex drive, irritability, and thoughts of death or suicide. Insomnia is common; in the typical pattern, a person wakes very early and cannot get back to sleep. Hypersomnia, or oversleeping, can also happen, as well as day-night rhythm disturbances, such as diurnal mood variation. Some antidepressants may also cause insomnia due to their stimulating effect. In severe cases, depressed people may have psychotic symptoms. These symptoms include delusions or, less commonly, hallucinations, usually unpleasant. 
People who have had previous episodes with psychotic symptoms are more likely to have them with future episodes. A depressed person may report multiple physical symptoms such as fatigue, headaches, or digestive problems; physical complaints are the most common presenting problem in developing countries, according to the World Health Organization's criteria for depression. Appetite often decreases, resulting in weight loss, although increased appetite and weight gain occasionally occur. Family and friends may notice agitation or lethargy. Older depressed people may have cognitive symptoms of recent onset, such as forgetfulness, and a more noticeable slowing of movements. Depressed children often display an irritable rather than a depressed mood; most lose interest in school and show a steep decline in academic performance. Diagnosis may be delayed or missed when symptoms are interpreted as "normal moodiness." Elderly people may not present with classical depressive symptoms. Diagnosis and treatment are further complicated in that the elderly are often simultaneously treated with a number of other drugs, and often have other concurrent diseases. ## Cause The etiology of depression is not yet fully understood. The biopsychosocial model proposes that biological, psychological, and social factors all play a role in causing depression. The diathesis–stress model specifies that depression results when a preexisting vulnerability, or diathesis, is activated by stressful life events. The preexisting vulnerability can be either genetic, implying an interaction between nature and nurture, or schematic, resulting from views of the world learned in childhood. American psychiatrist Aaron Beck suggested that a triad of automatic and spontaneous negative thoughts about the self, the world or environment, and the future may lead to other depressive signs and symptoms. Adverse childhood experiences (incorporating childhood abuse, neglect and family dysfunction) markedly increase the risk of major depression, especially if more than one type is involved. Childhood trauma also correlates with severity of depression, poor responsiveness to treatment and length of illness. Some people are more susceptible than others to developing mental illness such as depression after trauma, and various genes have been suggested to control susceptibility. There appears to be a link between air pollution and depression and suicide. There may be an association between long-term PM2.5 exposure and depression, and a possible association between short-term PM10 exposure and suicide. ### Genetics Genes play a major role in the development of depression. Family and twin studies find that nearly 40% of individual differences in risk for major depressive disorder can be explained by genetic factors. Like most psychiatric disorders, major depressive disorder is likely influenced by many individual genetic changes. In 2018, a genome-wide association study discovered 44 genetic variants linked to risk for major depression; a 2019 study found 102 variants in the genome linked to depression. However, major depression appears to be less heritable than bipolar disorder and schizophrenia. Research focusing on specific candidate genes has been criticized for its tendency to generate false positive findings. There are also other efforts to examine interactions between life stress and polygenic risk for depression.
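As a rough illustration of how twin studies arrive at a heritability estimate of this kind, Falconer's classic approximation doubles the difference in similarity between identical (MZ) and fraternal (DZ) twin pairs; the correlation values below are illustrative assumptions chosen only to show the arithmetic, not figures taken from the studies cited above.

```latex
% Falconer's approximation of heritability from twin correlations (illustrative numbers)
h^2 \approx 2\,(r_{\mathrm{MZ}} - r_{\mathrm{DZ}}) \approx 2\,(0.45 - 0.25) = 0.40
% i.e. roughly 40% of individual differences in risk attributed to genetic factors
```

Actual studies use much larger samples and more elaborate models, which is why the figure is reported as an approximate share of variance rather than as a property of any single gene.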
### Other health problems Depression can also arise after a chronic or terminal medical condition, such as HIV/AIDS or asthma, and may be labeled "secondary depression." It is unknown whether the underlying diseases induce depression through effect on quality of life, or through shared etiologies (such as degeneration of the basal ganglia in Parkinson's disease or immune dysregulation in asthma). Depression may also be iatrogenic (the result of healthcare), such as drug-induced depression. Therapies associated with depression include interferons, beta-blockers, isotretinoin, contraceptives, cardiac agents, anticonvulsants, antimigraine drugs, antipsychotics, and hormonal agents such as gonadotropin-releasing hormone agonists (GnRH agonists). Substance use in early age is associated with increased risk of developing depression later in life. Depression occurring after giving birth is called postpartum depression and is thought to be the result of hormonal changes associated with pregnancy. Seasonal affective disorder, a type of depression associated with seasonal changes in sunlight, is thought to be triggered by decreased sunlight. Vitamin B2, B6 and B12 deficiency may cause depression in females. ## Pathophysiology The pathophysiology of depression is not completely understood, but current theories center around monoaminergic systems, the circadian rhythm, immunological dysfunction, HPA-axis dysfunction and structural or functional abnormalities of emotional circuits. Derived from the effectiveness of monoaminergic drugs in treating depression, the monoamine theory posits that insufficient activity of monoamine neurotransmitters is the primary cause of depression. Evidence for the monoamine theory comes from multiple areas. First, acute depletion of tryptophan—a necessary precursor of serotonin and a monoamine—can cause depression in those in remission or relatives of people who are depressed, suggesting that decreased serotonergic neurotransmission is important in depression. Second, the correlation between depression risk and polymorphisms in the 5-HTTLPR region of the serotonin transporter gene suggests a link. Third, decreased size of the locus coeruleus, decreased activity of tyrosine hydroxylase, increased density of alpha-2 adrenergic receptors, and evidence from rat models suggest decreased adrenergic neurotransmission in depression. Furthermore, decreased levels of homovanillic acid, altered response to dextroamphetamine, responses of depressive symptoms to dopamine receptor agonists, decreased dopamine receptor D1 binding in the striatum, and polymorphism of dopamine receptor genes implicate dopamine, another monoamine, in depression. Lastly, increased activity of monoamine oxidase, which degrades monoamines, has been associated with depression. However, the monoamine theory is inconsistent with observations that serotonin depletion does not cause depression in healthy persons, that antidepressants instantly increase levels of monoamines but take weeks to work, and the existence of atypical antidepressants which can be effective despite not targeting this pathway. One proposed explanation for the therapeutic lag, and further support for the deficiency of monoamines, is a desensitization of self-inhibition in raphe nuclei by the increased serotonin mediated by antidepressants.
However, disinhibition of the dorsal raphe has been proposed to occur as a result of decreased serotonergic activity in tryptophan depletion, resulting in a depressed state mediated by increased serotonin. Further countering the monoamine hypothesis is the fact that rats with lesions of the dorsal raphe are not more depressive than controls, the finding of increased jugular 5-HIAA in people who are depressed that normalized with selective serotonin reuptake inhibitor (SSRI) treatment, and the preference for carbohydrates in people who are depressed. Already limited, the monoamine hypothesis has been further oversimplified when presented to the general public. A 2022 review found no consistent evidence supporting the serotonin hypothesis, linking serotonin levels and depression. Immune system abnormalities have been observed, including increased levels of cytokines involved in generating sickness behavior (which shares overlap with depression). The effectiveness of nonsteroidal anti-inflammatory drugs (NSAIDs) and cytokine inhibitors in treating depression, and normalization of cytokine levels after successful treatment further suggest immune system abnormalities in depression. HPA-axis abnormalities have been suggested in depression given the association of CRHR1 with depression and the increased frequency of dexamethasone test non-suppression in people who are depressed. However, this abnormality is not adequate as a diagnostic tool, because its sensitivity is only 44%. These stress-related abnormalities are thought to be the cause of hippocampal volume reductions seen in people who are depressed. Furthermore, a meta-analysis yielded decreased dexamethasone suppression, and increased response to psychological stressors. Further abnormal results have been obtained with the cortisol awakening response, with increased response being associated with depression. Theories unifying neuroimaging findings have been proposed. The first is the limbic-cortical model, which involves hyperactivity of the ventral paralimbic regions and hypoactivity of frontal regulatory regions in emotional processing. Another model, the cortico-striatal model, suggests that abnormalities of the prefrontal cortex in regulating striatal and subcortical structures result in depression. Another model proposes hyperactivity of salience structures in identifying negative stimuli, and hypoactivity of cortical regulatory structures resulting in a negative emotional bias and depression, consistent with emotional bias studies. ## Diagnosis ### Clinical assessment A diagnostic assessment may be conducted by a suitably trained general practitioner, or by a psychiatrist or psychologist, who records the person's current circumstances, biographical history, current symptoms, family history, and alcohol and drug use. The assessment also includes a mental state examination, which is an assessment of the person's current mood and thought content, in particular the presence of themes of hopelessness or pessimism, self-harm or suicide, and an absence of positive thoughts or plans. Specialist mental health services are rare in rural areas, and thus diagnosis and management is left largely to primary-care clinicians. This issue is even more marked in developing countries. Rating scales are not used to diagnose depression, but they provide an indication of the severity of symptoms for a time period, so a person who scores above a given cut-off point can be more thoroughly evaluated for a depressive disorder diagnosis.
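As a minimal sketch of how such a cut-off is applied in practice, the following assumes a generic self-report scale whose items are each rated 0 to 3; the nine-item structure and the cut-off of 10 are illustrative assumptions, not the scoring rules of the Hamilton scale, the Beck Depression Inventory, or any other specific instrument.

```python
# Illustrative sketch of severity scoring with a screening cut-off.
# The nine-item structure and the cut-off of 10 are assumptions for
# demonstration, not the scoring rules of any named rating scale.

def total_score(item_scores):
    """Sum item responses, each rated 0 (not at all) to 3 (nearly every day)."""
    if any(not 0 <= s <= 3 for s in item_scores):
        raise ValueError("each item must be scored 0-3")
    return sum(item_scores)

def flag_for_full_evaluation(item_scores, cutoff=10):
    """A score at or above the cut-off flags the person for a fuller
    diagnostic assessment; the score itself is not a diagnosis."""
    return total_score(item_scores) >= cutoff

responses = [2, 1, 2, 1, 1, 2, 0, 1, 1]      # nine illustrative answers
print(total_score(responses))                 # 11
print(flag_for_full_evaluation(responses))    # True
```

A score of this kind is only a snapshot of severity over a defined period; the diagnosis itself still rests on the clinical criteria described below.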
Several rating scales are used for this purpose; these include the Hamilton Rating Scale for Depression, the Beck Depression Inventory, and the Suicide Behaviors Questionnaire-Revised. Primary-care physicians are more prone than psychiatrists to underrecognize and undertreat depression. Such cases may be missed because, for some people with depression, physical symptoms accompany and can obscure the mood disturbance. In addition, there may be barriers related to the person, the provider, and/or the medical system. Non-psychiatrist physicians have been shown to miss about two-thirds of cases, although there is some evidence of improvement in the number of missed cases. A doctor generally performs a medical examination and selected investigations to rule out other causes of depressive symptoms. These include blood tests measuring TSH and thyroxine to exclude hypothyroidism; basic electrolytes and serum calcium to rule out a metabolic disturbance; and a full blood count including ESR to rule out a systemic infection or chronic disease. Adverse affective reactions to medications or alcohol misuse may be ruled out, as well. Testosterone levels may be evaluated to diagnose hypogonadism, a cause of depression in men. Vitamin D levels might be evaluated, as low levels of vitamin D have been associated with greater risk for depression. Subjective cognitive complaints appear in older depressed people, but they can also be indicative of the onset of a dementing disorder, such as Alzheimer's disease. Cognitive testing and brain imaging can help distinguish depression from dementia. A CT scan can exclude brain pathology in those with psychotic, rapid-onset or otherwise unusual symptoms. No biological tests confirm major depression. In general, investigations are not repeated for a subsequent episode unless there is a medical indication. ### DSM and ICD criteria The most widely used criteria for diagnosing depressive conditions are found in the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders (DSM) and the World Health Organization's International Statistical Classification of Diseases and Related Health Problems (ICD). The latter system is typically used in European countries, while the former is used in the US and many other non-European nations, and the authors of both have worked towards conforming one with the other. Both DSM and ICD mark out typical (main) depressive symptoms. The most recent edition of the DSM is the Fifth Edition, Text Revision (DSM-5-TR), and the most recent edition of the ICD is the Eleventh Edition (ICD-11). Under mood disorders, ICD-11 classifies major depressive disorder as either single episode depressive disorder (where there is no history of depressive episodes, or of mania) or recurrent depressive disorder (where there is a history of prior episodes, with no history of mania). ICD-11 symptoms, present nearly every day for at least two weeks, are a depressed mood or anhedonia, accompanied by other symptoms such as "difficulty concentrating, feelings of worthlessness or excessive or inappropriate guilt, hopelessness, recurrent thoughts of death or suicide, changes in appetite or sleep, psychomotor agitation or retardation, and reduced energy or fatigue." These symptoms must affect work, social, or domestic activities.
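The overall shape of these criteria (at least one core symptom, depressed mood or anhedonia, together with accompanying symptoms, present nearly every day for at least two weeks and interfering with functioning) can be sketched as follows; this is a simplified illustration of how the elements combine, with paraphrased symptom names, not a diagnostic instrument, and it omits the exclusions and clinical judgment a real assessment requires.

```python
# Simplified illustration of how ICD-11-style criteria for a depressive
# episode combine; symptom names are paraphrased and this is not a
# diagnostic instrument.
from dataclasses import dataclass

CORE_SYMPTOMS = {"depressed_mood", "anhedonia"}
ACCOMPANYING_SYMPTOMS = {
    "difficulty_concentrating", "worthlessness_or_guilt", "hopelessness",
    "thoughts_of_death_or_suicide", "appetite_or_sleep_change",
    "psychomotor_agitation_or_retardation", "reduced_energy_or_fatigue",
}

@dataclass
class Presentation:
    symptoms: set                # symptoms present nearly every day
    duration_weeks: float        # how long the picture has persisted
    impairs_functioning: bool    # impact on work, social, or domestic life

def matches_episode_pattern(p: Presentation) -> bool:
    """True if the presentation fits the overall pattern described above:
    a core symptom plus accompanying symptoms, lasting at least two weeks,
    with functional impact. Exclusions (e.g. mania, medical causes) and
    clinical judgment are deliberately omitted."""
    return (bool(p.symptoms & CORE_SYMPTOMS)
            and bool(p.symptoms & ACCOMPANYING_SYMPTOMS)
            and p.duration_weeks >= 2
            and p.impairs_functioning)

example = Presentation(
    symptoms={"depressed_mood", "reduced_energy_or_fatigue",
              "difficulty_concentrating"},
    duration_weeks=3,
    impairs_functioning=True,
)
print(matches_episode_pattern(example))  # True
```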
The ICD-11 system allows further specifiers for the current depressive episode: the severity (mild, moderate, severe, unspecified); the presence of psychotic symptoms (with or without psychotic symptoms); and the degree of remission if relevant (currently in partial remission, currently in full remission). These two disorders are classified as "Depressive disorders", in the category of "Mood disorders". According to DSM-5, there are two main depressive symptoms: a depressed mood, and loss of interest/pleasure in activities (anhedonia). For the diagnosis, these symptoms, together with five out of the nine more specific symptoms listed, must occur frequently for more than two weeks and to an extent that impairs functioning. Major depressive disorder is classified as a mood disorder in the DSM-5. The diagnosis hinges on the presence of single or recurrent major depressive episodes. Further qualifiers are used to classify both the episode itself and the course of the disorder. The category Unspecified Depressive Disorder is diagnosed if the depressive episode's manifestation does not meet the criteria for a major depressive episode. #### Major depressive episode A major depressive episode is characterized by the presence of a severely depressed mood that persists for at least two weeks. Episodes may be isolated or recurrent and are categorized as mild (few symptoms in excess of minimum criteria), moderate, or severe (marked impact on social or occupational functioning). An episode with psychotic features—commonly referred to as psychotic depression—is automatically rated as severe. If the person has had an episode of mania or markedly elevated mood, a diagnosis of bipolar disorder is made instead. Depression without mania is sometimes referred to as unipolar because the mood remains at one emotional state or "pole". Bereavement is not an exclusion criterion in the DSM-5, and it is up to the clinician to distinguish between normal reactions to a loss and MDD. Excluded are a range of related diagnoses, including dysthymia, which involves a chronic but milder mood disturbance; recurrent brief depression, consisting of briefer depressive episodes; minor depressive disorder, whereby only some symptoms of major depression are present; and adjustment disorder with depressed mood, which denotes low mood resulting from a psychological response to an identifiable event or stressor. #### Subtypes The DSM-5 recognizes six further subtypes of MDD, called specifiers, in addition to noting the length, severity and presence of psychotic features: - "Melancholic depression" is characterized by a loss of pleasure in most or all activities, a failure of reactivity to pleasurable stimuli, a quality of depressed mood more pronounced than that of grief or loss, a worsening of symptoms in the morning hours, early-morning waking, psychomotor retardation, excessive weight loss (not to be confused with anorexia nervosa), or excessive guilt. - "Atypical depression" is characterized by mood reactivity (paradoxical anhedonia) and positivity, significant weight gain or increased appetite (comfort eating), excessive sleep or sleepiness (hypersomnia), a sensation of heaviness in limbs known as leaden paralysis, and significant long-term social impairment as a consequence of hypersensitivity to perceived interpersonal rejection. - "Catatonic depression" is a rare and severe form of major depression involving disturbances of motor behavior and other symptoms.
Here, the person is mute and almost stuporous, and either remains immobile or exhibits purposeless or even bizarre movements. Catatonic symptoms also occur in schizophrenia or in manic episodes, or may be caused by neuroleptic malignant syndrome. - "Depression with anxious distress" was added into the DSM-5 as a means to emphasize the common co-occurrence between depression or mania and anxiety, as well as the risk of suicide of depressed individuals with anxiety. Specifying in such a way can also help with the prognosis of those diagnosed with a depressive or bipolar disorder. - "Depression with peripartum onset" refers to the intense, sustained and sometimes disabling depression experienced by women after giving birth or during pregnancy. DSM-IV-TR used the classification "postpartum depression," but this was changed so as not to exclude cases of depression during pregnancy. Depression with peripartum onset has an incidence rate of 3–6% among new mothers. The DSM-5 requires that, to qualify as depression with peripartum onset, onset occur during pregnancy or within one month of delivery. - "Seasonal affective disorder" (SAD) is a form of depression in which depressive episodes come on in the autumn or winter, and resolve in spring. The diagnosis is made if at least two episodes have occurred in colder months with none at other times, over a two-year period or longer. ### Differential diagnoses To confirm major depressive disorder as the most likely diagnosis, other potential diagnoses must be considered, including dysthymia, adjustment disorder with depressed mood, or bipolar disorder. Dysthymia is a chronic, milder mood disturbance in which a person reports a low mood almost daily over a span of at least two years. The symptoms are not as severe as those for major depression, although people with dysthymia are vulnerable to secondary episodes of major depression (sometimes referred to as double depression). Adjustment disorder with depressed mood is a mood disturbance appearing as a psychological response to an identifiable event or stressor, in which the resulting emotional or behavioral symptoms are significant but do not meet the criteria for a major depressive episode. Other disorders need to be ruled out before diagnosing major depressive disorder. They include depressions due to physical illness, medications, and substance use disorders. Depression due to physical illness is diagnosed as a mood disorder due to a general medical condition. This condition is determined based on history, laboratory findings, or physical examination. When the depression is caused by a medication, non-medical use of a psychoactive substance, or exposure to a toxin, it is then diagnosed as a specific mood disorder (previously called substance-induced mood disorder). ## Screening and prevention Preventive efforts may result in decreases in rates of the condition of between 22 and 38%. Since 2016, the United States Preventive Services Task Force (USPSTF) has recommended screening for depression among those over the age of 12, though a 2005 Cochrane review found that the routine use of screening questionnaires has little effect on detection or treatment. Screening the general population is not recommended by authorities in the UK or Canada. Behavioral interventions, such as interpersonal therapy and cognitive-behavioral therapy, are effective at preventing new onset depression.
Because such interventions appear to be most effective when delivered to individuals or small groups, it has been suggested that they may be able to reach their large target audience most efficiently through the Internet. The Netherlands mental health care system provides preventive interventions, such as the "Coping with Depression" course (CWD) for people with sub-threshold depression. The course is claimed to be the most successful of psychoeducational interventions for the treatment and prevention of depression (both for its adaptability to various populations and its results), with a risk reduction of 38% in major depression and an efficacy as a treatment comparing favorably to other psychotherapies. ## Management The most common and effective treatments for depression are psychotherapy, medication, and electroconvulsive therapy (ECT); a combination of treatments is the most effective approach when depression is resistant to treatment. American Psychiatric Association treatment guidelines recommend that initial treatment should be individually tailored based on factors including severity of symptoms, co-existing disorders, prior treatment experience, and personal preference. Options may include pharmacotherapy, psychotherapy, exercise, ECT, transcranial magnetic stimulation (TMS) or light therapy. Antidepressant medication is recommended as an initial treatment choice in people with mild, moderate, or severe major depression, and should be given to all people with severe depression unless ECT is planned. There is evidence that collaborative care by a team of health care practitioners produces better results than routine single-practitioner care. Psychotherapy is the treatment of choice (over medication) for people under 18, and cognitive behavioral therapy (CBT), third wave CBT and interpersonal therapy may help prevent depression. The UK National Institute for Health and Care Excellence (NICE) 2004 guidelines indicate that antidepressants should not be used for the initial treatment of mild depression because the risk-benefit ratio is poor. The guidelines recommend that antidepressant treatment in combination with psychosocial interventions should be considered for: - People with a history of moderate or severe depression - Those with mild depression that has been present for a long period - As a second-line treatment for mild depression that persists after other interventions - As a first-line treatment for moderate or severe depression. The guidelines further note that antidepressant treatment should be continued for at least six months to reduce the risk of relapse, and that SSRIs are better tolerated than tricyclic antidepressants. Treatment options are more limited in developing countries, where access to mental health staff, medication, and psychotherapy is often difficult. Development of mental health services is minimal in many countries; depression is viewed as a phenomenon of the developed world despite evidence to the contrary, and not as an inherently life-threatening condition. There is insufficient evidence to determine the effectiveness of psychological versus medical therapy in children. ### Lifestyle Physical exercise has been found to be effective for major depression, and may be recommended to people who are willing, motivated, and healthy enough to participate in an exercise program as treatment. It is equivalent to the use of medications or psychological therapies in most people. In older people it does appear to decrease depression.
Sleep and diet may also play a role in depression, and interventions in these areas may be an effective add-on to conventional methods. In observational studies, smoking cessation has benefits in depression as large as or larger than those of medications. ### Talking therapies Talking therapy (psychotherapy) can be delivered to individuals, groups, or families by mental health professionals, including psychotherapists, psychiatrists, psychologists, clinical social workers, counselors, and psychiatric nurses. A 2012 review found psychotherapy to be better than no treatment but not other treatments. With more complex and chronic forms of depression, a combination of medication and psychotherapy may be used. There is moderate-quality evidence that psychological therapies are a useful addition to standard antidepressant treatment of treatment-resistant depression in the short term. Psychotherapy has been shown to be effective in older people. Successful psychotherapy appears to reduce the recurrence of depression even after it has been stopped or replaced by occasional booster sessions. The most-studied form of psychotherapy for depression is CBT, which teaches clients to challenge self-defeating, but enduring ways of thinking (cognitions) and change counter-productive behaviors. CBT can perform as well as antidepressants in people with major depression. CBT has the most research evidence for the treatment of depression in children and adolescents, and CBT and interpersonal psychotherapy (IPT) are preferred therapies for adolescent depression. In people under 18, according to the National Institute for Health and Clinical Excellence, medication should be offered only in conjunction with a psychological therapy, such as CBT, interpersonal therapy, or family therapy. Several variables predict success for cognitive behavioral therapy in adolescents: higher levels of rational thoughts, less hopelessness, fewer negative thoughts, and fewer cognitive distortions. CBT is particularly beneficial in preventing relapse. Cognitive behavioral therapy and occupational programs (including modification of work activities and assistance) have been shown to be effective in reducing sick days taken by workers with depression. Several variants of cognitive behavior therapy have been used in those with depression, the most notable being rational emotive behavior therapy, and mindfulness-based cognitive therapy. Mindfulness-based stress reduction programs may reduce depression symptoms. Mindfulness programs also appear to be a promising intervention in youth. Problem solving therapy, cognitive behavioral therapy, and interpersonal therapy are effective interventions in the elderly. Psychoanalysis is a school of thought, founded by Sigmund Freud, which emphasizes the resolution of unconscious mental conflicts. Psychoanalytic techniques are used by some practitioners to treat clients presenting with major depression. A more widely practiced therapy, called psychodynamic psychotherapy, is in the tradition of psychoanalysis but less intensive, meeting once or twice a week. It also tends to focus more on the person's immediate problems, and has an additional social and interpersonal focus. In a meta-analysis of three controlled trials of Short Psychodynamic Supportive Psychotherapy, this modification was found to be as effective as medication for mild to moderate depression. 
### Antidepressants Conflicting results have arisen from studies that look at the effectiveness of antidepressants in people with acute, mild to moderate depression. A review commissioned by the National Institute for Health and Care Excellence (UK) concluded that there is strong evidence that SSRIs, such as escitalopram, paroxetine, and sertraline, have greater efficacy than placebo on achieving a 50% reduction in depression scores in moderate and severe major depression, and that there is some evidence for a similar effect in mild depression. Similarly, a Cochrane systematic review of clinical trials of the generic tricyclic antidepressant amitriptyline concluded that there is strong evidence that its efficacy is superior to placebo. Antidepressants work less well for the elderly than for younger individuals with depression. To find the most effective antidepressant medication with minimal side-effects, the dosages can be adjusted, and if necessary, combinations of different classes of antidepressants can be tried. Response rates to the first antidepressant administered range from 50 to 75%, and it can take at least six to eight weeks from the start of medication to improvement. Antidepressant medication treatment is usually continued for 16 to 20 weeks after remission, to minimize the chance of recurrence, and even up to one year of continuation is recommended. People with chronic depression may need to take medication indefinitely to avoid relapse. SSRIs are the primary medications prescribed, owing to their relatively mild side-effects, and because they are less toxic in overdose than other antidepressants. People who do not respond to one SSRI can be switched to another antidepressant, and this results in improvement in almost 50% of cases. Another option is to switch to the atypical antidepressant bupropion. Venlafaxine, an antidepressant with a different mechanism of action, may be modestly more effective than SSRIs. However, venlafaxine is not recommended in the UK as a first-line treatment because of evidence suggesting its risks may outweigh benefits, and it is specifically discouraged in children and adolescents as it increases the risk of suicidal thoughts or attempts. For children and adolescents with moderate-to-severe depressive disorder, fluoxetine seems to be the best treatment (either with or without cognitive behavioural therapy), but more research is needed to be certain. Sertraline, escitalopram, and duloxetine might also help in reducing symptoms. Some antidepressants have not been shown to be effective. Medications are not recommended in children with mild disease. There is also insufficient evidence to determine effectiveness in those with depression complicated by dementia. Any antidepressant can cause low blood sodium levels; however, it has been reported more often with SSRIs. It is not uncommon for SSRIs to cause or worsen insomnia; the sedating atypical antidepressant mirtazapine can be used in such cases. Irreversible monoamine oxidase inhibitors, an older class of antidepressants, have been plagued by potentially life-threatening dietary and drug interactions. They are still used only rarely, although newer and better-tolerated agents of this class have been developed. The safety profile is different with reversible monoamine oxidase inhibitors, such as moclobemide, where the risk of serious dietary interactions is negligible and dietary restrictions are less strict. It is unclear whether antidepressants affect a person's risk of suicide.
For children, adolescents, and probably young adults between 18 and 24 years old, there is a higher risk of both suicidal ideation and suicidal behavior in those treated with SSRIs. For adults, it is unclear whether SSRIs affect the risk of suicidality. One review found no connection; another found an increased risk; and a third found no risk in those 25–65 years old and a decreased risk in those over 65. A black box warning was introduced in the United States in 2007 on SSRIs and other antidepressant medications due to the increased risk of suicide in people younger than 24 years old. Similar precautionary notice revisions were implemented by the Japanese Ministry of Health. ### Other medications and supplements The combined use of antidepressants plus benzodiazepines demonstrates improved effectiveness when compared to antidepressants alone, but these effects may not endure. The addition of a benzodiazepine is balanced against possible harms and other alternative treatment strategies when antidepressant monotherapy is considered inadequate. For treatment-resistant depression, adding on the atypical antipsychotic brexpiprazole for short-term or acute management may be considered. Brexpiprazole may be effective for some people; however, the evidence supporting its use as of 2023 is weak, and the medication has potential adverse effects including weight gain and akathisia. Brexpiprazole has not been sufficiently studied in older people or children, and the use and effectiveness of this adjunctive therapy for longer-term management is not clear. Ketamine may have a rapid antidepressant effect lasting less than two weeks; there is limited evidence of any effect after that, acute side effects are common, and longer-term studies of safety and adverse effects are needed. A nasal spray form of esketamine was approved by the FDA in March 2019 for use in treatment-resistant depression when combined with an oral antidepressant; risk of substance use disorder and concerns about its safety, serious adverse effects, tolerability, effect on suicidality, lack of information about dosage, whether the studies on it adequately represent broad populations, and escalating use of the product have been raised by an international panel of experts. There is insufficient high-quality evidence to suggest omega-3 fatty acids are effective in depression. There is limited evidence that vitamin D supplementation is of value in alleviating the symptoms of depression in individuals who are vitamin D-deficient. Lithium appears effective at lowering the risk of suicide in those with bipolar disorder and unipolar depression to nearly the same levels as the general population. There is a narrow range of effective and safe dosages of lithium, so close monitoring may be needed. Low-dose thyroid hormone may be added to existing antidepressants to treat persistent depression symptoms in people who have tried multiple courses of medication. Limited evidence suggests stimulants, such as amphetamine and modafinil, may be effective in the short term, or as adjuvant therapy. Also, it is suggested that folate supplements may have a role in depression management. There is tentative evidence for benefit from testosterone in males. ### Electroconvulsive therapy Electroconvulsive therapy (ECT) is a standard psychiatric treatment in which seizures are electrically induced in a person with depression to provide relief from psychiatric illnesses. ECT is used with informed consent as a last line of intervention for major depressive disorder.
A round of ECT is effective for about 50% of people with treatment-resistant major depressive disorder, whether it is unipolar or bipolar. Follow-up treatment is still poorly studied, but about half of people who respond relapse within twelve months. Aside from effects in the brain, the general physical risks of ECT are similar to those of brief general anesthesia. Immediately following treatment, the most common adverse effects are confusion and memory loss. ECT is considered one of the least harmful treatment options available for severely depressed pregnant women. A usual course of ECT involves multiple administrations, typically given two or three times per week, until the person no longer has symptoms. ECT is administered under anesthesia with a muscle relaxant. Electroconvulsive therapy can differ in its application in three ways: electrode placement, frequency of treatments, and the electrical waveform of the stimulus. These three forms of application have significant differences in both adverse side effects and symptom remission. After treatment, drug therapy is usually continued, and some people receive maintenance ECT. ECT appears to work in the short term via an anticonvulsant effect mostly in the frontal lobes, and longer term via neurotrophic effects primarily in the medial temporal lobe. ### Other Transcranial magnetic stimulation (TMS) or deep transcranial magnetic stimulation is a noninvasive method used to stimulate small regions of the brain. TMS was approved by the FDA for treatment-resistant major depressive disorder (trMDD) in 2008, and as of 2014 the evidence supported that it is probably effective. The American Psychiatric Association, the Canadian Network for Mood and Anxiety Disorders, and the Royal Australian and New Zealand College of Psychiatrists have endorsed TMS for trMDD. Transcranial direct current stimulation (tDCS) is another noninvasive method used to stimulate small regions of the brain with a weak electric current. Several meta-analyses have concluded that active tDCS was useful for treating depression. There is a small amount of evidence that sleep deprivation may improve depressive symptoms in some individuals, with the effects usually showing up within a day. This effect is usually temporary. Besides sleepiness, this method can cause a side effect of mania or hypomania. There is insufficient evidence for Reiki and dance movement therapy in depression. Cannabis is specifically not recommended as a treatment. ## Prognosis Studies have shown that 80% of those with a first major depressive episode will have at least one more during their life, with a lifetime average of four episodes. Other general population studies indicate that around half of those who have an episode recover (whether treated or not) and remain well, while the other half will have at least one more, and around 15% of those experience chronic recurrence. Studies recruiting from selective inpatient sources suggest lower recovery and higher chronicity, while studies of mostly outpatients show that nearly all recover, with a median episode duration of 11 months. Around 90% of those with severe or psychotic depression, most of whom also meet criteria for other mental disorders, experience recurrence. Poor outcomes are associated with inappropriate treatment, severe initial symptoms including psychosis, early age of onset, previous episodes, incomplete recovery after one year of treatment, pre-existing severe mental or medical disorder, and family dysfunction.
A high proportion of people who experience full symptomatic remission still have at least one symptom that is not fully resolved after treatment. Recurrence or chronicity is more likely if symptoms have not fully resolved with treatment. Current guidelines recommend continuing antidepressants for four to six months after remission to prevent relapse. Evidence from many randomized controlled trials indicates that continuing antidepressant medications after recovery can reduce the odds of relapse by 70% (41% of people relapsed on placebo vs. 18% on an antidepressant). The preventive effect probably lasts for at least the first 36 months of use. Major depressive episodes often resolve over time, whether or not they are treated. Outpatients on a waiting list show a 10–15% reduction in symptoms within a few months, with approximately 20% no longer meeting the full criteria for a depressive disorder. The median duration of an episode has been estimated to be 23 weeks, with the highest rate of recovery in the first three months. According to a 2013 review, 23% of untreated adults with mild to moderate depression will remit within 3 months, 32% within 6 months and 53% within 12 months. ### Ability to work Depression may affect people's ability to work. The combination of usual clinical care and support with return to work (such as working fewer hours or changing tasks) probably reduces sick leave by 15%, and leads to fewer depressive symptoms and improved work capacity, reducing sick leave by an average of 25 days per year. Helping depressed people return to work without a connection to clinical care has not been shown to have an effect on sick leave days. Additional psychological interventions (such as online cognitive behavioral therapy) lead to fewer sick days than standard management alone. Streamlining care or adding specific providers for depression care may help to reduce sick leave. ### Life expectancy and the risk of suicide Depressed individuals have a shorter life expectancy than those without depression, in part because people who are depressed are at risk of dying of suicide. About 50% of people who die of suicide have a mood disorder such as major depression, and the risk is especially high if a person has a marked sense of hopelessness or has both depression and borderline personality disorder. About 2–8% of adults with major depression die by suicide. In the US, the lifetime risk of suicide associated with a diagnosis of major depression is estimated at 7% for men and 1% for women, even though suicide attempts are more frequent in women. Depressed people have a higher rate of dying from other causes. There is a 1.5- to 2-fold increased risk of cardiovascular disease, independent of other known risk factors, and depression is itself linked directly or indirectly to risk factors such as smoking and obesity. People with major depression are less likely to follow medical recommendations for treating and preventing cardiovascular disorders, further increasing their risk of medical complications. Cardiologists may not recognize underlying depression that complicates a cardiovascular problem under their care. ## Epidemiology Major depressive disorder affected approximately 163 million people in 2017 (2% of the global population). The percentage of people who are affected at one point in their life varies from 7% in Japan to 21% in France. In most countries the number of people who have depression during their lives falls within an 8–18% range.
In the United States, 8.4% of adults (21 million individuals) have at least one episode within a year-long period; the probability of having a major depressive episode is higher for females than males (10.5% to 6.2%), and highest for those aged 18 to 25 (17%). Among adolescents between the ages of 12 and 17, 17% of the U.S. population (4.1 million individuals) had a major depressive episode in 2020 (females 25.2%, males 9.2%). Among individuals reporting two or more races, the US prevalence is highest. Major depression is about twice as common in women as in men, although it is unclear why this is so, and whether factors unaccounted for are contributing to this. The relative increase in occurrence is related to pubertal development rather than chronological age, reaches adult ratios between the ages of 15 and 18, and appears associated with psychosocial more than hormonal factors. In 2019, major depressive disorder was identified (using either the DSM-IV-TR or ICD-10) in the Global Burden of Disease Study as the fifth most common cause of years lived with disability and the 18th most common for disability-adjusted life years. People are most likely to develop their first depressive episode between the ages of 30 and 40, and there is a second, smaller peak of incidence between ages 50 and 60. The risk of major depression is increased with neurological conditions such as stroke, Parkinson's disease, or multiple sclerosis, and during the first year after childbirth. It is also more common after cardiovascular illnesses, and is related more to those with a poor cardiac disease outcome than to a better one. Depressive disorders are more common in urban populations than in rural ones and the prevalence is increased in groups with poorer socioeconomic factors, e.g., homelessness. Depression is common among those over 65 years of age and increases in frequency beyond this age. The risk of depression increases in relation to the frailty of the individual. Depression is one of the most important factors which negatively impact quality of life in adults, as well as the elderly. Both symptoms and treatment among the elderly differ from those of the rest of the population. Major depression was the leading cause of disease burden in North America and other high-income countries, and the fourth-leading cause worldwide as of 2006. In the year 2030, it is predicted to be the second-leading cause of disease burden worldwide after HIV, according to the WHO. Delay or failure in seeking treatment after relapse and the failure of health professionals to provide treatment are two barriers to reducing disability. ### Comorbidity Major depression frequently co-occurs with other psychiatric problems. The 1990–92 National Comorbidity Survey (US) reported that half of those with major depression also have lifetime anxiety and its associated disorders, such as generalized anxiety disorder. Anxiety symptoms can have a major impact on the course of a depressive illness, with delayed recovery, increased risk of relapse, greater disability and increased suicidal behavior. Depressed people have increased rates of alcohol and substance use, particularly dependence, and around a third of individuals diagnosed with attention deficit hyperactivity disorder (ADHD) develop comorbid depression. Post-traumatic stress disorder and depression often co-occur. Depression may also coexist with ADHD, complicating the diagnosis and treatment of both. 
Depression is also frequently comorbid with alcohol use disorder and personality disorders. Depression can also be exacerbated during particular months (usually winter) in those with seasonal affective disorder. While overuse of digital media has been associated with depressive symptoms, using digital media may also improve mood in some situations. Depression and pain often co-occur. One or more pain symptoms are present in 65% of people who have depression, and anywhere from 5 to 85% of people who are experiencing pain will also have depression, depending on the setting—a lower prevalence in general practice, and higher in specialty clinics. Depression is often underrecognized, and therefore undertreated, in patients presenting with pain. Depression often coexists with physical disorders common among the elderly, such as stroke, other cardiovascular diseases, Parkinson's disease, and chronic obstructive pulmonary disease. ## History The Ancient Greek physician Hippocrates described a syndrome of melancholia (μελαγχολία, melankholía) as a distinct disease with particular mental and physical symptoms; he characterized all "fears and despondencies, if they last a long time" as being symptomatic of the ailment. It was a similar but far broader concept than today's depression; prominence was given to a clustering of the symptoms of sadness, dejection, and despondency, and often fear, anger, delusions and obsessions were included. The term depression itself was derived from the Latin verb deprimere, meaning "to press down". From the 14th century, "to depress" meant to subjugate or to bring down in spirits. It was used in 1665 in English author Richard Baker's Chronicle to refer to someone having "a great depression of spirit", and by English author Samuel Johnson in a similar sense in 1753. The term also came into use in physiology and economics. An early usage referring to a psychiatric symptom was by French psychiatrist Louis Delasiauve in 1856, and by the 1860s it was appearing in medical dictionaries to refer to a physiological and metaphorical lowering of emotional function. Since Aristotle, melancholia had been associated with men of learning and intellectual brilliance, a hazard of contemplation and creativity. The newer concept abandoned these associations and through the 19th century, became more associated with women. Although melancholia remained the dominant diagnostic term, depression gained increasing currency in medical treatises and was a synonym by the end of the century; German psychiatrist Emil Kraepelin may have been the first to use it as the overarching term, referring to different kinds of melancholia as depressive states. Freud likened the state of melancholia to mourning in his 1917 paper Mourning and Melancholia. He theorized that objective loss, such as the loss of a valued relationship through death or a romantic break-up, results in subjective loss as well; the depressed individual has identified with the object of affection through an unconscious, narcissistic process called the libidinal cathexis of the ego. Such loss results in severe melancholic symptoms more profound than mourning; not only is the outside world viewed negatively but the ego itself is compromised. The person's decline of self-perception is revealed in his belief of his own blame, inferiority, and unworthiness. He also emphasized early life experiences as a predisposing factor. 
Adolf Meyer put forward a mixed social and biological framework emphasizing reactions in the context of an individual's life, and argued that the term depression should be used instead of melancholia. The first version of the DSM (DSM-I, 1952) contained depressive reaction and the DSM-II (1968) depressive neurosis, defined as an excessive reaction to internal conflict or an identifiable event, and also included a depressive type of manic-depressive psychosis within Major affective disorders. The term unipolar (along with the related term bipolar) was coined by the neurologist and psychiatrist Karl Kleist, and subsequently used by his disciples Edda Neele and Karl Leonhard. The term Major depressive disorder was introduced by a group of US clinicians in the mid-1970s as part of proposals for diagnostic criteria based on patterns of symptoms (called the "Research Diagnostic Criteria", building on earlier Feighner Criteria), and was incorporated into the DSM-III in 1980. The American Psychiatric Association added "major depressive disorder" to the Diagnostic and Statistical Manual of Mental Disorders (DSM-III), as a split of the previous depressive neurosis in the DSM-II, which also encompassed the conditions now known as dysthymia and adjustment disorder with depressed mood. To maintain consistency the ICD-10 used the same criteria, with only minor alterations, but using the DSM diagnostic threshold to mark a mild depressive episode, adding higher threshold categories for moderate and severe episodes. The ancient idea of melancholia still survives in the notion of a melancholic subtype. The new definitions of depression were widely accepted, albeit with some conflicting findings and views. There have been some continued empirically based arguments for a return to the diagnosis of melancholia. There has been some criticism of the expansion of coverage of the diagnosis, related to the development and promotion of antidepressants and the biological model since the late 1950s. ## Society and culture ### Terminology The term "depression" is used in a number of different ways. It is often used to mean this syndrome but may refer to other mood disorders or simply to a low mood. People's conceptualizations of depression vary widely, both within and among cultures. "Because of the lack of scientific certainty," one commentator has observed, "the debate over depression turns on questions of language. What we call it—'disease,' 'disorder,' 'state of mind'—affects how we view, diagnose, and treat it." There are cultural differences in the extent to which serious depression is considered an illness requiring personal professional treatment, or an indicator of something else, such as the need to address social or moral problems, the result of biological imbalances, or a reflection of individual differences in the understanding of distress that may reinforce feelings of powerlessness, and emotional struggle. ### Stigma Historical figures were often reluctant to discuss or seek treatment for depression due to social stigma about the condition, or due to ignorance of diagnosis or treatments. Nevertheless, analysis or interpretation of letters, journals, artwork, writings, or statements of family and friends of some historical personalities has led to the presumption that they may have had some form of depression. People who may have had depression include English author Mary Shelley, American-British writer Henry James, and American president Abraham Lincoln. 
Some well-known contemporary people with possible depression include Canadian songwriter Leonard Cohen and American playwright and novelist Tennessee Williams. Some pioneering psychologists, such as Americans William James and John B. Watson, dealt with their own depression. There has been a continuing discussion of whether neurological disorders and mood disorders may be linked to creativity, a discussion that goes back to Aristotelian times. British literature gives many examples of reflections on depression. English philosopher John Stuart Mill experienced a several-months-long period of what he called "a dull state of nerves", when one is "unsusceptible to enjoyment or pleasurable excitement; one of those moods when what is pleasure at other times, becomes insipid or indifferent". He quoted English poet Samuel Taylor Coleridge's "Dejection" as a perfect description of his case: "A grief without a pang, void, dark and drear, / A drowsy, stifled, unimpassioned grief, / Which finds no natural outlet or relief / In word, or sigh, or tear." English writer Samuel Johnson used the term "the black dog" in the 1780s to describe his own depression, and it was subsequently popularized by British Prime Minister Sir Winston Churchill, who also had the disorder. Johann Wolfgang von Goethe in his Faust, Part I, published in 1808, has Mephistopheles assume the form of a black dog, specifically a poodle. Social stigma of major depression is widespread, and contact with mental health services reduces this only slightly. Public opinions on treatment differ markedly to those of health professionals; alternative treatments are held to be more helpful than pharmacological ones, which are viewed poorly. In the UK, the Royal College of Psychiatrists and the Royal College of General Practitioners conducted a joint Five-year Defeat Depression campaign to educate and reduce stigma from 1992 to 1996; a MORI study conducted afterwards showed a small positive change in public attitudes to depression and treatment. While serving his first term as Prime Minister of Norway, Kjell Magne Bondevik attracted international attention in August 1998 when he announced that he was suffering from a depressive episode, becoming the highest ranking world leader to admit to suffering from a mental illness while in office. Upon this revelation, Anne Enger became acting Prime Minister for three weeks, from 30 August to 23 September, while he recovered from the depressive episode. Bondevik then returned to office. Bondevik received thousands of supportive letters, and said that the experience had been positive overall, both for himself and because it made mental illness more publicly acceptable.
9,941
Æthelberht of Kent
1,167,322,899
King of Kent from 589
[ "560s births", "616 deaths", "6th-century English monarchs", "7th-century Christian saints", "7th-century English monarchs", "Christian royal saints", "Converts to Christianity from pagan religions", "Gregorian mission", "Jutish people", "Kentish monarchs", "Kentish saints", "Medieval legislators", "Roman Catholic royal saints" ]
Æthelberht (/ˈæθəlbərt/; also Æthelbert, Aethelberht, Aethelbert or Ethelbert; Old English: Æðelberht ; c. 550 – 24 February 616) was King of Kent from about 589 until his death. The eighth-century monk Bede, in his Ecclesiastical History of the English People, lists him as the third king to hold imperium over other Anglo-Saxon kingdoms. In the late ninth century Anglo-Saxon Chronicle, he is referred to as a bretwalda, or "Britain-ruler". He was the first English king to convert to Christianity. Æthelberht was the son of Eormenric, succeeding him as king, according to the Chronicle. He married Bertha, the Christian daughter of Charibert I, king of the Franks, thus building an alliance with the most powerful state in contemporary Western Europe; the marriage probably took place before he came to the throne. Bertha's influence may have led to Pope Gregory I's decision to send Augustine as a missionary from Rome. Augustine landed on the Isle of Thanet in east Kent in 597. Shortly thereafter, Æthelberht converted to Christianity, churches were established, and wider-scale conversion to Christianity began in the kingdom. He provided the new church with land in Canterbury, thus helping to establish one of the foundation stones of English Christianity. Æthelberht's law for Kent, the earliest written code in any Germanic language, instituted a complex system of fines; the law code is preserved in the Textus Roffensis. Kent was rich, with strong trade ties to the Continent, and Æthelberht may have instituted royal control over trade. Coinage probably began circulating in Kent during his reign for the first time since the Anglo-Saxon settlement. He later came to be regarded as a saint for his role in establishing Christianity among the Anglo-Saxons. His feast day was originally 24 February but was changed to 25 February. ## Historical context In the fifth century, raids on Britain by continental peoples had developed into full-scale migrations. The newcomers are known to have included Angles, Saxons, Jutes and Frisians, and there is evidence of other groups as well. These groups captured territory in the east and south of England, but at about the end of the fifth century, a British victory at the battle of Mount Badon (Mons Badonicus) halted the Anglo-Saxon advance for fifty years. From about 550, however, the British began to lose ground once more, and within twenty-five years it appears that control of almost all of southern England was in the hands of the invaders. Anglo-Saxons probably conquered Kent before Mons Badonicus. There is both documentary and archaeological evidence that Kent was primarily colonised by Jutes, from the southern part of the Jutland peninsula. According to legend, the brothers Hengist and Horsa landed in 449 as mercenaries for a British king, Vortigern. After a rebellion over pay and Horsa's death in battle, Hengist established the Kingdom of Kent. Some historians now think the underlying story of a rebelling mercenary force may be accurate; most now date the founding of the kingdom of Kent to the middle of the fifth-century, which is consistent with the legend. This early date, only a few decades after the departure of the Romans, also suggests that more of Roman civilization may have survived into Anglo-Saxon rule in Kent than in other areas. Overlordship was a central feature of Anglo-Saxon politics which began before Æthelberht's time; kings were described as overlords as late as the ninth century. 
The Anglo-Saxon invasion may have involved military coordination of different groups within the invaders, with a leader who had authority over many different groups; Ælle of Sussex may have been such a leader. Once the new states began to form, conflicts among them began. Tribute from dependents could lead to wealth. A weaker state also might ask or pay for the protection of a stronger neighbour against a warlike third state. Sources for this period in Kentish history include the Ecclesiastical History of the English People, written in 731 by Bede, a Northumbrian monk. Bede was interested primarily in England's Christianization. Since Æthelberht was the first Anglo-Saxon king to convert to Christianity, Bede provides more substantial information about him than about any earlier king. One of Bede's correspondents was Albinus, abbot of the monastery of St. Peter and St. Paul (subsequently renamed St. Augustine's) in Canterbury. The Anglo-Saxon Chronicle, a collection of annals assembled c. 890 in the kingdom of Wessex, mentions several events in Kent during Æthelberht's reign. Further mention of events in Kent occurs in the late sixth century history of the Franks by Gregory of Tours. This is the earliest surviving source to mention any Anglo-Saxon kingdom. Some of Pope Gregory the Great's letters concern the mission of St. Augustine to Kent in 597; these letters also mention the state of Kent and its relationships with neighbours. Other sources include regnal lists of the kings of Kent and early charters (land grants by kings to their followers or to the church). Although no originals survive from Æthelberht's reign, later copies exist. A law code from Æthelberht's reign also survives. ## Ancestry, accession and chronology According to Bede, Æthelberht was descended directly from Hengist. Bede gives the line of descent as follows: "Ethelbert was son of Irminric, son of Octa, and after his grandfather Oeric, surnamed Oisc, the kings of the Kentish folk are commonly known as Oiscings. The father of Oeric was Hengist." An alternative form of this genealogy, found in the Historia Brittonum among other places, reverses the position of Octa and Oisc in the lineage. The first of these names that can be placed historically with reasonable confidence is Æthelberht's father, whose name now usually is spelled Eormenric. The only direct written reference to Eormenric is in Kentish genealogies, but Gregory of Tours does mention that Æthelberht's father was the king of Kent, though Gregory gives no date. Eormenric's name provides a hint of connections to the kingdom of the Franks, across the English channel; the element "Eormen" was rare in names of the Anglo-Saxon aristocracy, but much more common among Frankish nobles. One other member of Æthelberht's family is known: his sister, Ricole, who is recorded by both Bede and the Anglo-Saxon Chronicle as the mother of Sæberht, king of the East Saxons (i.e., Essex). The dates of Æthelberht's birth and accession to the throne of Kent are both matters of debate. Bede, the earliest source to give dates, is thought to have drawn his information from correspondence with Albinus. Bede states that when Æthelberht died in 616 he had reigned for fifty-six years, placing his accession in 560. Bede also says that Æthelberht died twenty-one years after his baptism. Augustine's mission from Rome is known to have arrived in 597, and according to Bede, it was this mission that converted Æthelberht. Hence Bede's dates are inconsistent. 
The Anglo-Saxon Chronicle, an important source for early dates, is inconsistent with Bede and also has inconsistencies among different manuscript versions. Putting together the different dates in the Chronicle for birth, death and length of reign, it appears that Æthelberht's reign was thought to have been either 560–616 or 565–618 but that the surviving sources have confused the two traditions. It is possible that Æthelberht was converted to Christianity before Augustine's arrival. Æthelberht's wife was a Christian and brought a Frankish bishop with her, to attend her at court, so Æthelberht would have had knowledge of Christianity before the mission reached Kent. It also is possible that Bede had the date of Æthelberht's death wrong; if, in fact, Æthelberht died in 618, this would be consistent with his baptism in 597, which is in accord with the tradition that Augustine converted the king within a year of his arrival. Gregory of Tours, in his Historia Francorum, writes that Bertha, daughter of Charibert I, king of the Franks, married the son of the king of Kent. Bede says that Æthelberht received Bertha "from her parents". If Bede is interpreted literally, the marriage would have had to take place before 567, when Charibert died. The traditions for Æthelberht's reign, then, would imply that Æthelberht married Bertha before either 560 or 565. The extreme length of Æthelberht's reign also has been regarded with skepticism by historians; it has been suggested that he died in the fifty-sixth year of his life, rather than the fifty-sixth year of his reign. This would place the year of his birth approximately at 560, and he would not then have been able to marry until the mid 570s. According to Gregory of Tours, Charibert was king when he married Ingoberg, Bertha's mother, which places that marriage no earlier than 561. It therefore is unlikely that Bertha was married much before about 580. These later dates for Bertha and Æthelberht also solve another possible problem: Æthelberht's daughter, Æthelburh, seems likely to have been Bertha's child, but the earlier dates would have Bertha aged sixty or so at Æthelburh's likely birthdate using the early dates. Gregory, however, also says that he thinks that Ingoberg was seventy years old in 589; and this would make her about forty when she married Charibert. This is possible, but seems unlikely, especially as Charibert seems to have had a preference for younger women, again according to Gregory's account. This would imply an earlier birth date for Bertha. On the other hand, Gregory refers to Æthelberht at the time of his marriage to Bertha simply as "a man of Kent", and in the 589 passage concerning Ingoberg's death, which was written in about 590 or 591, he refers to Æthelberht as "the son of the king of Kent". If this does not simply reflect Gregory's ignorance of Kentish affairs, which seems unlikely given the close ties between Kent and the Franks, then some assert that Æthelberht's reign cannot have begun before 589. While all of the contradictions above cannot be reconciled, the most probable dates that may be drawn from available data place Æthelberht's birth at approximately 560 and, perhaps, his marriage to Bertha at 580. His reign is most likely to have begun in 589 or 590. ## Kingship of Kent The later history of Kent shows clear evidence of a system of joint kingship, with the kingdom being divided into east Kent and west Kent, although it appears that there generally was a dominant king. 
This evidence is less clear for the earlier period, but there are early charters, known to be forged, which nevertheless imply that Æthelberht ruled as joint king with his son, Eadbald. It may be that Æthelberht was king of east Kent and Eadbald became king of west Kent; the east Kent king seems generally to have been the dominant ruler later in Kentish history. Whether or not Eadbald became a joint king with Æthelberht, there is no question that Æthelberht had authority throughout the kingdom. The division into two kingdoms is most likely to date back to the sixth century; east Kent may have conquered west Kent and preserved the institutions of kingship as a subkingdom. This was a common pattern in Anglo-Saxon England, as the more powerful kingdoms absorbed their weaker neighbours. An unusual feature of the Kentish system was that only sons of kings appeared to be legitimate claimants to the throne, although this did not eliminate all strife over the succession. The main towns of the two kingdoms were Rochester, for west Kent, and Canterbury, for east Kent. Bede does not state that Æthelberht had a palace in Canterbury, but he does refer to Canterbury as Æthelberht's "metropolis", and it is clear that it is Æthelberht's seat. ## Relations with the Franks There are many indications of close relations between Kent and the Franks. Æthelberht's marriage to Bertha certainly connected the two courts, although not as equals: the Franks would have thought of Æthelberht as an under-king. There is no record that Æthelberht ever accepted a continental king as his overlord and, as a result, historians are divided on the true nature of the relationship. Evidence for an explicit Frankish overlordship of Kent comes from a letter written by Pope Gregory the Great to Theuderic, king of Burgundy, and Theudebert, king of Austrasia. The letter concerned Augustine's mission to Kent in 597, and in it Gregory says that he believes "that you wish your subjects in every respect to be converted to that faith in which you, their kings and lords, stand". It may be that this is a papal compliment, rather than a description of the relationship between the kingdoms. It also has been suggested that Liudhard, Bertha's chaplain, was intended as a representative of the Frankish church in Kent, which also could be interpreted as evidence of overlordship. A possible reason for the willingness of the Franks to connect themselves with the Kentish court is the fact that a Frankish king, Chilperic I, is recorded as having conquered a people known as the Euthiones during the mid-sixth century. If, as seems likely from the name, these people were the continental remnants of the Jutish invaders of Kent, then it may be that the marriage was intended as a unifying political move, reconnecting different branches of the same people. Another perspective on the marriage may be gained by considering that it is likely that Æthelberht was not yet king at the time he and Bertha were wed: it may be that Frankish support for him, acquired via the marriage, was instrumental in gaining the throne for him. Regardless of the political relationship between Æthelberht and the Franks, there is abundant evidence of strong connections across the English Channel. There was a luxury trade between Kent and the Franks, and burial artefacts found include clothing, drink, and weapons that reflect Frankish cultural influence. 
The Kentish burials have a greater range of imported goods than those of the neighbouring Anglo-Saxon regions, which is not surprising given Kent's easier access to trade across the English Channel. In addition, the grave goods are both richer and more numerous in Kentish graves, implying that material wealth was derived from that trade. Frankish influences also may be detected in the social and agrarian organization of Kent. Other cultural influences may be seen in the burials as well, so it is not necessary to presume that there was direct settlement by the Franks in Kent. ## Rise to dominance ### Bretwalda In his Ecclesiastical History, Bede includes his list of seven kings who held imperium over the other kingdoms south of the Humber. The usual translation for imperium is "overlordship". Bede names Æthelberht as the third on the list, after Ælle of Sussex and Ceawlin of Wessex. The anonymous annalist who composed one of the versions of the Anglo-Saxon Chronicle repeated Bede's list of seven kings in a famous entry under the year 827, with one additional king, Egbert of Wessex. The Chronicle also records that these kings held the title bretwalda, or "Britain-ruler". The exact meaning of bretwalda has been the subject of much debate; it has been described as a term "of encomiastic poetry", but there also is evidence that it implied a definite role of military leadership. The prior bretwalda, Ceawlin, is recorded by the Anglo-Saxon Chronicle as having fought Æthelberht in 568. The entry states that Æthelberht lost the battle and was driven back to Kent. The dating of the entries concerning the West Saxons in this section of the Chronicle is thought to be unreliable and a recent analysis suggests that Ceawlin's reign is more likely to have been approximately 581–588, rather than the dates of 560–592 that are given in the Chronicle. The battle was at "Wibbandun", which may be translated as Wibba's Mount; it is not known where this was. At some point Ceawlin ceased to hold the title of bretwalda, perhaps after a battle at Stoke Lyne, in Oxfordshire, which the Chronicle dates to 584, some eight years before he was deposed in 592 (again using the Chronicle's unreliable dating). Æthelberht certainly was a dominant ruler by 601, when Gregory the Great wrote to him: Gregory urges Æthelberht to spread Christianity among those kings and peoples subject to him, implying some level of overlordship. If the battle of Wibbandun was fought c. 590, as has been suggested, then Æthelberht must have gained his position as overlord at some time in the 590s. This dating for Wibbandun is slightly inconsistent with the proposed dates of 581–588 for Ceawlin's reign, but those dates are not thought to be precise, merely the most plausible given the available data. ### Relationships with other kingdoms In addition to the evidence of the Chronicle that Æthelberht was accorded the title of bretwalda, there is evidence of his domination in several of the southern kingdoms of the Heptarchy. In Essex, Æthelberht appears to have been in a position to exercise authority shortly after 604, when his intervention helped in the conversion of King Sæberht of Essex, his nephew, to Christianity. It was Æthelberht, and not Sæberht, who built and endowed St. Pauls in London, where St Paul's Cathedral now stands. Further evidence is provided by Bede, who explicitly describes Æthelberht as Sæberht's overlord. 
Bede describes Æthelberht's relationship with Rædwald, king of East Anglia, in a passage that is not completely clear in meaning. It seems to imply that Rædwald retained ducatus, or military command of his people, even while Æthelberht held imperium. This implies that being a bretwalda usually included holding the military command of other kingdoms and also that it was more than that, since Æthelberht is bretwalda despite Rædwald's control of his own troops. Rædwald was converted to Christianity while in Kent but did not abandon his pagan beliefs; this, together with the fact that he retained military independence, implies that Æthelberht's overlordship of East Anglia was much weaker than his influence with the East Saxons. An alternative interpretation, however, is that the passage in Bede should be translated as "Rædwald, king of the East Angles, who while Æthelberht lived, even conceded to him the military leadership of his people"; if this is Bede's intent, then East Anglia firmly was under Æthelberht's overlordship. There is no evidence that Æthelberht's influence in other kingdoms was enough for him to convert any other kings to Christianity, although this is partly due to the lack of sources—nothing is known of Sussex's history, for example, for almost all of the seventh and eighth centuries. Æthelberht was able to arrange a meeting in 602 in the Severn valley, on the northwestern borders of Wessex, however, and this may be an indication of the extent of his influence in the west. No evidence survives showing Kentish domination of Mercia, but it is known that Mercia was independent of Northumbria, so it is quite plausible that it was under Kentish overlordship. ## Augustine's mission and early Christianisation The native Britons had converted to Christianity under Roman rule. The Anglo-Saxon invasions separated the British church from European Christianity for centuries, so the church in Rome had no presence or authority in Britain, and in fact, Rome knew so little about the British church that it was unaware of any schism in customs. However, Æthelberht would have known something about the Roman church from his Frankish wife, Bertha, who had brought a bishop, Liudhard, with her across the Channel, and for whom Æthelberht built a chapel, St Martin's. In 596, Pope Gregory the Great sent Augustine, prior of the monastery of St. Andrew in Rome, to England as a missionary, and in 597, a group of nearly forty monks, led by Augustine, landed on the Isle of Thanet in Kent. According to Bede, Æthelberht was sufficiently distrustful of the newcomers to insist on meeting them under the open sky, to prevent them from performing sorcery. The monks impressed Æthelberht, but he was not converted immediately. He agreed to allow the mission to settle in Canterbury and permitted them to preach. It is not known when Æthelberht became a Christian. It is possible, despite Bede's account, that he already was a Christian before Augustine's mission arrived. It is likely that Liudhard and Bertha pressed Æthelberht to consider becoming a Christian before the arrival of the mission, and it is also likely that a condition of Æthelberht's marriage to Bertha was that Æthelberht would consider conversion. Conversion via the influence of the Frankish court would have been seen as an explicit recognition of Frankish overlordship, however, so it is possible that Æthelberht's delay of his conversion until it could be accomplished via Roman influence might have been an assertion of independence from Frankish control. 
It also has been argued that Augustine's hesitation—he turned back to Rome, asking to be released from the mission—is an indication that Æthelberht was a pagan at the time Augustine was sent. At the latest, Æthelberht must have converted before 601, since that year Gregory wrote to him as a Christian king. An old tradition records that Æthelberht converted on 1 June, in the summer of the year that Augustine arrived. Through Æthelberht's influence Sæberht, king of Essex, also was converted, but there were limits to the effectiveness of the mission. The entire Kentish court did not convert: Eadbald, Æthelberht's son and heir, was a pagan at his accession. Rædwald, king of East Anglia, was only partly converted (apparently while at Æthelberht's court) and retained a pagan shrine next to the new Christian altar. Augustine also was unsuccessful in gaining the allegiance of the British clergy. ## Law code Some time after the arrival of Augustine's mission, perhaps in 602 or 603, Æthelberht issued a set of laws, in ninety sections. These laws are by far the earliest surviving code composed in any of the Germanic countries, and they were almost certainly among the first documents written down in Anglo-Saxon, as literacy would have arrived in England with Augustine's mission. The only surviving early manuscript, the Textus Roffensis, dates from the twelfth century, and it now resides in the Medway Studies Centre in Strood, Kent. Æthelberht's code makes reference to the church in the very first item, which enumerates the compensation required for the property of a bishop, a deacon, a priest, and so on; but overall, the laws seem remarkably uninfluenced by Christian principles. Bede asserted that they were composed "after the Roman manner", but there is little discernible Roman influence either. In subject matter, the laws have been compared to the Lex Salica of the Franks, but it is not thought that Æthelberht based his new code on any specific previous model. The laws are concerned with setting and enforcing the penalties for transgressions at all levels of society; the severity of the fine depended on the social rank of the victim. The king had a financial interest in enforcement, for part of the fines would come to him in many cases, but the king also was responsible for law and order, and avoiding blood feuds by enforcing the rules on compensation for injury was part of the way the king maintained control. Æthelberht's laws are mentioned by Alfred the Great, who compiled his own laws, making use of the prior codes created by Æthelberht, as well as those of Offa of Mercia and Ine of Wessex. One of Æthelberht's laws seems to preserve a trace of a very old custom: the third item in the code states that "If the king is drinking at a man's home, and anyone commits any evil deed there, he is to pay twofold compensation." This probably refers to the ancient custom of a king traveling the country, being hosted, and being provided for by his subjects wherever he went. The king's servants retained these rights for centuries after Æthelberht's time. Items 77–81 in the code have been interpreted as a description of a woman's financial rights after a divorce or legal separation. These clauses define how much of the household goods a woman could keep in different circumstances, depending on whether she keeps custody of the children, for example. It has recently been suggested, however, that it would be more correct to interpret these clauses as referring to women who are widowed, rather than divorced. 
## Trade and coinage There is little documentary evidence about the nature of trade in Æthelberht's Kent. It is known that the kings of Kent had established royal control of trade by the late seventh century, but it is not known how early this control began. There is archaeological evidence suggesting that the royal influence predates any of the written sources. It has been suggested that one of Æthelberht's achievements was to take control of trade away from the aristocracy and to make it a royal monopoly. The continental trade provided Kent access to luxury goods which gave it an advantage in trading with the other Anglo-Saxon nations, and the revenue from trade was important in itself. Kentish manufacture before 600 included glass beakers and jewelry. Kentish jewellers were highly skilled, and before the end of the sixth century they gained access to gold. Goods from Kent are found in cemeteries across the channel and as far away as at the mouth of the Loire. It is not known what Kent traded for all of this wealth, although it seems likely that there was a flourishing slave trade. It may well be that this wealth was the foundation of Æthelberht's strength, although his overlordship and the associated right to demand tribute would have brought wealth in its turn. It may have been during Æthelberht's reign that the first coins were minted in England since the departure of the Romans: none bear his name, but it is thought likely that the first coins predate the end of the sixth century. These early coins were gold, and probably were the shillings (scillingas in Old English) that are mentioned in Æthelberht's laws. The coins are also known to numismatists as thrymsas. ## Death and succession Æthelberht died on 24 February 616 and was succeeded by his son, Eadbald, who was not a Christian—Bede says he had been converted but went back to his pagan faith, although he ultimately did become a Christian king. Eadbald outraged the church by marrying his stepmother, which was contrary to Church law, and by refusing to accept baptism. Sæberht of the East Saxons also died at approximately this time, and he was succeeded by his three sons, none of whom were Christian. A subsequent revolt against Christianity and the expulsion of the missionaries from Kent may have been a reaction to Kentish overlordship after Æthelberht's death as much as a pagan opposition to Christianity. In addition to Eadbald, it is possible that Æthelberht had another son, Æthelwald. The evidence for this is a papal letter to Justus, archbishop of Canterbury from 619 to 625, that refers to a king named Aduluald, who is apparently different from Audubald, which refers to Eadbald. There is no agreement among modern scholars on how to interpret this: "Aduluald" might be intended as a representation of "Æthelwald", and hence an indication of another king, perhaps a sub-king of west Kent; or it may be merely a scribal error which should be read as referring to Eadbald. ## Liturgical celebration Æthelberht was later regarded as a saint for his role in establishing Christianity among the Anglo-Saxons. His feast day was originally 24 February but was changed to 25 February. In the 2004 edition of the Roman Martyrology, he is listed under his date of death, 24 February, with the citation: 'King of Kent, converted by St Augustine, bishop, the first leader of the English people to do so'. The Roman Catholic Archdiocese of Southwark, which contains Kent, commemorates him on 25 February. 
He is also venerated in the Eastern Orthodox Church as Saint Ethelbert, king of Kent, his day commemorated on 25 February. ## See also - Kentish Royal Legend
19,238,871
How a Mosquito Operates
1,140,112,741
1912 film
[ "1910s American animated films", "1910s animated short films", "1912 animated films", "1912 films", "1912 short films", "American black-and-white films", "American silent short films", "Animated films about insects", "Articles containing video clips", "Films about mosquitoes", "Films directed by Winsor McCay", "Flies and humans", "Insect bites and stings" ]
How a Mosquito Operates is a 1912 silent animated film by American cartoonist Winsor McCay. The six-minute short depicts a giant mosquito tormenting a sleeping man. The film is one of the earliest works of animation, and its technical quality is considered far ahead of its time. It is also known under the titles The Story of a Mosquito and Winsor McCay and his Jersey Skeeters. McCay had a reputation for his proficient drawing skills, best remembered in the elaborate cartooning of the children's comic strip Little Nemo in Slumberland he began in 1905. He delved into the emerging art of animation with the film Little Nemo (1911), and followed its success by adapting an episode of his comic strip Dream of the Rarebit Fiend into How a Mosquito Operates. McCay gave the film a more coherent story and more developed characterization than in the Nemo film, with naturalistic timing, motion, and weight in the animation. How a Mosquito Operates had an enthusiastic reception when McCay first showed it as part of his vaudeville act. He further developed the character animation he introduced in Mosquito with his best-known animated work, Gertie the Dinosaur (1914). ## Synopsis A man looks around apprehensively before entering his room. A giant mosquito with a top hat and briefcase flies in after him through a transom window. It repeatedly feeds on the sleeping man, who tries in vain to shoo it away. The mosquito eventually drinks itself so full that it explodes. ## Style How a Mosquito Operates is one of the earliest examples of line-drawn animation. McCay used minimal backgrounds and capitalized on strengths of the film medium, then in its infancy, by focusing on the physical, visual action of the characters. No intertitles interrupt the silent visuals. Rather than merely expanding like a balloon, as the mosquito drinks its abdomen fills consistent with its bodily structure in a naturalistic way. The heavier it becomes, the more difficulty it has keeping its balance. In its excitement as it feeds, it does push-ups on the man's nose and flips its hat in the air. The mosquito has a personality: egotistical, persistent, and calculating (as when it whets its proboscis on a stone wheel). It makes eye contact with the viewers and waves at them. McCay balances horror with humor, as when the mosquito finds itself so engorged with blood that it must lie down. ## Background Winsor McCay (c. 1869–1934) developed prodigiously accurate and detailed drawing skills early in life. As a young man, he earned a living drawing portraits and posters in dime museums, and attracted large crowds with his ability to draw quickly in public. McCay began working as a full-time newspaper illustrator in 1898, and started drawing comic strips in 1903. His greatest comic-strip success was the children's fantasy Little Nemo in Slumberland, which he launched in 1905. McCay began performing on the vaudeville circuit the following year, doing chalk talks—performances in which he drew in front of a live audience. Inspired by flip books his son Robert brought home, McCay said he "came to see the possibility of making moving pictures" of his cartoons. He declared himself "the first man in the world to make animated cartoons", though the American James Stuart Blackton and the French Émile Cohl were among those who had made earlier ones, and McCay had photographed his first animated short under Blackton's supervision. 
McCay featured his Little Nemo characters in the film, which debuted in movie theatres in 1911, and he soon incorporated it into his vaudeville act. The animated sequences in Little Nemo have no plot: much like the early experiments of Émile Cohl, McCay used his first film to demonstrate the medium's capabilities—with fanciful sequences demonstrating motion for its own sake. In Mosquito he wanted greater believability, and balanced outlandish action with naturalistic timing, motion, and weight. Since he had already demonstrated in his first film that pictures could be made to move, in the second he introduced a simple story. Vaudeville acts and humor magazines commonly joked about large New Jersey mosquitoes they called "Jersey skeeters", and McCay had used mosquitoes in his comic strips—including a Little Nemo episode in which a swarm of mosquitoes attack Nemo after he returns from a trip to Mars. McCay took the idea for the film from a June 5, 1909, episode of his comic strip Dream of the Rarebit Fiend, in which a mosquito (without top hat or briefcase) gorges itself on an alcoholic until it becomes so bloated and drunk that it cannot fly away. ## Production and release McCay began working on the film in May 1911. Shortly after, he left the employ of the New York Herald for the newspapers of William Randolph Hearst—a sign of his rising stardom. A magazine advertisement in July announced a "moving picture, containing six thousand sketches ... will be a 'release' for vaudeville next season by Mr. McCay. The film will be named How a Mosquito Operates." McCay made the 6 000 drawings on translucent rice paper. The film came before the development of cel animation, in which animators draw on clear sheets of celluloid and lay them over static backgrounds. Thus, on each drawing McCay had to redraw the background, which appears to waver slightly due to the difficulty of reproducing it perfectly each time. McCay re-used some of the drawings to loop repeated actions, a technique he used once in Little Nemo and more extensively in his later films. McCay finished drawing the film in December 1911. A snowstorm hit when he was to have the drawings taken to Vitagraph Studios for photographing, so he hired an enclosed horse-drawn taxi to have them taken there. It disappeared, and a few days later the police found the abandoned taxi with the drawings unharmed inside, the horses two to three miles away. The first attempt to shoot the artwork resulted in unacceptable amounts of flicker due to the arc lighting the studio used, and it was re-shot. The completed work came to 600 feet of film. How a Mosquito Operates debuted in January 1912 as part of McCay's vaudeville act, which he toured through that spring and summer. Film producer Carl Laemmle bought the distribution rights under the restriction that he not have the film shown in the US until McCay had finished using it in his vaudeville act. Universal–Jewel released the film in 1916 under the title Winsor McCay and his Jersey Skeeters, and it has sometimes been called The Story of a Mosquito. In a lost live-action prologue, McCay and his daughter, "pestered to death by mosquitoes" at their summer home in New Jersey, find a professor who speaks the insects' language. The professor tells McCay to "make a series of drawings to illustrate just how the insect does its deadly work", and after months of work McCay invites the professor to watch the film. 
## Reception and legacy How a Mosquito Operates was released at a time when audience demand for animation outstripped the studios' ability to supply it. According to animator Chris Webster, at a time when most studios struggled to make animation merely work, McCay showed a mastery of the medium and a sense of how to create believable motion. The film opened to large audiences, and was well received. The Detroit Times described audiences laughing until they cried, and "[going] home feeling that [they] had seen one of the best programs" in the theater's history. The paper called the film "a marvelous arrangement of colored drawings", referring to the final explosive sequence, which McCay had hand-painted red (colored versions of this sequence have not survived). The New York Morning Telegraph remarked, "[McCay's] moving pictures of his drawings have caused even film magnates to marvel at their cleverness and humor". Audiences found his animation so lifelike that they suggested he had traced the characters from photographs or resorting to tricks using wires: > I drew a great ridiculous mosquito, pursuing a sleeping man, peeking through a keyhole and pouncing on him over the transom. My audiences were pleased, but declared the mosquito was operated by wires to get the effect before the cameras. To show that he had not used such tricks, McCay chose a creature for his next film that could not have been photographed: a Brontosaurus. The film, Gertie the Dinosaur, debuted as part of his vaudeville act in 1914. Before he brought out Gertie, he hinted at the film's subject in interviews in which he spoke of animation's potential for "serious and educational work". American animator John Randolph Bray's first film, The Artist's Dream, appeared in 1913; it alternates live-action and animated sequences, and features a dog that explodes after eating too many sausages. Though these aspects recall McCay's first two films, Bray said that he did not know of McCay's efforts while working on The Artist's Dream. Following Mosquito, animated films tended to be story-based; for decades they rarely drew attention to the technology underlying it, and live-action sequences became infrequent. Animator and McCay biographer John Canemaker commended McCay for his ability to imbue a mosquito with character and personality, and stated the technical quality of McCay's animation was far ahead of its time, unmatched until the Disney studios gained prominence in the 1930s with films such as Snow White and the Seven Dwarfs (1937). ## See also - History of animation - List of films in the public domain in the United States
47,604,439
Huguenot-Walloon half dollar
1,139,474,997
US commemorative coin issued in 1924
[ "Currencies introduced in 1924", "Early United States commemorative coins", "Fifty-cent coins", "Huguenot history in the United States", "Huguenots", "New York (state) historical anniversaries", "Ships on coins", "United States silver coins", "Walloon diaspora" ]
The Huguenot-Walloon half dollar or Huguenot-Walloon Tercentenary half dollar is a commemorative coin issued by the United States Bureau of the Mint in 1924. It marks the 300th anniversary of the voyage of the Nieuw Nederlandt which landed in the New York area in 1624. Many of the passengers were Huguenots from France or Walloons from what is now Belgium; they became early settlers of New York State and the surrounding area. A commission run by the Federal Council of Churches in America sought issuance of a half dollar to mark the anniversary, and the bill passed through Congress without opposition in 1923 and was signed by President Warren G. Harding. Sketches were prepared by commission chairman Reverend John Baer Stoudt and converted to plaster models by the Mint's aging chief engraver, George T. Morgan. The models were initially rejected by the Commission of Fine Arts, which required revisions under the supervision of Buffalo nickel designer James Earle Fraser. Of the 300,000 coins authorized by Congress, fewer than half were actually struck, and of these, 55,000 were returned to the Mint and released into circulation. The coin excited some controversy because of its sponsorship by a religious group. The choice of William the Silent and Gaspard de Coligny to appear on the obverse was also questioned as the men are considered martyrs by the Huguenots and died decades before the voyage of the Nieuw Nederlandt. The coins are currently valued in the hundreds of dollars, depending on condition. ## Background The Huguenots were French Protestants, who were often in conflict with the Catholic majority. Many Huguenots fled France in the 16th and 17th centuries, when there was intense persecution of them, most notably in the St. Bartholomew's Day massacre of 1572. Among those who fell in the bloodbath that day was the Huguenot military and political leader, Admiral Gaspard de Coligny. Many Huguenots who fled France settled in the Netherlands. William the Silent was one of the leaders of the Dutch Revolt against Spain. He was assassinated in 1584 by Balthasar Gérard, a pro-Spanish zealot. There were Protestant Walloons in what is now Belgium. Some Huguenots and Walloons went elsewhere: on March 29, 1624, the ship Nieuw Nederlandt set out for New Netherland, the Dutch possessions centered on what is now the state of New York; more ships followed. They represented a large proportion of the early settlers of the area. In 1626, Peter Minuit, the Director General for the Dutch West India Company, famously purchased the island of Manhattan from the Native Americans for goods worth some 60 guilders, often rendered as \$24. The Huguenot-Walloon New Netherland Commission was established in 1922 under the auspices of the Federal Council of Churches of Christ in America in anticipation of the upcoming anniversary. President Warren G. Harding was the honorary president of the commission, and King Albert I of Belgium accepted an honorary chairmanship. The commission, led by its chairman, Rev. Dr. John Baer Stoudt, planned an observance for the 300th anniversary of the Nieuw Nederlandt voyage, and sought the issuance of commemorative stamps and coins. At the time, commemorative coins were not sold by the government—Congress, in authorizing legislation, designated an organization which had the exclusive right to purchase the coins at face value and vend them to the public at a premium. 
## Legislation A bill for a Huguenot-Walloon half dollar was introduced in the House of Representatives on January 15, 1923, by Pennsylvania Congressman Fred Gernerd, who was of Huguenot descent. It received a hearing before the House Committee on Coinage, Weights, and Measures on February 7, with Indiana Representative Albert Vestal, a Republican, presiding. Congressman Gernerd told the committee that there were plans to have local celebrations in 1924 to celebrate the 300th anniversary of the Huguenot voyage in cities where people of that heritage lived. Gernerd stated that while the sale of half dollars would raise money towards the observance, it was not intended as a serious fundraiser, but as a symbol of the occasion. He reminded the committee that the 300th anniversary of the voyage of the Mayflower had seen a half dollar issued. Vestal stated that he could not support the bill as introduced because it did not designate who should order the coins, but Gernerd indicated that the Fifth National Bank of New York had agreed to act in that capacity. Stoudt appeared before the committee, explaining that his commission planned a design with the arrival of the Nieuw Nederlandt for one side, and for the other, Peter Minuit purchasing Manhattan from the Native Americans. New Jersey Congressman Ernest R. Ackerman briefly addressed the committee in support, noting that the coins would likely be retained as souvenirs, to the profit of the government. West Virginia's Wells Goodykoontz also spoke in favor. The witnesses, all urging passage of the bill, concluded with a number of local pastors, led by E. O. Watson, secretary of the Federal Council of Churches. On February 10, 1923, Vestal issued a report recommending that the bill pass with an amendment adding the bank as the ordering organization. A parallel bill was introduced into the Senate by Pennsylvania's David A. Reed on January 29, 1923, and was referred to the Committee on Banking and Currency. It was reported favorably on February 9 by Pennsylvania's George Pepper, and the Senate passed the bill without objection. The Senate-passed bill was received by the House the following day and was referred to Vestal's committee. On February 19, the House considered its bill on the unanimous consent calendar. Gernerd asked that the Senate-passed bill be substituted for the House bill (the two were identical) and when this was agreed, proffered an amendment to add the bank as the depository for the coins. Texas Congressman Thomas L. Blanton asked several questions about the bank's interest, but subsided once he was told that it would receive no compensation and that President Harding was connected with the commission. The amended bill passed the House without objection. The following day, the Senate agreed to the House amendment, and the bill was enacted by Harding's signature on February 26, 1923. The act provided for a maximum of 300,000 half dollars. ## Preparation Stoudt supplied the concept for the coins, as well as sketches. Rather than seeking a private designer to produce plaster models, the Huguenot-Walloon commission approached the Mint's chief engraver, George T. Morgan, who turned 78 in 1923. Morgan, best remembered for his 1878 design for the Morgan dollar, had been chief engraver since 1917, following forty years as an assistant, mostly under Chief Engraver Charles E. Barber. Both Barber and Morgan felt that coins should be designed by the Mint's engravers, and were sometimes hostile when private sculptors were engaged to do the work. 
Morgan's models were transmitted on October 26, 1923, to the Commission of Fine Arts, charged with rendering advisory opinions on coins under a 1921 executive order by President Harding. Morgan's work was examined by sculptor member James Earle Fraser, designer of the Buffalo nickel. On November 19, commission chairman Charles Moore wrote to Mint Director Robert J. Grant, "while the ideas intended to be expressed are excellent, the execution is bad. The lettering is poor, the heads are not well modeled and the ship is ill designed. The workmanship is below the standard of excellence attained in previous coins. The models are therefore not approved." After discussion, it was decided to allow Morgan to revise his model under Fraser's supervision. Numismatists Anthony Swiatek and Walter Breen noted, "[This] must have been doubly and trebly humiliating in that Fraser's initial was then adorning the current 5¢ nickel, while neither Barber's nor Morgan's was on any regular issue coinage then in production". On January 3, 1924, Fraser wrote to Moore that the new models had been considerably improved, and complained that Vestal had advised the Huguenot-Walloon commission to have the models made at the Mint, as he had been told by its officials that private artists made models in a relief too high to be easily coined: "It seems to me perfectly disgusting that this inane and lying criticism should go on constantly". The Fine Arts Commission approved the revised designs.

## Design

The obverse features jugate busts of French admiral Gaspard de Coligny (1519–1572) and Dutch leader William the Silent (1533–1584). Neither had any direct involvement with the voyage of the Nieuw Nederlandt, having been killed forty years or longer before it. Both were Protestant leaders of the Reformation, and according to Swiatek, "their relationship with the 1624 founding was strictly spiritual in nature," as the two are considered martyrs by Huguenots. The 1924 Report of the Director of the Mint explained that both were "leaders in the strife for civil and religious liberty". Slabaugh, noting that the coin caused some controversy after it was issued, suggested that if the obverse had shown someone connected with the settlement of New Netherland, "chances are that the coin would have borne no religious significance and its promotion by the Churches of Christ in America would have been given little notice". The March 29, 1924, edition of the Jesuit journal America contained an article by F. J. Zwierlein, who stated that the new coin "is more Protestant than the descriptions in the newspaper dispatches led us to believe". He asserted that the two men featured on the coin were not killed for their religion and were anti-Catholic: "the United States Government was duped into issuing this Huguenot half-dollar so as to make a Protestant demonstration out of the tercentenary of the colonization of the State of New York". The president of the Huguenot Society of North Carolina responded in a letter to the editor of The New York Times, "that Coligny and William the Silent were 'martyrs in the fight for religious liberty' let the truth of history attest". The men have their names below the busts on the obverse, and wear hats of their period. They gaze toward the legend IN GOD WE TRUST, the only one of the national mottos usually present on U.S. coins to appear. The name of the country arcs above their heads, while HUGUENOT HALF DOLLAR is below them. Morgan's initial "M" is on Coligny's shoulder.
The reverse depicts the ship Nieuw Nederlandt and the words, HUGUENOT – WALLOON – TERCENTENARY – FOUNDING OF NEW NETHERLAND with the years 1624 and 1924 to either side of the ship. Stoudt's sketch for the reverse was also used on the one cent denomination of the stamp set issued in conjunction with the tercentenary. Art historian Cornelius Vermeule noted that the half dollar was probably one of Morgan's last works (he died in January 1925) and that the coin "is a worthy conclusion to Morgan's long career of distinguished and rich production, marked by imagination within the conservative framework and by a generally high level of appeal". Vermeule stated that the Huguenot-Walloon half dollar showed "that the die-engravers trained in and around the Mint did have the ability to combine clear-cut designs with considerable detail." ## Production, distribution, and collecting A total of 142,080 Huguenot-Walloon half dollars were struck at the Philadelphia Mint in February and April 1924, with 80 of those pieces retained for inspection and testing by the 1925 Assay Commission. The Huguenot-Walloon commission, to boost sales, engaged as distributor the man they considered to be the most prominent numismatist in the country, Moritz Wormser, president of the American Numismatic Association (ANA). Wormser's involvement, and the fact that Stoudt was an ANA member, led numismatist John F. Jones to deem this issue "the first instance we believe, where the coin fraternity has been consulted in the issue of a commemorative half dollar". The coins were sold to the public for \$1 each through the Fifth National Bank and other outlets. Bulk sales were made to certain groups. The coins did not sell as well as expected, and 55,000 were returned, after which they were placed in circulation. Relatively few are known in worn condition, causing author and coin dealer Q. David Bowers to conclude the public picked them out of pocket change. Money from the Huguenot-Walloon half dollar was used towards a celebration in New York in May 1924, during which the National Huguenot Memorial Church on Staten Island was dedicated. There was some debate in the pages of The Numismatist, the ANA's journal, both as to whether William and Coligny should have appeared on the coin since they had nothing to do with the voyage, and whether the Churches of Christ should be allowed to sponsor a coin in view of the First Amendment's prohibition of an establishment of religion. There was controversy in the press, which criticized the inclusion of William and Coligny as irrelevant to the commemoration and as religious propaganda. This in 1925 made politically infeasible the attempts of Minnesota Representative Ole Juulson Kvale, a Lutheran pastor, to obtain a coin for the Norse-American Centennial; he instead settled for a congressionally authorized medal. Arlie R. Slabaugh, in his 1975 volume on commemoratives, noted that a uniface die trial of the reverse in brass was made. Swiatek in 2012 stated that both obverse and reverse die trials are known. The edition of R. S. Yeoman's A Guide Book of United States Coins published in 2018 lists the half dollar at between \$125 and \$650 depending on condition; a near-pristine specimen sold for \$15,275 in 2015.
2,193,582
Al-Musta'li
1,173,738,369
Fatimid caliph and imam (1074–1101)
[ "1074 births", "1101 deaths", "11th-century Fatimid caliphs", "12th-century Fatimid caliphs", "Egyptian Ismailis", "Muslims of the First Crusade", "Musta'li imams", "Sons of Fatimid caliphs", "Year of birth uncertain" ]
Abū al-Qāsim Aḥmad ibn al-Mustanṣir (Arabic: أبو القاسم أحمد بن المستنصر; 15/16 September 1074 – 12 December 1101), better known by his regnal name al-Mustaʿlī Biʾllāh (المستعلي بالله, lit. 'The One Raised Up by God'), was the ninth Fatimid caliph and the nineteenth imam of Musta'li Ismailism. Although not the eldest (and most likely the youngest) of the sons of Caliph al-Mustansir Billah, al-Musta'li became caliph through the machinations of his brother-in-law, the vizier al-Afdal Shahanshah. In response, his oldest brother and most likely candidate for their father's succession, Nizar, rose in revolt in Alexandria, but was defeated and executed. This caused a major split in the Isma'ili movement. Many communities, especially in Persia and Iraq, split off from the officially sponsored Isma'ili hierarchy and formed their own Nizari movement, holding Nizar and his descendants as the rightful imams. Throughout his reign, al-Musta'li remained subordinate to al-Afdal, who was the de facto ruler of the Fatimid Caliphate. The Caliphate's core territory in Egypt experienced a period of good government and prosperity, but the Fatimids suffered setbacks in Syria, where they were faced with the advance of the Sunni Seljuk Turks. Al-Afdal managed to recover the port city of Tyre, and even recapture Jerusalem in the turmoil caused by the arrival of the First Crusade in northern Syria. Despite Fatimid attempts to make common cause with the Crusaders against the Seljuks, the Crusaders advanced south and captured Jerusalem in July 1099, sealing their success with a major victory over the Fatimid army led by al-Afdal at the Battle of Ascalon shortly after. Al-Musta'li died in 1101 and was succeeded by his five-year-old son, al-Amir.

## Life

### Origin and background

Ahmad, the future al-Musta'li, was born in Cairo on 20 Muharram 467 AH (15 or 16 September 1074), or perhaps on 18 or 20 Muharram 468 AH (2 or 4 September 1075), to the eighth Fatimid caliph, al-Mustansir Billah (r. 1036–1094), and was most likely the youngest of all of al-Mustansir's sons. Another son of al-Mustansir had been born in 1060 with the same name—Abu'l-Qasim Ahmad—as the future al-Musta'li, and some later sources have mistaken this for al-Musta'li's birth date. It is assumed by modern scholars that this older brother had died in the meantime, allowing the name to be reused for al-Musta'li. In one source he is called Abu'l-Qasim Ahmad 'the Younger' (or possibly 'the Youngest', i.e. of all sons). At the time of his birth, the Fatimid Caliphate, established in Egypt with Cairo as its capital since 973, was undergoing a profound crisis: it had lost most of Syria to the Seljuk Turks, while in Egypt itself, clashes between the Fatimid army's Turkish and black African troops led to the breakdown of the central government and widespread famine and anarchy, leaving al-Mustansir as a powerless figurehead, virtually imprisoned in his palace and at the mercy of military warlords. In January 1074, the general Badr al-Jamali assumed the vizierate and proceeded to restore peace and order in the country and repel a Seljuk invasion, saving al-Mustansir's life and his dynasty, but at the cost of al-Mustansir delegating all his powers over the government, army, and the religious and judicial administration to him.
### Disputed succession Ahmad's oldest half-brother, Nizar ibn al-Mustansir, was apparently considered at the time as the most likely successor to their father, as had been the custom until then; indeed Nizar is often stated even by modern historians to have been the designated successor of his father. No formal designation of Nizar as heir is recorded by the time of al-Mustansir's death; both Badr al-Jamali and his son and successor al-Afdal Shahanshah favoured the accession of Ahmad. Shortly before his death, al-Mustansir consented to the wedding of Ahmad with Badr's daughter Sitt al-Mulk. Al-Mustansir died on 29 December 1094, on the day of Eid al-Ghadir, the most important Shi'a festival. According to the Mamluk-era historian al-Maqrizi, al-Afdal placed Ahmad on the throne and declared him caliph as al-Musta'li bi'llah (lit. 'The One Raised Up by God'). He then summoned three of al-Mustansir's sons—Nizar, Abdallah, and Isma'il, apparently the most prominent among the caliph's progeny—to the palace, where they were called on to do homage to their brother. All three refused, each claiming to have been designated as successor by their father. This refusal apparently took al-Afdal completely by surprise, and the brothers were even allowed to leave the palace; but while Abdallah and Isma'il sought refuge in a nearby mosque, Nizar immediately fled Cairo. To add to the confusion, when learning of al-Mustansir's death, Baraqat, the chief missionary (da'i) of Cairo (and thus head of the Isma'ili religious establishment), proclaimed Abdallah as caliph with the regnal name al-Muwaffaq ('The Blessed One'). Soon, however, al-Afdal regained control: Baraqat was arrested (and later executed), Abdallah and Isma'il were placed under surveillance and eventually acknowledged Ahmad, and a grand assembly of officials was held, which acclaimed Ahmad as imam and caliph. In 1122, Ahmad's son and successor, al-Amir (r. 1101–1130), issued a public proclamation, the al-Hidaya al-Amiriyya, to defend his father's succession, especially against the claims of Nizar's partisans. In it he put forth several arguments, such as the fact that when al-Mustansir sent his sons to the provinces to protect them from the turmoil at the capital, this was supposedly done in order of rank, those closest to Cairo being the highest in rank: Abu Abdallah was to go to Acre; Abu'l-Qasim Muhammad (father of al-Hafiz, caliph in 1131–1149) to Ascalon; Nizar to the port of Damietta; and Ahmad was not even allowed to leave the palace. Modern historians such as Paul E. Walker point out that this was a deliberately misconstrued argument, as the princes were sent away for their protection, not because of their rank. According to Walker, Abu Abdallah's dispatch to Acre, where the strong army of Badr al-Jamali was stationed, is, if anything, an indication of his high importance and of his father's desire to keep him safe. At the same time, since the reliable al-Maqrizi dates the event to 1068, the underage son left in Cairo was clearly not the future al-Musta'li, who had not been born yet, but rather his namesake older brother. Other pro-Musta'li traditions maintain that Ahmad was designated as heir by al-Mustansir at Ahmad's wedding banquet. On the occasion of the proclamation of the al-Hidaya al-Amiriyya, furthermore, a supposed full sister of Nizar was presented, hidden behind a veil, who affirmed that on his deathbed, al-Mustansir had chosen Ahmad as heir and left this as a bequest with one of Ahmad's sisters. 
Modern historians, such as Farhad Daftary, believe that these stories are most likely attempts to justify and retroactively legitimize Ahmad's accession, which they view as a de facto coup d'état by al-Afdal. According to this view, al-Afdal chose his brother-in-law because his own position was still insecure, as he had but recently succeeded his father Badr. Ahmad, who was tied to al-Afdal by virtue of his marriage and completely dependent on him for his accession, would be a compliant figurehead, unlikely to threaten al-Afdal's as yet fragile hold on power by attempting to appoint another to the vizierate.

### Nizar's revolt and the Nizari schism

After fleeing from Cairo, Nizar went to Alexandria, where he gained the support of the local governor and populace, and proclaimed himself imam and caliph with the regnal name of al-Mustafa li-Din Allah ('The Chosen One for God's Religion'). Nizar's partisans repulsed al-Afdal's first attempt to seize Alexandria, and his forces raided up to the outskirts of Cairo. Eventually, they were pushed back to Alexandria, which was placed under siege, until Nizar and his remaining followers were forced to surrender. They were taken back to Cairo, where Nizar was immured and left to die. A letter sent to the queen of Yemen, Arwa al-Sulayhi, announcing al-Musta'li's accession, gives the officially disseminated version of events. According to the letter, like the other sons of al-Mustansir, Nizar had at first accepted al-Musta'li's imamate and paid him homage, before being moved by greed and envy to revolt. The events up to the capitulation of Alexandria are reported in some detail, but nothing is mentioned of Nizar's fate. These events caused a bitter and permanent schism in the Isma'ili movement, one that lasts to the present day. Although al-Musta'li was recognized by the Fatimid establishment and the official Isma'ili missionary organization (the da'wa), as well as the Isma'ili communities dependent on it in Egypt, Syria, and Yemen, most of the Isma'ili communities in the wider Middle East, and especially Persia and Iraq, rejected his accession. Whether out of conviction or as a convenient excuse, the Persian Isma'ilis under Hassan-i Sabbah swiftly recognized Nizar as the rightful imam, severed relations with Cairo, and set up their own independent hierarchy (the da'wa jadida, lit. 'new calling'). This marked the permanent split of the Isma'ili movement into the rival branches of Musta'li Isma'ilism and Nizari Isma'ilism. At least one of Nizar's sons, al-Husayn, fled in 1095 with other members of the dynasty (including three of al-Mustansir's other sons, Muhammad, Isma'il, and Tahir) from Egypt to the Maghreb, where they formed a sort of opposition in exile to the new regime in Cairo. As late as 1162, descendants, or purported descendants, of Nizar appeared to challenge the Fatimid caliphs, and were able to attract considerable followings based on lingering loyalist sentiments among the population.

### Reign

Throughout his reign, al-Musta'li was subordinate to al-Afdal. According to the 13th-century Egyptian historian Ibn Muyassar, "[al-Musta'li] had no noteworthy life, since al-Afdal directed the affairs of state like a sultan or king, not like a vizier". Al-Afdal even supplanted the caliph in public ceremonies, keeping al-Musta'li out of sight, confined to the palace. Al-Afdal was a capable administrator, and his good governance ensured the continued prosperity of Egypt throughout the reign.
Al-Musta'li is praised for his upright character by the contemporary Sunni historian Ibn al-Qalanisi, though other medieval historians stress his fanatical devotion to Shi'ism; it appears that the Isma'ili da'wa was very active during his reign. The 15th-century Yemeni pro-Musta'li religious leader and historian Idris Imad al-Din preserves much information about his dealings with the Isma'ili da'wa in Yemen, particularly with Queen Arwa and the local da'i, Yahya ibn Lamak ibn Malik al-Hammadi. In foreign affairs, the Fatimids faced an increasing rivalry with the Sunni Seljuks and the Seljuk-backed Abbasid caliph, al-Mustazhir: the Seljuks expanded their rule in Syria up to Gaza and, in 1095, the Abbasid caliph published a letter proclaiming the Fatimids' claims of Alid descent to be fraudulent. The Fatimids achieved some successes, with the voluntary submission of Apamea in northern Syria in 1096, followed by the recovery of Tyre in February/March 1097. Al-Afdal also tried to conclude an alliance with the Seljuk ruler of Aleppo, Ridwan, against Duqaq, the Seljuk ruler of Damascus. In early 1097, Ridwan agreed to recognize the suzerainty of al-Musta'li, and on 28 August had the Friday sermon read on behalf of the Fatimid caliph. This provoked such a backlash among the other Seljuk rulers of Syria that Ridwan was forced to backtrack after four weeks, and dropped al-Musta'li's name in favour of al-Mustazhir. In the same year, 1097, the First Crusade entered Syria and laid siege to Antioch. Al-Afdal sent an embassy to make contact with the Crusaders, and used the distraction provided by the Crusade to recover control of Jerusalem from its Artuqid Turkish rulers in July/August 1098. This exposed the Fatimids to accusations by Sunni sources that they had made common cause with the Crusaders; the 13th-century historian Ibn al-Athir even claims that the Fatimids invited the Crusaders to Syria to combat the Seljuks, who previously stood ready to invade Egypt itself. Believing that he had reached an agreement with the Crusaders, al-Afdal did not expect them to march south, and was caught by surprise when they moved against Jerusalem in 1099. The city was captured after a siege on 15 July 1099, and the subsequent defeat of a Fatimid army under al-Afdal's personal command at the Battle of Ascalon on 12 August 1099 confirmed the new status quo. As a result of the Crusader advance, many Syrians fled to Egypt, where a famine broke out in 1099 or 1100. Al-Musta'li died on 17 Safar 495 AH (11 or 12 December 1101), with rumours that he had been poisoned by al-Afdal. He left three infant sons, of whom the eldest, the not yet five-year-old al-Mansur, was swiftly proclaimed caliph with the regnal name al-Amir bi-Ahkam Allah.

## See also

- List of Ismaili imams
- Lists of rulers of Egypt
5,951
Cleveland
1,173,635,827
City and county seat of Cuyahoga County, Ohio, United States
[ "1796 establishments in the Northwest Territory", "Cities in Cuyahoga County, Ohio", "Cities in Ohio", "Cleveland", "County seats in Ohio", "Inland port cities and towns in Ohio", "Ohio populated places on Lake Erie", "Populated places established in 1796", "Populated places on the Underground Railroad" ]
Cleveland (/ˈkliːvlənd/ KLEEV-lənd), officially the City of Cleveland, is a city in the U.S. state of Ohio and the county seat of Cuyahoga County. Located in Northeast Ohio along the southern shore of Lake Erie, it is situated across the U.S. maritime border with Canada and lies approximately 60 miles (97 km) west of Pennsylvania. The largest city on Lake Erie and one of the major cities of the Great Lakes region, Cleveland ranks as the second-most populous city in Ohio and 54th-most populous city in the U.S. with a 2020 population of 372,624. The city anchors both the Greater Cleveland metropolitan statistical area (MSA) and the larger Cleveland–Akron–Canton combined statistical area (CSA). The CSA is the most populous in Ohio and the 17th-largest in the country, with a population of 3.63 million in 2020, while the MSA ranks as 33rd-largest at 2.18 million. Cleveland was founded in 1796 near the mouth of the Cuyahoga River by General Moses Cleaveland, after whom the city was named. Its location on both the river and the lake shore allowed it to grow into a major commercial and industrial center, attracting large numbers of immigrants and migrants. Cleveland is a port city, connected to the Atlantic Ocean via the Saint Lawrence Seaway. Its economy relies on diverse sectors that include higher education, manufacturing, financial services, healthcare, and biomedicals. The GDP for the Greater Cleveland MSA was \$135 billion in 2019. Combined with the Akron MSA, the seven-county Cleveland–Akron metropolitan economy was \$175 billion in 2019, the largest in Ohio, accounting for 25% of the state's GDP. Designated as a global city by the Globalization and World Cities Research Network, Cleveland is home to several major cultural institutions, including the Cleveland Museum of Art, the Cleveland Museum of Natural History, the Cleveland Orchestra, Playhouse Square, and the Rock and Roll Hall of Fame. Known as "The Forest City" among many other nicknames, Cleveland serves as the center of the Cleveland Metroparks nature reserve system. The city's major league professional sports teams include the Cleveland Browns, the Cleveland Cavaliers, and the Cleveland Guardians. ## History ### Establishment Cleveland was established on July 22, 1796, by surveyors of the Connecticut Land Company when they laid out Connecticut's Western Reserve into townships and a capital city. They named the new settlement "Cleaveland" after their leader, General Moses Cleaveland, a veteran of the American Revolutionary War. Cleaveland oversaw the New England–style design of the plan for what would become the modern downtown area, centered on Public Square, before returning to Connecticut, never again to visit Ohio. The town's name was often shortened to "Cleveland", even by Cleaveland's original surveyors. A common myth emerged that the spelling was altered by The Cleveland Advertiser in order to fit the name on the newspaper's masthead. The first permanent European settler in Cleveland was Lorenzo Carter, who built a cabin on the banks of the Cuyahoga River. The emerging community served as an important supply post for the U.S. during the Battle of Lake Erie in the War of 1812. Locals adopted Commodore Oliver Hazard Perry as a civic hero and erected a monument in his honor decades later. Largely through the efforts of the settlement's first lawyer Alfred Kelley, the village of Cleveland was incorporated on December 23, 1814. 
In spite of the nearby swampy lowlands and harsh winters, the town's waterfront location proved to be an advantage, giving it access to Great Lakes trade. It grew rapidly after the 1832 completion of the Ohio and Erie Canal. This key link between the Ohio River and the Great Lakes connected Cleveland to the Atlantic Ocean via the Erie Canal and Hudson River, and later via the Saint Lawrence Seaway. The town's growth continued with added railroad links. In 1836, Cleveland, then only on the eastern bank of the Cuyahoga, was officially incorporated as a city, and John W. Willey was elected its first mayor. That same year, it nearly erupted into open warfare with neighboring Ohio City over a bridge connecting the two communities. Ohio City remained an independent municipality until its annexation by Cleveland in 1854. A center of abolitionist activity, Cleveland (code-named "Station Hope") was a major stop on the Underground Railroad for escaped African American slaves en route to Canada. The city also served as an important center for the Union during the American Civil War. Decades later, in July 1894, the wartime contributions of those serving the Union from Cleveland and Cuyahoga County would be honored with the Soldiers' and Sailors' Monument on Public Square.

### Growth and expansion

The Civil War vaulted Cleveland into the first rank of American manufacturing cities and fueled unprecedented growth. Its prime geographic location as a transportation hub on the Great Lakes played an important role in its development as an industrial and commercial center. In 1870, John D. Rockefeller founded Standard Oil in Cleveland, and in 1885, he moved its headquarters to New York City, which had become a center of finance and business. The city's economic growth and industrial jobs attracted large waves of immigrants from Southern and Eastern Europe as well as Ireland. Urban growth was accompanied by significant strikes and labor unrest, as workers demanded better wages and working conditions. Between 1881 and 1886, 70 to 80% of strikes were successful in improving labor conditions in Cleveland. The Cleveland Streetcar Strike of 1899 was one of the more violent instances of labor strife in the city during this period. By 1910, Cleveland had become known as the "Sixth City" due to its status at the time as the sixth-largest U.S. city. Its businesses included automotive companies such as Peerless, Chandler, and Winton, maker of the first car driven across the U.S. Other manufacturing industries in Cleveland included steam cars produced by White and electric cars produced by Baker. The city counted major Progressive Era politicians among its leaders, most prominently the populist Mayor Tom L. Johnson, who was responsible for the development of the Cleveland Mall Plan. This period, the era of the City Beautiful movement in Cleveland architecture, saw wealthy patrons support the establishment of the city's major cultural institutions. The most prominent among them were the Cleveland Museum of Art, which opened in 1916, and the Cleveland Orchestra, established in 1918. In addition to the large immigrant population, African American migrants from the rural South arrived in Cleveland (among other Northeastern and Midwestern cities) as part of the Great Migration for jobs, constitutional rights, and relief from racial discrimination.
By 1920, the year in which the Cleveland Indians won their first World Series championship, Cleveland had grown into a densely-populated metropolis of 796,841, making it the fifth-largest city in the nation, with a foreign-born population of 30%. At this time, Cleveland saw the rise of radical labor movements, most prominently the Industrial Workers of the World (IWW), in response to the conditions of the largely immigrant and migrant workers. In 1919, the city attracted national attention amid the First Red Scare for the Cleveland May Day Riots, in which local socialist and IWW demonstrators clashed with anti-socialists. The riots occurred during the broader strike wave that swept the US that year. The Roaring Twenties saw the establishment of Cleveland's Playhouse Square, and the rise of the risqué Short Vincent. The Bal-Masque balls of the avant-garde Kokoon Arts Club scandalized the city. Jazz came to prominence in Cleveland during this period. Despite the immigration restrictions of 1921 and 1924, the city's population continued to grow throughout the decade. Prohibition first took effect in Ohio in May 1919 (although it was not well-enforced in Cleveland), became law with the Volstead Act in 1920, and was eventually repealed nationally by Congress in 1933. The ban on alcohol led to the rise of speakeasies throughout the city and organized crime gangs, such as the Mayfield Road Mob, who smuggled bootleg liquor across Lake Erie from Canada into Cleveland. The era of the flapper marked the beginning of the golden age in Downtown Cleveland retail, centered on major department stores Higbee's, Bailey's, the May Company, Taylor's, Halle's, and Sterling Lindner Davis, which collectively represented one of the largest and most fashionable shopping districts in the country, often compared to New York's Fifth Avenue. In 1929, the city hosted the first of many National Air Races, and Amelia Earhart flew to the city from Santa Monica, California in the Women's Air Derby. The Van Sweringen brothers commenced construction of the Terminal Tower skyscraper in 1926 and oversaw it to completion in 1927. By the time the building was dedicated as part of Cleveland Union Terminal in 1930, the city had a population of over 900,000. Cleveland was hit hard by the Wall Street Crash of 1929 and the subsequent Great Depression. A center of union activity, the city saw significant labor struggles in this period, including strikes by workers against Fisher Body in 1936 and against Republic Steel in 1937. The city was also aided by major federal works projects sponsored by President Franklin D. Roosevelt's New Deal. In commemoration of the centennial of Cleveland's incorporation as a city, the Great Lakes Exposition debuted in June 1936 at the city's North Coast Harbor, along the Lake Erie shore north of downtown. Conceived by Cleveland's business leaders as a way to revitalize the city during the Depression, it drew four million visitors in its first season, and seven million by the end of its second and final season in September 1937. On December 7, 1941, Imperial Japan attacked Pearl Harbor and declared war on the United States. Two of the victims of the attack were Cleveland natives – Rear Admiral Isaac C. Kidd and ensign William Halloran. The attack signaled America's entry into World War II. A major hub of the "Arsenal of Democracy", Cleveland under Mayor Frank Lausche contributed massively to the U.S. war effort as the fifth largest manufacturing center in the nation. 
During his tenure, Lausche also oversaw the establishment of the Cleveland Transit System, the predecessor to the Greater Cleveland Regional Transit Authority.

### Late 20th and early 21st centuries

After the war, Cleveland initially experienced an economic boom, and businesses declared the city to be the "best location in the nation". In 1949, the city was named an All-America City for the first time, and in 1950, its population reached 914,808. In sports, the Indians won the 1948 World Series, the Barons hockey team became champions of the American Hockey League, and the Browns dominated professional football in the 1950s. These successes, along with the track and boxing champions the city produced, led Cleveland to be declared the "City of Champions" in sports at this time. Additionally, the 1950s saw the rising popularity of a new music genre that local WJW (AM) disc jockey Alan Freed dubbed "rock and roll". However, by the 1960s, Cleveland's economy began to slow down, and residents increasingly sought new housing in the suburbs, reflecting the national trends of suburban growth following federally subsidized highways. Industrial restructuring, particularly in the railroad and steel industries, resulted in the loss of numerous jobs in Cleveland and the region, and the city suffered economically. The burning of the Cuyahoga River in June 1969 brought national attention to the issue of industrial pollution in Cleveland and served as a catalyst for the American environmental movement. Housing discrimination and redlining against African Americans led to racial unrest in Cleveland and numerous other Northern U.S. cities. In Cleveland, the Hough riots erupted from July 18 to 23, 1966, and the Glenville Shootout took place from July 23 to 25, 1968. In November 1967, Cleveland became the first major American city to elect an African American mayor, Carl B. Stokes, who served from 1968 to 1971 and played an instrumental role in restoring the Cuyahoga River. In December 1978, during the turbulent tenure of Dennis Kucinich as mayor, Cleveland became the first major American city since the Great Depression to enter into a financial default on federal loans. The national recession of the early 1980s "further eroded the city's traditional economic base." Unemployment during the period peaked in 1983, when Cleveland's rate of 13.8% was higher than the national average due to the closure of several steel production centers. The city began a gradual economic recovery under Mayor George V. Voinovich in the 1980s. The downtown area saw the construction of the Key Tower and 200 Public Square skyscrapers, as well as the development of the Gateway Sports and Entertainment Complex – consisting of Progressive Field and Rocket Mortgage FieldHouse – and the North Coast Harbor, including the Rock and Roll Hall of Fame, Cleveland Browns Stadium, and the Great Lakes Science Center. The city emerged from default in 1987. By the turn of the 21st century, Cleveland succeeded in developing a more diversified economy and gained a national reputation as a center for healthcare and the arts. The city's downtown and several neighborhoods have experienced significant population growth since 2010, while overall population decline has slowed. Nevertheless, challenges remain for the city, with economic development of neighborhoods, improvement of city schools, and continued efforts to tackle poverty and urban blight being top municipal priorities.
## Geography

According to the United States Census Bureau, the city has a total area of 82.47 square miles (213.60 km<sup>2</sup>), of which 77.70 square miles (201.24 km<sup>2</sup>) is land and 4.77 square miles (12.35 km<sup>2</sup>) is water. The shore of Lake Erie is 569 feet (173 m) above sea level; however, the city lies on a series of irregular bluffs running roughly parallel to the lake. In Cleveland these bluffs are cut principally by the Cuyahoga River, Big Creek, and Euclid Creek. The land rises quickly from the lake shore elevation of 569 feet. Public Square, less than one mile (1.6 km) inland, sits at an elevation of 650 feet (198 m), and Hopkins Airport, 5 miles (8 km) inland from the lake, is at an elevation of 791 feet (241 m). Cleveland borders several inner-ring and streetcar suburbs. To the west, it borders Lakewood, Rocky River, and Fairview Park, and to the east, it borders Shaker Heights, Cleveland Heights, South Euclid, and East Cleveland. To the southwest, it borders Linndale, Brooklyn, Parma, and Brook Park. To the south, the city borders Newburgh Heights, Cuyahoga Heights, and Brooklyn Heights, and to the southeast, it borders Warrensville Heights, Maple Heights, and Garfield Heights. To the northeast, along the shore of Lake Erie, Cleveland borders Bratenahl and Euclid.

### Cityscapes

### Architecture

Cleveland's downtown architecture is diverse. Many of the city's government and civic buildings, including City Hall, the Cuyahoga County Courthouse, the Cleveland Public Library, and Public Auditorium, are clustered around the open Cleveland Mall and share a common neoclassical architecture. They were built in the early 20th century as the result of the 1903 Group Plan. They constitute one of the most complete examples of City Beautiful design in the United States. Completed in 1927 and dedicated in 1930 as part of the Cleveland Union Terminal complex, the Terminal Tower was the tallest building in North America outside New York City until 1964 and the tallest in the city until 1991. It is a prototypical Beaux-Arts skyscraper. The two newer skyscrapers on Public Square, Key Tower (currently the tallest building in Ohio) and 200 Public Square, combine elements of Art Deco architecture with postmodern designs. Other Cleveland architectural landmarks include the Cleveland Trust Company Building, completed in 1907 and renovated in 2015 as a downtown Heinen's supermarket, and the Cleveland Arcade (sometimes called the Old Arcade), a five-story arcade built in 1890 and renovated in 2001 as a Hyatt Regency Hotel. Running east from Public Square through University Circle is Euclid Avenue, which was known for its prestige and elegance as a residential street. In the 19th century, writer Bayard Taylor described it as "the most beautiful street in the world". Known as "Millionaires' Row", Euclid Avenue was world-renowned as the home of such major figures as John D. Rockefeller, Mark Hanna, and John Hay. Cleveland's historic ecclesiastical architecture includes the Presbyterian Old Stone Church in downtown Cleveland and the onion-domed St. Theodosius Russian Orthodox Cathedral in Tremont, along with myriad ethnically inspired Roman Catholic churches.

### Parks and nature

Known locally as the "Emerald Necklace", the Olmsted-inspired Cleveland Metroparks encircle Cleveland and Cuyahoga County. The city proper encompasses the Metroparks' Brookside and Lakefront Reservations, as well as significant parts of the Rocky River, Washington, and Euclid Creek Reservations.
The Lakefront Reservation, which provides public access to Lake Erie, consists of four parks: Edgewater Park, Whiskey Island–Wendy Park, East 55th Street Marina, and Gordon Park. Three more parks fall under the jurisdiction of the Euclid Creek Reservation: Euclid Beach, Villa Angela, and Wildwood Marina. Further south, bike and hiking trails in the Brecksville and Bedford Reservations, along with Garfield Park, provide access to trails in the Cuyahoga Valley National Park. Also included in the Metroparks system is the Cleveland Metroparks Zoo, established in 1882. Located in Big Creek Valley, the zoo has one of the largest collections of primates in North America. In addition to the Metroparks, the Cleveland Public Parks District oversees the city's neighborhood parks, the largest of which is the historic Rockefeller Park. The latter is notable for its late 19th century landmark bridges, the Rockefeller Park Greenhouse, and the Cleveland Cultural Gardens, which celebrate the city's ethnic diversity. Just outside of Rockefeller Park, the Cleveland Botanical Garden in University Circle, established in 1930, is the oldest civic garden center in the nation. In addition, the Greater Cleveland Aquarium, located in the historic FirstEnergy Powerhouse in the Flats, is the only independent, free-standing aquarium in the state of Ohio. ### Neighborhoods The Cleveland City Planning Commission has officially designated 34 neighborhoods in Cleveland. Centered on Public Square, Downtown Cleveland is the city's central business district, encompassing a wide range of subdistricts, such as the Nine-Twelve District, the Campus District, the Civic Center, East 4th Street, and Playhouse Square. It also historically included the lively Short Vincent entertainment district, which attracted both notorious mobsters like Shondor Birns and visiting celebrities like Frank Sinatra and Lauren Bacall. Mixed-use areas, such as the Warehouse District and the Superior Arts District, are occupied by industrial and office buildings as well as restaurants, cafes, and bars. The number of condominiums, lofts, and apartments has been on the increase since 2000 and especially 2010, reflecting downtown's growing population. Clevelanders geographically define themselves in terms of whether they live on the east or west side of the Cuyahoga River. The East Side includes the neighborhoods of Buckeye–Shaker, Buckeye–Woodhill, Central, Collinwood (including Nottingham), Euclid–Green, Fairfax, Glenville, Goodrich–Kirtland Park (including Asiatown), Hough, Kinsman, Lee–Miles (including Lee–Harvard and Lee–Seville), Mount Pleasant, St. Clair–Superior, Union–Miles Park, and University Circle (including Little Italy). The West Side includes the neighborhoods of Brooklyn Centre, Clark–Fulton, Cudell, Detroit–Shoreway, Edgewater, Ohio City, Old Brooklyn, Stockyards, Tremont (including Duck Island), West Boulevard, and the four neighborhoods colloquially known as West Park: Kamm's Corners, Jefferson, Bellaire–Puritas, and Hopkins. The Cuyahoga Valley neighborhood (including the Flats) is situated between the East and West Sides, while Broadway–Slavic Village is sometimes referred to as the South Side. Several neighborhoods have begun to attract the return of the middle class that left the city for the suburbs in the 1960s and 1970s. These neighborhoods are on both the West Side (Ohio City, Tremont, Detroit–Shoreway, and Edgewater) and the East Side (Collinwood, Hough, Fairfax, and Little Italy). 
Much of the growth has been spurred on by attracting creative class members, which has facilitated new residential development and the transformation of old industrial buildings into loft spaces for artists.

### Climate

Typical of the Great Lakes region, Cleveland has a continental climate with four distinct seasons and lies in the humid continental (Köppen Dfa) zone. The climate is transitional toward the humid subtropical (Cfa) climate. Summers are hot and humid, while winters are cold and snowy. East of the mouth of the Cuyahoga, the land elevation rises rapidly to the south. Together with the prevailing winds off Lake Erie, this feature is the principal contributor to the lake-effect snow that is typical in Cleveland (especially on the city's East Side) from mid-November until the surface of the lake freezes, usually in late January or early February. The lake effect causes a relative differential in geographical snowfall totals across the city. On the city's far West Side, the Hopkins neighborhood has reached 100 inches (254 cm) of snowfall in a season only three times since record-keeping for snow began in 1893. By contrast, seasonal totals approaching or exceeding 100 inches (254 cm) are not uncommon as the city ascends into the Heights on the east, where the region known as the 'Snow Belt' begins. Extending from the city's East Side and its suburbs, the Snow Belt reaches up the Lake Erie shore as far as Buffalo. The all-time record high in Cleveland of 104 °F (40 °C) was established on June 25, 1988, and the all-time record low of −20 °F (−29 °C) was set on January 19, 1994. On average, July is the warmest month with a mean temperature of 74.5 °F (23.6 °C), and January, with a mean temperature of 29.1 °F (−1.6 °C), is the coldest. Normal yearly precipitation based on the 30-year average from 1991 to 2020 is 41.03 inches (1,042 mm). The least precipitation occurs on the western side and directly along the lake, and the most occurs in the eastern suburbs. Parts of Geauga County to the east receive over 44 inches (1,100 mm) of liquid precipitation annually.

## Demographics

At the 2020 census, there were 372,624 people and 170,549 households in Cleveland. The population density was 4,901.51 inhabitants per square mile (1,892.5/km<sup>2</sup>). The median household income was \$30,907 and the per capita income was \$21,223. 32.7% of the population was living below the poverty line. Of the city's population over the age of 25, 17.5% held a bachelor's degree or higher, and 80.8% had a high school diploma or equivalent. The median age was 36.6 years. As of 2020, the racial composition of the city was 47.5% African American, 32.1% non-Hispanic white, 13.1% Hispanic or Latino, 2.8% Asian and Pacific Islander, 0.2% Native American, and 3.8% from two or more races. 85.3% of Clevelanders age 5 and older spoke English at home as a primary language. 14.7% spoke a foreign language, including Spanish, Arabic, Chinese, Albanian, and various Slavic languages (Russian, Polish, Serbian, Croatian, and Slovene). The city's spoken accent is an advanced form of Inland Northern American English, similar to other Great Lakes cities, but distinctive from the rest of Ohio.

### Ethnicity

In the 19th and early 20th centuries, Cleveland saw a massive influx of immigrants from Ireland, Italy, and the Austro-Hungarian, German, Russian, and Ottoman empires, most of whom were attracted by manufacturing jobs.
As a result, Cleveland and Cuyahoga County today have substantial communities of Irish (especially in West Park), Italians (especially in Little Italy), Germans, and several Central-Eastern European ethnicities, including Czechs, Hungarians, Lithuanians, Poles, Romanians, Russians, Rusyns, Slovaks, Ukrainians, and ex-Yugoslav groups, such as Slovenes, Croats and Serbs. The presence of Hungarians within Cleveland proper was, at one time, so great that the city boasted the highest concentration of Hungarians in the world outside of Budapest. Cleveland has a long-established Jewish community, historically centered on the East Side neighborhoods of Glenville and Kinsman, but now mostly concentrated in East Side suburbs such as Cleveland Heights and Beachwood, location of the Maltz Museum of Jewish Heritage. The availability of jobs attracted African Americans from the South. Between 1910 and 1970, the black population of Cleveland, largely concentrated on the city's East Side, increased significantly as a result of the First and Second Great Migrations. Cleveland's Latino community consists primarily of Puerto Ricans, as well as smaller numbers of immigrants from Mexico, Cuba, the Dominican Republic, South and Central America, and Spain. The city's Asian community, centered on historical Asiatown, consists of Chinese, Koreans, Vietnamese, and other groups. Additionally, the city and the county have significant communities of Albanians, Arabs (especially Lebanese, Syrians, and Palestinians), Armenians, French, Greeks, Iranians, Scots, Turks, and West Indians. A 2020 analysis found Cleveland to be the most ethnically and racially diverse major city in Ohio. ### Religion The influx of immigrants in the 19th and early 20th centuries drastically transformed Cleveland's religious landscape. From a homogeneous settlement of New England Protestants, it evolved into a city with a diverse religious composition. The predominant faith among Clevelanders today is Christianity (Catholic, Protestant, and Eastern and Oriental Orthodox), with Jewish, Muslim, Hindu, and Buddhist minorities. ### Immigration Within Cleveland, the neighborhoods with the highest foreign-born populations are Asiatown/Goodrich–Kirtland Park (32.7%), Clark–Fulton (26.7%), West Boulevard (18.5%), Brooklyn Centre (17.3%), Downtown (17.2%), University Circle (15.9%, with 20% in Little Italy), and Jefferson (14.3%). Recent waves of immigration have brought new groups to Cleveland, including Ethiopians and South Asians, as well as immigrants from Russia and the former USSR, Southeast Europe (especially Albania), the Middle East, East Asia, and Latin America. In the 2010s, the immigrant population of Cleveland and Cuyahoga County began to see significant growth, becoming a major center for immigration in the Great Lakes region. A 2019 study found Cleveland to be the city with the shortest average processing time in the nation for immigrants to become U.S. citizens. The city's annual One World Day in Rockefeller Park includes a naturalization ceremony of new immigrants. ## Economy Cleveland's location on the Cuyahoga River and Lake Erie has been key to its growth as a major commercial center. Steel and many other manufactured goods emerged as leading industries. The city has since diversified its economy in addition to its manufacturing sector. Established in 1914, the Federal Reserve Bank of Cleveland is one of 12 U.S. Federal Reserve Banks. 
Its downtown building, located on East 6th Street and Superior Avenue, was completed in 1923 by the Cleveland architectural firm Walker and Weeks. The headquarters of the Federal Reserve System's Fourth District, the bank employs 1,000 people and maintains branch offices in Cincinnati and Pittsburgh. The president and CEO is Loretta Mester. Cleveland and Cuyahoga County are home to the corporate headquarters of Fortune 500 companies Cleveland-Cliffs, Progressive, Sherwin-Williams, Parker-Hannifin, KeyCorp, and Travel Centers of America. Other large companies based in the city and the county include Aleris, American Greetings, Applied Industrial Technologies, Eaton, Forest City Realty Trust, Heinen's Fine Foods, Hyster-Yale Materials Handling, Lincoln Electric, Medical Mutual of Ohio, Moen Incorporated, NACCO Industries, Nordson Corporation, OM Group, Swagelok, Things Remembered, Third Federal S&L, TransDigm Group, and Vitamix. NASA maintains a facility in Cleveland, the Glenn Research Center. Jones Day, one of the largest law firms in the U.S., was founded in Cleveland in 1893. ### Healthcare Healthcare plays a major role in Cleveland's economy. The city's "Big Three" hospital systems are the Cleveland Clinic, University Hospitals, and MetroHealth. The Cleveland Clinic is the largest private employer in the city of Cleveland and the state of Ohio, with a workforce of over 55,000 as of 2022. It carries the distinction as being among America's best hospitals with top ratings published in U.S. News & World Report. The clinic is led by Croatian-born president and CEO Tomislav Mihaljevic and it is affiliated with Case Western Reserve University School of Medicine. University Hospitals includes the University Hospitals Cleveland Medical Center and its Rainbow Babies & Children's Hospital. Cliff Megerian serves as that system's CEO. MetroHealth on the city's west side is led by president and CEO Airica Steed. Formerly known as City Hospital, it operates one of two Level I trauma centers in the city, and has various locations throughout Greater Cleveland. In 2013, Cleveland's Global Center for Health Innovation opened with 235,000 square feet (21,800 m<sup>2</sup>) of display space for healthcare companies across the world. To take advantage of the proximity of universities and other medical centers in Cleveland, the Veterans Administration moved the region's VA hospital from suburban Brecksville to a new facility in University Circle. ## Education ### Primary and secondary education The Cleveland Metropolitan School District is the second-largest K–12 district in the state of Ohio. It is the only district in Ohio under the direct control of the mayor, who appoints a school board. Approximately 1 square mile (2.6 km<sup>2</sup>) of Cleveland's Buckeye–Shaker neighborhood is part of the Shaker Heights City School District. The area, which has been a part of the Shaker school district since the 1920s, permits these Cleveland residents to pay the same school taxes as the Shaker residents, as well as vote in the Shaker school board elections. There are several private and parochial schools in Cleveland. These include Benedictine High School, Cleveland Central Catholic High School, Eleanor Gerson School, St. Ignatius High School, St. Joseph Academy, Villa Angela-St. Joseph High School, and St. Martin de Porres. ### Higher education Cleveland is home to a number of colleges and universities. 
Most prominent among them is Case Western Reserve University (CWRU), a widely recognized research and teaching institution in University Circle. A private university with several prominent graduate programs, CWRU was ranked 44th in the nation in 2023 by U.S. News & World Report. University Circle also contains the Cleveland Institute of Art and the Cleveland Institute of Music. Cleveland State University (CSU), based in Downtown Cleveland, is the city's public four-year university. In addition to CSU, downtown hosts the metropolitan campus of Cuyahoga Community College, the county's two-year higher education institution. Ohio Technical College is also based in Cleveland. Cleveland's suburban universities and colleges include Baldwin Wallace University in Berea, John Carroll University in University Heights, Ursuline College in Pepper Pike, and Notre Dame College in South Euclid. ### Public library system Established in 1869, the Cleveland Public Library is one of the largest public libraries in the nation with a collection of over 10 million materials in 2021. Its John G. White Special Collection includes the largest chess library in the world, as well as a significant collection of folklore and rare books on the Middle East and Eurasia. Under head librarian William Howard Brett, the library adopted an "open shelf" philosophy, which allowed patrons open access to the library's bookstacks. Brett's successor, Linda Eastman, became the first woman ever to lead a major library system in the world. She oversaw the construction of the library's main building on Superior Avenue, designed by Walker and Weeks and opened on May 6, 1925. David Lloyd George, British Prime Minister from 1916 to 1922, laid the cornerstone for the building. The Louis Stokes Wing addition was completed in April 1997. Between 1904 and 1920, 15 libraries built with funds from Andrew Carnegie were opened in the city. Known as the "People's University", the library presently maintains 27 branches. It serves as the headquarters for the CLEVNET library consortium, which includes 47 public library systems in Northeast Ohio. ## Arts and culture ### Theater and performing arts Cleveland's Playhouse Square is the second largest performing arts center in the United States behind New York City's Lincoln Center. It includes the State, Palace, Allen, Hanna, and Ohio theaters. The theaters host Broadway musicals, special concerts, speaking engagements, and other events throughout the year. Playhouse Square's resident performing arts companies include Cleveland Ballet, the Cleveland International Film Festival, the Cleveland Play House, Cleveland State University Department of Theatre and Dance, DANCECleveland, the Great Lakes Theater Festival, and the Tri-C Jazz Fest. A city with strong traditions in theater and vaudeville, Cleveland has produced many renowned performers, most prominently comedian Bob Hope. Outside Playhouse Square is Karamu House, the oldest African American theater in the nation, established in 1915. On the West Side, the Gordon Square Arts District in the Detroit–Shoreway neighborhood is the location of the Capitol Theatre, the Near West Theatre, and an Off-Off-Broadway playhouse, the Cleveland Public Theatre. The Dobama Theatre and the Beck Center for the Arts are based in Cleveland's streetcar suburbs of Cleveland Heights and Lakewood respectively. ### Music The Cleveland Orchestra is widely considered one of the world's finest orchestras, and often referred to as the finest in the nation. 
It is one of the "Big Five" major orchestras in the United States. The orchestra plays at Severance Hall in University Circle during the winter and at Blossom Music Center in Cuyahoga Falls during the summer. The city is also home to the Cleveland Pops Orchestra, the Cleveland Youth Orchestra, the Contemporary Youth Orchestra, the Cleveland Youth Wind Symphony, and the biennial Cleveland International Piano Competition which has, in the past, often featured the Cleveland Orchestra. One Playhouse Square, now the headquarters for Cleveland's public broadcasters, was initially used as the broadcast studios of WJW (AM), where disc jockey Alan Freed first popularized the term "rock and roll". Beginning in the 1950s, Cleveland gained a strong reputation as a key breakout market for rock music. Its popularity in the city was so great that Billy Bass, the program director at the WMMS radio station, referred to Cleveland as "The Rock and Roll Capital of the World". The Cleveland Agora Theatre and Ballroom has served as a major venue for rock concerts in the city since the 1960s. From 1974 through 1980, the city hosted the World Series of Rock at Cleveland Municipal Stadium. Jazz and R&B have a long history in Cleveland. Many major figures in jazz performed in the city, including Louis Armstrong, Cab Calloway, Duke Ellington, Ella Fitzgerald, Dizzy Gillespie, and Billie Holiday. Legendary pianist Art Tatum regularly played in Cleveland clubs in the 1930s, and gypsy jazz guitarist Django Reinhardt gave his U.S. debut performance in Cleveland in 1946. Prominent jazz artist Noble Sissle was a graduate of Cleveland Central High School, and Artie Shaw worked and performed in Cleveland early in his career. The Tri-C Jazz Fest has been held annually in Cleveland at Playhouse Square since 1980, and the Cleveland Jazz Orchestra was established in 1984. Joe Siebert's documentary film The Sax Man on the life of Cleveland street saxophonist Maurice Reedus Jr. was released in 2014. The city has a history of polka music being popular both past and present and is the location of the Polka Hall of Fame. There is even a subgenre called Cleveland-style polka, named after the city. The music's popularity is due in part to the success of Frankie Yankovic, a Cleveland native who was considered "America's Polka King". There is a significant hip hop music scene in Cleveland. In 1997, the Cleveland hip hop group Bone Thugs-n-Harmony won a Grammy for their song "Tha Crossroads". ### Film and television The first film shot in Cleveland was in 1897 by the company of Ohioan Thomas Edison. Before Hollywood became the center for American cinema, filmmaker Samuel R. Brodsky and playwright Robert H. McLaughlin operated a film studio at the Andrews mansion on Euclid Avenue (now the WEWS-TV studio). There they produced major silent-era features, such as Dangerous Toys (1921), which are now considered lost. Brodsky also directed the weekly Plain Dealer Screen Magazine that ran in theaters in Cleveland and Ohio from 1917 to 1924. In addition, Cleveland hosted over a dozen sponsored film studios, including Cinécraft Productions, which still operates in Ohio City. In the "talkie" era, Cleveland featured in several major studio films, such as Howard Hawks's Ceiling Zero (1936) with James Cagney and Pat O'Brien. Michael Curtiz's pre-Code classic Goodbye Again (1933) with Warren William and Joan Blondell was set in Cleveland, and players from the 1948 Cleveland Indians appeared in The Kid from Cleveland (1949). 
Billy Wilder's The Fortune Cookie (1966) was set and filmed in the city and marked the first onscreen pairing of Walter Matthau and Jack Lemmon. Labor struggles in Cleveland were depicted in Native Land (1942), narrated by Paul Robeson, and in Norman Jewison's F.I.S.T. (1978) with Sylvester Stallone. Clevelander Jim Jarmusch's Stranger Than Paradise (1984) – a deadpan comedy about two New Yorkers who travel to Florida by way of Cleveland – was a favorite of the Cannes Film Festival. Major League (1989) reflected the perennial struggles of the Cleveland Indians, while American Splendor (2003) reflected the life of Cleveland graphic novelist Harvey Pekar. Kill the Irishman (2011) depicted the 1970s turf war between Danny Greene and the Cleveland crime family. Cleveland has doubled for other locations in films. The wedding and reception scenes in The Deer Hunter (1978), while set in the Pittsburgh suburb of Clairton, were shot in Cleveland's Tremont neighborhood. A Christmas Story (1983) was set in Indiana, but drew many external shots from Cleveland. The opening shots of Air Force One (1997) were filmed in and above Severance Hall, and Downtown Cleveland doubled for New York in Spider-Man 3 (2007), The Avengers (2012), and The Fate of the Furious (2017). More recently, Judas and the Black Messiah (2021), though set in Chicago, was filmed in Cleveland. Future productions are handled by the Greater Cleveland Film Commission at the Leader Building on Superior Avenue. In television, the city is the setting for the popular network sitcom The Drew Carey Show, starring Cleveland native Drew Carey. Hot in Cleveland, a comedy that aired on TV Land, premiered on June 16, 2010, and ran for six seasons until its finale on June 3, 2015. Cleveland Hustles, the CNBC reality show co-created by LeBron James, was filmed in the city. ### Literature Cleveland has a thriving literary and poetry community, with regular poetry readings at bookstores, coffee shops, and various other venues. In 1925, Russian Futurist poet Vladimir Mayakovsky came to Cleveland and gave a poetry recitation to the city's ethnic working class, as part of his trip to America. The Cleveland State University Poetry Center serves as an academic center for poetry in the city. Langston Hughes, preeminent poet of the Harlem Renaissance and child of an itinerant couple, lived in Cleveland as a teenager and attended Central High School in Cleveland in the 1910s. At Central High, the young writer was taught by Helen Maria Chesnutt, daughter of Cleveland-born African American novelist Charles W. Chesnutt. Hughes authored some of his earliest poems, plays, and short stories in Cleveland and contributed to the school newspaper. The African American avant-garde poet Russell Atkins lived in the city as well. The American modernist poet Hart Crane was born in nearby Garrettsville, Ohio in 1899. His adolescence was divided between Cleveland and Akron before he moved to New York City in 1916. Aside from factory work during World War I, he served as a reporter to The Plain Dealer for a short period, before achieving recognition in the Modernist literary scene. On the Case Western Reserve University campus, a statue of Crane, designed by sculptor William McVey, stands behind the Kelvin Smith Library. Cleveland was the home of Joe Shuster and Jerry Siegel, who created the comic book character Superman in 1932. Both attended Glenville High School, and their early collaborations resulted in the creation of "The Man of Steel". 
Harlan Ellison, noted author of speculative fiction, was born in Cleveland in 1934; his family subsequently moved to nearby Painesville, though Ellison moved back to Cleveland in 1949. As a young man, he published a series of short stories appearing in the Cleveland News, and performed in a number of productions for the Cleveland Play House. Cleveland is the site of the Anisfield-Wolf Book Award, established by poet and philanthropist Edith Anisfield Wolf in 1935, which recognizes books that have made important contributions to the understanding of racism and human diversity. Presented by the Cleveland Foundation, it remains the only American book prize focusing on works that address racism and diversity. ### Museums and galleries Cleveland has two main art museums. The Cleveland Museum of Art is a major American art museum, with a collection that includes more than 60,000 works of art ranging from ancient masterpieces to contemporary pieces. The Museum of Contemporary Art Cleveland showcases established and emerging artists, particularly from the Cleveland area, through hosting and producing temporary exhibitions. Both museums offer free admission to visitors, with the Cleveland Museum of Art declaring their museum free and open "for the benefit of all the people forever." The two museums are part of Cleveland's University Circle, a 550-acre (2.2 km<sup>2</sup>) concentration of cultural, educational, and medical institutions located 5 miles (8.0 km) east of downtown. In addition to the art museums, the neighborhood includes the Cleveland Botanical Garden, Case Western Reserve University, University Hospitals, Severance Hall, the Cleveland Museum of Natural History, and the Western Reserve Historical Society. Also located at University Circle is the Cleveland Cinematheque at the Cleveland Institute of Art, hailed by The New York Times as one of the country's best alternative movie theaters. The I. M. Pei-designed Rock and Roll Hall of Fame is located on Cleveland's Lake Erie waterfront at North Coast Harbor downtown. Neighboring attractions include Cleveland Browns Stadium, the Great Lakes Science Center, the Steamship Mather Museum, the International Women's Air & Space Museum, and the USS Cod, a World War II submarine. Designed by architect Levi T. Scofield, the Soldiers' and Sailors' Monument at Public Square is Cleveland's major Civil War memorial and a major attraction in the city. Other city attractions include Grays Armory and the Children's Museum of Cleveland. A Cleveland holiday attraction, especially for fans of Jean Shepherd's A Christmas Story, is the Christmas Story House and Museum in Tremont. ### Annual events Cleveland hosts the WinterLand holiday display lighting festival annually at Public Square. The Cleveland International Film Festival has been held since 1977, and it drew a record 106,000 people in 2017. The Cleveland National Air Show, an indirect successor to the National Air Races, has been held at the city's Burke Lakefront Airport since 1964. Sponsored by the Great Lakes Brewing Company, the Great Lakes Burning River Fest, a two-night music and beer festival at Whiskey Island, has been held since 2001. Many ethnic festivals are held in Cleveland throughout the year. These include the annual Feast of the Assumption in Little Italy, Russian Maslenitsa in Rockefeller Park, the Puerto Rican Parade and Cultural Festival in Clark–Fulton, the Cleveland Asian Festival in Asiatown, the Tremont Greek Fest, and the St. Mary Romanian Festival in West Park. 
Cleveland also hosts annual Polish Dyngus Day and Slovene Kurentovanje celebrations. The city's annual Saint Patrick's Day parade brings hundreds of thousands to the streets of Downtown. The Cleveland Thyagaraja Festival held each spring at Cleveland State University is the largest Indian classical music and dance festival in the world outside of India. Since 1946, the city has annually marked One World Day in the Cleveland Cultural Gardens in Rockefeller Park, celebrating all of its ethnic communities. ### Cuisine Cleveland's mosaic of ethnic communities and their various culinary traditions have long played an important role in defining the city's cuisine. Local mainstays include an abundance of Slavic, Hungarian, and Central-Eastern European contributions, such as kielbasa, stuffed cabbage, pierogies, goulash, and chicken paprikash. German, Irish, Jewish, and Italian American cuisines are also prominent in Cleveland, as are Lebanese, Greek, Chinese, Puerto Rican, Mexican, and numerous other ethnic cuisines. Vendors at the West Side Market in Ohio City offer many ethnic foods for sale. In addition, the city boasts a vibrant barbecue and soul food scene. Corned beef is plentiful in Cleveland, and the nationally renowned Slyman's Deli on the near East Side is a perennial winner of various accolades for its celebrated sandwich. Another famed sandwich, the Polish Boy, is a popular street food and Cleveland original frequently sold at downtown hot dog carts and stadium concession stands. With its blue-collar roots well intact, and plenty of Lake Erie perch available, the tradition of Friday night fish fries remains alive and thriving in Cleveland, particularly in ethnic parish-based settings, especially during the season of Lent. For dessert, the Cleveland Cassata Cake is a unique treat invented in the local Italian community and served in Italian establishments throughout the city. Another popular dessert, the locally crafted Russian Tea Biscuit, is common in many Jewish bakeries in Cleveland. Cleveland is noted in the world of celebrity food culture. Famous local figures include chef Michael Symon and food writer Michael Ruhlman, both of whom achieved local and national attention for their contributions to the culinary world. Symon gained the national spotlight in 2007 when he was named "The Next Iron Chef" on the Food Network. That same year, Ruhlman collaborated with Anthony Bourdain on an episode of Anthony Bourdain: No Reservations focusing on Cleveland's restaurant scene. In 2023, Travel + Leisure named Cleveland the 7th best food city in the nation. ### Breweries Ohio is the fifth-largest beer-producing state in the United States, and its largest brewery is Cleveland's Great Lakes Brewing Company. Cleveland has had a long history of brewing, tied to many of its ethnic immigrants, and has reemerged as a regional leader in production. Dozens of breweries exist in the city limits, including large producers such as Market Garden Brewery and Platform Beer Company. Breweries can be found throughout the city, but the highest concentration is in the Ohio City neighborhood. Cleveland hosts expansions from other countries as well, including the Scottish BrewDog and the German Hofbrauhaus. ## Sports ### Overview Cleveland's major professional sports teams are the Cleveland Guardians (Major League Baseball), the Cleveland Browns (National Football League), and the Cleveland Cavaliers (National Basketball Association). 
Other professional teams include the Cleveland Monsters (American Hockey League), the Cleveland Charge (NBA G League), the Cleveland Crunch (Major League Indoor Soccer), Cleveland SC (National Premier Soccer League), and the Cleveland Fusion (Women's Football Alliance). Local sporting venues include Progressive Field, Cleveland Browns Stadium, Rocket Mortgage FieldHouse, the Wolstein Center, and the I-X Center. #### Teams The Cleveland Guardians – known as the Indians from 1915 to 2021 – won the World Series in 1920 and 1948. They also won the American League pennant, making the World Series in the 1954, 1995, 1997, and 2016 seasons. Between 1995 and 2001, Jacobs Field (now known as Progressive Field) sold out 455 consecutive games, a Major League Baseball record until it was broken in 2008. Historically, the Browns have been among the most successful franchises in American football history, winning eight titles during a short period of time – 1946, 1947, 1948, 1949, 1950, 1954, 1955, and 1964. The Browns have never played in a Super Bowl, coming close five times by reaching the NFL/AFC Championship Game in 1968, 1969, 1986, 1987, and 1989. Former owner Art Modell's relocation of the Browns to Baltimore after the 1995 season, creating the Ravens, caused tremendous heartbreak and resentment among local fans. Cleveland mayor Michael R. White worked with the NFL and Commissioner Paul Tagliabue to bring back the Browns beginning in the 1999 season, retaining all team history. In Cleveland's earlier football history, the Cleveland Bulldogs won the NFL Championship in 1924, and the Cleveland Rams won the NFL Championship in 1945 before relocating to Los Angeles. The Cavaliers won the Eastern Conference in 2007, 2015, 2016, 2017, and 2018, falling in the NBA Finals to the San Antonio Spurs in 2007 and to the Golden State Warriors in 2015, 2017, and 2018. In 2016, the Cavs came back from a 3–1 Finals deficit to defeat the Golden State Warriors and win their first NBA Championship. Afterwards, over 1.3 million people attended a parade held in the Cavs' honor on June 22, 2016, in Downtown Cleveland. Previously, the Cleveland Rosenblums dominated the original American Basketball League, and the Cleveland Pipers, owned by George Steinbrenner, won the American Basketball League championship in 1962. The Cleveland Monsters of the American Hockey League won the 2016 Calder Cup. They were the first Cleveland AHL team to do so since the 1964 Barons. Collegiately, the NCAA Division I Cleveland State Vikings field 19 varsity sports and are nationally known for their men's basketball team. The NCAA Division III Case Western Reserve Spartans field 17 varsity sports and are best known for their football team. The headquarters of the Mid-American Conference (MAC) are in Cleveland. The conference stages both its men's and women's basketball tournaments at Rocket Mortgage FieldHouse. #### Individuals Cleveland has produced several athletes who have won top individual accolades, most notably U.S. Olympic Hall of Fame champion Jesse Owens, who participated in the 1936 Summer Olympics in Berlin, where he won four gold medals. A statue commemorating Owens is located at Fort Huntington Park in Downtown Cleveland. Other famous Cleveland area athletes include Olympic track and field gold medalist Harrison Dillard, boxer Johnny Kilbane, mixed martial artist Stipe Miocic, snowboarder Red Gerard, and pole vaulter Katie Nageotte. 
#### Annual and special events The Cleveland Marathon has been hosted annually since 1978. In addition, several chess championships have taken place in Cleveland. The second American Chess Congress, a predecessor of the current U.S. Championship, was held in 1871 and won by George Henry Mackenzie. The 1921 and 1957 U.S. Open Chess Championships took place in the city and were won by Edward Lasker and Bobby Fischer, respectively. The Cleveland Open is held annually. In 2014, Cleveland hosted the ninth Gay Games. Funded by the Cleveland Foundation, the 2014 games drew thousands of athletes and tourists and were estimated to bring in about \$52.1 million for the local economy. ## Environment With the extensive cleanup of its Lake Erie shore and the Cuyahoga River, Cleveland has been recognized by national media as an environmental success story and a national leader in environmental protection. Since the city's industrialization, the Cuyahoga River had become so affected by industrial pollution that it "caught fire" a total of 13 times beginning in 1868. It was the river fire of June 1969 that spurred the city to action under Mayor Carl B. Stokes and played a key role in the passage of the National Environmental Policy Act of 1969 and the Clean Water Act of 1972. Since that time, the Cuyahoga has been extensively cleaned up through the efforts of the city and the Ohio Environmental Protection Agency (OEPA). In 2019, the American Rivers conservation association named the river "River of the Year" in honor of "50 years of environmental resurgence." In addition to continued efforts to improve freshwater and air quality, Cleveland is now exploring renewable energy. The city's two main electrical utilities are FirstEnergy and Cleveland Public Power. Its climate action plan, updated in December 2018, has a 2050 target of 100% renewable power, along with reduction of greenhouse gases to 80% below the 2010 level. In recent decades, Cleveland has been working to address the issue of harmful algal blooms on Lake Erie, fed primarily by agricultural runoff, which have presented new environmental challenges for the city and for northern Ohio. ## Government and politics ### Government and courts Cleveland operates on a mayor–council (strong mayor) form of government, in which the mayor is the chief executive and Cleveland City Council serves as the legislative branch. City council members are elected from 17 wards to four-year terms. From 1924 to 1931, the city briefly experimented with a council–manager government under William R. Hopkins and Daniel E. Morgan before returning to the mayor–council system. Cleveland is served by Cleveland Municipal Court, the first municipal court in the state. The city also anchors the U.S. District Court for the Northern District of Ohio, based at the Carl B. Stokes U.S. Courthouse and the historic Howard M. Metzenbaum U.S. Courthouse. The Chief Judge for the Northern District is Patricia Anne Gaughan and the Clerk of Court is Sandy Opacich. The current U.S. Attorney is Michelle Baeppler and the U.S. Marshal is Peter Elliott. ### Politics The office of the mayor has been held by Justin Bibb since 2022. Previous mayors include progressive Democrat Tom L. Johnson, World War I-era War Secretary and BakerHostetler founder Newton D. Baker, U.S. Supreme Court Justice Harold Hitz Burton, two-term Ohio Governor and Senator Frank J. Lausche, former U.S. Health, Education, and Welfare Secretary Anthony J. Celebrezze, two-term Ohio Governor and Senator George V. 
Voinovich, former U.S. Congressman Dennis Kucinich, and Carl B. Stokes, the first African American mayor of a major U.S. city. Frank G. Jackson was the city's longest-serving mayor. The current Cleveland City Council President is Blaine Griffin, the council Majority Leader is Kerry McCormack, and the Majority Whip is Jasmin Santana. Patricia Britt serves as the Clerk of Council. Historically, from the Civil War era to the 1940s, Cleveland had been dominated by the Republican Party, with the notable exceptions of the Johnson and Baker mayoral administrations. Businessman and Senator Mark Hanna was among Cleveland's most influential Republican figures, both locally and nationally. Another nationally prominent Ohio Republican, former U.S. President James A. Garfield, was born in Cuyahoga County's Orange Township (today the Cleveland suburb of Moreland Hills). His resting place is the James A. Garfield Memorial in Cleveland's Lake View Cemetery. Today Cleveland is a major stronghold for the Democratic Party in Ohio. Although local elections are nonpartisan, Democrats still dominate every level of government. Politically, Cleveland and several of its neighboring suburbs comprise Ohio's 11th congressional district. The district is represented by Shontel Brown, one of five Democrats representing the state of Ohio in the U.S. House of Representatives. Cleveland hosted three Republican national conventions in its history, in 1924, 1936, and 2016. Additionally, the city hosted the Radical Republican convention of 1864. Cleveland has not hosted a national convention for the Democrats, despite the position of Cuyahoga County as a Democratic stronghold in Ohio. Cleveland has hosted several national election debates, including the second 1980 U.S. Presidential debate, the 2004 U.S. Vice-Presidential debate, one 2008 Democratic primary debate, and the first 2020 U.S. Presidential debate. Founded in 1912, the City Club of Cleveland provides a platform for national and local debates and discussions. Known as Cleveland's "Citadel of Free Speech", it is one of the oldest continuous independent free speech and debate forums in the country. ## Public safety ### Police and law enforcement Like in other major American cities, crime in Cleveland is concentrated in areas with higher rates of poverty and lower access to jobs. In recent decades, the rate of crime in the city, although higher than the national average, experienced a significant decline, following a nationwide trend in falling crime rates. However, as in other major U.S. cities, crime in Cleveland saw an abrupt rise in 2020-21. Cleveland's law enforcement agency is the Cleveland Division of Police, established in 1866. The division had 1,400 sworn officers as of 2022, covering five police districts. The district system was introduced in the 1930s by Cleveland Public Safety Director Eliot Ness (of the Untouchables), who later ran for mayor of Cleveland in 1947. The current Chief of Police is Wayne Drummond. In addition, the Cuyahoga County Sheriff's Office is based in Downtown Cleveland at the Justice Center Complex. In May 2015, Cleveland agreed to a consent decree with the U.S. Department of Justice to revise its policies and implement independent oversight over its police force. In June of that year, Chief U.S. District Judge Solomon Oliver Jr. approved the consent decree, beginning the process of police reform. ### Fire department Cleveland is served by the firefighters of the Cleveland Division of Fire, established in 1863. 
The fire department operates out of 22 active fire stations throughout the city in five battalions. Each Battalion is commanded by a Battalion Chief, who reports to an on-duty Assistant Chief. The Division of Fire operates a fire apparatus fleet of twenty-two engine companies, eight ladder companies, three tower companies, two task force rescue squad companies, hazardous materials ("haz-mat") unit, and numerous other special, support, and reserve units. The current Chief of Department is Anthony Luke. ### Emergency medical services Cleveland EMS is operated by the city as its own municipal third-service EMS division. Cleveland EMS is the primary provider of Advanced Life Support and ambulance transport within the city of Cleveland, while Cleveland Fire assists by providing fire response medical care. Although a merger between the fire and EMS departments was proposed in the past, the idea was subsequently abandoned. ### Military Cleveland serves as headquarters to Coast Guard District 9 and is responsible for all U.S. Coast Guard operations on the five Great Lakes, the Saint Lawrence Seaway, and surrounding states accumulating 6,700 miles of shoreline and 1,500 miles of international shoreline with Canada, reporting up through the U.S. Department of Homeland Security. Station Cleveland Harbor, located in North Coast Harbor, has a responsibility covering about 550 square miles of the federally navigable waters of Lake Erie, including the Cuyahoga and Rocky rivers, as well as a number of their tributaries. ## Media ### Print Cleveland's primary daily newspaper is The Plain Dealer and its associated online publication, Cleveland.com. Defunct major newspapers include the Cleveland Press, an afternoon publication which printed its last edition on June 17, 1982; and the Cleveland News, which ceased publication in 1960. Additional publications include Cleveland Magazine, a regional culture magazine published monthly; Crain's Cleveland Business, a weekly business newspaper; and Cleveland Scene, a free alternative weekly paper which absorbed its competitor, the Cleveland Free Times, in 2008. The digital Belt Magazine was founded in Cleveland in 2013. Time magazine was published in Cleveland for a brief period from 1925 to 1927. Cleveland's ethnic publications include the Call and Post, a weekly newspaper that primarily serves the city's African American community; the Cleveland Jewish News, a weekly Jewish newspaper; the bi-weekly Russian-language Cleveland Russian Magazine; the Mandarin Erie Chinese Journal; La Gazzetta Italiana in English and Italian; the Ohio Irish American News; and the Spanish language Vocero Latino News. Historically, the Hungarian language newspaper Szabadság served the Hungarian community. ### TV The Cleveland-area television market is served by 11 full power stations, including WKYC (NBC), WEWS-TV (ABC), WJW (Fox), WDLI-TV (Bounce), WOIO (CBS), WVPX-TV (Ion), WVIZ (PBS), WUAB (CW), WRLM (TCT), WBNX-TV (independent), and WQHS-DT (Univision). As of 2021, the market, which includes the Akron and Canton areas, was the 19th-largest in the country, as measured by Nielsen Media Research. The Mike Douglas Show, a nationally syndicated daytime talk show, began in Cleveland in 1961 on KYW-TV (now WKYC), while The Morning Exchange on WEWS-TV served as the model for Good Morning America. Tim Conway and Ernie Anderson first established themselves in Cleveland while working together at KYW-TV and later WJW-TV (now WJW). 
Anderson both created and performed as the immensely popular Cleveland horror host Ghoulardi on WJW-TV's Shock Theater, and was later succeeded by the long-running late night duo Big Chuck and Lil' John. Another Anderson protégé – Ron Sweed – would become a popular Cleveland late night movie host in his own right as "The Ghoul". ### Radio Cleveland is directly served by 28 full power AM and FM radio stations, 21 of which are licensed to the city. Music stations – which are frequently the highest-rated in the market – include WQAL (hot adult contemporary), WDOK (adult contemporary), WFHM (Christian contemporary), WAKS (contemporary hits), WHLK (adult hits), WMJI (classic hits), WMMS (active rock/hot talk), WNCX (classic rock), WNWV (alternative rock), WGAR-FM (country), WZAK (urban adult contemporary), WENZ (mainstream urban), WJMO (urban gospel), and WCLV (classical/jazz). News/talk stations include WHK, WTAM, and WERE. During the Golden Age of Radio, WHK was the first radio station to broadcast in Ohio, and one of the first in the country. WTAM is the AM flagship for both the Cleveland Cavaliers and the Cleveland Guardians. Sports stations include WKNR (ESPN), WARF (Fox) and WKRK-FM (CBS), with WKNR and WKRK-FM serving as co-flagship stations for the Cleveland Browns. Religious stations include WHKW, WCCR, and WCRF. As the regional NPR affiliate, WKSU serves all of Northeast Ohio (including both the Cleveland and Akron markets). College stations include WBWC (Baldwin Wallace), WCSB (Cleveland State), WJCU (John Carroll), and WRUW-FM (Case Western Reserve). ## Transportation ### Urban transit systems Cleveland has a bus and rail mass transit system operated by the Greater Cleveland Regional Transit Authority (RTA). The rail portion is officially called the RTA Rapid Transit, but local residents refer to it as The Rapid. It consists of three light rail lines, known as the Blue, Green, and Waterfront Lines, and a heavy rail line, the Red Line. In 2008, RTA completed the HealthLine, a bus rapid transit line, for which naming rights were purchased by the Cleveland Clinic and University Hospitals. It runs along Euclid Avenue from downtown through University Circle, ending at the Louis Stokes Station at Windermere in East Cleveland. In 1968, Cleveland became the first city in the nation to have a direct rail transit connection linking the city's downtown to its major airport. ### Walkability In 2021, Walk Score ranked Cleveland the 17th most walkable of the 50 largest cities in the United States, with a Walk Score of 57, a Transit Score of 45, and a Bike Score of 55 (out of a maximum of 100). Cleveland's most walkable areas can be found in the Downtown, Ohio City, Detroit–Shoreway, University Circle, and Buckeye–Shaker neighborhoods. Like other major cities, the urban density of Cleveland reduces the need for private vehicle ownership. In 2016, 23.7% of Cleveland households lacked a car, while the national average was 8.7%. Cleveland averaged 1.19 cars per household in 2016, compared to a national average of 1.8. ### Roads Cleveland's road system consists of numbered streets running roughly north–south, and named avenues, which run roughly east–west. The numbered streets are designated "east" or "west", depending on where they lie in relation to Ontario Street, which bisects Public Square. The two downtown avenues which span the Cuyahoga change names on the west side of the river. Superior Avenue becomes Detroit Avenue on the West Side, and Carnegie Avenue becomes Lorain Avenue. 
The bridges that make these connections are the Hope Memorial (Lorain–Carnegie) Bridge and the Veterans Memorial (Detroit–Superior) Bridge. ### Freeways Cleveland is served by three two-digit interstate highways – Interstate 71, Interstate 77, and Interstate 90 – and by two three-digit interstates – Interstate 480 and Interstate 490. Running due east–west through the West Side suburbs, I-90 turns northeast at the junction with I-490, and is known as the Cleveland Inner Belt. The Cleveland Memorial Shoreway carries Ohio State Route 2 along its length, and at varying points carries US 6, US 20 and I-90. At the junction with the Shoreway, I-90 makes a 90-degree turn in the area known as Dead Man's Curve, then continues northeast. The Jennings Freeway (State Route 176) connects I-71 just south of I-90 to I-480. A third highway, the Berea Freeway (State Route 237 in part), connects I-71 to the airport and forms part of the boundary between Brook Park and Cleveland's Hopkins neighborhood. ### Airports Cleveland is a major US air market, serving about 4.93 million people. Cleveland Hopkins International Airport is the city's primary major airport and an international airport that serves the broader region. Originally known as Cleveland Municipal Airport, it was the first municipally owned airport in the country. Cleveland Hopkins is a significant regional air freight hub hosting FedEx Express, UPS Airlines, the United States Postal Service, and major commercial freight carriers. In addition to Hopkins, Cleveland is served by Burke Lakefront Airport, on the north shore of downtown between Lake Erie and the Shoreway. Burke is primarily a commuter and business airport. ### Seaport The Port of Cleveland, at the Cuyahoga River's mouth, is a major bulk freight and container terminal on Lake Erie, receiving much of the raw materials used by the region's manufacturing industries. The Port of Cleveland is the only container port on the Great Lakes, with bi-weekly container service between Cleveland and the Port of Antwerp in Belgium on a Dutch service called the Cleveland-Europe Express. In addition to freight, the Port of Cleveland welcomes regional and international tourists who pass through the city on Great Lakes cruises. ### Railroads Cleveland has a long history as a major railroad hub in the United States. Today, Amtrak provides service to Cleveland via the Capitol Limited and Lake Shore Limited routes, which stop at Cleveland Lakefront Station. Additionally, Cleveland hosts several inter-modal freight railroad terminals for Norfolk Southern, CSX, and several smaller companies. ### Inter-city bus lines National intercity bus service is provided at a Greyhound station, located behind Playhouse Square. Megabus provides service to Cleveland and has a stop at the Stephanie Tubbs Jones Transit Center on the east side of downtown. Akron Metro, Brunswick Transit Alternative, Laketran, Lorain County Transit, and Medina County Transit provide connecting bus service to the Greater Cleveland Regional Transit Authority. Geauga County Transit and Portage Area Regional Transportation Authority (PARTA) also offer connecting bus service in their neighboring areas. ## International relations Cleveland maintains cultural, economic, and educational ties with 23 sister cities around the world. It concluded its first sister city partnership with Lima, Peru, in 1964. 
In addition, Cleveland hosts the Consulate General of the Republic of Slovenia, which, until Slovene independence in 1991, served as an official consulate for Tito's Yugoslavia. The Cleveland Clinic operates the Cleveland Clinic Abu Dhabi hospital, a sports medicine clinic in Toronto, and a hospital campus in London. The Cleveland Council on World Affairs was established in 1923. Historically, Cleveland industrialist Cyrus S. Eaton, an apprentice of John D. Rockefeller, played a significant role in promoting dialogue between the US and the USSR during the Cold War. In October 1915 at Cleveland's Bohemian National Hall, Czech American and Slovak American representatives signed the Cleveland Agreement, calling for the formation of a joint Czech and Slovak state. ## See also - List of people from Cleveland - List of references to Cleveland in popular culture - USS Cleveland, 4 ships
47,402
Titan (moon)
1,171,806,900
Largest moon of Saturn
[ "Astronomical objects discovered in 1655", "Discoveries by Christiaan Huygens", "Moons with a prograde orbit", "Titan (moon)" ]
Titan is the largest moon of Saturn, the second-largest in the Solar System and larger than any of the dwarf planets of the Solar System. It is the only moon known to have a dense atmosphere, and is the only known object in space other than Earth on which clear evidence of stable bodies of surface liquid has been found. Titan is one of the seven gravitationally rounded moons in orbit around Saturn, and the second most distant from Saturn of those seven. Frequently described as a planet-like moon, Titan is 50% larger (in diameter) than Earth's Moon and 80% more massive. It is the second-largest moon in the Solar System after Jupiter's moon Ganymede, and is larger than the planet Mercury, but only 40% as massive. Discovered in 1655 by the Dutch astronomer Christiaan Huygens, Titan was the first known moon of Saturn, and the sixth known planetary satellite (after Earth's moon and the four Galilean moons of Jupiter). Titan orbits Saturn at 20 Saturn radii. From Titan's surface, Saturn subtends an arc of 5.09 degrees, and if it were visible through the moon's thick atmosphere, it would appear 11.4 times larger in the sky, in diameter, than the Moon from Earth, which subtends 0.48° of arc. Titan is primarily composed of ice and rocky material, which is likely differentiated into a rocky core surrounded by various layers of ice, including a crust of ice I<sub>h</sub> and a subsurface layer of ammonia-rich liquid water. Much as with Venus before the Space Age, the dense opaque atmosphere prevented understanding of Titan's surface until the Cassini–Huygens mission in 2004 provided new information, including the discovery of liquid hydrocarbon lakes in Titan's polar regions. The geologically young surface is generally smooth, with few impact craters, although mountains and several possible cryovolcanoes have been found. The atmosphere of Titan is largely nitrogen; minor components lead to the formation of methane and ethane clouds and heavy organonitrogen haze. The climate—including wind and rain—creates surface features similar to those of Earth, such as dunes, rivers, lakes, seas (probably of liquid methane and ethane), and deltas, and is dominated by seasonal weather patterns as on Earth. With its liquids (both surface and subsurface) and robust nitrogen atmosphere, Titan's methane cycle bears a striking similarity to Earth's water cycle, albeit at the much lower temperature of about 94 K (−179 °C; −290 °F). Due to these factors, Titan has been described as the most Earth-like celestial object in the Solar System. ## History ### Discovery Titan was discovered on March 25, 1655, by the Dutch astronomer Christiaan Huygens. Huygens was inspired by Galileo's discovery of Jupiter's four largest moons in 1610 and his improvements in telescope technology. Christiaan, with the help of his elder brother Constantijn Huygens Jr., began building telescopes around 1650 and discovered the first observed moon orbiting Saturn with one of the telescopes they built. It was the sixth moon ever discovered, after Earth's Moon and the Galilean moons of Jupiter. Titan is the largest and brightest moon of Saturn, and so is the easiest to observe of Saturn's moons with a standard optical telescope from Earth. ### Naming Huygens named his discovery Saturni Luna (or Luna Saturni, Latin for "moon of Saturn"), publishing in the 1655 tract De Saturni Luna Observatio Nova (A New Observation of Saturn's Moon). 
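As a rough check of the angular-size comparison quoted above, the figures follow from simple geometry. This is a minimal sketch using assumed reference values for Saturn's diameter, Titan's orbital radius, and the Moon's size and distance (none of them taken from this article); the Saturn figure varies between roughly 5° and 5.7° depending on whether the polar or equatorial diameter is assumed.

```python
import math

def angular_diameter_deg(diameter_km, distance_km):
    """Full angular diameter, in degrees, of a sphere seen from a given distance."""
    return 2 * math.degrees(math.atan(diameter_km / (2 * distance_km)))

# Assumed (illustrative) values, not taken from the article:
saturn_equatorial_diameter = 120_536    # km
titan_orbital_radius       = 1_221_870  # km, about 20 Saturn radii
moon_diameter              = 3_474      # km
moon_mean_distance         = 384_400    # km

saturn_from_titan = angular_diameter_deg(saturn_equatorial_diameter, titan_orbital_radius)
moon_from_earth   = angular_diameter_deg(moon_diameter, moon_mean_distance)

print(f"Saturn seen from Titan: {saturn_from_titan:.2f} deg")  # ~5.6 deg (~5.1 deg with the polar diameter)
print(f"Moon seen from Earth:   {moon_from_earth:.2f} deg")    # ~0.52 deg
print(f"Apparent size ratio:    {saturn_from_titan / moon_from_earth:.1f}x")  # roughly 11x
```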
After Giovanni Domenico Cassini published his discoveries of four more moons of Saturn between 1673 and 1686, astronomers fell into the habit of referring to these and Titan as Saturn I through V (with Titan then in fourth position). Other early epithets for Titan include "Saturn's ordinary satellite". The International Astronomical Union officially numbers Titan as Saturn VI. The name Titan, and the names of all seven satellites of Saturn then known, came from John Herschel (son of William Herschel, discoverer of two other Saturnian moons, Mimas and Enceladus), in his 1847 publication Results of Astronomical Observations Made during the Years 1834, 5, 6, 7, 8, at the Cape of Good Hope. Numerous small moons have been discovered around Saturn since then. Saturnian moons are named after mythological giants. The name Titan comes from the Titans, a race of immortals in Greek mythology. ## Orbit and rotation Titan orbits Saturn once every 15 days and 22 hours. Like Earth's Moon and many of the satellites of the giant planets, its rotational period (its day) is identical to its orbital period; Titan is tidally locked in synchronous rotation with Saturn, and permanently shows one face to the planet. Longitudes on Titan are measured westward, starting from the meridian passing through this point. Its orbital eccentricity is 0.0288, and the orbital plane is inclined 0.348 degrees relative to the Saturnian equator, and hence also about a third of a degree off of the equatorial ring plane. Viewed from Earth, Titan reaches an angular distance of about 20 Saturn radii (just over 1,200,000 kilometers (750,000 mi)) from Saturn and subtends a disk 0.8 arcseconds in diameter. The small, irregularly shaped satellite Hyperion is locked in a 3:4 orbital resonance with Titan. Hyperion probably formed in a stable orbital island, whereas the massive Titan absorbed or ejected any other bodies that made close approaches. ## Bulk characteristics Titan is 5,149.46 kilometers (3,199.73 mi) in diameter, 1.06 times that of the planet Mercury, 1.48 that of the Moon, and 0.40 that of Earth. Titan is the tenth-largest object in the solar system, including the Sun. Before the arrival of Voyager 1 in 1980, Titan was thought to be slightly larger than Ganymede (diameter 5,262 kilometers (3,270 mi)) and thus the largest moon in the Solar System; this was an overestimation caused by Titan's dense, opaque atmosphere, with a haze layer 100-200 kilometres above its surface. This increases its apparent diameter. Titan's diameter and mass (and thus its density) are similar to those of the Jovian moons Ganymede and Callisto. Based on its bulk density of 1.88 g/cm<sup>3</sup>, Titan's composition is half ice and half rocky material. Though similar in composition to Dione and Enceladus, it is denser due to gravitational compression. It has a mass 1/4226 that of Saturn, making it the largest moon of the gas giants relative to the mass of its primary. It is second in terms of relative diameter of moons to a gas giant; Titan being 1/22.609 of Saturn's diameter, Triton is larger in diameter relative to Neptune at 1/18.092. Titan is probably partially differentiated into distinct layers with a 3,400-kilometer (2,100 mi) rocky center. This rocky center is believed to be surrounded by several layers composed of different crystalline forms of ice, and/or water. The exact structure depends heavily on the heat flux from within Titan itself, which is poorly constrained. 
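The "half ice and half rocky material" estimate given above for Titan's bulk composition can be illustrated with a simple two-component mixing calculation. The component densities used here are assumptions chosen for illustration (silicate rock near 3.5 g/cm³ and a mean density near 1.3 g/cm³ for the compressed ice phases), not values from the article; only the 1.88 g/cm³ bulk density is quoted above.

```python
rho_bulk = 1.88   # g/cm^3, Titan's bulk density (quoted above)
rho_rock = 3.5    # g/cm^3, assumed density of the silicate component
rho_ice  = 1.3    # g/cm^3, assumed mean density of the compressed ice component

# Two-component mixture: rho_bulk = f * rho_rock + (1 - f) * rho_ice,
# where f is the volume fraction of rock.
f_rock_volume = (rho_bulk - rho_ice) / (rho_rock - rho_ice)
f_rock_mass   = f_rock_volume * rho_rock / rho_bulk

print(f"rock volume fraction: {f_rock_volume:.2f}")  # ~0.26
print(f"rock mass fraction:   {f_rock_mass:.2f}")    # ~0.49, i.e. roughly half rock, half ice by mass
```

With a lower assumed ice density (uncompressed ice near 0.95 g/cm³), the same bulk density implies a larger rock mass fraction, which is why published rock/ice splits vary with the assumed interior model.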
The interior may still be hot enough for a liquid layer consisting of a "magma" composed of water and ammonia between the ice I<sub>h</sub> crust and deeper ice layers made of high-pressure forms of ice. The heat flow from inside Titan may even be too high for high pressure ices to form, with the outermost layers instead consisting primarily of liquid water underneath a surface crust. The presence of ammonia allows water to remain liquid even at a temperature as low as 176 K (−97 °C) (for eutectic mixture with water). The Cassini probe discovered evidence for the layered structure in the form of natural extremely-low-frequency radio waves in Titan's atmosphere. Titan's surface is thought to be a poor reflector of extremely-low-frequency radio waves, so they may instead be reflecting off the liquid–ice boundary of a subsurface ocean. Surface features were observed by the Cassini spacecraft to systematically shift by up to 30 kilometers (19 mi) between October 2005 and May 2007, which suggests that the crust is decoupled from the interior, and provides additional evidence for an interior liquid layer. Further supporting evidence for a liquid layer and ice shell decoupled from the solid core comes from the way the gravity field varies as Titan orbits Saturn. Comparison of the gravity field with the RADAR-based topography observations also suggests that the ice shell may be substantially rigid. ## Formation The moons of Jupiter and Saturn are thought to have formed through co-accretion, a similar process to that believed to have formed the planets in the Solar System. As the young gas giants formed, they were surrounded by discs of material that gradually coalesced into moons. Whereas Jupiter possesses four large satellites in highly regular, planet-like orbits, Titan overwhelmingly dominates Saturn's system and possesses a high orbital eccentricity not immediately explained by co-accretion alone. A proposed model for the formation of Titan is that Saturn's system began with a group of moons similar to Jupiter's Galilean satellites, but that they were disrupted by a series of giant impacts, which would go on to form Titan. Saturn's mid-sized moons, such as Iapetus and Rhea, were formed from the debris of these collisions. Such a violent beginning would also explain Titan's orbital eccentricity. A 2014 analysis of Titan's atmospheric nitrogen suggested that it was possibly sourced from material similar to that found in the Oort cloud and not from sources present during the co-accretion of materials around Saturn. ## Atmosphere Titan is the only known moon with a significant atmosphere, and its atmosphere is the only nitrogen-rich dense atmosphere in the Solar System aside from Earth's. Observations of it made in 2004 by Cassini suggest that Titan is a "super rotator", like Venus, with an atmosphere that rotates much faster than its surface. Observations from the Voyager space probes have shown that Titan's atmosphere is denser than Earth's, with a surface pressure about 1.45 atm. It is also about 1.19 times as massive as Earth's overall, or about 7.3 times more massive on a per surface area basis. Opaque haze layers block most visible light from the Sun and other sources and obscure Titan's surface features. Titan's lower gravity means that its atmosphere is far more extended than Earth's. The atmosphere of Titan is opaque at many wavelengths and as a result, a complete reflectance spectrum of the surface is impossible to acquire from orbit. 
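As a consistency check of the atmospheric mass figures quoted earlier in this section, the roughly 7.3-fold per-unit-area difference follows from the roughly 1.19-fold total-mass difference once the two bodies' surface areas are taken into account. A minimal sketch; the radii below are assumed reference values, not taken from the article.

```python
earth_radius_km = 6_371.0     # assumed mean radius of Earth
titan_radius_km = 2_574.7     # assumed mean radius of Titan
atmosphere_mass_ratio = 1.19  # Titan's atmospheric mass relative to Earth's (quoted above)

# Surface area scales with radius squared, so the per-area ratio is the
# total-mass ratio multiplied by the ratio of the two surface areas.
area_ratio = (earth_radius_km / titan_radius_km) ** 2
per_area_ratio = atmosphere_mass_ratio * area_ratio

print(f"surface-area ratio (Earth / Titan): {area_ratio:.2f}")       # ~6.1
print(f"atmospheric mass per unit area:     {per_area_ratio:.1f}x")  # ~7.3x Earth's
```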
It was not until the arrival of the Cassini–Huygens spacecraft in 2004 that the first direct images of Titan's surface were obtained. Titan's atmospheric composition is nitrogen (97%), methane (2.7±0.1%), and hydrogen (0.1–0.2%), with trace amounts of other gases. There are trace amounts of other hydrocarbons, such as ethane, diacetylene, methylacetylene, acetylene and propane, and of other gases, such as cyanoacetylene, hydrogen cyanide, carbon dioxide, carbon monoxide, cyanogen, argon and helium. The hydrocarbons are thought to form in Titan's upper atmosphere in reactions resulting from the breakup of methane by the Sun's ultraviolet light, producing a thick orange smog. Titan spends 95% of its time within Saturn's magnetosphere, which may help shield it from the solar wind. Energy from the Sun should have converted all traces of methane in Titan's atmosphere into more complex hydrocarbons within 50 million years—a short time compared to the age of the Solar System. This suggests that methane must be replenished by a reservoir on or within Titan itself. The ultimate origin of the methane in its atmosphere may be its interior, released via eruptions from cryovolcanoes. On April 3, 2013, NASA reported that complex organic chemicals, collectively called tholins, likely arise on Titan, based on studies simulating the atmosphere of Titan. On June 6, 2013, scientists at the IAA-CSIC reported the detection of polycyclic aromatic hydrocarbons in the upper atmosphere of Titan. On September 30, 2013, propene was detected in the atmosphere of Titan by NASA's Cassini spacecraft, using its composite infrared spectrometer (CIRS). This is the first time propene has been found on any moon or planet other than Earth and is the first chemical found by the CIRS. The detection of propene fills a mysterious gap in observations that date back to NASA's Voyager 1 spacecraft's first close planetary flyby of Titan in 1980, during which it was discovered that many of the gases that make up Titan's brown haze were hydrocarbons, theoretically formed via the recombination of radicals created by the Sun's ultraviolet photolysis of methane. On October 24, 2014, methane was found in polar clouds on Titan. On December 1, 2022, astronomers reported viewing clouds, likely made of methane, moving across Titan, using the James Webb Space Telescope. ## Climate Titan's surface temperature is about 94 K (−179.2 °C). At this temperature, water ice has an extremely low vapor pressure, so the little water vapor present appears limited to the stratosphere. Titan receives about 1% as much sunlight as Earth. Before sunlight reaches the surface, about 90% has been absorbed by the thick atmosphere, leaving only 0.1% of the amount of light Earth receives. Atmospheric methane creates a greenhouse effect on Titan's surface, without which Titan would be much colder. Conversely, haze in Titan's atmosphere contributes to an anti-greenhouse effect by absorbing sunlight, cancelling a portion of the greenhouse effect and making its surface significantly colder than its upper atmosphere. Titan's clouds, probably composed of methane, ethane or other simple organics, are scattered and variable, punctuating the overall haze. The findings of the Huygens probe indicate that Titan's atmosphere periodically rains liquid methane and other organic compounds onto its surface. Clouds typically cover 1% of Titan's disk, though outburst events have been observed in which the cloud cover rapidly expands to as much as 8%. 
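The sunlight figures in the climate discussion above follow from the inverse-square law combined with the quoted atmospheric absorption. A minimal sketch, assuming a mean Sun–Saturn distance of about 9.58 AU (a value not given in the article):

```python
saturn_distance_au = 9.58   # assumed mean Sun-Saturn distance in astronomical units
fraction_absorbed  = 0.90   # fraction of sunlight absorbed by Titan's atmosphere (quoted above)

# Inverse-square law: solar flux falls off with the square of the distance from the Sun.
top_of_atmosphere = 1.0 / saturn_distance_au ** 2
at_surface        = top_of_atmosphere * (1.0 - fraction_absorbed)

print(f"sunlight at Titan, relative to Earth: {top_of_atmosphere:.3f}")  # ~0.011, i.e. about 1%
print(f"reaching the surface:                 {at_surface:.4f}")         # ~0.0011, i.e. about 0.1%
```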
One hypothesis asserts that the southern clouds are formed when heightened levels of sunlight during the southern summer generate uplift in the atmosphere, resulting in convection. This explanation is complicated by the fact that cloud formation has been observed not only after the southern summer solstice but also during mid-spring. Increased methane humidity at the south pole possibly contributes to the rapid increases in cloud size. It was summer in Titan's southern hemisphere until 2010, when Saturn's orbit, which governs Titan's motion, moved Titan's northern hemisphere into the sunlight. When the seasons switch, it is expected that ethane will begin to condense over the south pole. ## Surface features The surface of Titan has been described as "complex, fluid-processed, [and] geologically young". Titan has been around since the Solar System's formation, but its surface is much younger, between 100 million and 1 billion years old. Geological processes may have reshaped Titan's surface. Titan's atmosphere is four times as thick as Earth's, making it difficult for astronomical instruments to image its surface in the visible light spectrum. The Cassini spacecraft used infrared instruments, radar altimetry and synthetic aperture radar (SAR) imaging to map portions of Titan during its close fly-bys. The first images revealed a diverse geology, with both rough and smooth areas. There are features that may be volcanic in origin, disgorging water mixed with ammonia onto the surface. There is also evidence that Titan's ice shell may be substantially rigid, which would suggest little geologic activity. There are also streaky features, some of them hundreds of kilometers in length, that appear to be caused by windblown particles. Examination has also shown the surface to be relatively smooth; the few objects that seem to be impact craters appeared to have been filled in, perhaps by raining hydrocarbons or volcanoes. Radar altimetry suggests height variation is low, typically no more than 150 meters. Occasional elevation changes of 500 meters have been discovered and Titan has mountains that sometimes reach several hundred meters to more than 1 kilometer in height. Titan's surface is marked by broad regions of bright and dark terrain. These include Xanadu, a large, reflective equatorial area about the size of Australia. It was first identified in infrared images from the Hubble Space Telescope in 1994, and later viewed by the Cassini spacecraft. The convoluted region is filled with hills and cut by valleys and chasms. It is criss-crossed in places by dark lineaments—sinuous topographical features resembling ridges or crevices. These may represent tectonic activity, which would indicate that Xanadu is geologically young. Alternatively, the lineaments may be liquid-formed channels, suggesting old terrain that has been cut through by stream systems. There are dark areas of similar size elsewhere on Titan, observed from the ground and by Cassini; at least one of these, Ligeia Mare, Titan's second-largest sea, is almost a pure methane sea. ### Lakes The possibility of hydrocarbon seas on Titan was first suggested based on Voyager 1 and 2 data that showed Titan to have a thick atmosphere of approximately the correct temperature and composition to support them, but direct evidence was not obtained until 1995 when data from Hubble and other observations suggested the existence of liquid methane on Titan, either in disconnected pockets or on the scale of satellite-wide oceans, similar to water on Earth. 
The Cassini mission confirmed the former hypothesis. When the probe arrived in the Saturnian system in 2004, it was hoped that hydrocarbon lakes or oceans would be detected from the sunlight reflected off their surface, but no specular reflections were initially observed. Near Titan's south pole, an enigmatic dark feature named Ontario Lacus was identified (and later confirmed to be a lake). A possible shoreline was also identified near the pole via radar imagery. Following a flyby on July 22, 2006, in which the Cassini spacecraft's radar imaged the northern latitudes (that were then in winter), several large, smooth (and thus dark to radar) patches were seen dotting the surface near the pole. Based on the observations, scientists announced "definitive evidence of lakes filled with methane on Saturn's moon Titan" in January 2007. The Cassini–Huygens team concluded that the imaged features are almost certainly the long-sought hydrocarbon lakes, the first stable bodies of surface liquid found outside Earth. Some appear to have channels associated with liquid and lie in topographical depressions. The liquid erosion features appear to be a very recent occurrence: channels in some regions have created surprisingly little erosion, suggesting erosion on Titan is extremely slow, or some other recent phenomena may have wiped out older riverbeds and landforms. Overall, the Cassini radar observations have shown that lakes cover only a small percentage of the surface, making Titan much drier than Earth. Most of the lakes are concentrated near the poles (where the relative lack of sunlight prevents evaporation), but several long-standing hydrocarbon lakes in the equatorial desert regions have also been discovered, including one near the Huygens landing site in the Shangri-La region, which is about half the size of the Great Salt Lake in Utah, USA. The equatorial lakes are probably "oases", i.e. the likely supplier is underground aquifers. In June 2008, the Visual and Infrared Mapping Spectrometer on Cassini confirmed the presence of liquid ethane beyond doubt in Ontario Lacus. On December 21, 2008, Cassini passed directly over Ontario Lacus and observed specular reflection in radar. The strength of the reflection saturated the probe's receiver, indicating that the lake level did not vary by more than 3 mm (implying either that surface winds were minimal, or the lake's hydrocarbon fluid is viscous). On July 8, 2009, Cassini's VIMS observed a specular reflection indicative of a smooth, mirror-like surface, off what today is called Jingpo Lacus, a lake in the north polar region shortly after the area emerged from 15 years of winter darkness. Specular reflections are indicative of a smooth, mirror-like surface, so the observation corroborated the inference of the presence of a large liquid body drawn from radar imaging. Early radar measurements made in July 2009 and January 2010 indicated that Ontario Lacus was extremely shallow, with an average depth of 0.4–3 m, and a maximum depth of 3 to 7 m (9.8 to 23.0 ft). In contrast, the northern hemisphere's Ligeia Mare was initially mapped to depths exceeding 8 m, the maximum discernable by the radar instrument and the analysis techniques of the time. Later science analysis, released in 2014, more fully mapped the depths of Titan's three methane seas and showed depths of more than 200 meters (660 ft). 
Ligeia Mare averages from 20 to 40 m (66 to 131 ft) in depth, while other parts of Ligeia did not register any radar reflection at all, indicating a depth of more than 200 m (660 ft). While only the second largest of Titan's methane seas, Ligeia "contains enough liquid methane to fill three Lake Michigans". In May 2013, Cassini's radar altimeter observed Titan's Vid Flumina channels, defined as a drainage network connected to Titan's second-largest hydrocarbon sea, Ligeia Mare. Analysis of the received altimeter echoes showed that the channels are located in deep (up to \~570 m), steep-sided canyons and have strong specular surface reflections that indicate they are currently filled with liquid. Elevations of the liquid in these channels are at the same level as Ligeia Mare to within a vertical precision of about 0.7 m, consistent with the interpretation of drowned river valleys. Specular reflections are also observed in lower order tributaries elevated above the level of Ligeia Mare, consistent with drainage feeding into the main channel system. This is likely the first direct evidence of the presence of liquid channels on Titan and the first observation of hundred-meter deep canyons on Titan. The Vid Flumina canyons are thus drowned by the sea, though a few isolated observations attest to surface liquids standing at higher elevations. During six flybys of Titan from 2006 to 2011, Cassini gathered radiometric tracking and optical navigation data from which investigators could roughly infer Titan's changing shape. The density of Titan is consistent with a body that is about 60% rock and 40% water. The team's analyses suggest that Titan's surface can rise and fall by up to 10 metres during each orbit. That degree of warping suggests that Titan's interior is relatively deformable, and that the most likely model of Titan is one in which an icy shell dozens of kilometres thick floats atop a global ocean. The team's findings, together with the results of previous studies, hint that Titan's ocean may lie no more than 100 kilometers (62 mi) below its surface. On July 2, 2014, NASA reported the ocean inside Titan may be as salty as the Dead Sea. On September 3, 2014, NASA reported studies suggesting methane rainfall on Titan may interact with a layer of icy materials underground, called an "alkanofer", to produce ethane and propane that may eventually feed into rivers and lakes. In 2016, Cassini found the first evidence of fluid-filled channels on Titan, in a series of deep, steep-sided canyons flowing into Ligeia Mare. This network of canyons, dubbed Vid Flumina, ranges in depth from 240 to 570 m and has sides as steep as 40°. They are believed to have formed either by crustal uplifting, like Earth's Grand Canyon, by a lowering of sea level, or perhaps by a combination of the two. The depth of erosion suggests that liquid flows in this part of Titan are long-term features that persist for thousands of years. ### Impact craters Radar, SAR and imaging data from Cassini have revealed few impact craters on Titan's surface. These impact craters appear to be relatively young compared to Titan's age. The few impact craters discovered include a 392-kilometer-wide (244 mi) two-ring impact basin named Menrva seen by Cassini's ISS as a bright-dark concentric pattern. A smaller, 80-kilometer-wide (50 mi), flat-floored crater named Sinlap and a 30 km (19 mi) crater with a central peak and dark floor named Ksa have also been observed. 
Radar and Cassini imaging have also revealed "crateriforms", circular features on the surface of Titan that may be impact related, but lack certain features that would make identification certain. For example, a 90-kilometer-wide (56 mi) ring of bright, rough material known as Guabonito has been observed by Cassini. This feature is thought to be an impact crater filled in by dark, windblown sediment. Several other similar features have been observed in the dark Shangri-La and Aaru regions. Radar observed several circular features that may be craters in the bright region Xanadu during Cassini's April 30, 2006 flyby of Titan. Many of Titan's craters or probable craters display evidence of extensive erosion, and all show some indication of modification. Most large craters have breached or incomplete rims, despite the fact that some craters on Titan have relatively more massive rims than those anywhere else in the Solar System. There is little evidence of formation of palimpsests through viscoelastic crustal relaxation, unlike on other large icy moons. Most craters lack central peaks and have smooth floors, possibly due to impact-generation or later eruption of cryovolcanic lava. Infill from various geological processes is one reason for Titan's relative deficiency of craters; atmospheric shielding also plays a role. It is estimated that Titan's atmosphere reduces the number of craters on its surface by a factor of two. The limited high-resolution radar coverage of Titan obtained through 2007 (22%) suggested the existence of nonuniformities in its crater distribution. Xanadu has 2–9 times more craters than elsewhere. The leading hemisphere has a 30% higher density than the trailing hemisphere. There are lower crater densities in areas of equatorial dunes and in the north polar region (where hydrocarbon lakes and seas are most common). Pre-Cassini models of impact trajectories and angles suggest that where the impactor strikes the water ice crust, a small amount of ejecta remains as liquid water within the crater. It may persist as liquid for centuries or longer, sufficient for "the synthesis of simple precursor molecules to the origin of life". ### Cryovolcanism and mountains Scientists have long speculated that conditions on Titan resemble those of early Earth, though at a much lower temperature. The detection of argon-40 in the atmosphere in 2004 indicated that volcanoes had spawned plumes of "lava" composed of water and ammonia. Global maps of the lake distribution on Titan's surface revealed that there is not enough surface methane to account for its continued presence in its atmosphere, and thus that a significant portion must be added through volcanic processes. Still, there is a paucity of surface features that can be unambiguously interpreted as cryovolcanoes. One of the first of such features revealed by Cassini radar observations in 2004, called Ganesa Macula, resembles the geographic features called "pancake domes" found on Venus, and was thus initially thought to be cryovolcanic in origin, until Kirk et al. refuted this hypothesis at the American Geophysical Union annual meeting in December 2008. The feature was found to be not a dome at all, but appeared to result from accidental combination of light and dark patches. In 2004 Cassini also detected an unusually bright feature (called Tortola Facula), which was interpreted as a cryovolcanic dome. No similar features have been identified as of 2010. 
In December 2008, astronomers announced the discovery of two transient but unusually long-lived "bright spots" in Titan's atmosphere, which appear too persistent to be explained by mere weather patterns, suggesting they were the result of extended cryovolcanic episodes. A mountain range measuring 150 kilometers (93 mi) long, 30 kilometers (19 mi) wide and 1.5 kilometers (0.93 mi) high was also discovered by Cassini in 2006. This range lies in the southern hemisphere and is thought to be composed of icy material and covered in methane snow. The movement of tectonic plates, perhaps influenced by a nearby impact basin, could have opened a gap through which the mountain's material upwelled. Prior to Cassini, scientists assumed that most of the topography on Titan would be impact structures, yet these findings reveal that similar to Earth, the mountains were formed through geological processes. In 2008 Jeffrey Moore (planetary geologist of Ames Research Center) proposed an alternate view of Titan's geology. Noting that no volcanic features had been unambiguously identified on Titan so far, he asserted that Titan is a geologically dead world, whose surface is shaped only by impact cratering, fluvial and eolian erosion, mass wasting and other exogenic processes. According to this hypothesis, methane is not emitted by volcanoes but slowly diffuses out of Titan's cold and stiff interior. Ganesa Macula may be an eroded impact crater with a dark dune in the center. The mountainous ridges observed in some regions can be explained as heavily degraded scarps of large multi-ring impact structures or as a result of the global contraction due to the slow cooling of the interior. Even in this case, Titan may still have an internal ocean made of the eutectic water–ammonia mixture with a temperature of 176 K (−97 °C), which is low enough to be explained by the decay of radioactive elements in the core. The bright Xanadu terrain may be a degraded heavily cratered terrain similar to that observed on the surface of Callisto. Indeed, were it not for its lack of an atmosphere, Callisto could serve as a model for Titan's geology in this scenario. Jeffrey Moore even called Titan Callisto with weather. In March 2009, structures resembling lava flows were announced in a region of Titan called Hotei Arcus, which appears to fluctuate in brightness over several months. Though many phenomena were suggested to explain this fluctuation, the lava flows were found to rise 200 meters (660 ft) above Titan's surface, consistent with it having erupted from beneath the surface. In December 2010, the Cassini mission team announced the most compelling possible cryovolcano yet found. Named Sotra Patera, it is one in a chain of at least three mountains, each between 1000 and 1500 m in height, several of which are topped by large craters. The ground around their bases appears to be overlaid by frozen lava flows. Crater-like landforms possibly formed via explosive, maar-like or caldera-forming cryovolcanic eruptions have been identified in Titan's polar regions. These formations are sometimes nested or overlapping and have features suggestive of explosions and collapses, such as elevated rims, halos, and internal hills or mountains. The polar location of these features and their colocalization with Titan's lakes and seas suggests volatiles such as methane may help power them. Some of these features appear quite fresh, suggesting that such volcanic activity continues to the present. 
Most of Titan's highest peaks occur near its equator in so-called "ridge belts". They are believed to be analogous to Earth's fold mountains such as the Rockies or the Himalayas, formed by the collision and buckling of tectonic plates, or to subduction zones like the Andes, where upwelling lava (or cryolava) from a melting descending plate rises to the surface. One possible mechanism for their formation is tidal forces from Saturn. Because Titan's icy mantle is less viscous than Earth's magma mantle, and because its icy bedrock is softer than Earth's granite bedrock, mountains are unlikely to reach heights as great as those on Earth. In 2016, the Cassini team announced what they believe to be the tallest mountain on Titan. Located in the Mithrim Montes range, it is 3,337 m tall. If volcanism on Titan really exists, the hypothesis is that it is driven by energy released from the decay of radioactive elements within the mantle, as it is on Earth. Magma on Earth is made of liquid rock, which is less dense than the solid rocky crust through which it erupts. Because ice is less dense than water, Titan's watery magma would be denser than its solid icy crust. This means that cryovolcanism on Titan would require a large amount of additional energy to operate, possibly via tidal flexing from nearby Saturn. The low-pressure ice, overlaying a liquid layer of ammonium sulfate, ascends buoyantly, and the unstable system can produce dramatic plume events. Titan is resurfaced through the process by grain-sized ice and ammonium sulfate ash, which helps produce a wind-shaped landscape and sand dune features. Titan may have been much more geologically active in the past; models of Titan's internal evolution suggest that Titan's crust was only 10 kilometers thick until about 500 million years ago, allowing vigorous cryovolcanism with low viscosity water magmas to erase all surface features formed before that time. Titan's modern geology would have formed only after the crust thickened to 50 kilometers and thus impeded constant cryovolcanic resurfacing, with any cryovolcanism occurring since that time producing much more viscous water magma with larger fractions of ammonia and methanol; this would also suggest that Titan's methane is no longer being actively added to its atmosphere and could be depleted entirely within a few tens of millions of years. Many of the more prominent mountains and hills have been given official names by the International Astronomical Union. According to JPL, "By convention, mountains on Titan are named for mountains from Middle-earth, the fictional setting in fantasy novels by J. R. R. Tolkien." Colles (collections of hills) are named for characters from the same Tolkien works. ### Dark equatorial terrain In the first images of Titan's surface taken by Earth-based telescopes in the early 2000s, large regions of dark terrain were revealed straddling Titan's equator. Prior to the arrival of Cassini, these regions were thought to be seas of liquid hydrocarbons. Radar images captured by the Cassini spacecraft have instead revealed some of these regions to be extensive plains covered in longitudinal dunes, up to 330 ft (100 m) high, about a kilometer wide, and tens to hundreds of kilometers long. Dunes of this type are always aligned with average wind direction. In the case of Titan, steady zonal (eastward) winds combine with variable tidal winds (approximately 0.5 meters per second). 
The tidal winds are the result of tidal forces from Saturn on Titan's atmosphere, which are 400 times stronger than the tidal forces of the Moon on Earth and tend to drive wind toward the equator. This wind pattern, it was hypothesized, causes granular material on the surface to gradually build up in long parallel dunes aligned west-to-east. The dunes break up around mountains, where the wind direction shifts. The longitudinal (or linear) dunes were initially presumed to be formed by moderately variable winds that either follow one mean direction or alternate between two different directions. Subsequent observations indicate that the dunes point to the east although climate simulations indicate Titan's surface winds blow toward the west. At less than 1 meter per second, they are not powerful enough to lift and transport surface material. Recent computer simulations indicate that the dunes may be the result of rare storm winds that happen only every fifteen years when Titan is in equinox. These storms produce strong downdrafts, flowing eastward at up to 10 meters per second when they reach the surface. The "sand" on Titan is likely not made up of small grains of silicates like the sand on Earth, but rather might have formed when liquid methane rained and eroded the water-ice bedrock, possibly in the form of flash floods. Alternatively, the sand could also have come from organic solids called tholins, produced by photochemical reactions in Titan's atmosphere. Studies of dunes' composition in May 2008 revealed that they possessed less water than the rest of Titan, and are thus most likely derived from organic soot like hydrocarbon polymers clumping together after raining onto the surface. Calculations indicate the sand on Titan has a density of one-third that of terrestrial sand. The low density combined with the dryness of Titan's atmosphere might cause the grains to clump together because of static electricity buildup. The "stickiness" might make it difficult for the generally mild breeze close to Titan's surface to move the dunes although more powerful winds from seasonal storms could still blow them eastward. Around equinox, strong downburst winds can lift micron-sized solid organic particles up from the dunes to create Titanian dust storms, observed as intense and short-lived brightenings in the infrared. ## Observation and exploration Titan is never visible to the naked eye, but can be observed through small telescopes or strong binoculars. Amateur observation is difficult because of the proximity of Titan to Saturn's brilliant globe and ring system; an occulting bar, covering part of the eyepiece and used to block the bright planet, greatly improves viewing. Titan has a maximum apparent magnitude of +8.2, and mean opposition magnitude 8.4. This compares to +4.6 for the similarly sized Ganymede, in the Jovian system. Observations of Titan prior to the space age were limited. In 1907 Spanish astronomer Josep Comas i Solà observed limb darkening of Titan, the first evidence that the body has an atmosphere. In 1944 Gerard P. Kuiper used a spectroscopic technique to detect an atmosphere of methane. ### Fly-by missions: Pioneer and Voyager The first probe to visit the Saturnian system was Pioneer 11 in 1979, which revealed that Titan was probably too cold to support life. It took images of Titan, including Titan and Saturn together in mid to late 1979. The quality was soon surpassed by the two Voyagers. Titan was examined by both Voyager 1 and 2 in 1980 and 1981, respectively. 
Voyager 1's trajectory was designed to provide an optimized Titan flyby, during which the spacecraft was able to determine the density, composition, and temperature of the atmosphere, and obtain a precise measurement of Titan's mass. Atmospheric haze prevented direct imaging of the surface, though in 2004 intensive digital processing of images taken through Voyager 1's orange filter did reveal hints of the light and dark features now known as Xanadu and Shangri-la, which had been observed in the infrared by the Hubble Space Telescope. Voyager 2, which would have been diverted to perform the Titan flyby if Voyager 1 had been unable to, did not pass near Titan and continued on to Uranus and Neptune. ### Cassini–Huygens Even with the data provided by the Voyagers, Titan remained a body of mystery—a large satellite shrouded in an atmosphere that makes detailed observation difficult. The Cassini–Huygens spacecraft reached Saturn on July 1, 2004, and began the process of mapping Titan's surface by radar. A joint project of the European Space Agency (ESA) and NASA, Cassini–Huygens proved a very successful mission. The Cassini probe flew by Titan on October 26, 2004, and took the highest-resolution images ever of Titan's surface, at only 1,200 kilometers (750 mi), discerning patches of light and dark that would be invisible to the human eye. On July 22, 2006, Cassini made its first targeted, close fly-by at 950 kilometers (590 mi) from Titan; the closest flyby was at 880 kilometers (550 mi) on June 21, 2010. Liquid has been found in abundance on the surface in the north polar region, in the form of many lakes and seas discovered by Cassini. #### Huygens landing Huygens was an atmospheric probe that touched down on Titan on January 14, 2005, discovering that many of its surface features seem to have been formed by fluids at some point in the past. Titan is the most distant body from Earth to have a space probe land on its surface. The Huygens probe landed just off the easternmost tip of a bright region now called Adiri. The probe photographed pale hills with dark "rivers" running down to a dark plain. Current understanding is that the hills (also referred to as highlands) are composed mainly of water ice. Dark organic compounds, created in the upper atmosphere by the ultraviolet radiation of the Sun, may rain from Titan's atmosphere. They are washed down the hills with the methane rain and are deposited on the plains over geological time scales. After landing, Huygens photographed a dark plain covered in small rocks and pebbles, which are composed of water ice. The two rocks just below the middle of the image on the right are smaller than they may appear: the left-hand one is 15 centimeters across, and the one in the center is 4 centimeters across, at a distance of about 85 centimeters from Huygens. There is evidence of erosion at the base of the rocks, indicating possible fluvial activity. The ground surface is darker than originally expected, consisting of a mixture of water and hydrocarbon ice. In March 2007, NASA, ESA, and COSPAR decided to name the Huygens landing site the Hubert Curien Memorial Station in memory of the former president of the ESA. ### Dragonfly The Dragonfly mission, developed and operated by the Johns Hopkins Applied Physics Laboratory, will launch in June 2027. It consists of a large drone powered by an RTG to fly in the atmosphere of Titan as New Frontiers 4. Its instruments will study how far prebiotic chemistry may have progressed. 
The mission is planned to arrive at Titan in 2034. ### Proposed or conceptual missions There have been several conceptual missions proposed in recent years for returning a robotic space probe to Titan. Initial conceptual work has been completed for such missions by NASA (and JPL) and ESA. At present, none of these proposals have become funded missions. The Titan Saturn System Mission (TSSM) was a joint NASA/ESA proposal for exploration of Saturn's moons. It envisioned a hot-air balloon floating in Titan's atmosphere for six months. It was competing against the Europa Jupiter System Mission (EJSM) proposal for funding. In February 2009 it was announced that ESA/NASA had given the EJSM mission priority ahead of the TSSM. The proposed Titan Mare Explorer (TiME) was a low-cost lander that would splash down in a lake in Titan's northern hemisphere and float on the surface of the lake for three to six months. It was selected for a Phase-A design study in 2011 as a candidate mission for the 12th NASA Discovery Program opportunity, but was not selected for flight. Another mission to Titan proposed in early 2012 by Jason Barnes, a scientist at the University of Idaho, is the Aerial Vehicle for In-situ and Airborne Titan Reconnaissance (AVIATR): an uncrewed plane (or drone) that would fly through Titan's atmosphere and take high-definition images of the surface. NASA did not approve the requested \$715 million, and the future of the project is uncertain. A conceptual design for another lake lander was proposed in late 2012 by the Spanish-based private engineering firm SENER and the Centro de Astrobiología in Madrid. The concept probe is called Titan Lake In-situ Sampling Propelled Explorer (TALISE). The major difference compared to the TiME probe would be that TALISE is envisioned with its own propulsion system and would therefore not be limited to simply drifting on the lake when it splashes down. A contestant for Discovery Program mission \#13 is Journey to Enceladus and Titan (JET), an astrobiology Saturn orbiter that would assess the habitability potential of Enceladus and Titan. In 2015, the NASA Innovative Advanced Concepts program (NIAC) awarded a Phase II grant to a design study of a Titan Submarine to explore the seas of Titan. ## Prebiotic conditions and life Titan is thought to be a prebiotic environment rich in complex organic compounds, but its surface is in a deep freeze at −179 °C (−290.2 °F; 94.1 K), so life as we know it cannot exist on the moon's frigid surface. However, Titan seems to contain a global ocean beneath its ice shell, and within this ocean, conditions are potentially suitable for microbial life. The Cassini–Huygens mission was not equipped to provide evidence for biosignatures or complex organic compounds; it showed an environment on Titan that is similar, in some ways, to ones hypothesized for the primordial Earth. Scientists surmise that the atmosphere of early Earth was similar in composition to the current atmosphere on Titan, with the important exception of a lack of water vapor on Titan. ### Formation of complex molecules The Miller–Urey experiment and several following experiments have shown that with an atmosphere similar to that of Titan and the addition of UV radiation, complex molecules and polymer substances like tholins can be generated. The reaction starts with dissociation of nitrogen and methane, forming hydrogen cyanide and acetylene. Further reactions have been studied extensively. 
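As a rough illustration of where that initiation step leads, the net stoichiometry of the two product channels named above can be summarized as follows. This is simplified bookkeeping of reactants and products rather than a reaction mechanism, since the real chemistry proceeds through many radical intermediates and side channels.

```latex
% Net stoichiometry only (illustrative); the actual pathways run through
% radical intermediates produced by UV photolysis and energetic particles.
\begin{align}
  \mathrm{N_2} + 2\,\mathrm{CH_4} &\longrightarrow 2\,\mathrm{HCN} + 3\,\mathrm{H_2} \\
  2\,\mathrm{CH_4} &\longrightarrow \mathrm{C_2H_2} + 3\,\mathrm{H_2}
\end{align}
```

Both channels release molecular hydrogen, which is light enough to escape from Titan's upper atmosphere, so sustained production of hydrogen cyanide and acetylene implies a continuing drawdown of atmospheric methane.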
It has been reported that when energy was applied to a combination of gases like those in Titan's atmosphere, five nucleotide bases, the building blocks of DNA and RNA, were among the many compounds produced. In addition, amino acids, the building blocks of protein were found. It was the first time nucleotide bases and amino acids had been found in such an experiment without liquid water being present. On April 3, 2013, NASA reported that complex organic chemicals could arise on Titan based on studies simulating the atmosphere of Titan. On June 6, 2013, scientists at the IAA-CSIC reported the detection of polycyclic aromatic hydrocarbons (PAH) in the upper atmosphere of Titan. On July 26, 2017, Cassini scientists positively identified the presence of carbon chain anions in Titan's upper atmosphere which appeared to be involved in the production of large complex organics. These highly reactive molecules were previously known to contribute to building complex organics in the Interstellar Medium, therefore highlighting a possibly universal stepping stone to producing complex organic material. On July 28, 2017, scientists reported that acrylonitrile, or vinyl cyanide, (C<sub>2</sub>H<sub>3</sub>CN), possibly essential for life by being related to cell membrane and vesicle structure formation, had been found on Titan. In October 2018, researchers reported low-temperature chemical pathways from simple organic compounds to complex polycyclic aromatic hydrocarbon (PAH) chemicals. Such chemical pathways may help explain the presence of PAHs in the low-temperature atmosphere of Titan, and may be significant pathways, in terms of the PAH world hypothesis, in producing precursors to biochemicals related to life as we know it. ### Possible subsurface habitats Laboratory simulations have led to the suggestion that enough organic material exists on Titan to start a chemical evolution analogous to what is thought to have started life on Earth. The analogy assumes the presence of liquid water for longer periods than is currently observable; several hypotheses postulate that liquid water from an impact could be preserved under a frozen isolation layer. It has also been hypothesized that liquid-ammonia oceans could exist deep below the surface. Another model suggests an ammonia–water solution as much as 200 kilometers (120 mi) deep beneath a water-ice crust with conditions that, although extreme by terrestrial standards, are such that life could survive. Heat transfer between the interior and upper layers would be critical in sustaining any subsurface oceanic life. Detection of microbial life on Titan would depend on its biogenic effects, with the atmospheric methane and nitrogen examined. ### Methane and life at the surface It has been speculated that life could exist in the lakes of liquid methane on Titan, just as organisms on Earth live in water. Such organisms would inhale H<sub>2</sub> in place of O<sub>2</sub>, metabolize it with acetylene instead of glucose, and exhale methane instead of carbon dioxide. However, such hypothetical organisms would be required to metabolize at a deep freeze temperature of −179.2 °C (−290.6 °F; 94.0 K). All life forms on Earth (including methanogens) use liquid water as a solvent; it is speculated that life on Titan might instead use a liquid hydrocarbon, such as methane or ethane, although water is a stronger solvent than methane. Water is also more chemically reactive, and can break down large organic molecules through hydrolysis. 
A life form whose solvent was a hydrocarbon would not face the risk of its biomolecules being destroyed in this way. In 2005, astrobiologist Chris McKay argued that if methanogenic life did exist on the surface of Titan, it would likely have a measurable effect on the mixing ratio in the Titan troposphere: levels of hydrogen and acetylene would be measurably lower than otherwise expected. Assuming metabolic rates similar to those of methanogenic organisms on Earth, the concentration of molecular hydrogen would drop by a factor of 1000 on the Titanian surface solely due to a hypothetical biological sink. McKay noted that, if life is indeed present, the low temperatures on Titan would result in very slow metabolic processes, which could conceivably be hastened by the use of catalysts similar to enzymes. He also noted that the low solubility of organic compounds in methane presents a more significant challenge to any possible form of life. Forms of active transport, and organisms with large surface-to-volume ratios could theoretically lessen the disadvantages posed by this fact. In 2010, Darrell Strobel, from Johns Hopkins University, identified a greater abundance of molecular hydrogen in the upper atmospheric layers of Titan compared to the lower layers, arguing for a downward flow at a rate of roughly 10<sup>28</sup> molecules per second and disappearance of hydrogen near Titan's surface; as Strobel noted, his findings were in line with the effects McKay had predicted if methanogenic life-forms were present. The same year, another study showed low levels of acetylene on Titan's surface, which were interpreted by McKay as consistent with the hypothesis of organisms consuming hydrocarbons. Although restating the biological hypothesis, he cautioned that other explanations for the hydrogen and acetylene findings are more likely: the possibilities of yet unidentified physical or chemical processes (e.g. a surface catalyst accepting hydrocarbons or hydrogen), or flaws in the current models of material flow. Composition data and transport models need to be substantiated, etc. Even so, despite saying that a non-biological catalytic explanation would be less startling than a biological one, McKay noted that the discovery of a catalyst effective at 95 K (−180 °C) would still be significant. With regards to the acetylene findings, Mark Allen, the principal investigator with the NASA Astrobiology Institute Titan team, provided a speculative, non-biological explanation: sunlight or cosmic rays could transform the acetylene in icy aerosols in the atmosphere into more complex molecules that would fall to the ground with no acetylene signature. As NASA notes in its news article on the June 2010 findings: "To date, methane-based life forms are only hypothetical. Scientists have not yet detected this form of life anywhere." As the NASA statement also says: "some scientists believe these chemical signatures bolster the argument for a primitive, exotic form of life or precursor to life on Titan's surface." In February 2015, a hypothetical cell membrane capable of functioning in liquid methane at cryogenic temperatures (deep freeze) conditions was modeled. Composed of small molecules containing carbon, hydrogen, and nitrogen, it would have the same stability and flexibility as cell membranes on Earth, which are composed of phospholipids, compounds of carbon, hydrogen, oxygen, and phosphorus. 
This hypothetical cell membrane was termed an "azotosome", a combination of "azote", French for nitrogen, and "liposome". ### Obstacles Despite these biological possibilities, there are formidable obstacles to life on Titan, and any analogy to Earth is inexact. At a vast distance from the Sun, Titan is frigid, and its atmosphere lacks CO<sub>2</sub>. At Titan's surface, water exists only in solid form. Because of these difficulties, scientists such as Jonathan Lunine have viewed Titan less as a likely habitat for life than as an experiment for examining hypotheses on the conditions that prevailed prior to the appearance of life on Earth. Although life itself may not exist, the prebiotic conditions on Titan and the associated organic chemistry remain of great interest in understanding the early history of the terrestrial biosphere. Using Titan as a prebiotic experiment involves not only observation through spacecraft, but also laboratory experiments and chemical and photochemical modeling on Earth. ### Panspermia hypothesis It is hypothesized that large asteroid and cometary impacts on Earth's surface may have caused fragments of microbe-laden rock to escape Earth's gravity, suggesting the possibility of panspermia. Calculations indicate that these fragments would encounter many of the bodies in the Solar System, including Titan. On the other hand, Jonathan Lunine has argued that any living things in Titan's cryogenic hydrocarbon lakes would need to be so different chemically from Earth life that it would not be possible for one to be the ancestor of the other. ### Future conditions Conditions on Titan could become far more habitable in the far future. Five billion years from now, as the Sun becomes a red giant, Titan's surface temperature could rise enough to support liquid water on its surface, making it habitable. As the Sun's ultraviolet output decreases, the haze in Titan's upper atmosphere will be depleted, lessening the anti-greenhouse effect on the surface and enabling the greenhouse created by atmospheric methane to play a far greater role. These conditions together could create a habitable environment, and could persist for several hundred million years. This is proposed to have been sufficient time for simple life to spawn on Earth, though the higher viscosity of ammonia-water solutions coupled with low temperatures would cause chemical reactions to proceed more slowly on Titan. ## See also - Colonization of Titan - Lakes of Titan - Atmosphere of Titan - Life on Titan - List of natural satellites - Saturn's moons in fiction - The sky of Titan - Titan in fiction - Titan Winged Aerobot - Vid Flumina, a river of methane and ethane on Titan
22,243,526
University of Washington station
1,172,107,545
Light rail station in Seattle, Washington
[ "2016 establishments in Washington (state)", "Link light rail stations in Seattle", "Railway stations in Washington (state) at university and college campuses", "Railway stations in the United States opened in 2016", "Railway stations located underground in Seattle", "University District, Seattle", "University of Washington campus" ]
University of Washington station is a light rail station on the University of Washington campus in Seattle, Washington, United States. The station is served by the 1 Line of Sound Transit's Link light rail system, which connects Northgate, Downtown Seattle, and Seattle–Tacoma International Airport. University of Washington station is at the intersection of Montlake Boulevard Northeast and Northeast Pacific Street, adjacent to Husky Stadium and the University of Washington Medical Center. The station consists of an underground island platform connected to a surface entrance by elevators and escalators. A pedestrian bridge over Montlake Boulevard connects the station to the University of Washington campus and the Burke-Gilman Trail. University of Washington station was built as part of the University Link Extension, which began construction in 2009 and opened on March 19, 2016. The station was the northern terminus of the 1 Line until the opening of the Northgate Link Extension on October 2, 2021. Light rail trains serve the station twenty hours a day on most days; the headway between trains is six minutes during peak periods with reduced frequency at other times. The station is served by a major bus hub; King County Metro and Sound Transit Express bus routes connect the University District to the Eastside region. ## Location University of Washington station is located at the intersection of Montlake Boulevard Northeast and Northeast Pacific Street in the University District of northern Seattle. The station is situated in the parking lot of Husky Stadium, immediately east of the University of Washington Medical Center. To the northwest is the University of Washington campus, which is accessible via the Rainier Vista bridge, the Burke-Gilman Trail, and Northeast Pacific Street. The surrounding area accommodates 15,511 jobs, constituting one of the Seattle region's major employment centers, as well as 488 residents in Montlake to the south. The station is connected to the Montlake neighborhood by the Montlake Bridge, which carries Montlake Boulevard towards a junction with State Route 520, a major east–west freeway connecting Seattle to the Eastside suburbs. The station is one mile (1.6 km) south of the University Village shopping center and two miles (3.2 km) southwest of Seattle Children's Hospital. The University of Washington has long-term plans to redevelop its parking lots along Montlake Boulevard into additional office and classroom space, forming the new "East Campus" area. ## History ### Background and planning The University of Washington moved from its downtown campus to the north side of Portage Bay in 1895, later expanding during the Alaska–Yukon–Pacific Exposition of 1909 that was hosted at the site. In 1911, urban planner Virgil Bogue's rejected comprehensive plan for Seattle envisioned a citywide subway system, including a line serving the east side of the university campus and connected to Ravenna and eastern Capitol Hill. The Forward Thrust Committee's planned regional rapid transit system, which was rejected by voters in 1968 and 1970, included a subway station at the University Hospital near Husky Stadium, from where trains would continue south through Capitol Hill to Downtown Seattle. A 1986 regional transit plan from the Puget Sound Council of Governments proposed a light rail line through the University District, including a station at the University Hospital, continuing through Eastlake to Downtown Seattle. 
In the 1990s, a regional transit authority—later Sound Transit—was formed to build a light rail system for the Seattle metropolitan area. The University District was named as a major destination for the system and given two stations at NE Pacific Street and NE 45th Street on the western side of the university campus, which would be connected to Downtown Seattle via a tunnel under Capitol Hill. The \$6.7 billion proposal, including a light rail line continuing north from the University District to Northgate and Lynnwood, was rejected by voters in 1995 and replaced with a smaller plan. In November 1996, voters approved a condensed \$3.9-billion regional transit plan that included a shorter light rail line from the University District to Downtown Seattle and SeaTac. A surface alignment through Eastlake was also proposed in the event boring a tunnel through Capitol Hill and under Portage Bay would be too expensive. Sound Transit finalized its preferred alignment for the light rail project, which included stations at NE Pacific Street and NE 45th Street, in 1999. Sound Transit suspended planning for the Portage Bay tunnel in 2000 after it received construction bids that were \$171 million higher than expected, blamed on a competitive labor market and soil testing that indicated that a deeper tunnel was needed. Faced with budget issues and further schedule delays, Sound Transit deferred construction of the segment between Downtown Seattle and the University District in 2001 while re-evaluating alignment options. The alternatives were narrowed to two options in early 2002: a tunnel under the Ship Canal at University Bridge with a single station at Northeast 45th Street; and a tunnel under the Montlake Cut and stations near Husky Stadium and at Northeast 45th Street. A Sound Transit study determined the Montlake route was the most cost-effective, and the University of Washington endorsed it as the least disruptive to its research buildings. In 2004, the Sound Transit Board confirmed this route, including an underground station at Husky Stadium with a subterranean pedestrian connection to the campus, as the new preferred alignment for the Link light rail project. The \$1.9 billion University Link project, with the Husky Stadium station as the northern terminus, received final approval from Sound Transit and the Federal Transit Administration (FTA) in 2006. Under the plan approved in 2006, the Husky Stadium station would have three entrances connected via underground walkways or overpasses: on the east side of Montlake Boulevard adjacent to the stadium; on the north side of Pacific Place on the Burke-Gilman Trail; and on the west side of Montlake Boulevard near the University of Washington Medical Center. In 2007, the Seattle Design Commission recommended an overpass to cross Montlake Boulevard in lieu of the underground walkway, and Sound Transit updated the station's design plan accordingly, adding a bridge on the north side of the Montlake Triangle across from Rainier Vista. The University of Washington unveiled its plans to redevelop the Montlake Triangle into a landscaped park with a land bridge over Pacific Place, and requested Sound Transit to connect the station through a mid-block crosswalk instead of the bridge. The FTA rejected the mid-block crosswalk and a compromise pedestrian overpass connecting to the center of the Montlake Triangle from Rainier Vista was adopted in 2011. 
The station was named "University of Washington" in 2012, leading to confusion with the existing University Street station in Downtown Seattle and the future U District station that would open west of the campus in 2021. Other suggested names included Montlake, Husky Stadium, and UW Medical Center. ### Construction and opening The University Link project received an \$813 million grant from the federal government in January 2009, allowing it to move towards final design and construction. A groundbreaking ceremony was held at the future University of Washington station on March 6, 2009, marking the start of construction. Utility relocation and site preparation at the station, consisting of the demolition and replacement of facilities at Husky Stadium—including two ticket offices, a concessions kitchen, and restrooms—had begun in February and continued until August. A new access road around Husky Stadium was built and part of the stadium's parking lot was closed and fenced off in early 2010. Excavation of the station box, along with shoring installation and jet grouting of the soil, began in June 2010. The platform level, at a depth of 100 feet (30 m), was reached in late February 2011, allowing concrete pouring to commence. The project's two tunnel boring machines arrived at University of Washington station for assembly in April 2011 and were dedicated by local and state politicians on May 16. The tunnel boring machines were launched in June and July towards Capitol Hill station, arriving in spring 2012. Station box excavation was completed in June 2012, and contractors Hoffman Construction Company moved on to steel erection and pouring of the station's upper levels. Station construction reached street level in late 2012, and structural elements of the headhouse and Montlake Boulevard overpass were installed. The station's basic structure was finished in early 2014, and landscaping and road access around the entrance was restored while finishing work continued underground. The station and overpass were declared substantially complete in November 2014, while work above ground continued. The University of Washington completed work on the Montlake Triangle in July 2015, and the Montlake Boulevard overpass opened to the public later that month. Capitol Hill and University of Washington stations opened on March 19, 2016, during a community celebration that attracted 67,000 people; the two stations opened six months ahead of schedule. The following week, several bus routes in Northeast Seattle were redirected to feed the new station as part of a major restructuring of service brought on by the cancellation of downtown express routes from the University District. By the end of the year, the station was averaging 9,300 daily boardings, placing it second among Link stations for ridership behind Westlake. The station has had long-term escalator outages that began soon after it was opened, blamed on components failing prematurely. In one incident on March 16, 2018, both of the station's down escalators were broken for three hours, forcing passengers to queue for the elevators. The incident prompted Sound Transit to change their escalator protocol in April, allowing passengers to temporarily use the shut-off escalators as stairs and opening emergency stairways. The downward escalators failed again for an hour on April 27, during which the new escalator protocol was used to allow access to the emergency stairways. 
In October 2018, Sound Transit approved a \$20 million contract to replace the station's 13 escalators, open one set of stairs to the public, and build a connection between the two sub-mezzanines above the platform. The escalator replacement plan was later cancelled in October 2020 following improved performance due to preventative maintenance under a new contractor. Buses from the Eastside area using State Route 520 were redirected to the station beginning in March 2019 as part of a restructure of downtown bus service before the downtown transit tunnel was closed to buses. A new set of bus stops and dedicated bus lanes on the east side of Montlake Boulevard were built to serve the redirected routes. University of Washington station served as the northern terminus of the 1 Line until October 2, 2021, when an extension to Northgate station opened. Tunnel boring from U District station, located northwest of the university campus, was completed in September 2016. ## Station layout University of Washington station consists of a single, 380-foot-long (120 m) island platform located 95 feet (29 m) below street level. The station has a stated platform capacity of 1,600 people; it was designed to accommodate large crowds attending Husky Stadium events. The station has a 55-foot (17 m) open mezzanine that is split between two stories and requires a change of escalators. The upper mezzanine contains ticket vending machines and passenger information, and is decorated with ceramic tiles and fixtures with green and yellow accents. The colors of the walls drew criticism from fans of the Huskies football team because they were similar to the neon yellow that was later adopted by the Oregon Ducks, a rival football team. The entrance is contained in a two-story glass building, the upper level of which leads to a bridge over Montlake Boulevard; the bridge is also connected via a ramp and stairway to street level adjacent to the station. The surface plaza around the station includes bicycle racks under the bridge's ramp, as well as pay parking in nearby lots owned by the university. The station's elevators lead directly from the platform to the surface entrance and pedestrian overpass levels. The station has 234 bicycle rack spaces and a bicycle locker with capacity for 60 bicycles. The non-public areas of the station include a track crossover, maintenance spaces, and a smoke ventilation system assisted by two surface vents to the north and south of the complex. University of Washington station was designed by LMN Architects, a Seattle-based firm that also worked on thirteen other light rail stations on the future East Link and Lynnwood Link projects. LMN received several design awards for their work on the station, including an American Institute of Architects 2021 Architecture Award, an American Institute of Architects Honor Award for Interior Architecture in 2018, an International Architecture Award from the Chicago Athenaeum, an Award of Merit from the Seattle chapter of the American Institute of Architects, and an Honorable Mention in the Fast Co.Design Innovation By Design Awards. ### Public art One major component of the station's architecture is the chamber-like mezzanine, which contains the station's sole piece of public art, Subterraneum by Leo Saul Berk, funded by Sound Transit's system-wide public art program. Subterraneum consists of 6,000 backlit LED panels lining the walls of the chamber. 
Berk took inspiration from the geologic maps for the project and symbols representing the strata near the station, while adding some original creations. The installation was praised for its scale and evocative staging by Gary Faigin of The Seattle Times, and dubbed an "underground planetarium" by the Huffington Post. The station is represented on maps and signage by a pictogram of a graduation cap with the University of Washington logo. During construction of the station from 2010 to 2014, a temporary piece of art known as the "Great Wall of Us" was installed on the fence surrounding the work site. The 1,100-foot-long (340 m) wall featured 800 photographs of 1,500 people taken at university events and at Tukwila International Boulevard station, interspersed with viewing windows into the work site and explanatory text. ## Services The station is served by the 1 Line, which runs between Northgate, Downtown Seattle, the Rainier Valley, and Seattle–Tacoma International Airport. University of Washington station is the third southbound station from Northgate and fifteenth northbound station from Angle Lake, the line's northern and southern termini, respectively. It is situated between U District and Capitol Hill stations, connecting to the latter and downtown via the University Link tunnel. 1 Line trains serve University of Washington station twenty hours a day on weekdays and Saturdays, from 5:00 a.m. to 1:00 a.m.; and eighteen hours on Sundays, from 6:00 a.m. to midnight. During regular weekday service, trains operate roughly every eight minutes during rush hour and every ten minutes during midday operation, with longer headways of fifteen minutes in the early morning and twenty minutes at night. During weekends, 1 Line trains arrive at University of Washington station every ten minutes during midday hours and every fifteen minutes during mornings and evenings. The station is approximately forty-four minutes from SeaTac/Airport station, six minutes from Westlake station in Downtown Seattle, and seven minutes from Northgate station. In 2019, an average of 10,697 passengers boarded Link trains at University of Washington station on weekdays; the station is the second busiest on the line, after Westlake. University of Washington station is also a major bus station, with seven bus stops around the Montlake Triangle and nearby streets serving bus routes primarily from Northeast Seattle and the Eastside. King County Metro operates eleven routes that stop at the station, traveling to the University District, Ballard, Roosevelt, Northgate, Green Lake, Lake City, Sand Point, Kenmore, Kirkland, Bellevue, and Issaquah. Three Sound Transit Express routes connect the station with Bellevue, Issaquah, Kirkland, and Tacoma. Until 2021, six Community Transit commuter routes connected the station to areas in Snohomish County. The bus–rail transfer at University of Washington station has been criticized for its long walking distances and the difficulty of crossing Montlake Boulevard.
36,278,145
Albertus Soegijapranata
1,169,917,178
Indonesian Catholic archbishop (1896–1963)
[ "1896 births", "1963 deaths", "20th-century Roman Catholic archbishops in Indonesia", "Clergy in World War II", "Converts to Roman Catholicism from Islam", "Indonesian Jesuits", "Indonesian Roman Catholics", "Indonesian former Muslims", "Javanese people", "Jesuit archbishops", "National Heroes of Indonesia", "People from Surakarta" ]
Albertus Soegijapranata, SJ (Perfected Spelling: Albertus Sugiyapranata; 25 November 1896 – 22 July 1963), better known by his birth name Soegija, was a Jesuit priest who became the Apostolic Vicar of Semarang and later its archbishop. He was the first native Indonesian bishop and was known for his pro-nationalistic stance, often expressed as "100% Catholic 100% Indonesian". Soegija was born in Surakarta, Dutch East Indies, to a Muslim courtier and his wife. The family moved to nearby Yogyakarta when Soegija was still young; there he began his education. Known as a bright child, he was asked around 1909 by Father Frans van Lith to enter Xaverius College, a Jesuit school in Muntilan, where Soegija slowly became interested in Catholicism. He was baptised on 24 December 1910. After graduating from Xaverius in 1915 and spending a year as a teacher there, Soegija spent two years at the on-site seminary before going to the Netherlands in 1919. He began his two-year novitiate with the Society of Jesus in September 1920 in Grave, and finished his juniorate there in 1923. After three years studying philosophy at Berchmann College in Oudenbosch, he was sent back to Muntilan as a teacher for a further two years. In 1928, he returned to the Netherlands to study theology at Maastricht, where he was ordained by Bishop of Roermond Laurentius Schrijnen on 15 August 1931; Soegija then appended the word "pranata" to his name. He was then sent back to the Indies to preach and became a parochial vicar at the parish in Kidul Loji, Yogyakarta, and in 1934 he was given his own parish in Bintaran. There he focused on creating a sense of Catholicism within the native community, emphasising the need for strong bonds between Catholic families. Soegijapranata was consecrated as the vicar apostolic of the newly established Apostolic Vicariate of Semarang in 1940. Although the population of native Catholics expanded greatly in the years following his consecration, Soegijapranata was soon faced with numerous trials. The Empire of Japan invaded the Indies beginning in early 1942, and during the ensuing occupation numerous churches were seized and clergymen were arrested or killed. Soegijapranata was able to resist several of these seizures, and spent the rest of the occupation serving the Catholics in his vicariate. After President Sukarno proclaimed the country's independence in August 1945, Semarang was overcome with unrest. Soegijapranata helped broker a ceasefire after a five-day battle between Japanese and Indonesian troops and called for the central government to send someone to deal with the unrest and food shortages in the city. However, these problems continued to grow, and in 1947 Soegijapranata moved his seat to Yogyakarta. For the remainder of the national revolution Soegijapranata worked to promote international recognition of Indonesia's independence. Soon after the Dutch, who had returned in late 1945, recognised the country's independence, Soegijapranata returned to Semarang. During the post-revolution years, he wrote extensively against communism and expanded the church; he also served as a mediator between several political factions. He was made an archbishop on 3 January 1961, when Semarang was elevated to an ecclesiastical province. Soegijapranata died in 1963 in Steyl, the Netherlands; at the time he was in Europe, where he had participated in the first session of the Second Vatican Council. 
His body was flown back to Indonesia, where he was made a national hero and interred at Giri Tunggal Heroes' Cemetery in Semarang. Soegijapranata continues to be viewed with respect by both Catholic and non-Catholic Indonesians. Several biographies have been written, and in 2012 a fictionalised biographical film by Garin Nugroho, entitled Soegija, was released to popular acclaim. Soegijapranata Catholic University, a large university in Semarang, is named after him. ## Early life Soegija was born on 25 November 1896 in Surakarta to Karijosoedarmo, an abdi dalem (courtier) at the Sunanate of Surakarta, and his wife Soepiah. The family was abangan Muslim, and Soegija's grandfather Soepa was a kyai; Soegija followed their religion. Soegija – whose name was derived from the Javanese word soegih, meaning rich – was the fifth of nine children. The family later moved to Ngabean, Yogyakarta. There, Karijosoedarmo began to serve as a courtier at the Kraton Ngayogyakarta Hadiningrat to Sultan Hamengkubuwono VII, while his wife sold fish; despite this, the family was poor and sometimes had little food. Soegija was a daring child, quick to fight, skilled at football, and noted for his intellect from a young age. While Soegija was still young, his father made him fast in accordance with Islamic law. Soegija started his formal education at a school in the Kraton complex, known locally as a Sekolah Angka Loro (Number Two School), where he learned to read and write. He later transferred to a school in Wirogunan, Yogyakarta, near Pakualaman. Beginning in his third year he attended a Dutch-run school for native Indonesians (Hollands Inlands School) in Lempuyangan. Outside of school he studied gamelan and singing with his parents. Around 1909 he was asked by Father Frans van Lith to join the Jesuit school in Muntilan, 30 kilometres (19 mi) north-west of Yogyakarta. Although his parents were initially worried that Soegija would become too Europeanised, they agreed. ## Xaverius College In 1909 Soegija started at the Xaverius College in Muntilan, a school for aspiring teachers, and stayed in the dormitory. He was one of 54 students in his year. The boys followed a strict schedule, attending classes in the morning and engaging in other activities, such as gardening, discussions, and chess, in the afternoon. The Catholic students had regular prayers. Although the college did not require students to be Catholic, Soegija was pressured by his Catholic classmates, leading to several fights. Feeling dissatisfied, Soegija complained to his teacher that the Dutch priests were like merchants, thinking only of money. The priest replied that the teachers were unpaid and only hoped for the students' good. This led Soegija to better respect the priests, and when van Rijckevorsel told the other students that Soegija did not want to be Catholic, they stopped pressuring him. The following year Soegija asked to join the Catholic-education classes, citing a desire to fully use the facilities at Xaverius. His teacher, Father Mertens, told Soegija that he required permission from his parents first; although they refused, Soegija was nevertheless allowed to study Catholicism. He was intrigued by the Trinity, and asked several of the priests for clarification. Van Lith cited the works of Thomas Aquinas, while Mertens discussed the Trinity as explained by Augustine of Hippo; the latter told him that humans were not meant to understand God with their limited knowledge. 
Soegija, who wanted to learn more, asked to be baptised, quoting the Finding in the Temple to show why he should not need his parents' permission. The priests agreed, and Soegija was baptised on 24 December 1910, taking the baptismal name Albertus, for Albertus Magnus. During the Christmas holidays, he told his family that he had converted. Although his immediate family eventually accepted this, and may have eventually supported him, Soegija's other relatives refused to speak to him afterwards. Soegija and the students continued their education at Xaverius, receiving further instruction. According to Father G. Budi Subanar, a lecturer on theology at Sanata Dharma University, during this period one of the teachers taught the Fourth Commandment, "Honour your father and your mother, that your days may be long in the land which the Lord your God gives you.", as relating not only to one's birth father and mother, but all who had come before; this left the students with nationalistic tendencies. On another occasion, a visit by a Capuchin missionary – who was physically quite different from the Jesuit teachers – led Soegija to consider becoming a priest, an idea which his parents accepted. In 1915 Soegija finished his education at Xaverius, but stayed on as a teacher. The following year he joined the on-site seminary, one of three native Indonesians who entered the seminary that year. He graduated in 1919, having studied French, Latin, Greek, and literature. ## Path to priesthood Soegija and his classmates sailed to Uden, in the Netherlands, to further their studies in 1919. In Uden Soegija spent a year further studying Latin and Greek, necessary for his preaching back in the Indies. He and his classmates adapted to Dutch culture. On 27 September 1920 Soegija began his novitiate to join the Jesuits, the first of his classmates. While completing his studies at Mariëndaal in Grave, he was separated from much of the world and spent his time in introspection. He completed his novitiate on 22 September 1922 and was initiated into the Jesuits, taking their oath of poverty, chastity and obedience. After joining the Jesuits Soegija spent another year in Mariëndaal in juniorate. Beginning in 1923 he studied philosophy at Berchmann College in Oudenbosch; during this time he examined the teachings of Thomas Aquinas and began writing on Christianity. In a letter dated 11 August 1923, he wrote that the Javanese were so far unable to discern between Catholics and Protestants, and that the best way to convert the Javanese was by deeds, not words. He also translated some of the results of the 27th International Eucharistic Congress, held in Amsterdam in 1924, for the Javanese-language magazine Swaratama, which circulated mainly among Xaverius alumni. Several of Soegija's other writings were published in St. Claverbond, Berichten uit Java. He graduated from Berchmann in 1926, then began preparations to return to the Indies. Soegija arrived in Muntilan in September 1926, where he began teaching algebra, religion, and Javanese at Xaverius. Little is known about his period as an instructor at the school, although records indicate that he based his teaching style on that of van Lith, who had died in early 1926, explaining religious concepts in terms based on Javanese tradition. He supervised the school's gamelan and gardening programs and became the chief editor of Swaratama. 
Soegijapranata wrote editorials that covered a variety of topics, including condemnations of communism and discussions of various aspects of poverty. After two years at Xaverius, in August 1928, Soegija returned to the Netherlands, where he studied theology at Maastricht. On 3 December 1929 he and four other Asian Jesuits joined Jesuit Superior General Wlodzimierz Ledóchowski in a meeting with Pope Pius XI in Vatican City; the pope told the Asian men that they were to be the "backbones" of Catholicism in their respective nations. Soegija was made a deacon in May 1931; he was ordained by Bishop of Roermond Laurentius Schrijnen on 15 August 1931, while still studying theology. After his ordination, Soegija appended the word pranata, meaning "prayer" or "hope", as a suffix to his birth name; such additions were a common practice in Javanese culture when the bearer of a name reached an important milestone. He finished his theology studies in 1932 and in 1933 spent his tertianship in Drongen, Belgium. That year he wrote an autobiography, entitled La Conversione di un Giavanese (The Conversion of a Javanese); the work was released in Italian, Dutch, and Spanish. ## Preaching On 8 August 1933 Soegijapranata and two fellow priests departed for the Indies; Soegijapranata was assigned to preach at Kidul Loji in Yogyakarta, near the Kraton. He served as parochial vicar for Father van Driessche, one of his teachers from Xaverius. The elder priest taught Soegijapranata how to better address the needs of his parish and likely used him to preach to the city's growing native Catholic population. Soegijapranata was, by this point, a short and chubby man with what the Dutch historian Geert Arend van Klinken described as "a boyish sense of humour that won him many friends". After the St Yoseph Church in Bintaran, about 1 kilometre (0.62 mi) from Kidul Loji, opened in April 1934, Soegijapranata was transferred there to become its priest; the church primarily served the Javanese Catholic community. Bintaran was one of four centres of Catholic presence in Yogyakarta at the time, along with Kidul Loji, Kotabaru, and Pugeran; each major church served a wide area, and the priests from the major churches gave sermons in the furthest reaches of their parishes. After van Driessche's death in June 1934, Soegijapranata's duties were extended to include the village of Ganjuran, Bantul, 20 kilometres (12 mi) south of the city, which was home to more than a thousand native Catholics. He was also a spiritual adviser to several local groups and established a Catholic credit union. The Catholic Church at the time faced difficulty retaining converts. Some Javanese, who had converted as students, returned to Islam after reentering society and facing social ostracism. In a 1935 meeting with other Jesuits, Soegijapranata blamed the problem on the lack of a united Catholic identity, or sensus Catholicus, as well as the low rate of marriage among native Catholics. Soegijapranata opposed marriage between Catholics and non-Catholics. He counselled young Catholic couples before marriage, believing that these unions helped unite the Catholic families in the city, and continued to write for Swaratama, again serving as its editor in chief. In 1938, he was chosen to advise the Society of Jesus, coordinating Jesuit work in the Indies. ## Vicar apostolic The increasing population of Catholics in the Indies led Mgr. 
Petrus Willekens, then Vicar Apostolic of Batavia, to suggest that a new apostolic vicariate be established in Central Java, headquartered in Semarang, as the area was culturally different and geographically separate from Batavia (now Jakarta). The Apostolic Vicariate of Batavia was split in two on 25 June 1940; the eastern half became the Apostolic Vicariate of Semarang. On 1 August 1940 Willekens received a telegram from Pro-Secretary of State Giovanni Battista Montini ordering that Soegijapranata be put in charge of the newly established apostolic vicariate. This was forwarded to Soegijapranata in Yogyakarta, who agreed to the appointment, despite being surprised and nervous. His assistant Hardjosoewarno later recalled that Soegijapranata cried after reading the telegram – an uncharacteristic response – and, when eating a bowl of soto, asked if Hardjosoewarno had ever seen a bishop eating the dish. Soegijapranata left for Semarang on 30 September 1940 and was consecrated by Willekens on 6 October at the Holy Rosary Church in Randusari, which later became his seat; this consecration made Soegijapranata the first native Indonesian bishop. The ceremony was attended by numerous political figures and sultans from Batavia, Semarang, Yogyakarta, and Surakarta, as well as clergy from Malang and Lampung; Soegijapranata's first act as vicar was to issue a pastoral letter with Willekens outlining the historical background to his appointment, including Pope Benedict XV's apostolic letter Maximum illud, which called for more local clergy, and Pope Pius XI and Pope Pius XII's efforts to appoint more pastors and bishops from native ethnic groups worldwide. Soegijapranata began working on the Church hierarchy in the region, establishing new parishes. In Soegijapranata's apostolic vicariate there were 84 pastors (73 European and 11 native), 137 brothers (103 European and 34 native), and 330 nuns (251 European and 79 native). The vicariate included Semarang, Yogyakarta, Surakarta, Kudus, Magelang, Salatiga, Pati, and Ambarawa; its geographic conditions ranged from the fertile lowlands of the Kedu Plain to the arid Gunung Sewu mountainous area. The vast majority of its population was ethnic Javanese; the vicariate included more than 15,000 native Catholics, as well as a similar number of European Catholics. The number of native Catholics quickly outpaced the number of European ones, and had doubled by 1942. There were also several Catholic organisations, mostly working in education. However, the Indonesian Catholics were less prominent than the Protestants. ### Japanese occupation After the Japanese invaded the Indies in early 1942, Governor-General Tjarda van Starkenborgh Stachouwer and General Hein ter Poorten, head of the Royal Netherlands East Indies Army, capitulated on 9 March 1942. This brought numerous changes in the governance of the archipelago, reducing the quality of life for non-Japanese. In his diary, Soegijapranata wrote of the invasion that "fires were everywhere ... no soldiers, no police, no workers. The streets are full of burnt out vehicles. ... Luckily at least there are still some lawmakers and Catholics out there. They work as representatives of their groups to ensure the city is in order." The occupation government interned numerous (mostly Dutch) men and women, both clergy and laymen, and instituted policies that changed how services were held. They forbade the use of Dutch in services and in writing and seized several church properties. 
Soegijapranata attempted to resist these seizures, at times filling the locations with people to make them unmanageable or indicating that other buildings, such as cinemas, would serve Japanese needs better. When the Japanese attempted to seize Randusari Cathedral, Soegijapranata replied that they could take it only after decapitating him; the Japanese later found another location for their office. He prevented the Japanese from taking Gedangan Presbytery, where he lived, and assigned guardians for schools and other facilities to prevent seizure. These efforts were not always successful, however, and several Church-run institutions were seized, as were church funds. Soegijapranata was unable to prevent Japanese torture of prisoners of war, including the clergy, but was himself well-treated by the Japanese forces; he was often invited to Japanese ceremonies, but never attended, sending bouquets in his stead. He used this position of respect to lobby for fair treatment of those interned. He successfully petitioned the Japanese overlords to exempt nuns from the paramilitary draft and allow them to work at hospitals. He and the Catholic populace also gathered food and other supplies for interned clergy, and Soegijapranata kept in contact with the prisoners, supplying and receiving news, such as recent deaths, and other information. As the number of clergy was severely limited, Soegijapranata roamed from church to church to attend to parishioners, actively preaching and serving as the de facto head of the Catholic Church in the country; this was in part to counteract rumours of his detention by the Japanese. He travelled by foot, bicycle, and carriage, as his car had been seized. He sent pastors to apostolic prefectures in Bandung, Surabaya, and Malang to deal with the lack of clergy there. Soegijapranata worked to ensure that the seminary would continue to produce new pastors and appointed the recently ordained Father Hardjawasita as its rector. He also granted native priests the authority to perform marriages. To calm the Catholic populace, he visited their homes and convinced them that the streets were safe. ### Indonesian National Revolution After the atomic bombings of Hiroshima and Nagasaki and the proclamation of Indonesian independence in August 1945, the Japanese began withdrawing from the country. In support of the new Republic, Soegijapranata had an Indonesian flag flown in front of the Gedangan Rectory; however, he did not formally recognise the nation's independence, owing to his correspondence with Willekens regarding the Church's neutrality. He and his clergy treated injured Dutch missionaries, who had recently been released from internment, at the rectory. The Dutch clergy were malnourished, and several required treatment at a hospital. Some were later taken to Indonesian-run internment camps, but the Catholics were still allowed to look after them. Meanwhile, inter-religious strife led to the burning of several mission buildings and the murder of some clergymen. The government also took several buildings, and some that had been seized by the Japanese were not returned. Allied forces sent to disarm the Japanese and repatriate prisoners of war arrived in Indonesia in September 1945. In Semarang, this led to a conflict between Japanese forces and Indonesian rebels, that began on 15 October; the Indonesians aimed to confiscate the Japanese weapons. Allied forces began landing in the city on 20 October 1945; a small group was sent to Gedangan to speak with Soegijapranata. 
Concerned with civilian suffering, the vicar apostolic told the Allies that they must stop the battle; the Allies could not comply as they did not know the Japanese commander. Soegijapranata then contacted the Japanese and, that afternoon, brokered a cease-fire agreement in his office at Gedangan, despite Indonesian forces' firing at the Gurkha soldiers posted in front of the building. Military conflicts throughout the area and an ongoing Allied presence led to food shortages throughout the city, as well as constant blackouts and the establishment of a curfew. Civilian-run groups attempted to deal with the food shortages but were unable to cope. In an attempt to deal with these issues, Soegijapranata sent a local man, Dwidjosewojo, to the capital at Jakarta – renamed from Batavia during the Japanese occupation – to speak with the central government. Dwidjosewojo met with Prime Minister Sutan Sjahrir, who sent Wongsonegoro to help establish a civilian government, installing Moch. Ikhsan as mayor. The city's government was, however, still unable to handle the crisis, and the major figures in this government were later captured by the Dutch-run Netherlands Indies Civil Administration (NICA) and imprisoned; Soegijapranata, although he at times harboured Indonesian revolutionaries, was spared. In January 1946 the Indonesian government moved from Jakarta – by then under Dutch control – to Yogyakarta. This was followed by a widespread exodus of civilians fleeing the advancing NICA soldiers. Soegijapranata at first stayed in Semarang, working to establish patrols and watches. He also corresponded with Willekens in Jakarta, although the elder bishop considered the revolution an internal security matter for the Dutch and not an issue for the Church. However, in early 1947 Soegijapranata moved to Yogyakarta, allowing easy communication with the political leadership. He established his seat at St Yoseph in Bintaran and counselled young Catholics to fight for their country, saying that they should only return "once they were dead".[^1] Soegijapranata was present during several battles that arose where he was preaching. After the Linggadjati Agreement failed to solve conflicts between Indonesia and the Netherlands and the Dutch attacked republicans on 21 July 1947, Soegijapranata declared that Indonesia's Catholics would work with the Indonesians and called for an end to the war in a speech on Radio Republik Indonesia; van Klinken describes the address as "passionate" and considers it to have boosted the Catholic populace's morale. Soegijapranata wrote extensively to the Holy See. In response, the Church leadership sent Georges de Jonghe d'Ardoye to Indonesia as its delegate, initiating formal relations between the Vatican and Indonesia. D'Ardoye arrived in the new republic in December 1947 and met with President Sukarno; however, formal diplomatic relations were not opened until 1950. Soegijapranata later became a friend of the president. After the Dutch captured the capital during Operation Kraai on 19 December 1948, Soegijapranata ordered that the Christmas festivities be kept simple to represent the Indonesian people's suffering. During the Dutch occupation Soegijapranata smuggled some of his writings out of the country; the works, later published in Commonweal with the help of George McTurnan Kahin, described Indonesians' daily lives under Dutch rule and called for international condemnation of the occupation. 
Soegijapranata further opined that the Dutch blockade on Indonesia, aside from strangling the new country's economy, increased the influence of its communist groups. After the Dutch retreated in the wake of the General Attack of 1 March 1949, Soegijapranata began working to ensure Catholic representation in the government. With I. J. Kasimo, he organised the All-Indonesia Catholic Congress (Kongres Umat Katolik Seluruh Indonesia). Held between 7 and 12 December, the congress resulted in the union of seven Catholic political parties into the Catholic Party. Soegijapranata continued his efforts to consolidate the Party after the revolution. ### Post-revolution After the Dutch recognised Indonesia's independence on 27 December 1949, following a several month-long conference in the Hague, Soegijapranata returned to Semarang. The post-revolution period was marked by a drastic increase in enrolment at the nation's seminary; the 100th native Indonesian clergyman was ordained in 1956. The government, however, enacted several laws that limited the Church's ability to expand. In 1953 the Ministry for Religion decreed that no foreign missionaries would be allowed into the country, and a subsequent law prohibited those already in Indonesia from teaching. In response, Soegijapranata encouraged eligible clergy to apply for Indonesian citizenship, circumventing the new laws. Aside from overseeing the new clergy, Soegijapranata continued to work towards Catholic education and prosperity, similar to his pre-war work. He emphasised that students must not only be good Catholics, but also good Indonesians. The Church began further development of its schools, ranging from elementary schools to universities. Soegijapranata also began reforming the Church in his apostolic vicariate, making it more Indonesian. He advocated the use of local languages and Indonesian during mass, allowing it throughout his diocese beginning in 1956. In addition, he pressed for the use of gamelan music to accompany services, and agreed to the use of wayang shows to teach the Bible to children. As the Cold War heated up, tensions developed between the Church in Indonesia and the Indonesian Communist Party (PKI). Soegijapranata believed that the PKI was making progress with the poor through its promises of workers' rights in a communist-led union. To combat this, he worked with other Catholics to establish labour groups, open to both Catholics and non-Catholics. He hoped that these would empower workers and thus limit the PKI's influence. One such labour group was Buruh Pancasila, which was formed on 19 June 1954; through the organisation Soegijapranata helped promote the state philosophy of Pancasila (the five tenets). The following year the Church Representatives Conference of Indonesia (KWI), recognising Soegijapranata's devotion to the poor, put him in charge of establishing social-support programmes throughout the archipelago. On 2 November 1955, he and several other bishops issued a decree denouncing communism, Marxism, and materialism, and asking the government to ensure fair and equitable treatment for all citizens. Relations between Indonesia and the Netherlands continued to be poor, specifically in regard to control of West Papua, historically under Dutch control but claimed by Indonesia. Soegijapranata firmly supported the Indonesian position; West Papua was annexed in 1963. 
There was also friction within the Catholic groups, first over Sukarno's 1957 decree that he was president for life and his establishment of a guided democracy policy. A faction led by Soegijapranata supported this decree, while Catholic Party leader I. J. Kasimo's faction strongly opposed it. Sukarno, who had a good working relationship with Soegijapranata, asked the vicar to join the National Council, a request that Soegijapranata refused; he did, however, assign two delegates to the council, ensuring Catholic representation. This, along with Soegijapranata's support of Sukarno's decree on 5 July 1959 calling for a return to the 1945 constitution, resulted in Bishop of Jakarta Adrianus Djajasepoetra's denunciation of Soegijapranata as a sycophant. However, Soegijapranata was strongly against Sukarno's idea of Nasakom, which would base part of the nation's government on communism. ## Archbishop of Semarang and death During the latter half of the 1950s, the KWI met several times to discuss the need for a self-determined Indonesian Catholic hierarchy. At these annual meetings, they touched on administrative and pastoral issues, including the translation of songs into Indonesian languages. In 1959 Cardinal Grégoire-Pierre Agagianian visited the country to see the Church's preparations. The KWI formally requested its own hierarchy in a May 1960 letter; this letter received a reply from Pope John XXIII dated 20 March 1961, which divided the archipelago into six ecclesiastical provinces: two in Java, one in Sumatra, one in Flores, one in Sulawesi and Maluku, and one in Borneo. Semarang became the seat of the province of Semarang, and Soegijapranata its archbishop. He was elevated on 3 January 1961. At the time, Soegijapranata was in Europe to attend the Second Vatican Council as part of the Central Preparatory Commission; he was one of eleven diocesan bishops and archbishops from Asia. He was able to attend the first session, where he voiced concerns about the declining quality of pastoral work and called for the modernisation of the Church. He then returned to Indonesia, but his health, poor since the late 1950s, quickly declined. After a stay at Elisabeth Candi Hospital in Semarang in 1963, Soegijapranata was forbidden from undertaking active duties. Justinus Darmojuwono, a former internee of the Japanese army and vicar general of Semarang since 1 August 1962, served as acting bishop. On 30 May 1963, Soegijapranata left Indonesia for Europe to attend the election of Pope Paul VI. Soegijapranata then went to Canisius Hospital in Nijmegen, where he underwent treatment from 29 June until 6 July; this was unsuccessful. He died on 22 July 1963 at a nunnery in Steyl, the Netherlands, shortly after suffering a heart attack. As Sukarno did not want Soegijapranata buried in the Netherlands, his body was flown to Indonesia after last rites were performed by Cardinal Bernardus Johannes Alfrink. Soegijapranata was declared a National Hero of Indonesia on 26 July 1963, through Presidential Decree No. 152/1963, while his body was still in transit. Soegijapranata's body arrived at Kemayoran Airport in Jakarta on 28 July and was brought to the Jakarta Cathedral for further rites, presided over by Bishop of Jakarta Adrianus Djajasepoetra and including a speech by Sukarno. The following day Soegijapranata's body was flown to Semarang, accompanied by several Church and government luminaries. He was buried at Giri Tunggal Heroes' Cemetery in a military funeral on 30 July, after several further rites. 
Darmojuwono was appointed as the new archbishop in December 1963; he was consecrated on 6 April 1964 by Archbishop Ottavio De Liva. ## Legacy Soegijapranata is remembered with pride by Javanese Catholics, who praise his strength of will during the occupation and national revolution. The historian Anhar Gonggong described Soegijapranata as not just a bishop, but an Indonesian leader who "was tested as a good leader and deserved the hero status". The Indonesian historian Anton Haryono described Soegijapranata's elevation to the episcopate as "monumental", considering that Soegijapranata had only been ordained nine years previously and was chosen ahead of non-Indonesian priests several years his senior. Henricia Moeryantini, a nun in the Order of Carolus Borromeus, writes that the Catholic Church became nationally influential under Soegijapranata, and that the archbishop cared too much for the people to take an outsider's approach. Van Klinken writes that Soegijapranata eventually became like a priyayi, or Javanese nobleman, within the church, as "committed to hierarchy and the status quo as to the God who created them". According to van Klinken, by coming to the nascent republic Soegijapranata had been willing to see "the coming Javanese paradise" at great personal risk. Soegijapranata is the namesake of a large Catholic university in Semarang. Streets in several Indonesian cities are named after him, including in Semarang, Malang, and Medan. His grave in Giri Tunggal is often the site of pilgrimage for Indonesian Catholics, who hold graveside masses. In June 2012 director Garin Nugroho released a biopic on Soegijapranata entitled Soegija. Starring Nirwan Dewanto in the titular role, the film followed Soegijapranata's activities during the 1940s, against the backdrop of the Japanese occupation and the war for Indonesian independence. The film, which had a Rp 12 billion (US\$1.3 million) budget, sold over 100,000 tickets on its first day. Its launch was accompanied by a semi-fictional novelisation of Soegija's life, written by Catholic author Ayu Utami. Several non-fiction biographies of Soegija, by both Catholic and non-Catholic writers, were released concurrently. In Indonesian popular culture, Soegijapranata is known for his motto "100% Catholic, 100% Indonesian" ("100% Katolik, 100% Indonesia"). The motto, which has been used to advertise several biographies and the film Soegija, is derived from Soegijapranata's opening speech at the 1954 All-Indonesia Catholic Congress in Semarang, as follows: > If we consider ourselves good Christians, then we should also become good patriots. As such, we should feel 100% patriotic because we are 100% Catholic. According to the Fourth Commandment, as written in the Catechism, we must love the Catholic Church and, it follows, we must love our country with all our hearts. ## See also - Catholic Church in Indonesia [^1]: Original: "... baru boleh pulang kalau mati."
11,942,501
Tropic Thunder
1,173,424,750
2008 film by Ben Stiller
[ "2000s American films", "2000s British films", "2000s English-language films", "2000s German films", "2000s Mandarin-language films", "2000s satirical films", "2008 action comedy films", "2008 comedy films", "2008 films", "African-American-related controversies in film", "American action comedy films", "American satirical films", "Blackface minstrel shows and films", "British action comedy films", "British satirical films", "DreamWorks Pictures films", "English-language German films", "Films about actors", "Films about filmmaking", "Films about terrorism in Asia", "Films about the United States Army", "Films directed by Ben Stiller", "Films produced by Ben Stiller", "Films scored by Theodore Shapiro", "Films set in Los Angeles", "Films shot in Hawaii", "Films shot in Los Angeles", "Films with screenplays by Etan Cohen", "Films with screenplays by Justin Theroux", "German action comedy films", "German satirical films", "Metafictional works", "Military humor in film", "Paramount Pictures films", "Race-related controversies in film", "Red Hour Productions films", "Torture in films" ]
Tropic Thunder is a 2008 satirical action comedy film directed by Ben Stiller, who wrote the screenplay with Justin Theroux and Etan Cohen. The film stars Stiller, Jack Black, Robert Downey Jr., Jay Baruchel, and Brandon T. Jackson as a group of prima donna actors making a Vietnam War film. When their frustrated director (Steve Coogan) drops them in the middle of a jungle, they are forced to rely on their acting skills to survive the real action and danger. Tropic Thunder parodies many prestigious war films (specifically those based on the Vietnam War), the modern Hollywood studio system, and method acting. The ensemble cast includes Nick Nolte, Danny McBride, Matthew McConaughey, Bill Hader, and Tom Cruise. Stiller developed Tropic Thunder's premise during the production of Empire of the Sun in the spring of 1987, and later enlisted Theroux and Cohen to complete a script. The film was green-lit in 2006 and produced by Stuart Cornfeld, Stiller, and Eric McLeod for Red Hour Productions and DreamWorks Pictures as an international coproduction between the United States, Germany, and the United Kingdom. Filming took place in 2007 on the Hawaiian island of Kaua'i over thirteen weeks and was the largest film production in the island's history. The extensive marketing campaign included faux websites for three of the main characters and their fictional films, a fictional television special, and selling the energy drink advertised in the film, "Booty Sweat". Paramount Pictures and DreamWorks released Tropic Thunder in the United States on August 13, 2008. It received generally positive reviews for its characters, story, faux trailers, and cast performances, with Downey Jr. being the most positively praised for his performance. However, the depiction of disabled people and the use of brownface makeup attracted controversy. The film opened at the top of the American box office and retained the number-one position for three consecutive weeks, ultimately grossing more than \$195 million worldwide before its release on home media on November 18, 2008. Downey was nominated for an Academy Award, a BAFTA Award, and a Screen Actors Guild Award, while both he and Cruise received nominations for a Golden Globe Award. ## Plot Hook-handed Vietnam veteran Staff Sergeant John "Four Leaf" Tayback's memoir Tropic Thunder is being made into a film. Except for newcomer supporting actor Kevin Sandusky, the cast—has-been action hero Tugg Speedman, overbearing five-time Academy Award-winning Australian method actor Kirk Lazarus, closeted homosexual rapper Alpa Chino, and drug-addicted comedian Jeff Portnoy—all cause problems for the inexperienced director Damien Cockburn, who cannot control them, resulting in a million-dollar pyrotechnics scene being wasted. With the project months behind schedule, studio executive Les Grossman gives Damien an ultimatum: get the cast under control, or the project will be canceled. On Four Leaf's advice, Damien drops the actors into the middle of the jungle, with hidden cameras and rigged special effects explosions to film "guerrilla-style." The actors have guns that fire blanks, along with a map and scene listing that will lead to a helicopter waiting at the end of the route. Unknown to the actors and production, the group has been dropped in the middle of the Golden Triangle, the home of the heroin-producing Flaming Dragon gang. Just as the group is about to set off, Damien inadvertently steps on an old land mine and is blown up, stunning the actors. 
Tugg, believing Damien faked his death to encourage the cast to give better performances, assures the others that Damien is alive and that they are still shooting the film. Kirk is unconvinced but joins them in their trek to escape the jungle. When Four Leaf and pyrotechnics operator Cody Underwood try to locate the deceased director, they are captured by Flaming Dragon. Four Leaf is revealed to have hands; he confesses to Underwood that he actually served in the Coast Guard, has never left the United States, and that he wrote his "memoir" as a tribute. As the actors continue through the jungle, Kirk, who has become convinced that Tugg's ineptitude is jeopardizing them, and Kevin, the only actor who bothered to properly prepare for his role, discover that Tugg is leading them in the wrong direction. The resulting argument results in Kirk leading the rest of the cast back toward the resort they are staying at as an increasingly delirious Tugg is captured by Flaming Dragon. Taken to their base, Tugg believes it is a POW camp from the script. The gang discovers he is the star of their favorite film, the box-office bomb Simple Jack, and forces him to reenact it several times a day, leading him to become brainwashed. Meanwhile, in Los Angeles, Tugg's agent, Rick "Pecker" Peck, confronts Les over an unfulfilled term in Tugg's contract that entitles him to a TiVo. Flaming Dragon calls during the discussion and demands a ransom for Tugg, but Les instead delivers a profanity-laden death threat. Les is uninterested in rescuing Tugg and is instead delighted at the prospect of a large insurance payout if Tugg dies. He attempts to convince Pecker to play along by promising a Gulfstream V jet and "lots of money." Kirk, Alpa, Jeff, and Kevin discover Flaming Dragon's heroin factory. After witnessing Tugg being tortured, they plan a rescue attempt based on the film's script. Kirk impersonates a farmer towing a "captured" Jeff on the back of a water buffalo, distracting the armed guards so Alpa and Kevin can infiltrate and find the prisoners. Still, a combination of broken Mandarin Chinese and inconsistencies in his story sets off the gang's boss, the 12-year-old Tran. Knowing their cover has been blown, the actors begin firing, fooling the gang members into surrendering. Their control of the gang falls apart when Jeff grabs Tran and heads for the drugs, and the gang, realizing the guns fire blanks, recover their guns and fight back. The four actors locate Four Leaf, Underwood, and Tugg and cross a bridge rigged to explode to get to Underwood's helicopter. Tugg initially remains behind, believing Flaming Dragon to be his "family," but runs back screaming, chased by an angry horde. Four Leaf destroys the bridge, rescuing Tugg, but as the helicopter takes off, Tran fires an RPG at the helicopter. Rick unexpectedly stumbles out of the jungle and saves them by throwing a TiVo box into the rocket's path. The crew return to Hollywood, and footage from the hidden cameras is compiled into the feature film Tropic Blunder, which becomes a major critical and commercial success. The film wins Tugg his first Academy Award, which Kirk presents to him at the ceremony. ## Cast - Ben Stiller as Tugg Speedman: Once the highest-paid, highest-grossing action film star ever due to his Scorcher franchise, his career has stalled, and he now has a reputation for appearing in nothing but box office bombs. 
After drawing negative coverage for his portrayal in Simple Jack, in which he plays a mentally challenged farm boy who can talk to animals, he takes the role of Four Leaf Tayback in an attempt to save his career. Tugg's faux trailer at the film's start is a preview for Scorcher VI: Global Meltdown, the latest in his series and a spoof of long-running summer action blockbuster franchises. - Jack Black as Jeff Portnoy: A drug-addicted comedian-actor well known for portraying multiple parts in films that rely on toilet humor, particularly jokes about flatulence. In the film-within-a-film, he plays a raspy-voiced soldier named Fats. He fears he is only considered an actor because of his farts. Portnoy's faux trailer for the juvenile family comedy The Fatties: Fart 2, about a family (with each member played by Portnoy) that enjoys passing gas, spoofs Eddie Murphy's portrayal of multiple characters in films such as Nutty Professor II: The Klumps. - Robert Downey Jr. as Kirk Lazarus as Lincoln Osiris: An Australian method actor and five-time Academy Award winner, Lazarus had a controversial "pigmentation alteration" surgery to temporarily darken his skin for his portrayal of the black character, Staff Sergeant Lincoln Osiris. Lazarus refuses to break character until he has recorded the DVD commentary for a part and only speaks in his character's African American Vernacular English. Lazarus's faux trailer, Satan's Alley, is about two gay monks in a 12th-century Irish monastery, parodying films like Brokeback Mountain and Downey's own scenes with Tobey Maguire (who in a cameo portrays himself playing the other monk) in Wonder Boys. Downey said he modeled Lazarus on three actors: Russell Crowe, Daniel Day-Lewis and Colin Farrell. Lazarus was originally intended to be Irish, but Downey felt more comfortable using an Australian accent, since he had portrayed an Australian character in Natural Born Killers. Veteran makeup effects artist Rick Baker designed and created Downey's Osiris makeup, while John Blake and Greg Nelson handled the on-set application. - Nick Nolte as Four Leaf Tayback: The author of Tropic Thunder, a fake memoir of his war experiences on which the film-within-a-film is based. He suggests the idea of dropping the actors in the middle of the jungle to get them looking and feeling like soldiers lost in a foreign land. - Steve Coogan as Damien Cockburn: The inexperienced British film director who is unable to control the actors in the film. The character was partly inspired by Richard Stanley and his experience of directing the 1996 film The Island of Dr. Moreau with Val Kilmer and Marlon Brando. - Jay Baruchel as Kevin Sandusky: A novice actor, he is the only cast member to have read the script and book and attended the assigned boot camp prior to the film. Sandusky plays a young soldier named Brooklyn in the film-within-a-film. Brooklyn and Sandusky serve as the straight man within the film-within-a-film and among its cast respectively, Sandusky being the only actor without an internal conflict or deep-seated insecurity. He often serves as a mediator when tensions between the cast get high. - Danny McBride as Cody Underwood: The film's explosives expert and helicopter pilot. He has developed a reputation for being a dangerous pyromaniac after an incident while working on Freaky Friday nearly blinded Jamie Lee Curtis. - Brandon T. 
Jackson as Alpa Chino: A closeted homosexual rapper who is attempting to cross over into acting, portraying a soldier named Motown, while promoting his "Bust-A-Nut" candy bar and "Booty Sweat" energy drink. He feels his image as a rapper would not allow him to be openly gay. His name is a play on Al Pacino. Kevin Hart turned down the role because he did not want to play a gay character. - Bill Hader as Studio Executive Rob Slolom: Grossman's assistant and right-hand man. - Brandon Soo Hoo as Tran: The 12-year-old leader of the Flaming Dragon gang. The character was compared to God's Army guerrilla leaders Johnny and Luther Htoo. - Reggie Lee as Byong: The second-in-command of the Flaming Dragon gang. - Trieu Tran as Tru: A dedicated mercenary and actor in the Flaming Dragon gang. - Matthew McConaughey as Rick "The Pecker" Peck: Speedman's extremely devoted agent and best friend. - Tom Cruise as Les Grossman: The profane, ill-tempered studio executive producing Tropic Thunder. Various commentators and Hollywood insiders believe he is loosely based on film producer Scott Rudin, famous for his volcanic temper and poor treatment of others, and on Harvey Weinstein. Various actors and celebrities portray themselves, including Tobey Maguire, Tyra Banks, Maria Menounos, Martin Lawrence, The Mooney Suzuki, Jason Bateman, Lance Bass, Jennifer Love Hewitt, Alicia Silverstone, Christine Taylor, Mini Anden, Anthony Ruivivar, Yvette Nicole Brown, Rachel Avery, Sean Penn, and Jon Voight. Co-writer Justin Theroux appears in two brief roles as a UH-1 Huey gunner and the disc jockey from Zoolander (shown in a deleted scene). ## Production ### Script Stiller developed the premise for Tropic Thunder while shooting Empire of the Sun, in which he played a small part. Stiller wanted to make a film based on the actors he knew who, after taking part in boot camps to prepare for war film roles, became "self-important" and "self-involved" and appeared to believe they had been part of a real military unit. Co-writer Theroux revealed that the initial script concept was to have actors go to a mock boot camp and return with posttraumatic stress disorder. The final script was developed to satirize Vietnam War films such as Apocalypse Now, Rambo, Missing in Action, Platoon, Full Metal Jacket, Hamburger Hill, and The Deer Hunter. Theroux pointed out that since viewers had an increased awareness of the inner workings of Hollywood due to celebrity websites and Hollywood news sources, the script was easier to write. Dialogue for unscripted portions of the storyboard was developed on set by the actors or was improvised. ### Casting Stiller's original plan was to cast Keanu Reeves as Tugg Speedman and himself as Rick Peck. Etan Cohen created the role of Kirk Lazarus as a way of lampooning the great lengths to which some method actors go to portray a role. Downey was approached by Stiller about the part while on vacation in Hawaii. Downey said on CBS' The Early Show that his first reaction was, "This is the stupidest idea I've ever heard!" and that Stiller responded, "Yeah, I know – isn't it great?" In another interview, Downey said that he accepted the part but, having no idea where or even how to start building the character of Lazarus, eventually settled on a jive-esque speech pattern and a ragged bass voice; he then auditioned Lazarus' voice over the phone to Stiller, who approved the characterization immediately. Downey revealed that he modeled the character on actors Russell Crowe, Colin Farrell, and Daniel Day-Lewis. 
The initial script was written for Downey's character to be Irish, but was altered after Downey stated he could improvise better as an Australian, having previously played a similar outlandish Australian character in the film Natural Born Killers. Downey's practice of remaining in character between takes and even off the film set was also written into the script for his character to perform. Downey required between one-and-a-half and two hours of makeup application. According to Downey, "One makeup artist would start on one side of my face and a second makeup artist would start on the other side, and then they'd meet in the middle." Downey acknowledged the potential controversy over his role: "At the end of the day, it's always about how well you commit to the character. If I didn't feel it was morally sound, or that it would be easily misinterpreted that I'm just C. Thomas Howell [in Soul Man], I would've stayed home." Co-star Brandon T. Jackson stated: "When I first read the script, I was like: What? Blackface? But when I saw him [act] he, like, became a black man ... It was just good acting. It was weird on the set because he would keep going with the character. He's a method actor." Stiller commented on Downey's portrayal of a white actor playing a black man: "When people see the movie – in the context of the film, he's playing a method actor who's gone to great lengths to play a black guy. The movie is skewering actors and how they take themselves so seriously." Stiller previewed the film before the NAACP, and several black journalists reacted positively to the character. Cruise was initially set to cameo as Stiller's character's agent, Rick Peck. Instead, Cruise suggested adding a studio head character, and the idea was incorporated into the script. Stiller and Cruise worked together to create the new character, Les Grossman, as a middle-aged businessman. The role required that Cruise don a fatsuit, large prosthetic hands, and a bald cap. It was Cruise's idea to give the character large hands and dance to "Low". Stiller intended to keep Cruise's role a secret until the film's release. In addition, Paramount Pictures refused to release promotional pictures of Cruise's character to the media. In November 2007, images of Cruise wearing a bald headpiece and a fatsuit appeared on Inside Edition, as well as on the Internet. Cruise's attorneys threatened a lawsuit if photos showing Cruise in costume were published. They approached various sites that were hosting the image and quickly had it removed. A representative for Cruise stated: "Mr. Cruise's appearance was supposed to be a surprise for his fans worldwide. Paparazzi have ruined what should have been a fun discovery for moviegoers." The photography agency INF, who debuted the image, responded with a statement: "While these pictures were taken without breaking any criminal or civil laws, we've decided to pull them from circulation effective immediately." Serving as a last-minute replacement, Tobey Maguire was available to be on set for only two hours to film his scenes in Satan's Alley. Downey said he was amazed Maguire would agree to do the film and felt like they were creating a "karmic pay-off" for their scenes together in the 2000 film Wonder Boys, where Downey's character has a one-night stand with Maguire's character. After Cruise vacated the role of Rick Peck, Owen Wilson was cast to play the part. Following his suicide attempt in August 2007, Wilson dropped out of the film and was replaced by Matthew McConaughey. 
### Filming Although Southern California and Mexico were considered for the main unit filming, the Hawaiian island of Kaua'i (where Stiller has a home) was selected for the majority of the shooting. Kaua'i was chosen over Mexico because a tax credit for in-state spending was negotiated with the Kaua'i Film Commission. John Toll, the cinematographer, stated the island was also selected for its similarity to Vietnam, based on its dense foliage, variety of terrains, and weather. Kaua'i was first scouted as a possible location to film Tropic Thunder in 2004. Stiller spent more than 25 hours over 6 weeks exploring the island, using all-terrain vehicles, boats, and helicopters. After the film was greenlit by DreamWorks in 2006, preproduction lasted for six months, most of this time spent on scouting additional locations for filming. Filming for the Los Angeles and interior scenes occurred on sets at Universal Studios in Hollywood. Tropic Thunder was the first major studio production on Kaua'i in five years. After filming was completed, it was deemed the largest production filmed on the island to date, and contributed more than \$60 million to the local economy. Tim Ryan, the executive editor of Hawaii Film & Video Magazine, commented on the filming on the island: "I think Tropic Thunder will give Kaua'i much needed and long idled publicity in the production arena ... It should put Kaua'i back on the production consideration radar." Preliminary production crews were on the island starting in December 2006 and principal photography began in July 2007, with filming lasting thirteen weeks over seven separate locations on the island. Much of the filming took place on private land as well as conservation status designated areas. Casting calls on the island sought 500 residents to portray the villagers in the film. Two units shot simultaneously on the island from the ground, and an aerial unit shot from helicopters. Many of the sets and the bridge used for one of the final scenes were built in three months. The island's erratic weather hampered filming with rain and lighting issues. The crew also faced complications in moving the equipment and cast due to the difficult terrain. The film advising company Warriors Inc. was enlisted to ensure the war scenes, including the attire worn by the actors, looked authentic. Former members of the U.S. military taught the actors how to handle, fire, and reload their weapons, as well as perform various tactical movements. The opening war scene was filmed over three weeks and required fifty stuntmen. Animatics were used to map out the necessary camera angles for filming. ### Effects Six companies working on different scenes and elements created 500 shots of visual effects in the film. These were at times altered weekly due to the reactions of test audiences in screenings. CIS Visual Effects Group assisted with the Scorcher VI faux trailer and twenty additional shots for the home media release. To expand on the comedy in the film, some of the explosions and crashes were embellished to look more destructive. The visual effects supervisor Michael Fink reflected on the exaggerated explosions: "We worked really hard to make the CG crashing helicopter in the hot landing sequence look real. Ben was adamant about that, but at the same time he wanted the explosion to be huge. When you see it hit the ground, it was like it was filled with gasoline! It was the same thing with Ben's sergeant character, who almost intercepts a hand grenade ... 
Now, I was in the Army for three years and no hand grenade would make an explosion like that ... But it was a big dramatic moment and it looks really cool ... and feels kind of real." Filming the large napalm explosion in the opening scene of the film required a 450-foot (137-meter) row of explosive pots containing 1,100 gallons (4,165 liters) of gasoline and diesel fuel. All the palm trees used in the explosion were moved to the specific location after the crew determined the impact of the lighting and necessary camera angles. Due to the size and cost of the 1.25-second explosion, it was performed only once and was captured by twelve cameras. For the safety of the crew and cast, the detonators were added one hour before the explosion and nobody was allowed to be within 400 feet (120 m) during detonation. The explosion was made up of twelve individual explosions and resulted in a mushroom cloud that reached 350 feet (110 m) in the air. For the scene in the film, Danny McBride's character, Cody Underwood, was the only actor shown in the shot of the explosion. All the other characters were added digitally. The explosion of the bridge in one of the final scenes used nine cameras to capture the shot, and the crew was required to be 3,000 feet (910 m) away for their safety. ## Promotion A trailer for the film was released in April 2008. The Calgary Herald gave it a rating of 3/5, commenting: "This could either be good or very, very bad." Gary Susman of Entertainment Weekly questioned whether the film would "... turn into precisely the kind of bloated action monstrosity that it's making fun of." The trailer received the "Best Comedy Trailer" award at the 9th annual Golden Trailer Awards. DreamWorks also released a red band trailer, the first of its kind used by the studio to promote one of its films. Stiller, Downey, and Black appeared on the seventh-season finale of American Idol in a sketch as The Pips performing with Gladys Knight (via archival footage). The three actors also later performed a sketch at the 2008 MTV Movie Awards which featured the actors attempting to create a successful viral video to promote the film with awkward results. In September 2008, Stiller and Downey attended the San Sebastián International Film Festival to promote the film. A screening was shown, but it was not chosen to compete against the other films at the festival. Between April 2008 and the film's commercial release in August 2008, the film had over 250 promotional screenings. On August 3, 2008, Stiller, Downey, and Black visited Camp Pendleton, a U.S. Marine Corps base in California, to present a screening to over a thousand military members and their families. The screening was on behalf of the United Service Organizations and included the actors heading to the screening by helicopter and Humvees. On August 8, 2008, a special 30-minute fictional E! True Hollywood Story aired about the making of Tropic Thunder. In video games, a themed scavenger hunt was incorporated into Tom Clancy's Rainbow Six: Vegas 2, and Stiller allowed his likeness to be used in the online Facebook application game based on the film. As a tie-in for the film's release, Paramount announced it would market the energy drink known in the film as "Booty Sweat". Michael Corcoran, Paramount's president of consumer products, commented on the release: "We're very excited, because it has the potential to live for quite a while, well beyond the film." The drink was sold in college bookstores, on Amazon.com, and at other retailers. 
### Faux websites and mockumentary Several faux websites were created for the characters and some of their prior film roles. A website for Simple Jack, a faux film exhibited within the film, was removed by DreamWorks on August 4, 2008, due to protests from disability advocates. In addition, other promotional websites were created for "Make Pretty Skin Clinic", the fictitious company that performed the surgery on the film's character Kirk Lazarus, along with one for the energy drink "Booty Sweat". In mid-July 2008, a faux trailer for the mockumentary Rain of Madness was released. The mockumentary was a parody of Hearts of Darkness: A Filmmaker's Apocalypse. It follows co-writer Justin Theroux as a fictitious documentarian named Jan Jürgen documenting the behind-the-scenes aspects of the film within the film. Marketing for the faux documentary included a movie poster and an official website prior to Tropic Thunder's release. The mockumentary was released on the iTunes Store after the film's release and was also included on the home video release. Amy Powell, an advertising executive with Paramount, reflected on the timing of the release of Madness: "We always thought that people would be talking about Tropic Thunder at the water cooler, and that's why we decided to release Rain of Madness two weeks into Tropic's run—to keep this positive buzz going." ## Release ### Theatrical release Tropic Thunder had an early screening at the 2008 San Diego Comic-Con, two weeks before its official premiere on August 11, 2008, at the Mann Village Theatre in Westwood, California, which was held two days before its wide release. Members of several disability groups picketed before the premiere, protesting the portrayal of intellectual disability shown in the film. The groups said it was the first time they had ever protested together at an event. As a result of the protest, the normally unobstructed views of the red carpet leading to the premiere were blocked off by 10-foot (3-m)-high fences, and the number of security personnel present was increased. No protests were held at the United Kingdom's September premiere. The North American release was scheduled for July 11, 2008, but was delayed until August 15, before being brought forward to August 13. As a result of the move from July, 20th Century Fox moved its family comedy Meet Dave into the open slot. The August 13 release date placed the film in the same opening weekend as the animated family film Star Wars: The Clone Wars and the horror film Mirrors. Studios consider the third week of August to be a weaker performing period than earlier in the summer because of students returning to school. Previous R-rated comedies such as The 40-Year-Old Virgin and Superbad were released in mid-August and performed well at the box office. Reacting to Tropic Thunder's release date, Rob Moore, vice chairman of Paramount Pictures, stated, "For a young person at the end of summer, you want to have some fun and forget about going back to school. What better than a crazy comedy?" ### Home media Tropic Thunder was released in the United States on DVD and Blu-ray on November 18, 2008, three months after its theatrical release and a week after the end of its theatrical run in the U.S. and Canada. The film was released on home video on January 26, 2009, in the United Kingdom. 
Special features include an unrated director's cut of the film, which is 12 minutes longer than the theatrical release, audio commentaries (including one featuring Stiller, Downey, and Black, with Downey providing his commentary as Lincoln Osiris, a nod to a joke in the film that Lazarus never breaks character until he completes the DVD commentary), several featurettes, deleted scenes, an alternate ending, and the Rain of Madness mockumentary. For the film's first week of release, Tropic Thunder placed on several video charts. It reached second place on the Nielsen VideoScan First Alert sales chart and Nielsen's Blu-ray Disc chart, earning \$19,064,959 (not including Blu-ray sales). In rentals, it placed first on Home Media Magazine's video rental chart. The DVD sales in 2008 totaled \$42,271,059, placing it 28th in DVD sales for the year. By September 2009, 2,963,000 DVD units had been sold, generating revenue of \$49,870,248. An Ultra HD Blu-ray, mastered in HDR Dolby Vision, was later released through Kino Lorber. ## Reception ### Critical response The review aggregation website Rotten Tomatoes gives the film a rating of 82% based on 252 reviews and an average rating of 7.1/10. The website's critical consensus reads, "With biting satire, plenty of subversive humor, and an unforgettable turn by Robert Downey Jr., Tropic Thunder is a triumphant late summer comedy." Metacritic, which assigns a weighted average score from reviews by mainstream critics, gave the film a score of 71 out of 100 based on 39 critics, indicating "generally favorable reviews". After attending an industry screening in April 2008, Michael Cieply from the New York Times stated that the film was "shaping up as one of [DreamWorks]'s best prospects for the summer." Claudia Puig of USA Today gave the film a positive review, writing "There are some wildly funny scenes, a few leaden ones and others that are scattershot, with humorous satire undercut by over-the-top grisliness. Still, when it's funny, it's really funny." A review in Variety by Todd McCarthy was critical: "Apart from startling, out-there comic turns by Robert Downey Jr. and Tom Cruise, however, the antics here are pretty thin, redundant and one-note." Glenn Kenny of RogerEbert.com would later call the film "intermittently amusing but entirely smug and hateful." Rick Groen of The Globe and Mail also gave the film a negative review, calling it "... an assault in the guise of a comedy—watching it is like getting mugged by a clown." J.R. Jones of the Chicago Reader stated, "The rest of the movie never lives up to the hilarity of the opening, partly because the large-scale production smothers the gags but mostly because those gags are so easy to smother." Roger Ebert of the Chicago Sun-Times gave it 3.5/4 and wrote, "The movie is, may I say, considerably better than Stiller’s previous film, Zoolander (2001). It’s the kind of summer comedy that rolls in, makes a lot of people laugh and rolls on to video." The faux trailers before the film also received mixed reviews. David Ansen of Newsweek approved of the trailers, writing "Tropic Thunder is the funniest movie of the summer—so funny, in fact, that you start laughing before the film itself has begun." Christy Lemire, writing for the Associated Press, called the trailers "... the best part of the trip." Robert Wilonsky of The Village Voice was critical, saying that the trailers' comedy "... resides in the land of the obvious, easy chuckle." 
Downey, Stiller, Black and Cruise were repeatedly singled out for praise by numerous critics, who claimed that they "stole the show", were "... off-the-charts hilarious ...", and would bring viewers "... the fondest memories of [their] work." Scott Feinberg, of the Los Angeles Times, criticized the concept of Downey's portrayal of an African-American, writing "... I just can't imagine any circumstance under which a blackface performance would be acceptable, any more than I can imagine any circumstance under which the use of the N-word would be acceptable." Sara Vilkomerson said Cruise gave "... an astonishingly funny and surprising supporting performance." Logan Hill of New York argued against Cruise's cameo, saying that it "... just makes him look a little lost and almost pathetic—shucking and jiving, trying to appeal to the younger moviegoers who are abandoning him." Several critics commented on the controversy over lines in the film about the mentally disabled. Duane Dudek of the Milwaukee Journal Sentinel wrote that the film "... is just sophomoric enough to offend. And while it is also funny, it is without the empathy or compassion to cause us to wonder why we are laughing." Christian Toto of The Washington Times argued against the opposition, "Tropic Thunder is drawing fire from special interest groups for ... its frequent use of the word 'retard', but discerning audiences will know where the humor is targeted. And they'll be laughing too hard to take offense." Kurt Loder of MTV contrasted the opposition to the lines with Downey's character portraying an African American: "The scene in which the derisive Alpa Chino (Brandon T. Jackson) nails Lazarus' recitation of black-uplift homilies as nothing more than the lyrics to the Jeffersons theme is funny; but the one in which Lazarus quietly explains to Speedman that his Simple Jack character failed because he made the mistake of going 'full retard'—rather than softening his character with cuteness in the manner of Forrest Gump—is so on-the-nose accurate, it takes your breath away." ### Critics' lists In January 2009, Entertainment Weekly included Tropic Thunder in its list "25 Great Comedies From the Past 25 Years" for its "spot-on skewering of Hollywood." The film also appeared on several critics' top ten lists of the best films of 2008. Stephen King placed it fourth, calling the film "the funniest, most daring comedy of the year." The Oregonian's Marc Mohan placed it sixth, and several critics placed it seventh: Elizabeth Weitzman of the New York Daily News, Premiere magazine, Mike Russell of The Oregonian, and Peter Hartlaub of the San Francisco Chronicle. David Ansen of Newsweek placed it eighth, and Lisa Schwarzbaum of Entertainment Weekly placed it tenth. ### Box office Stacey Snider, the chief executive of DreamWorks, suggested that the film would earn around \$30 million in its opening weekend and go on to be as successful as Borat: Cultural Learnings of America for Make Benefit Glorious Nation of Kazakhstan, which earned \$129 million in the U.S. and Canada and \$260 million worldwide. The Dark Knight had been the number one film at the box office for the four weeks prior to the release of Tropic Thunder. Bob Thompson, a writer for the National Post, speculated that Tropic Thunder would outperform The Dark Knight in its opening weekend. 
In a list compiled prior to the summer's film releases, Entertainment Weekly predicted that the film would be the tenth highest-grossing film of the summer at the American box office with \$142.6 million. Tropic Thunder opened in 3,319 theaters and, for its first five days of American and Canadian release, earned \$36,845,588. The film placed first in the weekend's box office with \$25,812,796, surpassing Star Wars: The Clone Wars and Mirrors, which debuted the same weekend. Reacting to the film's opening receipts, DreamWorks spokesman Chip Sullivan stated, "We're thrilled, quite frankly. It played out exactly how we hoped." In foreign markets for the film's opening weekend, it was released in 418 Russian and 19 United Arab Emirates locations, earning \$2.2 million and \$319,000, respectively. The film maintained its number one position at the American and Canadian box office for the following two weekends, making it the second film in 2008 (after The Dark Knight) to hold the number-one position for more than two consecutive weekends. The film's widest release was in 3,473 theaters, placing it in the top 25 widest releases in the U.S. for 2008. For 2008, the film was the fifth-highest-grossing domestic R-rated film. The film's U.S. and Canada gross of over \$110 million made Tropic Thunder Stiller's most successful film as a director. The film has had gross receipts of \$110,515,313 in the U.S. and Canada and \$85,187,498 in international markets for a total of \$195,702,811 worldwide.

### Accolades

In October 2008, Paramount chose to put end-of-year award push funds behind Tropic Thunder and began advertising for Downey to receive an Academy Award nomination for Best Supporting Actor. In a November 2008 issue of Entertainment Weekly, Downey's film role was considered one of the three contenders for Best Supporting Actor. As a way of extending the film-within-a-film "universe" into real life, there were at least two online "For Your Consideration" ads touting Downey's character, Kirk Lazarus, for Best Supporting Actor; one of these contains "scenes" from Satan's Alley that were not in the trailer as released in theaters. At least one of the ads was produced by Paramount Pictures and intended to build early For Your Consideration awareness for Downey's role. On January 22, 2009, the Academy of Motion Picture Arts and Sciences nominated Downey for Best Supporting Actor. At the 81st Academy Awards, Downey lost to Heath Ledger, who won posthumously for his role as The Joker in The Dark Knight. With the onset of the annual Hollywood film award season at the end of 2008, Tropic Thunder began receiving nominations and awards, starting with a win for "Hollywood Comedy of the Year Award" at the 12th annual Hollywood Film Festival on October 27, 2008. The film was nominated for Best Motion Picture, Comedy or Musical, at the Satellite Awards; in addition, Downey was nominated for Best Actor in a Supporting Role. The Broadcast Film Critics Association nominated Downey for Best Supporting Actor and awarded Tropic Thunder Best Comedy Movie at the BFCA's Critics' Choice Awards. Both Downey and Cruise received Golden Globe nominations for Best Supporting Actor from the Hollywood Foreign Press Association. The Boston Society of Film Critics recognized the cast with its Best Ensemble award. Downey was also nominated by both the Screen Actors Guild and the British Academy of Film and Television Arts for Best Supporting Actor awards.
## Controversy

Tropic Thunder was criticized by the disability advocacy community. The website for Simple Jack was withdrawn on August 4 amid several groups' concerns over its portrayal of intellectual disability. A spokesman for DreamWorks said, "We heard their concerns, and we understand that taken out of context, the site appeared to be insensitive to people with disabilities." A coalition of more than 20 disability advocacy groups, including the Special Olympics and the Arc of the United States, objected to the film's repeated use of the word "retard". DreamWorks offered to screen the film for the groups on August 8 to determine if it still offended them. The screening was postponed to August 11, the same day as the premiere. After representatives for the groups attended the private screening and were still offended by its content, the groups picketed outside the film's premiere. Timothy Shriver, the chairman of the Special Olympics, stated, "This population struggles too much with the basics to have to struggle against Hollywood. We're sending a message that this hate speech is no longer acceptable." Disability advocates and others who previewed the film reported that the offensive treatment of individuals with mental disabilities was woven throughout the film's plot. Disability advocates urged people not to see the film, claiming it was demeaning to individuals with mental disabilities and would encourage bullying. Stiller defended the film, stating "We screened the movie so many times and this didn't come up until very late ... in the context of the film I think it's really clear, they were making fun of the actors and actors who try to use serious subjects to win awards." Co-writer Etan Cohen echoed Stiller's rationale: "Some people have taken this as making fun of handicapped people, but we're really trying to make fun of the actors who use this material as fodder for acclaim." He went on to state that the film lampoons actors who portray intellectually disabled or autistic characters, such as Dustin Hoffman in Rain Man, Tom Hanks in Forrest Gump, and Sean Penn in I Am Sam. A DreamWorks spokesman did not directly respond to the criticism, saying that Tropic Thunder "is an R-rated comedy that satirizes Hollywood and its excesses" and "makes its point by featuring inappropriate and over-the-top characters in ridiculous situations." The film's advertising was altered, but none of the scenes in the film were edited as a result of the opposition. In response to the controversy, the director's cut of the DVD (but not the Blu-ray) includes a public service announcement in the special features that discourages use of the word "retard". Another aspect that drew warnings before the film's release, and criticism afterwards, was Downey playing a white Australian actor who dons blackface as part of his method approach to the role of an African-American man. Downey responded that this was a case of donning blackface in order to highlight how wrong the practice is. Others have pointed out that the wrongness of blackface is addressed within the movie itself by an actual African-American character, and that the climax of the movie hinges on Downey's character shedding his method acting; in this way, the movie mocks—rather than embraces—both blackface and the extreme and ridiculous things method actors sometimes do for their roles. Some have alleged that the film's characterization—and the non-Jewish Tom Cruise's portrayal—of the Jewish character Les Grossman is anti-Semitic.
In addition to his Jewish name, the character of Grossman also references the Jewish holiday of Purim. Critics have referred to this performance as "Jewface" since as early as 2008, calling it "vulgar" and "exploitation"; others, however, including the St. Louis Jewish Light, which referenced Tropic Thunder in particular, noted that Jewface was a "riff on the practice of blackface and is nowhere near its equivalent." Cruise was largely responsible for the final form Grossman took, including using him as an additional villain, the hairiness of the character, and the "fat hands". In February 2023, Stiller defended Tropic Thunder on his Twitter account by stating he had "no apologies" and that he is "proud of it and the work everyone did on it." Stiller's defense was a response to a fan of the film who suggested that Stiller cease apologizing for making the film in light of the cancel culture that arose during the late 2010s and early 2020s.

## Music

Tropic Thunder's score and soundtrack were released on August 5, 2008, the week before the film's theatrical release. The score was composed by Theodore Shapiro and performed by the Hollywood Studio Symphony. William Ruhlmann of AllMusic gave the score a positive review, stating it is "...an affectionate and knowing satire of the history of Hollywood action movie music, penned by an insider." Thomas Simpson of Soundtrack.Net called it "...a mixture of fun, seriousness, rock n' roll and great scoring." Five songs—"Cum On Feel the Noize" by Quiet Riot, "Sympathy for the Devil" by The Rolling Stones, "For What It's Worth" by Buffalo Springfield, "Low" by Flo Rida and T-Pain, and "Get Back" by Ludacris—were not present on the soundtrack despite appearing in the film. The soundtrack features songs from The Temptations, MC Hammer, Creedence Clearwater Revival, Edwin Starr, and other artists. The single "Name of the Game" by The Crystal Method, featuring Ryu, has an exclusive remix on the soundtrack. The soundtrack debuted 20th on Billboard's Top Soundtracks list and peaked at 39th on its Top Independent Albums list. James Christopher Monger of AllMusic compared the music to other films' soundtracks, such as those of Platoon, Full Metal Jacket, and Forrest Gump, and called it "...a fun but slight listen that plays out like an old late-'70s K-Tel compilation with a few bonus cuts from the future."

## Possible spin-off

Cruise reprised his character Les Grossman for the 2010 MTV Movie Awards. A spin-off film centering on Grossman was announced in 2010, with a script written by Michael Bacall. In 2012, Bacall said the film would explore the origin of Grossman's anger issues. As of 2022, Cruise and frequent collaborator Christopher McQuarrie are developing the spin-off, though it is not clear whether Grossman will be the protagonist or a supporting character.

## See also

- List of films featuring fictional films
48,419
Mary of Teck
1,171,950,993
Queen of the United Kingdom from 1910 to 1936
[ "1867 births", "1953 deaths", "19th-century British people", "19th-century British women", "20th-century British people", "20th-century British women", "British debutantes", "British people of German descent", "British queens consort", "Burials at St George's Chapel, Windsor Castle", "Companions of the Order of the Crown of India", "Dames Grand Cross of the Order of St John", "Dames Grand Cross of the Order of the British Empire", "Dames Grand Cross of the Royal Victorian Order", "Dames of the Order of Saint Isabel", "Duchesses of Cornwall", "Duchesses of Rothesay", "Duchesses of York", "German princesses", "Grand Cordons of the Order of the Precious Crown", "House of Windsor", "Indian empresses", "Knights Grand Commander of the Order of the Star of India", "Ladies of the Garter", "Ladies of the Royal Order of Victoria and Albert", "Mary of Teck", "Members of the Royal Red Cross", "People associated with Queen Mary University of London", "People from Kensington", "Princesses of Wales", "Queen mothers", "Queen's Own Rifles of Canada", "Residents of White Lodge, Richmond Park", "Teck-Cambridge family", "Wives of British princes" ]
Mary of Teck (Victoria Mary Augusta Louise Olga Pauline Claudine Agnes; 26 May 1867 – 24 March 1953) was Queen of the United Kingdom and the British Dominions, and Empress of India, from 6 May 1910 until 20 January 1936 as the wife of King-Emperor George V. Born and raised in the United Kingdom, Mary was the daughter of Francis, Duke of Teck, a German nobleman, and Princess Mary Adelaide of Cambridge, a granddaughter of King George III. She was informally known as "May", after the month of her birth. At the age of 24, she was betrothed to her second cousin once removed Prince Albert Victor, Duke of Clarence and Avondale, the eldest son of the Prince of Wales and second in line to the throne. Six weeks after the announcement of the engagement, he died unexpectedly during an influenza pandemic. The following year, she became engaged to Albert Victor's only surviving brother, George, who subsequently became king. Before her husband's accession, she was successively Duchess of York, Duchess of Cornwall, and Princess of Wales. As queen consort from 1910, Mary supported her husband through the First World War, his ill health, and major political changes arising from the aftermath of the war. After George's death in 1936, she became queen mother when her eldest son, Edward VIII, ascended the throne. To her dismay, he abdicated later the same year in order to marry twice-divorced American socialite Wallis Simpson. She supported her second son, George VI, until his death in 1952. Mary died the following year, ten weeks before her granddaughter Elizabeth II was crowned. An ocean liner, a battlecruiser, and a university were named in her honour.

## Early life

Princess Victoria Mary of Teck was born on 26 May 1867 at Kensington Palace, London, in the room where Queen Victoria, her first cousin once removed, had been born 48 years and two days earlier. Queen Victoria came to visit the baby, writing that she was "a very fine one, with pretty little features and a quantity of hair". The princess's father was Prince Francis, Duke of Teck, the son of Duke Alexander of Württemberg by his morganatic wife, Countess Claudine Rhédey von Kis-Rhéde. Her mother was Princess Mary Adelaide of Cambridge, a granddaughter of King George III and the third child and younger daughter of Prince Adolphus, Duke of Cambridge, and Princess Augusta of Hesse-Kassel. The infant was baptised in the Chapel Royal of Kensington Palace on 27 July 1867 by Charles Thomas Longley, Archbishop of Canterbury. From an early age, she was known to her family, friends and the public by the diminutive name of "May", after her birth month. May's upbringing was "merry but fairly strict". She was the eldest of four children and the only daughter. She "learned to exercise her native discretion, firmness, and tact" by resolving her three younger brothers' petty boyhood squabbles. They played with their cousins, the children of the Prince of Wales, who were similar in age. She grew up at Kensington Palace and White Lodge, in Richmond Park, which was granted by Queen Victoria on permanent loan. She was educated at home by her mother and governess (as were her brothers until they were sent to boarding schools). The Duchess of Teck spent an unusually long time with her children for a lady of her time and class, and enlisted May in various charitable endeavours, which included visiting the tenements of the poor. Although May was a great-grandchild of George III, she was only a minor member of the British royal family.
Her father, the Duke of Teck, had no inheritance or wealth and carried the lower royal style of Serene Highness because his parents' marriage was morganatic. The Duchess of Teck was granted a parliamentary annuity of £5,000 and received about £4,000 a year from her mother, the Duchess of Cambridge, but she donated lavishly to dozens of charities. Prince Francis was deeply in debt and moved his family abroad with a small staff in 1883, in order to economise. They travelled throughout Europe, visiting their various relations. For a time they stayed in Florence, Italy, where May enjoyed visiting the art galleries, churches and museums. She was fluent in English, German, and French. In 1885, the family returned to London and lived for some time in Chester Square. May was close to her mother and acted as an unofficial secretary, helping to organise parties and social events. She was also close to her aunt Augusta, Grand Duchess of Mecklenburg-Strelitz, and wrote to her every week. During the First World War, the Crown Princess of Sweden helped pass letters from May to Augusta, who lived in enemy territory in Germany until her death in 1916.

## Engagements

In 1886, May was a debutante in her first season, and was introduced at court. Her status as the only unmarried British princess who was not descended from Queen Victoria made her a suitable candidate for the royal family's most eligible bachelor, Prince Albert Victor, Duke of Clarence and Avondale, her second cousin once removed and the eldest son of the Prince of Wales. On 3 December 1891 at Luton Hoo, then the country residence of Danish Ambassador Christian Frederick de Falbe, Albert Victor proposed marriage to May and she accepted. The choice of May as bride for the Duke owed much to Queen Victoria's fondness for her, as well as to her strong character and sense of duty. However, Albert Victor died six weeks later, in a recurrence of the worldwide 1889–90 influenza pandemic. Albert Victor's brother, Prince George, Duke of York, now second in line to the throne, evidently became close to May during their shared period of mourning, and Queen Victoria still thought of her as a suitable candidate to marry a future king. The public was also anxious that the Duke of York should marry and settle the succession. In May 1893, George proposed, and May accepted. They were soon deeply in love, and their marriage was a success. George wrote to May every day they were apart and, unlike his father, never took a mistress.

## Duchess of York (1893–1901)

Mary married Prince George, Duke of York, in London on 6 July 1893 at the Chapel Royal, St James's Palace. The couple lived in York Cottage on the Sandringham Estate in Norfolk, and in apartments in St James's Palace. York Cottage was a modest house for royalty, but it was a favourite of George, who liked a relatively simple life. They had six children: Edward, Albert, Mary, Henry, George, and John. The children were put into the care of a nanny, as was usual in upper-class families at the time. The first nanny was dismissed for insolence and the second for abusing the children. This second woman, anxious to suggest that the children preferred her to anyone else, would pinch Edward and Albert whenever they were about to be presented to their parents so that they would start crying and be speedily returned to her. On discovery, she was replaced by her effective and much-loved assistant, Charlotte Bill. Sometimes, Mary and George appear to have been distant parents.
At first, they failed to notice the nanny's abuse of their sons Edward and Albert, and their youngest son, John, was housed on a private farm on the Sandringham Estate, in Charlotte Bill's care, perhaps to hide his epilepsy from the public. Despite Mary's austere public image and her strait-laced private life, she was a caring mother and comforted her children when they suffered from her husband's strict discipline. Edward wrote fondly of his mother in his memoirs: "Her soft voice, her cultivated mind, the cosy room overflowing with personal treasures were all inseparable ingredients of the happiness associated with this last hour of a child's day ... Such was my mother's pride in her children that everything that happened to each one was of the utmost importance to her. With the birth of each new child, Mama started an album in which she painstakingly recorded each progressive stage of our childhood". He expressed a less charitable view, however, in private letters to his wife after his mother's death: "My sadness was mixed with incredulity that any mother could have been so hard and cruel towards her eldest son for so many years and yet so demanding at the end without relenting a scrap. I'm afraid the fluids in her veins have always been as icy cold as they are now in death." The Duke and Duchess of York carried out a variety of public duties. In 1897, Mary became the patron of the London Needlework Guild in succession to her mother. The guild, initially established as The London Guild in 1882, was renamed several times and was named after Mary between 1914 and 2010. Samples of her own embroidery range from chair seats to tea cosies. On 22 January 1901, Queen Victoria died, and Mary's father-in-law ascended the throne as Edward VII. For most of the rest of that year, George and Mary were known as the "Duke and Duchess of Cornwall and York". For eight months they toured the British Empire, visiting Gibraltar, Malta, Egypt, Ceylon, Singapore, Australia, New Zealand, Mauritius, South Africa and Canada. No royal had undertaken such an ambitious tour before. She broke down in tears at the thought of leaving her children, who were to be left in the care of their grandparents, for such a long time.

## Princess of Wales (1901–1910)

On 9 November 1901, nine days after arriving back in Britain and on the King's 60th birthday, George was created Prince of Wales. The family moved their London residence from St James's Palace to Marlborough House. As Princess of Wales, Mary accompanied her husband on trips to Austria-Hungary and Württemberg in 1904. The following year, she gave birth to her last child, John. It was a difficult labour, and although she recovered quickly, her newborn son developed respiratory problems. From October 1905 the Prince and Princess of Wales undertook another eight-month tour, this time of India, and the children were once again left in the care of their grandparents. They passed through Egypt both ways and on the way back stopped in Greece. The tour was almost immediately followed by a trip to Spain for the wedding of King Alfonso XIII to Victoria Eugenie of Battenberg, at which the bride and groom narrowly avoided assassination. Only a week after returning to Britain, Mary and George went to Norway for the coronation of George's brother-in-law and sister, King Haakon VII and Queen Maud.

## Queen and empress consort (1910–1936)

On 6 May 1910, Edward VII died. Mary's husband ascended the throne and she became queen consort.
When her husband asked her to drop one of her two official names, Victoria Mary, she chose to be called Mary, preferring not to be known by the same style as her husband's grandmother, Queen Victoria. She was the first British queen consort born in Britain since Catherine Parr. Mary was crowned alongside her husband at their coronation on 22 June 1911 in Westminster Abbey. Later in the year, the King and Queen travelled to India for the Delhi Durbar held on 12 December 1911, and toured the sub-continent as Emperor and Empress of India, returning to Britain in February. The beginning of Mary's period as consort brought her into conflict with her mother-in-law, Queen Alexandra. Although the two were on friendly terms, Alexandra could be stubborn; she demanded precedence over Mary at the funeral of Edward VII, was slow in leaving Buckingham Palace, and kept some of the royal jewels that should have been passed to the new queen. During the First World War, Queen Mary instituted an austerity drive at the palace, where she rationed food, and visited wounded and dying servicemen in hospital, which caused her great emotional strain. After three years of war against Germany, and with anti-German feeling in Britain running high, the Russian imperial family, which had been deposed by a revolutionary government, was refused asylum. News of the tsar's abdication provided a boost to those in Britain who wished to replace their own monarchy with a republic. The war ended in 1918 with the defeat of Germany and the abdication and exile of the kaiser. Two months after the end of the war, Prince John died at the age of thirteen. Queen Mary described her shock and sorrow in her diary and letters, extracts of which were published after her death: "our poor darling little Johnnie had passed away suddenly ... The first break in the family circle is hard to bear but people have been so kind & sympathetic & this has helped us [the King and me] much." The Queen's staunch support of her husband continued during the latter half of his reign. She advised him on speeches and used her extensive knowledge of history and royalty to advise him on matters affecting his position. He appreciated her discretion, intelligence, and judgement. She maintained an air of self-assured calm throughout all her public engagements in the years after the war, a period marked by civil unrest over social conditions, Irish independence, and Indian nationalism. In the late 1920s, George V became increasingly ill with lung problems, exacerbated by his heavy smoking. Queen Mary paid particular attention to his care. During his illness in 1928, one of his doctors, Sir Farquhar Buzzard, was asked who had saved the King's life. He replied, "The Queen". In 1935, King George V and Queen Mary celebrated their silver jubilee, with celebrations taking place throughout the British Empire. In his jubilee speech, George paid public tribute to his wife, having told his speechwriter, "Put that paragraph at the very end. I cannot trust myself to speak of the Queen when I think of all I owe her."

## Queen mother (1936–1952)

George V died on 20 January 1936, after his physician, Lord Dawson of Penn, gave him an injection of morphine and cocaine that may have hastened his death. Queen Mary's eldest son ascended the throne as Edward VIII. She was then to be known as Her Majesty Queen Mary. Within the year, Edward's intention to marry his twice-divorced American mistress, Wallis Simpson, led to his abdication.
Mary disapproved of divorce as it was contrary to the teaching of the Anglican Church, and thought Simpson wholly unsuitable to be the wife of a king. After receiving advice from British prime minister Stanley Baldwin, as well as the Dominion governments, that he could not remain king and marry Simpson, Edward abdicated. Though loyal and supportive of her son, Mary could not comprehend why Edward would neglect his royal duties in favour of his personal feelings. Simpson had been presented formally to both King George V and Queen Mary at court, but Mary later refused to meet her either in public or privately. She saw it as her duty to provide moral support for her second son, the reserved Prince Albert, Duke of York. Albert ascended the throne on Edward's abdication, taking the name George VI. When Mary attended the coronation of George VI, she became the first British dowager queen to do so. Edward's abdication did not lessen her love for him, but she never wavered in her disapproval of his actions. Mary took an interest in the upbringing of her granddaughters Elizabeth and Margaret. She took them on various excursions in London, to art galleries and museums. (The princesses' own parents thought it unnecessary for them to be burdened with a demanding educational regime.) In May 1939, Mary was in a car crash: her car was overturned but she escaped with minor injuries and bruises. During the Second World War, George VI wished his mother to be evacuated from London. Although she was reluctant, she decided to live at Badminton House, Gloucestershire, with her niece Mary Somerset, Duchess of Beaufort, the daughter of her brother Adolphus. Her personal belongings were transported from London in seventy pieces of luggage. Her household, which comprised fifty-five servants, occupied most of the house, except for the Duke and Duchess's private suites, until after the war. The only people to complain about the arrangements were the royal servants, who found the house too small. From Badminton, in support of the war effort, Queen Mary visited troops and factories and directed the gathering of scrap materials. She was known to offer lifts to soldiers she spotted on the roads. In 1942, her son George, Duke of Kent, was killed in an air crash while on active service. Mary finally returned to Marlborough House in June 1945, after the war in Europe had resulted in the defeat of Nazi Germany. Mary was an eager collector of objects and pictures with a royal connection. She paid above-market estimates when purchasing jewels from the estate of Dowager Empress Marie of Russia and paid almost three times the estimate when buying the family's Cambridge Emeralds from Lady Kilmorey, the mistress of her late brother Prince Francis. After Francis's death, Mary had intervened to ensure his will was sealed by a court to cover his affair with Kilmorey. This set a precedent for royal wills to be sealed. In 1924, the famous architect Sir Edwin Lutyens created Queen Mary's Dolls' House for her collection of miniature pieces. She has sometimes been criticised for her aggressive acquisition of objets d'art for the Royal Collection. On several occasions, she would express to hosts, or others, that she admired something they had in their possession, in the expectation that the owner would be willing to donate it. Mary's extensive knowledge of, and research into, the Royal Collection helped in identifying artefacts and artwork that had gone astray over the years. 
The royal family had lent out many pieces over previous generations. Once she had identified unreturned items through old inventories, she would write to the holders, requesting that they be returned. In addition to being an avid collector, Mary also commissioned many gifts of jewellery, including rings which she presented to her ladies-in-waiting on the occasion of their engagements.

## Final year and death

In 1952, George VI died, the third of Queen Mary's children to predecease her; her eldest granddaughter, Princess Elizabeth, ascended the throne as Queen Elizabeth II. The death of a third child profoundly affected her. Mary remarked to Princess Marie Louise: "I have lost three sons through death, but I have never been privileged to be there to say a last farewell to them." On the accession of Elizabeth II, there was some dispute regarding the dynasty to which descendants of Elizabeth and her husband Philip would belong. Mary expressed to Prime Minister Winston Churchill her aversion to the idea of the House of Mountbatten succeeding the House of Windsor as the royal dynasty. Mary died on 24 March 1953 in her sleep at the age of 85, ten weeks before her granddaughter's coronation. She had let it be known that should she die, the coronation should not be postponed. Her remains lay in state at Westminster Hall, where large numbers of mourners filed past her coffin. She is buried beside her husband in the nave of St George's Chapel, Windsor Castle. Mary's will was sealed in London after her death. Her estate was valued at £406,407.

## Legacy

Actresses who have portrayed Queen Mary include Dame Flora Robson (in A King's Story, 1965), Dame Wendy Hiller (on the London stage in Crown Matrimonial, 1972), Greer Garson (in the television production of Crown Matrimonial, 1974), Judy Loe (in Edward the Seventh, 1975), Dame Peggy Ashcroft (in Edward & Mrs. Simpson, 1978), Phyllis Calvert (in The Woman He Loved, 1988), Gaye Brown (in All the King's Men, 1999), Miranda Richardson (in The Lost Prince, 2003), Margaret Tyzack (in Wallis & Edward, 2005), Claire Bloom (in The King's Speech, 2010), Judy Parfitt (in W.E., 2011), Valerie Dane (in the television version of Downton Abbey, 2013), Dame Eileen Atkins (in Bertie and Elizabeth, 2002 and The Crown, 2016), Geraldine James (in the film version of Downton Abbey, 2019), and Candida Benson (in The Crown, 2022). Many places and buildings have been named in Mary's honour, including Queen Mary University of London, Queen Mary Reservoir in Surrey, and Queen Mary College in Lahore. Sir Henry "Chips" Channon wrote that Queen Mary was "above politics ... magnificent, humorous, worldly, in fact nearly sublime, though cold and hard. But what a grand Queen."

## Titles, honours and arms

Queen Mary's arms were the royal coat of arms of the United Kingdom impaled with her family arms – the arms of her grandfather Prince Adolphus, Duke of Cambridge, in the 1st and 4th quarters, and the arms of her father, Prince Francis, Duke of Teck, in the 2nd and 3rd quarters. The shield is surmounted by the imperial crown, and supported by the crowned lion of England and "a stag Proper" as in the arms of Württemberg.

## Issue

## Ancestry

## See also

- Crown of Queen Mary
- Household of George V and Mary
- List of covers of Time magazine (1920s), (1930s)
71,714,556
1937–38 Gillingham F.C. season
1,169,143,892
null
[ "English football clubs 1937–38 season", "Gillingham F.C. seasons" ]
During the 1937–38 English football season, Gillingham F.C. competed in the Football League Third Division South, the third tier of the English football league system. It was the 18th season in which Gillingham competed in the Football League. The team won only three times in nineteen Football League matches between August and December; in November and December they played six league games and lost every one without scoring a goal, leaving them bottom of the division at the end of 1937. Although Gillingham's performances improved in the second half of the season, with seven wins between January and May, they remained in last place at the end of the season, meaning that the club was required to apply for re-election to the League. The application was rejected, and as a result the club lost its place in the Football League and joined the regional Southern League. Gillingham also competed in two knock-out competitions. The team were eliminated in the first round of the FA Cup but reached the second round of the Third Division South Cup. The team played 45 competitive matches, winning 11, drawing 6 and losing 28. Jimmy Watson was the club's top goalscorer with 8 goals in Third Division South matches and 13 in all competitions. Dave Whitelaw and Tug Wilson made the most appearances; both played 42 games. The highest attendance recorded at the club's home ground, Priestfield Road, during the season was 9,831 for a game against Millwall on 9 October 1937.

## Background and pre-season

The 1937–38 season was Gillingham's 18th season playing in the third and lowest level of the Football League. The club had been among the founder members of the Football League Third Division in 1920, which was renamed the Third Division South when a parallel Third Division North was created a year later. In Gillingham's 17 seasons in this division, the team had consistently struggled, only finishing in the top half of the league table three times. They had finished in the bottom two on four occasions, requiring them to apply each time for re-election to the League, most recently in the 1931–32 season. Alan Ure was the club's manager; he had been appointed at the conclusion of the previous season following the resignation of Fred Maven. Jack Oxberry assisted him in the role of trainer. The club signed several players ahead of the new season, including half-back Jimmy Nichol, who arrived from Portsmouth. Nichol had spent three seasons with Gillingham in the 1920s and returned for a second spell with the club at the age of 34; he was appointed team captain. Other new signings included Bryan Dalton from Reading, Fred Smith from Exeter City, Albert Taylor from Lincoln City, Cyril Walker from Watford and Archie Young from Leicester City. The team wore Gillingham's usual kit of blue shirts and white shorts. Pre-season matches between Football League members were not permitted at the time, and clubs instead generally prepared for the season with a public trial match between two teams chosen from within their own squad of players. Gillingham staged such a match in August but it had to be abandoned at half-time due to torrential rain.

## Third Division South

### August–December

The club's first match of the season, on 28 August, was away to Bristol City; Nichol, Young, Smith, Walker, Taylor and Dalton all made their debuts and Taylor scored Gillingham's first goal of the season. Gillingham lost the match 3–1; the correspondent for the Sunday Dispatch wrote that they "did not impress".
Four days later, Gillingham played the first game of the season at their home ground, Priestfield Road; a goal from Walker gave them a 1–0 win over Newport County. The Daily Herald's reporter praised Gillingham's full-backs and forwards but identified the poor quality of their half-backs as a "serious problem". After a draw against Watford, Gillingham lost three consecutive games. The run of defeats ended with a 5–3 victory away to Exeter City on 18 September, the most goals the team would score in a match during the season. Both Smith and Walker scored twice. Gillingham ended September in 18th place out of the 22 teams in the Third Division South league table. Gillingham's first four matches of October all resulted in defeat, beginning when they lost 2–0 away to Aldershot. A week later the team lost to Millwall; having been three goals down at half-time, Gillingham scored twice in the second half but lost 3–2. It was the last game that both Walker, who had only joined the club at the start of the season, and long-serving full-back Fred Lester played for the club; both were transferred to Sheffield Wednesday of the Second Division hours before Gillingham's next match. After two further defeats, Gillingham were bottom of the Third Division South table. Full-back George Tweed, a new signing from Bristol Rovers, made his debut on 23 October and would play in every game for the remainder of the season. The team ended their winless run by defeating Walsall in the final match of October and as a result moved above their opponents on goal average, ending the month in 21st place in the table. During November and December Gillingham played six Third Division South games and lost every one without scoring a goal. The sequence began on 6 November with a 4–0 defeat away to Cardiff City. It continued with a 2–0 defeat at home to Bournemouth & Boscombe Athletic, after which Gillingham were bottom of the table. Gillingham next lost 1–0 away to Brighton & Hove Albion; the Sunday Pictorial's correspondent praised Gillingham's goalkeeper Dave Whitelaw and full-back Bill Armstrong for restricting the victors to one goal. After further defeats away to Southend United and Torquay United, Gillingham's last game of 1937 was a 1–0 defeat away to Notts County; the attendance of 23,337 at Meadow Lane was the largest in front of which Gillingham played during the season. The result meant that they finished the calendar year bottom of the Third Division South with 8 points from 19 games, 7 points below 21st-placed Aldershot.

### January–May

Gillingham's first game of 1938 was at home to Bristol City; Albert Brallisford scored Gillingham's first league goal since October to give his team a 1–0 win. A week later, Gillingham were beaten 5–1 at home by Queens Park Rangers, the first time they had conceded as many goals during the season. The team remained unbeaten for the remainder of the month with two draws and a victory over Exeter City but stayed bottom of the table. Two young players, 20-year-old Charlie Campbell and 18-year-old Fred Herbert, made their Football League debuts against Exeter, and Herbert scored Gillingham's second goal. Herbert would remain a regular in the team for the rest of the season, missing only two matches. Gillingham lost four of their five games in February, the second a 5–0 defeat away to Millwall, which was the team's heaviest defeat of the entire season; all the goals came in the second half.
Gillingham finished the game with ten men after Frank Donoghue, making his Football League debut, was carried off injured. The game following the defeat to Millwall, a 2–1 defeat at home to Clapton Orient, drew an attendance of 1,789, the lowest of the season for any Gillingham game, either home or away. In their first game of March, Gillingham defeated Northampton Town at Priestfield Road; Herbert scored twice to take his league tally to five goals in six games. After losing 3–1 away to Walsall, Gillingham won home games against Notts County on 16 March and Cardiff City three days later, the first time the team had won two consecutive games during the season. The reporter for the Daily Herald wrote that George Ballsom and Reginald Neal were "outstanding" against Cardiff but that overall the quality of the game was "rarely worthy of League class"; the victory lifted Gillingham off the bottom of the league table, putting them one point above Walsall. Gillingham lost 2–0 away to Bournemouth on 26 March but remained 21st in the table at the end of March. Gillingham's first two matches of April resulted in a draw and a defeat, but they remained 21st in the table, one point above Walsall. In a nine-day period beginning on 15 April, Gillingham played four matches and lost three of them. The sequence began with a 3–0 defeat away to Crystal Palace; Norman Brickenden, a 23-year-old goalkeeper, made his debut and was praised for his performance by the correspondent for the Daily News, but the same writer criticised Gillingham's forwards as "unbalanced" and "too individualistic" in their play. After beating Southend United the next day, Gillingham lost again to Crystal Palace, taking a 2–0 lead in the first half before conceding four goals after the interval. The result left them once again bottom of the table. Gillingham lost two of their final three games, finishing with a 2–0 defeat away to Reading, the team's 26th league defeat of the season. Gillingham finished bottom of the division with 26 points from 42 games; the points total was the lowest and the number of defeats the highest recorded by the team in 18 seasons in the Football League.

### Match details

Key
- In result column, Gillingham's score shown first
- H = Home match
- A = Away match
- pen. = Penalty kick
- o.g. = Own goal

Results

### Partial league table

## Cup matches

### FA Cup

As a Third Division South club, Gillingham entered the 1937–38 FA Cup in the first round, where they were paired with fellow Third Division South club Swindon Town. Jimmy Watson scored Gillingham's only hat-trick of the season, including two goals from penalty kicks, but Gillingham lost 4–3 and were eliminated from the competition.

#### Match details

Key
- In result column, Gillingham's score shown first
- H = Home match
- A = Away match
- pen. = Penalty kick
- o.g. = Own goal

Results

### Third Division South Cup

Gillingham entered the 1937–38 Third Division South Cup in the first round, where they played Brighton & Hove Albion. In front of an attendance of 2,000, one of the lowest of the season at Priestfield Road, Watson scored twice in a 3–1 win for Gillingham. In the second round, Gillingham played Millwall. Several fringe players were brought into the team in place of regular starters, and Gillingham lost 4–0, ending their participation in the competition.

#### Match details

Key
- In result column, Gillingham's score shown first
- H = Home match
- A = Away match
- pen. = Penalty kick
- o.g. = Own goal

Results
## Players

During the season, 32 players made at least one appearance for Gillingham. Whitelaw and forward Tug Wilson made the most; both played in 42 of the team's 45 competitive matches. Full-back Syd Hartley was the only other player to take part in 40 or more matches. Four players made only one appearance each, including Richard Maudsley, who played in one Third Division South Cup game but never made an appearance in the Football League for Gillingham or any other club. Bill Williams also played his only game for the club during the season. Leslie Williams made his sole Football League appearance for Gillingham during the season but remained with the club and played non-League football the following season, as did Donoghue, whose one Football League game for Gillingham during the season was the only appearance he made in the competition during his career. Watson was the team's top goalscorer, with eight goals in the Third Division South and five in the cup competitions. He was the only player to reach double figures; Herbert had the second-highest total with seven goals, despite playing in fewer than half the team's games.

## Aftermath

As a result of finishing last, Gillingham were again required to apply for re-election. The only non-League club to apply to join the Third Division South was Ipswich Town, who had finished third in the Southern League. They joined the two bottom teams in the division, Walsall and Gillingham, in a ballot among the League's member clubs for two places in the division for the subsequent season. Ipswich received 36 votes, Walsall 34, and Gillingham 28, meaning that Ipswich were elected to the Football League and Gillingham lost their place. After initial rumours that the club would fold completely, Gillingham returned to the Southern League and finished third in that league's First Division in the 1938–39 season. The club applied to rejoin the Football League at the first opportunity, but the application was rejected. Gillingham would be elected back into the Football League when the two Third Divisions were expanded from 22 to 24 clubs each in 1950.
71,651,693
Hajj: Journey to the Heart of Islam
1,161,378,404
2012 exhibition at the British Museum
[ "2012 in Islam", "2012 in London", "2012 in art", "Articles containing video clips", "Exhibitions at the British Museum", "Hajj", "Islam in London", "Islamic art" ]
Hajj: Journey to the Heart of Islam was an exhibition held at the British Museum in London from 26 January to 15 April 2012. It was the world's first major exhibition telling the story, visually and textually, of the hajj – the pilgrimage to Mecca which is one of the five pillars of Islam. Textiles, manuscripts, historical documents, photographs, and art works from many different countries and eras were displayed to illustrate the themes of travel to Mecca, hajj rituals, and the Kaaba. More than two hundred objects were included, drawn from forty public and private collections in a total of fourteen countries. The largest contributor was David Khalili's family trust, which lent many objects that would later be part of the Khalili Collection of Hajj and the Arts of Pilgrimage. The exhibition was formally opened by Prince Charles in a ceremony attended by Prince Abdulaziz bin Abdullah, son of King Abdullah, the custodian of the Two Holy Mosques. It was popular both with Muslims and non-Muslims, attracting nearly 120,000 adult visitors and favourable press reviews. This success inspired the Museum of Islamic Art in Doha, the Arab World Institute in Paris, the National Museum of Ethnology in Leiden, and the Tropenmuseum in Amsterdam to stage their own hajj-themed exhibitions with contributions from the Khalili Collection. An exhibition catalogue with essays about the hajj, edited by Venetia Porter, was published by the British Museum in 2012, along with a shorter illustrated guide to the hajj. An academic conference, linked to the exhibition, resulted in another book about the topic.

## Background: The Hajj

The hajj (Arabic: حَجّ) is an annual pilgrimage to the sacred city of Mecca in Saudi Arabia, the holiest city for Muslims. It is a mandatory religious duty that must be carried out at least once in their lifetime by all adult Muslims who are physically and financially capable of undertaking the journey, and can support their family during their absence. At the time of the exhibition, the journey was being made by three million pilgrims each year. The hajj is one of the five pillars of Islam, along with shahadah (confession of faith), salat (prayer), zakat (charity), and sawm (fasting). It is a demonstration of the solidarity of the Muslim people, and their submission to God (Allah). The word "hajj" means "to attend a journey", which connotes both the outward act of a journey and the inward act of intentions. In the centre of the Masjid al-Haram mosque in Mecca is the Kaaba, a black cubic building known in Islam as the House of God. A hajj consists of several distinct rituals including the tawaf (procession seven times anticlockwise round the Kaaba), wuquf (a vigil at Mount Arafat where Mohammed is said to have preached his last sermon), and ramy al-jamarāt (stoning of the Devil). Of the five pillars, the hajj is the only one not open to non-Muslims, since Mecca is restricted to Muslims only. Over the centuries, the hajj and its destination the Kaaba have inspired creative works in many media, including literature, folk art, and photography.

## Preparation and launch

There had been no previous major exhibitions devoted to the hajj. The British Museum's planning for its exhibition spanned a two-year period. This included research projects funded by the Arts and Humanities Research Council. The lead curator was Venetia Porter and the project curator was Qaisra Khan, both staff of the British Museum.
Curators negotiated for public and private collections to loan objects for display; forty collections from fourteen countries contributed more than two hundred objects. The largest contributor was David Khalili's family trust. Preparation for the event included promotion to Muslim communities. Khan collected photographs, recordings, and souvenirs during her own hajj in 2010, and assisted with community outreach. The exhibition was presented in partnership with the King Abdulaziz Public Library and with the support of HSBC Amanah. Prince Charles gave a speech to formally open the exhibition on 26 January 2012. Prince Abdulaziz bin Abdullah travelled from Saudi Arabia to represent his father, the custodian of the Two Holy Mosques, at this opening ceremony.

## Content

The exhibition was held in the circular British Museum Reading Room. To set the mood, visitors entered through a narrow passage where audio recordings of an adhan (call to prayer) were played. The displays were arranged to draw visitors around the circular space, mimicking the tawaf: the anticlockwise walk around the Kaaba that is a core ritual of the hajj. An early section illustrated the preparations traditionally taken before a hajj, which can include settling debts and preparing a will. Before trains and air travel, a hajj pilgrimage could take many months and involve a significant risk of death either from transmissible disease or bandits. Also displayed in this section were examples of ihram clothing: white clothes that mark the spiritual purpose and collective unity of hajj pilgrims. The bulk of the content was organised around three themes: pilgrimage routes, the rituals of the hajj, and Mecca. The first section described five different pilgrimage routes towards Mecca: the traditional routes through Arabia, North Africa, the Ottoman Empire, and Asia, plus the modern route by air from Britain. It thus contrasted the early pilgrims' arduous, risky journey across desert or ocean with the ease of modern travel. Pilgrimages of past centuries were illustrated by manuscripts of hajj-related literature, including the Anis Al-Hujjaj, the Dala'il al-Khayrat, the Shahnameh, the Futuh al-Haramayn, and the Jami' al-tawarikh. Mansa Musa, king of the Mali Empire, travelled to Mecca in 1324 with 60,000 courtiers as atonement for accidentally killing his mother; he was depicted in a panel from a 14th-century Catalan Atlas. The importance of Mecca to Muslims was illustrated by these ancient maps and diagrams as well as by qibla compasses which help devotees turn towards the city, which they are required to do for prayer. The stories of individual historical pilgrims were told through diaries and photographs. These included Westerners such as the explorer Richard Francis Burton (a non-Muslim who made the trip in disguise in 1853), intelligence officer Harry St John Philby, and aristocrat Lady Evelyn Cobbold. Philby took part in cleaning the Kaaba on his trip, and the brush and cloth he used were included in the exhibition. The King of Bone's diary, written in the Bugis language, was one of several objects from pilgrims who travelled from what is now Indonesia. Other texts included a travelogue by the 19th-century Chinese scholar Ma Fuchu and a 13th-century manuscript of the Maqamat al-Hariri story collection. One of the earliest surviving Qurans was on display: an 8th-century manuscript lacking the decorative calligraphy associated with later versions. A seven-minute video illustrated the rituals of the hajj.
The rituals section also displayed textiles from the holy sites, including sections from kiswahs (ornate textile coverings that had decorated the Kaaba), sitaras (ornamental curtains) from other holy sites and a mahmal (ceremonial litter conveyed by camel from Cairo to Mecca with the pilgrim caravan). Some exhibits were personal items that pilgrims brought or acquired on their journey. These included prayer beads, travel tickets, and flasks for drinking water from the Zamzam Well. Also displayed were hajj certificates, showing that a hajj has been completed, often with illustrations of holy sites. Hajj banknotes can be bought before the journey and exchanged for Saudi currency, protecting the pilgrim from exchange rate fluctuations. The section on Mecca used past and present photographs and paintings to show how the mosque surrounding the Kaaba (the Masjid al-Haram) has been modernised to make space for much larger numbers of pilgrims, resulting in some ancient buildings being demolished. The 19th-century photographs included some by Muhammad Sadiq of holy sites in Mecca and Christiaan Snouck Hurgronje's portraits of pilgrims. Towards the end of the exhibition were several pieces of contemporary art, including works by Ahmed Mater, Idris Khan, Walid Siti, Kader Attia, Ayman Yossri, and Abdulnasser Gharem. A final section played audio testimonies of British hajj pilgrims and invited guests to write down their own reflections.

## Reception and legacy

The museum's target of 80,000 visitors was quickly exceeded. By the end of the run, 119,948 adult tickets had been sold (children had free entry and were not counted). According to the British Museum's annual report, educational events connected to the exhibition attracted nearly 32,000 participants. Forty-seven percent of visitors were Muslims. Some non-Muslim visitors reported that overhearing Muslim families' conversations, or striking up conversations with them, helped them appreciate the spiritual importance of the hajj. In surveys, 89% of attendees reported emotional or spiritual reactions such as reflection on faith. Steph Berns, a doctoral researcher at the University of Kent, interviewed attendees and found a small minority for whom contemplating the artefacts or personal testimonies induced a sense of closeness to God. The aspects of the exhibition most often remarked on by visitors were the personal accounts of hajj pilgrims in the video, photographs, and textual diaries. The artefacts that attracted the most visitor comments were the textiles and contemporary art pieces. Berns observed that, for most visitors, the exhibition could not fully reproduce the personal and emotional experience of the hajj, which is crucially connected to the specific location of Mecca. She described this as an unavoidable result of presenting the topic within a museum thousands of miles away. In The Guardian, Jonathan Jones wrote "This is one of the most brilliant exhibitions the British Museum has put on", awarding it five stars out of five. He described its celebration of Islam as "challenging" to non-Muslim Westerners used to negative portrayal of the religion. The Londonist praised an "eye-opening and fascinating" exhibition that demystified an aspect of Islam poorly understood by most of the public.
Brian Sewell in the Evening Standard described the exhibition as "of profound cultural importance", praising it as an example of "what multiculturalism should be – information, instruction and understanding, academically rigorous, leaving both cultures (the enquiring and the enquired) intact". For The Diplomat, Amy Foulds described the first part of the exhibition as very interesting but felt that the section about Mecca was anti-climactic, though somewhat redeemed by the contemporary art pieces. Reviewing for The Arts Desk, Fisun Guner awarded four out of five stars to "an exhibition about faith that even an avowed atheist might find rather moving [...] as we read and listen to the words of believers experiencing what must be seen for them not only as an encounter with God but a deep sense of connection with fellow Muslims". For The Independent, Arifa Akbar, who went on hajj in 2006, found it "utterly refreshing" to see a focus on personal experiences of the hajj rather than the politics of Islam and how it is perceived by non-Muslims. She observed that a museum visit is unavoidably dry compared to the intense experience of joining the throng around the Kaaba, but praised the curators' originality and courage in tackling the subject. For Akbar, the highlights included the 8th-century Quran and a sitara. Also in The Independent, Jenny Gilbert found the logistical details of travel – a "dry" topic for those not already interested in manuscript maps – less appealing than the colourful accounts of historical and modern pilgrims. The journalist and broadcaster Sarfraz Manzoor took his 78-year-old mother to the exhibition since she had long wanted to perform the hajj but was too infirm to make the trip. He contrasted his mother's joyous reaction against his own mixed feelings on the subject matter as a British Muslim. "And yet", he wrote, "the exhibition does illuminate the magnetic appeal of the hajj – of knowing that hundreds of millions have visited the site and completed the same rituals." The scholar of religion Karen Armstrong recommended the exhibition as an antidote to Western stereotypes of Islam that focus on violence and extremism. She described it as an insight into how the vast majority of Muslims view and practise their religion. For the Sunday Times art critic Waldemar Januszczak, an exhibition on a topic for which there is relatively little visual material was "heroic" and showed a determination to help visitors understand the world. He drew a parallel with exhibitions of conceptual art; since texts rather than visual art played a crucial role, "so much of the extraordinary story line laid out for us [...] takes place in the mind". Among the visual art, he singled out the textiles as providing "a visceral artistic buzz to the display". In Newsweek, Jason Goodwin said the exhibition fulfilled the British Museum's purpose of "explain[ing] the world to itself" but said that Saudi influence resulted in "a palpable air of self-congratulation and a tendency to soft-pedal the role of the Ottoman Turks in maintaining the major hajj routes across their empire from the 16th to the 20th centuries." Nick Cohen, in an Observer piece accusing British cultural institutions of "selling their souls" to dictatorships, criticised the exhibition for ignoring aspects of the hajj documented by historians of Islam. 
He speculated that topics had been excluded so as not to offend the Saudi royal family, including deaths at the hajj (by violence or by incompetent crowd control) and the destruction of buildings in Mecca where Muhammad and his family had lived. The museum responded that the Saudi royal family had not funded the exhibition and had no curatorial control. Jonathan Jones responded to Cohen, defending the five-star review he had given. For Jones, the exhibition was driven not by political or theological goals but by a genuine enthusiasm for the beauty and significance of Islamic culture. That some exhibits had come from Saudi Arabia was, in his view, not significant.

### Publications

Two books resulted directly from the exhibition, both edited by Venetia Porter. Hajj: Journey to the Heart of Islam is an exhibition catalogue that also includes interdisciplinary essays explaining the history, culture, and religious significance of the hajj. The authors include Karen Armstrong, Muhammad Abdel-Haleem, Hugh N. Kennedy, Robert Irwin, and Ziauddin Sardar. The Art of Hajj is a shorter book describing Mecca, Medina, and the rituals of the hajj with visual examples. Qamar Adamjee, a curator at the Asian Art Museum in San Francisco, described both books as accessible to a broad audience while covering many different aspects of the subject. An academic conference was held in conjunction with the exhibition from 22 to 24 March. Its proceedings, including thirty papers on different aspects of the hajj, were published by the British Museum in 2013 as The Hajj: Collected Essays, edited by Venetia Porter and Liana Saif. The Khalili Collection of Hajj and the Arts of Pilgrimage subsequently expanded into a five-thousand-object collection documenting the Islamic holy sites of Mecca and Medina. In 2022 it was published in a single illustrated volume by Qaisra Khan, who had co-curated the London exhibition and had become the curator of Hajj and the Arts of Pilgrimage at the Khalili Collections. An eleven-volume catalogue is scheduled for publication in 2023.

### Related exhibitions

The success of Hajj: Journey to the Heart of Islam prompted museums and art institutions in other countries to inquire about hosting hajj-themed exhibitions. It was not possible for the London exhibition to go on tour; it had involved special loans from 40 different sources, arranged through years of negotiation. Instead, these institutions created exhibitions on the theme of the hajj using items loaned by the Khalili Collection, among other collections. These included the Museum of Islamic Art in Doha and the Arab World Institute in Paris. The Doha exhibition was titled Hajj: The Journey Through Art and drew most of its content from Qatari art collections. Since France has many North African immigrants, the Paris exhibition focused on hajj routes from North Africa. A Dutch exhibition titled Longing for Mecca: The Pilgrim's Journey was held in 2013 at the National Museum of Ethnology in Leiden and in an expanded version at the Tropenmuseum in Amsterdam from January 2019 to February 2020. This combined objects from Dutch collections with the Khalili Collection objects that had been exhibited in London.

## See also

- History of the Hajj
465,004
1957 Canadian federal election
1,173,068,454
null
[ "1957 elections in Canada", "Canadian federal elections by year", "John Diefenbaker", "June 1957 events in Canada" ]
The 1957 Canadian federal election was held June 10, 1957, to select the 265 members of the House of Commons of Canada of the 23rd Parliament of Canada. In one of the greatest upsets in Canadian political history, the Progressive Conservative Party (also known as "PCs" or "Tories"), led by John Diefenbaker, brought an end to 22 years of Liberal rule, as the Tories were able to form a minority government despite losing the popular vote to the Liberals. The Liberal Party had governed Canada since 1935, winning five consecutive elections. Under Prime Ministers William Lyon Mackenzie King and Louis St. Laurent, the government gradually built a welfare state. During the Liberals' fifth term in office, the opposition parties depicted them as arrogant and unresponsive to Canadians' needs. Controversial events, such as the 1956 "Pipeline Debate" over the construction of the Trans-Canada Pipeline, had hurt the government. St. Laurent, nicknamed "Uncle Louis", remained popular, but exercised little supervision over his cabinet ministers. In 1956, Tory leader George A. Drew unexpectedly resigned due to ill health. In his place, the PC party elected the fiery and charismatic Diefenbaker. The Tories ran a campaign centred on their new leader, who attracted large crowds to rallies and made a strong impression on television. The Liberals ran a lacklustre campaign, and St. Laurent made few television appearances. Uncomfortable with the medium, the Prime Minister read his speeches from a script and refused to wear makeup. Abandoning their usual strategy of trying to make major inroads in Liberal-dominated Quebec, the Tories focused on winning seats in the other provinces. They were successful; though they gained few seats in Quebec, they won 112 seats overall to the Liberals' 105. With the remaining seats won by other parties, the PC party only had a plurality in the House of Commons, but the margin was sufficient to make John Diefenbaker Canada's first Tory Prime Minister since R. B. Bennett in 1935. ## Background ### Liberal domination The Tories had last governed Canada under R.B. Bennett, who had been elected in 1930. Bennett's government had limited success in dealing with the Depression, and was defeated in 1935, as Liberal William Lyon Mackenzie King, who had previously served two times as Prime Minister, was restored to power. The Liberals won five consecutive elections between 1935 and 1953, four of the victories resulting in powerful majority governments. The Liberals worked closely with the civil service (drawing several of their ministers from those ranks) and their years of dominance saw prosperity. When Mackenzie King retired in 1948, he was succeeded by Minister of Justice Louis St. Laurent, a bilingual Quebecer who took office at the age of 66. An adept politician, St. Laurent projected a gentle persona and was affectionately known to many Canadians as Uncle Louis. In actuality, St. Laurent was uncomfortable away from Ottawa, was subject to fits of depression (especially after 1953), and on political trips was carefully managed by advertising men from the firm of Cockfield Brown. St. Laurent led the Liberals to an overwhelming triumph in the 1949 election, campaigning under the slogan "You never had it so good". The Liberals won a fifth successive mandate in 1953, with St. Laurent content to exercise a highly relaxed leadership style. The Mackenzie King and St. Laurent governments laid the groundwork for the welfare state, a development initially opposed by many Tories. C.D. 
Howe, considered one of the leading forces of the St. Laurent government, told his Tory opponents when they alleged that the Liberals would abolish tariffs if the people would let them, "Who would stop us? ... Don't take yourselves too seriously. If we wanted to get away with it, who would stop us?" ### Tory struggles At the start of 1956, the Tories were led by former Ontario premier George A. Drew, who had been elected PC leader in 1948 over Saskatchewan MP John Diefenbaker. Drew was the fifth man to lead the Tories in their 21 years out of power. None had come close to defeating the Liberals; the best performance was in 1945, when John Bracken secured 67 seats for the Tories. The Liberals, though, had won 125 seats, and maintained their majority. In the 1953 election, the PC party won 51 seats out of the 265 in the House of Commons. Subsequently, the Tories picked up two seats from the Liberals in by-elections, and the Liberals (who had won 169 seats in 1953) lost an additional seat to the Co-operative Commonwealth Federation (CCF, the predecessor of the New Democratic Party (NDP)). After over two decades in opposition, the Tories were closely associated with that role in the public eye. The Tories were seen as the party of the wealthy and of English-speaking Canada and drew about 30% of the vote in federal elections. The Tories had enjoyed little success in Quebec in the past forty years. By 1956, the Social Credit Party was becoming a potential rival to the Tories as Canada's main right-wing party. Canadian journalist and author Bruce Hutchison discussed the state of the Tories in 1956: > When a party calling itself Conservative can think of nothing better than to outbid the Government's election promises; when it demands economy in one breath and increased spending in the next; when it proposes an immediate tax cut regardless of inflationary results ... when in short, the Conservative party no longer gives us a conservative alternative after twenty-one years ... then our political system desperately requires an opposition prepared to stand for something more than the improbable chance of quick victory. ### Run-up to the campaign In 1955, the Tories, through a determined filibuster, were able to force the government to withdraw amendments to the Defence Procurement Act, which would have made temporary, extraordinary powers granted to the government permanent. Drew led the Tories in a second battle with the government the following year: in the so-called "Pipeline Debate", the government invoked closure repeatedly in a weeks-long debate which ended with the Speaker ignoring points of order as he had the division bells rung. Both measures were closely associated with Howe, which, in combination with his earlier comments, led to Tory claims that Howe was indifferent to the democratic process. Tory preparations for an upcoming election campaign were thrown into disarray in August 1956 when Drew fell ill. Tory leaders felt that the party needed vigorous leadership with a federal election likely to be called within a year. In September, Drew resigned. Diefenbaker, who had failed in two prior bids for the leadership, announced his candidacy, as did Tory frontbenchers Davie Fulton and Donald Fleming. Diefenbaker, a criminal defence lawyer from Prince Albert, Saskatchewan, was on the populist left of the PC party. 
Those Tory leaders who disliked Diefenbaker and his views, and hoped to find a candidate from Drew's conservative wing of the party, wooed University of Toronto president Sidney Smith as a candidate. However, Smith refused to run. Tory leaders scheduled a leadership convention for December. In early November, the Suez crisis erupted. Minister of External Affairs Lester Pearson played a major part in the settlement of that dispute, and was later awarded the Nobel Peace Prize for his role. Diefenbaker, as the Tories' foreign policy critic and as the favourite in the leadership race, gained considerable attention for his speeches on Suez. The Tories attacked Pearson for, as they said, being an errand boy for the United States government; he responded that it was better to be such a lackey than to be an obedient colonial doing Britain's will unquestioningly. While Suez would come to be regarded by many as one of the finest moments in Canadian foreign policy, at the time it cost the Liberals support outside of Quebec. Diefenbaker was the favourite throughout the leadership campaign. At the convention in Ottawa in December, he refused to abide by the custom of having a Quebecer be either the proposer or seconder of his candidacy, and instead selected an Easterner (from New Brunswick) and a Westerner to put his name in nomination. With most Quebec delegates backing his opponents, Diefenbaker felt that having a Quebecer as a nominator would not increase his support. Diefenbaker was elected on the first ballot, and a number of Quebec delegates walked out of the convention after his victory. Other Diefenbaker opponents, such as those who had urged Smith to run, believed that the 61-year-old Diefenbaker would be merely a caretaker, who would serve a few years and then step down in favour of a younger man, and that the upcoming election would be lost to the Liberals regardless of who led the Tories. When Parliament convened in January, the Liberals introduced no major proposals, and proposed nothing controversial. Diefenbaker turned over his parliamentary duties to British Columbia MP Howard Green and spent much of his time on the road making speeches across the country. Diefenbaker toured a nation in which Liberal support at the provincial level had slowly been eroding. When the Liberals gained Federal power in 1935, they controlled eight of the nine provincial governments, all except Alberta. By early 1957, the Liberals controlled the legislatures only in the tenth province, Newfoundland, and in Prince Edward Island and Manitoba. In March, Finance Minister Walter Harris, who was believed to be St. Laurent's heir apparent, introduced his budget. The budget anticipated a surplus of \$258 million, of which \$100 million was to be returned in the form of increased welfare payments, with an increase of \$6 per month (to a total of \$46) for old age pensioners—effective after the election. Harris indicated that no more could be returned for fear of increasing inflation. Diefenbaker attacked the budget, calling for higher old age pensions and welfare payments, more aid to the poorer provinces, and aid to farmers. St. Laurent had informed Diefenbaker that Parliament would be dissolved in April, for an election on June 10. A final parliamentary conflict was sparked by the suicide of Canadian Ambassador to Egypt E.H. Norman in the midst of allegations made by a United States Senate subcommittee that Norman had communist links. 
Pearson had defended Norman when the allegations became public, and defended him again after his death, suggesting to the Commons that the allegations were false. It quickly became apparent that the information released by the Americans might have come from Canadian intelligence sources, and after severe questioning of Pearson by Diefenbaker and the other parties' foreign policy critics, Pearson made a statement announcing that Norman had had communist associations in his youth, but had passed a security review. The minister evaded further questions regarding what information had been provided, and the discussion was cut short when Parliament was dissolved on April 12. Peter Regenstreif, who studied the four elections between 1957 and 1963, wrote of the situation at the start of the election campaign, "In 1957, there was no tangible indication that the Liberals would be beaten or, even in the opposition's darkest moment of reflection, could be. All the hindsight and post hoc gazing at entrails cannot change that objective fact." ## Issues The Liberals and PC party differed considerably on fiscal and tax policies. In his opening campaign speech at Massey Hall in Toronto, Diefenbaker contended that Canadians were overtaxed in the amount of \$120 per family of four. Diefenbaker pledged to reduce taxes and castigated the Liberals for not reducing taxes despite the government surplus. St. Laurent also addressed tax policy in his opening speech, in Winnipeg. St. Laurent noted that since 1953, tax rates had declined, as had the national debt, and that Canada had a reputation as a good place for investments. The Prime Minister argued that the cost of campaign promises made by the Progressive Conservatives would inevitably drive up the tax rate. Diefenbaker also assailed tight-money monetary policies which kept interest rates high, complaining that they were hitting Atlantic and Western Canada hard. The Tories promised changes in agricultural policies. Many Canadian farmers were unable to find buyers for their wheat; the PC party promised generous cash advances on unsold wheat and promised a protectionist policy regarding foreign agricultural products. The Liberals argued that such tariffs were not worth the loss of bargaining position in efforts to seek foreign markets for Canadian agricultural products. The institution of the welfare state was by 1957 accepted by both major parties. Diefenbaker promised to expand the national health insurance scheme to cover tubercular and mental health patients. He characterized the old age pension increase which the Liberal government was instituting as a mere pittance, not even enough to keep up with the cost of living. Diefenbaker noted that the increase only amounted to twenty cents a day, using that figure to ridicule Liberal contentions that an increase would add to the rate of inflation. All three opposition parties promised to increase the pension, with the Social Crediters and CCF even stating the specific amounts it would be raised by. The Liberals were content to rest on their record in foreign affairs, and doubted that the Tories could better them. In a radio address on May 30, Minister of Transport George Marler commented, "You will wonder as I do who in the Conservative Party would take the place of the Honourable Lester Pearson, whose knowledge and experience of world affairs has been put to such good use in recent years." 
Diefenbaker, however, refused to concede the point and in a televised address stated that Canadians were "asking Pearson to explain his bumbling of External Affairs". Though they were reluctant to discuss the Norman affair, the Tories suggested that the government had irresponsibly allowed gossip to be transmitted to United States congressional committees. They also attacked the government over Pearson's role in the Suez settlement, suggesting that Canada had let Britain down. Some members of the Tories' campaign committee had urged Diefenbaker not to build his campaign around the Pipeline Debate, contending that the episode was now a year in the past and forgotten by the voters, who did not particularly care what went on in Parliament anyway. Diefenbaker replied, "That's the issue, and I'm making it." Diefenbaker referred to the conduct of the government in the Pipeline Debate more frequently than he did any other issue during the campaign. St. Laurent initially dealt with the question flippantly, suggesting in his opening campaign address that the debate had been "nearly as long as the pipeline itself and quite as full of another kind of natural gas". As the issue gained resonance with the voters, the Liberals devoted more time to it, and St. Laurent devoted a major part of his final English television address to the question. The Liberals defended their conduct, and contended that a minority should not be allowed to impose its will on an elected majority. St. Laurent suggested that the Tories had performed badly as an opposition in the debate, and suggested that the public give them more practice at being an opposition. Finally, the Tories contended that the Liberals had been in power too long, and that it was time for a change. The PC party stated that the Liberals were arrogant, inflexible, and not capable of looking at problems from a new point of view. Liberals responded that with the country prosperous, there was no point to a change. ## Campaign ### Progressive Conservative In 1953, almost half of the Tories' campaign funds were spent in Quebec, a province in which the party won only four of seventy-five seats. After the 1953 election, Tory MP Gordon Churchill studied the Canadian federal elections since Confederation. He concluded that the Progressive Conservatives were ill-advised to continue pouring money into Quebec in an effort to win seats in the province; the Tories could win at least a minority government by maximizing their opportunities in English-speaking Canada, and if the party could also manage to win twenty seats in Quebec, it could attain a majority. Churchill's conclusions were ignored by most leading Tories—except Diefenbaker. Diefenbaker's successful leadership race had been run by Allister Grosart, an executive for McKim Advertising Ltd. Soon after taking the leadership, Diefenbaker got Grosart to help out at Tory headquarters, and soon appointed him national director of the party and national campaign manager. Grosart appointed a national campaign committee, something which had not been done previously by the Tories, but which, according to Grosart, provided the organizational key to success in 1957. The party was ill-financed, having only \$1,000,000 to wage the campaign—half what it had in 1953. Grosart divided most of that money equally by constituency, to the disgruntlement of Quebec Tories, who were used to receiving a disproportionate share of the national party's financing. 
The Tory campaign opened at Massey Hall in Toronto on April 25, where Diefenbaker addressed a crowd of 2,600, about 200 short of capacity. At the Massey Hall rally, a large banner hung behind Diefenbaker, which did not mention the name of his party, but which instead stated, "It's Time for a Diefenbaker Government." The slogan, coined by Grosart, sought to blur Canadian memories of the old Tory party of Bennett and Drew and instead focus attention on the party's new leader. Posters for election rallies contained Diefenbaker's name in large type; only the small print contained the name of the party. When St. Laurent complained that the Tories were not campaigning under their own name, Grosart sent copies of the Prime Minister's remarks in a plain envelope to every Liberal candidate, and was gratified when they began inserting the allegation into their own speeches. According to Professor J. Murray Beck in his history of Canadian general elections, "His political enemies were led to make the very point he was striving to drive home: Diefenbaker was, in effect, leading a new party, not an old one with a repellent image." Grosart later stated that he structured the entire campaign around the personality of John Diefenbaker, and threw away the Tory party and its policies. Diefenbaker began his campaign with a week of local campaigning in his riding, after which he went to Toronto for the Massey Hall speech. After two days in Ontario, he spoke at a rally in Quebec City, before spending the remainder of the first week in the Maritimes. The next week saw Diefenbaker spend two days in Quebec, after which he campaigned in Ontario. The next two weeks included a Western tour, with brief returns to Ontario, the most populous province. The final two weeks saw Diefenbaker spend much of the time in Ontario, though with brief journeys east to the Maritimes and Quebec and twice west to Saskatchewan. He returned to his riding for the final weekend before the Monday election. He spent 39 days on the campaign trail, eleven more than the Prime Minister. According to Professor John Meisel, who wrote a book about the 1957 campaign, Diefenbaker's speaking style was "reminiscent of the fiery orators so popular in the nineteenth century. Indeed, Mr. Diefenbaker's oratory has been likened to that of the revivalist preacher." As a new face on the national scene given to outspoken attacks on the government, he began to attract unexpectedly large crowds early in the campaign. When reduced to the written word, however, Diefenbaker's rhetoric sometimes proved to be without much meaning. According to journalist and author Peter C. Newman, "On the printed page, it makes little sense. But from the platform, its effect was far different." Both Newman and Meisel cite as an example of this the conclusion to the leader's Massey Hall speech: > If we are dedicated to this—and to this we are—you, my fellow Canadians, will require all the wisdom, all the power that comes from those spiritual springs that make freedom possible—all the wisdom, all the faith and all the vision which the Conservative Party gave but yesterday under Macdonald, change to meet changing conditions, today having the responsibility of this party to lay the foundations of this nation for a great and glorious future. Diefenbaker's speeches contained words which evoked powerful emotions in his listeners. His theme was that Canada was on the edge of greatness—if it could only get rid of its incompetent and arrogant government. 
He stressed that the only alternative to the Liberals was a "Diefenbaker government". According to Newman, Diefenbaker successfully drew on the discontent both of those who had prospered in the 1950s, and sought some deeper personal and national purpose, as well as those who had been left out of the prosperity. However, Diefenbaker spoke French badly and the excitement generated by his campaign had little effect in francophone Quebec, where apathy prevailed and Le Devoir spoke of "une campaigne électorale au chloroforme". The Tories had performed badly in British Columbia in 1953, finishing a weak fourth. However, the province responded to Diefenbaker, and 3,800 turned out for his Victoria speech on May 21, his largest crowd yet. This was bettered two days later in Vancouver with a crowd of 6,000, with even the street outside the Georgia Street Auditorium packed with Tory partisans. Diefenbaker responded to this by delivering what Dick Spencer (who wrote a book on Diefenbaker's campaigns) considered his greatest speech of the 1957 race, and which Newman considered the turning point of Diefenbaker's campaign. Diefenbaker stated, "I give this assurance to Canadians—that the government shall be the servant and not the master of the people ... The road of the Liberal party, unless it is stopped—and Howe has said, 'Who's going to stop us?'—will lead to the virtual extinction of parliamentary government. You will have the form, but the substance will be gone." The Liberal-leaning Winnipeg Free Press, writing shortly after Diefenbaker's speeches in British Columbia, commented on them: > Facts were overwhelmed with sound, passion substituted for arithmetic, moral indignation pumped up to the bursting point. But Mr. Diefenbaker provided the liveliest show of the election ... and many listeners undoubtedly failed to notice that he was saying even less than the Prime Minister, though saying it more shrilly and with evangelistic fervour ... Mr. Diefenbaker has chosen instead to cast himself as the humble man in a mood of protest, the common Canadian outraged by Liberal prosperity, the little guy fighting for his rights. > > So far as the crowds mean anything, that posture is a brilliant success at one-night stands. On June 6, the two major party campaigns crossed paths in Woodstock, Ontario. Speaking in the afternoon, St. Laurent drew a crowd of 200. To the shock of St. Laurent staffers, who remained for the Diefenbaker appearance, the PC leader drew an overflow crowd of over a thousand that evening, even though he was an hour late, with announcements made to the excited crowd that he was slowed by voters who wanted only to see him or shake his hand. Diefenbaker's intensive campaign exhausted the handful of national reporters who followed him. Clark Davey of The Globe and Mail stated, "We did not know how he did it." Reporters thought the Progressive Conservatives might, at best, gain 30 or 35 seats over the 53 they had at dissolution, and when Diefenbaker, off the record, told the reporters that the Tories would win 97 seats (which would still allow the Liberals to form the government), they concluded he was guilty of wishful thinking. Diefenbaker was even more confident in public; after he concluded his national tour and returned to his constituency, he addressed his final rally in Nipawin, Saskatchewan: "On Monday, I'll be Prime Minister." ### Liberal St. Laurent was utterly confident of an election victory, so much so that he did not even bother to fill the sixteen vacancies in the Senate. 
He had been confident of re-election when Drew led the Tories, and, according to Liberal minister Lionel Chevrier, Diefenbaker's victory in the party leadership race increased his confidence by a factor of ten. At his press conference detailing his election tour, St. Laurent stated, "I have no doubt about the election outcome." He indicated that his campaign would open April 29 in Winnipeg, and that the Prime Minister would spend ten days in Western Canada before moving east. However, he indicated he would first go home to Quebec City for several days around Easter (April 21 in 1957). This break kept him out of the limelight for ten days at a time when Diefenbaker was already actively campaigning and making daily headlines. At a campaign stop in Jarvis, Ontario, St. Laurent told an aide that he was afraid the right-wing, anti-Catholic Social Credit Party would be the next Opposition. St. Laurent denied Opposition claims that he would resign after an election victory, and the 75-year-old indicated that he planned to run again in 1961, if he was still around. The Liberals made no new, radical proposals during their campaign, but instead ran a quiet campaign with occasional attacks on the opposition parties. They were convinced that the public still supported their party, and that no expensive promises need be made to voters. St. Laurent was made the image of the nation's prosperity, and the Liberals refused to admit any reason for discontent existed. When Minister of Finance Harris proposed raising the upcoming increase in old age pension by an additional four dollars a month, St. Laurent refused to consider it, feeling that the increase had been calculated on the basis of the available facts, and those facts had not changed. During the Prime Minister's Western swing, St. Laurent made formal speeches only in major cities. In contrast to Diefenbaker's whistle-stop train touring, with a hasty speech in each town as the train passed through, the Liberals allowed ample time for "Uncle Louis" to shake hands with voters, pat their children on the head, and kiss their babies. In British Columbia, St. Laurent took the position that there were hardly any national issues worth discussing—the Liberals had brought Canada prosperity and all that was needed for more of the same was to return the party to office. After touring Western Canada, St. Laurent spent the remainder of the second week of the campaign returning to, and in, Ottawa. The third week opened with a major speech in Quebec City, followed by intensive campaigning in Ontario. The fifth week was devoted to the Maritime provinces and Eastern Quebec. The sixth week opened with a major rally in Ottawa, before St. Laurent returned to the Maritimes and Quebec, and the final week was spent in Ontario before St. Laurent returned to his hometown of Quebec City for the election. St. Laurent tried to project an image as a family man, and to that end often addressed schoolchildren. As he had in previous elections, he spoke to small groups of children regarding Canadian history or civics. The strategy backfired while addressing children in Port Hope, Ontario. With the children inattentive, some playing tag or sticking cameras in his face, St. Laurent angrily told them that it was their loss if they did not pay attention, as the country would be theirs to worry about far longer than it would be his. St. Laurent and the Liberals suffered other problems during the campaign. According to Newman, St. 
Laurent sometimes seemed unaware of what was happening around him, and at one campaign stop, shook hands with the reporters who were following him, under the apparent impression they were local voters. On the evening of Diefenbaker's Vancouver speech, St. Laurent drew 400 voters to a rally in Sherbrooke, Quebec, where he had once lived. C.D. Howe, under heavy pressure from the campaign of CCF candidate Doug Fisher in his Ontario riding, intimated that Fisher had communist links. At a rally in Manitoba, Howe offended a voter who told him the farmers were starving to death, poking the voter in the stomach and saying "Looks like you've been eating pretty well under a Liberal government." At another rally, Howe dismissed a persistent Liberal questioner, saying "Look here, my good man, when the election comes, why don't you just go away and vote for the party you support? In fact, why don't you just go away?" The Liberals concluded their campaign with a large rally at Toronto's Maple Leaf Gardens on June 7. Entertainment at the event was provided by the Leslie Bell singers, and according to Grosart, many in the audience were Tory supporters who had turned out to hear them. St. Laurent's speech at the rally was interrupted when William Hatton, a 15-year-old boy from Malton, Ontario, climbed onto the platform. Hatton carried a banner reading, "This Can't Be Canada" with a Liberal placard bearing St. Laurent's photograph, and moved to face the Prime Minister. Meisel describes Hatton as an "otherwise politically apathetic boy who ... slowly and deliberately tore up a photograph of the Prime Minister as the latter was speaking" and states that Hatton engaged in "intensely provocative behavior". Liberal partisans interceded, and in the ensuing fracas, Hatton fell from the platform, audibly hitting his head on the concrete floor. St. Laurent watched in apparent shock, according to his biographer Dale Tompson, as officials aided the boy and took him from the hall. According to Tompson, the crowd "turned its indignation on the men on the platform" and spent the remainder of the evening wondering about the boy's possible injuries rather than listening to the Prime Minister's speech. Hatton was not seriously injured, but, according to Newman, "the accident added to the image of the Liberal Party as an unrepentant arrogant group of old men, willing to ride roughshod over voters". Grosart later described the incident as "the turning point" of the campaign. Professor Meisel speculated that the Hatton incident might have been part of an organized campaign to annoy St. Laurent out of his pleasant "Uncle Louis" persona, and Grosart later related that Liberal frontbencher Jack Pickersgill always accused him of being behind the boy's actions, but that the incident was "a sheer accident". Hatton's mother described his actions as "[j]ust a schoolboy prank," and a reaction to reading an article about how the art of heckling was dying. According to public relations executive J.G. Johnston in a letter to Diefenbaker on June 10, Hatton had come to the rally with several other boys, including Johnston's son, but had gone off on his own while the other boys paraded with Diefenbaker posters which had been smuggled inside. According to Johnston, Hatton was caught on CBC tape saying to St. Laurent, "I can no longer stand your hypocrisy, Sir" before tearing the St. Laurent poster. 
Attempts by Johnston to have the Liberal activist who pushed Hatton off the platform arrested failed, according to Johnston, on the ground that the police could find no witnesses. ### CCF The CCF was a socialist party, which had much of its strength in Saskatchewan, though it ran candidates in several other provinces. At Parliament's dissolution in April 1957, it had 23 MPs, from five different provinces. Aside from the Liberals and the Tories, it was the only party to nominate a candidate in a majority of the ridings. In 1957, the party was led by Saskatchewan MP M.J. Coldwell. In 1956, the party adopted the Winnipeg Declaration, a far more moderate proposal than its previous governing document, the 1933 Regina Manifesto. For example, the Regina Manifesto pledged the CCF to the eradication of capitalism; the Winnipeg Declaration recognized the utility of private ownership of business, and stated that the government should own business only when it was in the public interest. In its election campaign, the CCF did not promise to nationalize any industries. It promised changes in the tax code in order to increase the redistribution of wealth in Canada. It pledged to increase exemptions from income tax, to allow medical expenses to be considered deductions from income for tax purposes, and to eliminate sales tax on food, clothing, and other necessities of life. It also promised to raise taxes on the higher income brackets and to eliminate the favourable tax treatment of corporate dividends. The CCF represented many agricultural areas in the Commons, and it proposed several measures to assure financial security for farmers. It proposed national growers' cooperatives for agricultural products which were exported. It proposed cash advances for farm-stored wheat, short and long-term loans for farmers at low interest rates, and government support of prices, to assure the farmer a full income even in bad years. For the Atlantic fisherman, the CCF proposed cash advances at the start of the fishing season and government-owned depots which would sell fishing equipment and supplies to fishermen at much lower than market prices. Coldwell suffered from a heart condition, and undertook a much less ambitious schedule than the major party leaders. The party leader left Ottawa for his riding, Rosetown—Biggar in Saskatchewan, on April 26, and remained there until May 10. He spent three days campaigning in Ontario, then moved west to the major cities of the prairie provinces and British Columbia, before returning to his riding for the final days before the June 10 election. Other CCF leaders took charge of campaigning in Quebec and the Maritimes. ### Social Credit By 1957, the Social Credit Party of Canada had moved far afield from the theories of social credit economics, which its candidates rarely mentioned. Canada's far-right party, the Socreds were led by Solon Low, though its Alberta leader, Premier Ernest Manning, was highly influential in the party. The Socreds' election programme was based on the demand "that Government get out of business and make way for private enterprise" and on their hatred of "all government-inspired schemes to degrade man and make him subservient to the state or any monopoly". The Socreds proposed an increase in the old age pension to \$100 per month. They called for the reversal of the government's tight money policies, and for low income loans for small business and farmers. 
It asked for income tax exemptions to be increased to meet the cost of living, and a national housing programme to make home ownership possible for every Canadian family. The party called for a national security policy based on the need for defence, rather than "aggression", and for a foreign policy which would include food aid to the less-developed nations. The Socreds also objected to the CBC and other spending in the arts and broadcasting. The party felt that the government should solve economic problems before spending money on the arts. Low challenged the Prime Minister over the Suez issue, accusing him of sending a threatening telegram that caused British Prime Minister Anthony Eden to back off the invasion and so gave the Soviets the opportunity for a military buildup in Egypt. St. Laurent angrily denied the charge and offered to open his correspondence to any of the fifty privy councillors who could then announce whether St. Laurent was telling the truth. Low took St. Laurent up on his challenge, and selected the only living former Prime Minister, Tory Arthur Meighen, but the matter was not resolved before the election, and Meighen was not called upon to examine the correspondence in the election's aftermath. The Social Credit Party was weakened by considerable conflict between its organizations in the two provinces it controlled, Alberta and British Columbia. It failed to establish a strong national office to run the campaign due to infighting between the two groups. However, it was better financed than the CCF, due to its popularity among business groups in the West. The Socreds hoped to establish themselves in Ontario, and scheduled their opening rally for Massey Hall in Toronto. The rally was a failure, and even though it ran 40 candidates in Ontario (up from 9 in 1953), the party won no seats in the province. ### Use of television In 1957, the Canadian Broadcasting Corporation gave the four recognized parties free air time for political statements. The television broadcast's audio tracks served also for use on radio. The three opposition parties gave their party leader all of the broadcast time, though Diefenbaker, who did not speak French well, played only a limited role in the Tories' French-language broadcasts. In one, he introduced party president Léon Balcer, a Quebecer, who gave the speech. Diefenbaker had no objection to makeup and, according to Meisel, was prepared to adopt any technique which would make his presentation more effective. A survey in populous southwestern Ontario showed that Diefenbaker made the strongest impression of the four leaders. According to Liberal minister Paul Martin Sr. (the father of the future Prime Minister), St. Laurent had the potential to be quite good on television, but disliked the medium. He was prejudiced against television appearances, considering such speeches the equivalent of carefully planned performances, such as stage shows. He refused to be made up for the telecasts, and insisted on reading his speech from a script. His advisers switched him to a teleprompter, but this failed to make his performances more relaxed. When reading, he would rarely look at the camera. However, St. Laurent made only occasional television appearances (three in each language), letting his ministers make the remainder. Only one cabinet member, Minister of National Defence Ralph Campney took advantage of the course in television techniques offered by the Liberal Party in a dummy studio in its Ottawa headquarters. 
This lack of preparation, according to Meisel, led to "a fiasco" during a television address by Minister of Justice Stuart Garson in early June when the teleprompter stopped working during the speech and Garson "was unable to cope effectively with the failure". ## Election Most predictions had the Tories picking up seats, but the Liberals maintaining a majority. The New York Times reported that Liberals expected to lose a majority of Ontario's seats, but retain a narrow majority in the House of Commons. Time magazine predicted a Tory gain of 20 to 40 seats and stated that any result which denied the Liberals 133 seats and a majority "would rank as a major upset". Beck indicates that many journalists, including those sympathetic to the Tories, saw signs of the coming upset, but disregarded them, convinced that the government was invulnerable. Maclean's, which printed its postelection issue before the election to go on sale the morning after, ran an editorial noting that Canadians had freely chosen to reelect the Liberal Party. On Election Night, the first returns came in from Sable Island, Nova Scotia. Usually a Liberal stronghold, the handful of residents there favoured the Tories by two votes. St. Laurent listened to the election returns on a radio in the living room of his home on Grande Allée in Quebec City, and when the radio broke, moved to an upstairs television set. Diefenbaker began the evening at his house in Prince Albert, and once his re-election to the Commons was certain, moved to his local campaign headquarters. The Conservatives did well in Atlantic Canada, gaining two seats in Newfoundland and nine in Nova Scotia, and sweeping Prince Edward Island's four seats. However, in Quebec, they gained only five seats as the province returned 62 Liberals. The Tories gained 29 seats in Ontario. Howe was defeated by Fisher, and told the media that some strange disease was sweeping the country, but as for him, he was going to bed. The Liberals still led by a narrow margin as the returns began to come in from Manitoba, and St. Laurent told Liberal minister Pickersgill that he hoped that the Tories would get at least one more seat than the Liberals so they could get out of an appalling situation. As the Tories forged ahead in Western Canada, Diefenbaker flew from Prince Albert to Regina to deliver a television address and shouted to Grosart as yet another cabinet minister was defeated, "Allister, how does the architect feel?" Late that evening, St. Laurent went to the Château Frontenac hotel for a televised speech, delivered before fifty supporters. The Tories finished with 112 seats to the Liberals' 105, while both the CCF and Social Credit gained seats in Western Canada and finished with 25 and 19 seats respectively. Nine cabinet ministers, including Howe, Marler, Garson, Campney and Harris were defeated. Though the Liberals outpolled the Tories by over 100,000 votes, most of those votes were wasted running up huge margins in Quebec. St. Laurent could have constitutionally hung onto power until defeated in the House, but he chose not to, and John Diefenbaker took office as Prime Minister of Canada on June 21, 1957. ### Irregularities After the election, the Chief Electoral Officer reported to the Speaker of the House of Commons that "the general election appears to have been satisfactorily conducted in accordance with the procedure in the Canada Elections Act". There were, however, a number of irregularities. In the Toronto riding of St. 
Paul's, four Liberal workers were convicted of various offences for adding almost five hundred names to the electoral register. One of the four was also convicted on two counts of perjury. While the unsuccessful Liberal candidate, former MP James Rooney, was not charged, Ontario Chief Justice James Chalmers McRuer, who investigated the matter, doubted that this could have been done without the candidate's knowledge. Various violations of law, including illicitly opening ballot boxes, illegal possession of ballots, and adding names to the electoral register, took place in twelve Quebec ridings. The RCMP felt it had enough evidence to prosecute in five ridings, and a total of twelve people were convicted. The offences did not affect the outcome of any races. The election of the Liberal candidate in Yukon was contested by the losing Tory candidate. After a trial before the Yukon Territorial Court, that court voided the election, holding that enough ineligible people had been permitted to vote to affect the outcome, though the court noted that it was not the fault of the Liberal candidate that these irregularities had occurred. The Tory, Erik Nielsen, won the new election in December 1957. The election in one Ontario riding, Wellington South, was postponed pursuant to statute after the death of the Liberal candidate and MP, Henry Alfred Hosking, during the campaign. The Tory candidate, Alfred Hales, defeated Liberal David Tolton and CCF candidate Thomas Withers on July 15, 1957.

## Impact

The unexpected defeat of the Liberals was ascribed to various causes. The Ottawa Citizen stated that the defeat could be attributed to "the uneasy talk ... that the Liberals have been in too long." Tom Kent of the Winnipeg Free Press, a future Liberal deputy minister, wrote that though the Liberal record had been the best in the democratic world, the party had failed miserably to explain it. Author and political scientist H.S. Ferns disagreed with Kent, stating that Kent's view reflected the Liberal "assumption that 'Nobody's going to shoot Santa Claus'" and that Canadians in the 1957 election were motivated by things other than material interests. Peter Regenstreif cited the Progressive Conservative strategy in the 1957 and 1958 elections "as classics of ingenuity unequalled in Canadian political history. Much of the credit belongs to Diefenbaker himself; at least some must go to Allister Grosart". A survey taken of those who abandoned the Liberals in 1957 showed that 5.1% did so because of Suez, 38.2% because of the Pipeline Debate, 26.7% because of what they considered an inadequate increase in the old age pension, and 30% because it was time for a change. The results of the election surprised the civil service as much as they did the rest of the public. Civil servant and future Liberal minister Mitchell Sharp asked C.D. Howe's replacement as Minister of Trade and Commerce, Gordon Churchill, not to come to the Ministry's offices for several days as they were redecorating. Churchill later learned that the staff were moving files out. When Churchill finally came to the Ministry's offices, he was met with what he termed "the coldest reception that I have ever received in my life". The new Minister of Labour, Michael Starr, was ticketed three days in a row for parking in the minister's spot. St. Laurent resigned as leader of the Liberal Party on September 5, 1957, but agreed to stay on until a successor was elected. With a lame-duck leader, the Liberals were ineffective in opposition.
Paul Martin stated that "I'm sure I never thought the day would come when [Diefenbaker] would ever be a member of a government, let alone head of it. When that happened, the world had come to an end as far as I was concerned." In January 1958, St. Laurent was succeeded by Lester Pearson. Even in reporting the election result, newspapers suggested that Diefenbaker would soon call another election and seek a majority. Quebec Tory MP William Hamilton (who would soon become Postmaster General under Diefenbaker) predicted on the evening of June 10 that there would soon be another election, in which the Tories would do much better in Quebec. The Tory government initially proved popular among the Canadian people, and shortly after Pearson became Liberal leader, Diefenbaker called a snap election. On March 31, 1958, the Tories won the greatest landslide in Canadian federal electoral history in terms of the percentage of seats, taking 208 seats (including fifty in Quebec) to the Liberals' 48, with the CCF winning eight and none for Social Credit. Michael Bliss, who wrote a survey of the Canadian Prime Ministers, alluded to Howe's dismissive comments regarding the Tories as he summed up the 1957 election:

> The pipeline debate of 1956 sparked a dramatic opposition stand on the importance of free parliamentary debate. Their arrogance did the Grits [Liberals] enormous damage, contributing heavily to the government's problems in the 1957 general election. Under the glare of television cameras in that campaign, St. Laurent, Howe, and company now appeared to be a lot of wooden, tired old men who had lost touch with Canada. The voters decided to stop them.

## National results

Turnout: 74.1% of eligible voters voted.

- The other 4 seats were (2) Independent, (1) Independent Liberal, (1) Independent PC. One Liberal-Labour candidate was elected and sat with the Liberal caucus, as happened after the 1953 election.

## Vote and seat summaries

## Results by province

## See also

- List of Canadian federal general elections
- List of political parties in Canada
10,158,601
Black-breasted buttonquail
1,143,247,023
Species of bird
[ "Birds described in 1837", "Birds of New South Wales", "Birds of Queensland", "Endemic birds of Australia", "Taxa named by John Gould", "Turnix" ]
The black-breasted buttonquail (Turnix melanogaster) is a rare buttonquail endemic to eastern Australia. As with other buttonquails, it is unrelated to the true quails. The black-breasted buttonquail is a plump quail-shaped bird 17–19 cm (6.7–7.5 in) in length with predominantly marbled black, rufous, and pale brown plumage, marked prominently with white spots and stripes, and white eyes. Like other buttonquails, the female is larger and more boldly coloured than the male, with a distinctive black head and neck sprinkled with fine white markings. The usual sex roles are reversed, as the female mates with multiple male partners and leaves them to incubate the eggs. The black-breasted buttonquail is usually found in rainforests, foraging on the ground for invertebrates in large areas of thick leaf litter. Most of its original habitat has been cleared and the remaining populations are fragmented. The species is rated as vulnerable on the International Union for Conservation of Nature (IUCN) Red List of Threatened Species and is listed as vulnerable under the Environment Protection and Biodiversity Conservation Act 1999. A three-year conservation project has been under way since 2021.

## Taxonomy

The black-breasted buttonquail was originally described by ornithologist John Gould in 1837 as Hemipodius melanogaster, from specimens collected around Moreton Bay in Queensland. Its specific epithet is derived from the Ancient Greek terms melas "black" and gaster "belly". In 1840 English zoologist George Robert Gray established that the genus name Turnix, coined in 1790 by French naturalist Pierre Joseph Bonnaterre, had priority over Hemipodius, which had been published in 1815 by Coenraad Jacob Temminck. In his 1865 Handbook to the Birds of Australia, Gould used its current name Turnix melanogaster. Gregory Mathews placed it in its own genus Colcloughia in 1913, which was not followed by later authors. He also described a subspecies Colcloughia melanogaster goweri from Gowrie on the basis of less extensive black plumage, though this was later regarded as individual variation. Along with other buttonquails, the black-breasted buttonquail was traditionally placed in the order Gruiformes, but more recent molecular analysis shows it belongs to an early offshoot within the shorebirds (Charadriiformes). "Black-breasted buttonquail" has been designated the official name by the International Ornithologists' Union (IOU). "Black-fronted buttonquail" is an alternative vernacular name. Gould called it "black-breasted hemipode" initially, and then "black-breasted turnix", corresponding with its scientific name. The buttonquail species were generally known as "quail" (hence "black-breasted quail" or "black-fronted quail") until the Royal Australasian Ornithologists Union (RAOU) promoted the current usage of "buttonquail" in 1978, which was then universally adopted. The Butchulla people, the traditional owners of K'gari (Fraser Island), know the bird as the mur'rindum bird.

## Description

The black-breasted buttonquail is a plump quail-shaped bird with predominantly marbled black, rufous and pale brown plumage, marked prominently with white spots and stripes, white eyes, a grey bill and yellowish feet. The short tail has twelve rectrices and the wings are short with round tips. The length ranges from 17 to 19 cm (6.7 to 7.5 in), with females tending to be larger and heavier, weighing 80–119 g (2.8–4.2 oz), compared to males, which weigh 50–87 g (1.8–3.1 oz).
Like other buttonquails, the female is more distinctively coloured than the male. Its head, neck, and breast are black with a chestnut tinge on the crown and rear of its neck, and small white spots on its neck and face forming a moustache- and eyebrow-like pattern. The white spots coalesce into bars on its breast, and its upperparts are dark grey. The male has a whitish face and neck with black speckles and darker ear coverts, and a brown-grey crown and nape. Its breast has black and white bars and spots, with red-brown on its flanks and more grey with dark barring on the rest of its underparts. The juvenile resembles the adult male, though it has a blue-grey iris, duller brown-grey upperparts more heavily blotched with black on the outer back, and less pale streaking. The female makes a low-pitched oom call – a sequence of 5–7 notes that last 1.5–2.0 seconds each – which can be repeated 14–21 (or less commonly 1–4) times. This advertising call cannot be heard more than 50 m (160 ft) away, and is uttered only after there has been sufficient rainfall of 100 mm (4 in) within a few days. The female whistles quietly to its young. The male makes a range of high staccato and clucking alarm or rallying calls, including an ak ak call when separated from others in its covey. Juveniles have a range of chirping or piping calls to induce feeding or raise an alarm. The black markings and large size of the female and the dark markings and whitish face of the male distinguish the species from the co-occurring painted buttonquail (Turnix varius). The regurgitated globular pellets of the black-breasted buttonquail have a distinctive hook at the end, in contrast to those of the painted buttonquail, which are more cylindrical and gently curved.

## Distribution and habitat

The black-breasted buttonquail is found from Hervey Bay in central Queensland south to the northeastern corner of New South Wales, generally in areas receiving 770–1,200 mm (30–47 in) rainfall annually. There had been only ten reports from New South Wales in the decade leading up to 2009. Fieldwork across the Wide Bay–Burnett region from 2016 to 2018 found it in scattered locations in its suitable habitat from Teewah Beach to Inskip Point on the mainland and along the east coast of K'gari. It is found in Palmgrove National Park, which has been identified by BirdLife International as an Important Bird Area for the species. The black-breasted buttonquail was once populous on Inskip Point, with the area a destination for birdwatchers wanting to see this species. Mike West, former president of Birds Queensland, blamed dingoes and wild dogs for wiping out the population. The bird is rare and its habitat is fragmented. It is found in dry rainforest and nearby areas, as well as bottle tree (Brachychiton rupestris) scrub, lantana thickets, dune scrub, and mature hoop pine (Araucaria cunninghamii) plantations with a closed canopy and developed undergrowth. Many canopy plants such as Acacia species produce abundant leaf litter, which the species forages in. No other buttonquail species lives in its type of habitat.

## Behaviour

The black-breasted buttonquail is generally ground-dwelling. It has no hind toe and so cannot perch in trees. If startled, it generally freezes or runs rather than flying.

### Breeding

The usual sex roles are reversed in the buttonquail genus (Turnix), as the larger and more brightly-coloured female mates with multiple male partners and leaves them to incubate the eggs.
The breeding habits of the species are not well known as both the birds and their nests are difficult to find and monitor. There are conflicting reports on the duration of the breeding season; field observations by John Young in northern New South Wales indicate this is restricted to between October and March, yet there are other reports of chicks year-round, suggesting opportune breeding can take place at any time. Minimum temperatures in the studied areas in New South Wales can drop to −2 °C (28 °F) in cooler months; reproduction has been known to be inhibited by cold weather in captivity, hence breeding may be related to temperature in this part of its range. For most of the year, the female black-breasted buttonquail forms a covey with one to three males. During breeding season, the female establishes a territory while the males often form small territories within it. Agonistic behaviour between females has been observed but it is unclear how common it is. The female utters drumming calls as courtship, which is answered by clucking from the male. The nest is a shallow depression measuring 10 by 6 cm (4 by 2.5 in) scraped out of the leaf litter and ground, lined with leaves, moss and dried vegetation. It is often sited between the buttress roots of a plant, or in a crevice or sheltered by a tree root, and within or near undergrowth vegetation such as lantana (Lantana camara), bracken (Pteridium esculentum) or prickly rasp fern (Doodia aspera). It is not known which sex builds the nest. Three or four shiny grey-white or buff eggs splotched with dark brown-black and lavender are laid measuring 28 by 23 mm (1.10 by 0.91 in). Incubation lasts 18 to 21 days. The hatchlings are precocial and nidifugous, and are able to forage by 8–11 days of age, though parents may feed them for two weeks. By 8–12 weeks, they gain adult plumage and are able to breed at three to five months old. ### Feeding The black-breasted buttonquail forages on the ground in large areas of thick leaf litter in vine forest, and thickets of vines or lantana. Leaves fall on these areas year-round, with litter layers 3–10 cm (1–4 in) thick being preferred. A covey of birds scrapes out up to a hundred plate-shaped shallow feeding sites, though ten to forty is more usual. The buttonquail makes these by scratching at the ground with alternate legs in a circular pattern moving either clockwise or counterclockwise, creating the 20 cm (8 in) depression and pecking for invertebrates in the exposed ground. A 1995 study recovered the exoskeletons of ants, beetles (including weevils), spiders such as jumping spiders and the brown trapdoor spider (Euoplos variabilis), centipedes, millipedes, and snails such as Nitor pudibunda from pellets; the remains of soft-bodied invertebrates were not discernible. A 2018 analysis of faecal pellets revealed beetles, ants and earwigs to be prominent, though the authors concluded the black-breasted buttonquail is a generalist insectivore. Plant material was scant, though this might have been an artefact due to its greater digestibility. ## Conservation status The species is classified as vulnerable on the International Union for Conservation of Nature's Red List. It is listed as vulnerable by the Australian Government under the Environment Protection and Biodiversity Conservation Act 1999. On a state level, it is listed as ‘Vulnerable’ under the Queensland Nature Conservation Act 1992 and ‘Endangered’ under the New South Wales Threatened Species Conservation Act 1995. 
The population has been estimated at as few as 2,500 breeding birds and is declining, with no single population containing more than 250 individuals. The dry rainforest it lives in, although often adjacent to wet rainforest, frequently lies outside national parks and protected areas and is thus at risk of further clearance for agriculture or development. Since European settlement, 90% of its habitat has been lost and much of what is left is fragmented. Furthermore, fieldwork in southeast Queensland showed that it did not forage in remnants under 7 ha (17 acres) in area. On the mainland, the birds are also at risk from feral animals such as cats, foxes, and pigs, as well as from humans and weeds. As of 2021, the Butchulla Land and Sea Rangers are collaborating with researchers on a three-year project aimed at reducing threats to the bird and improving its habitat to help ensure its long-term survival. In August 2021 they set up 19 cameras on K'gari and five at Inskip Point and Double Island Point, leaving them in place for seven weeks. They found evidence of damage from feral animals on the mainland, but also recorded young birds, and much evidence of the species at Rainbow Beach and Inskip Point on the mainland and at Dilli Village and Champagne Pools on the island. They are setting pig and cat traps and managing weeds in the area, and will carry out traditional burns in the winter to manage the risk of bushfire on the island.
378,579
Bupropion
1,171,795,963
Substituted cathinone medication mainly used for depression and smoking cessation
[ "5-HT3 antagonists", "Anorectics", "Antidepressants", "Antiobesity drugs", "Aphrodisiacs", "Attention deficit hyperactivity disorder management", "CYP2D6 inhibitors", "Cathinones", "Chlorobenzenes", "Dermatoxins", "Ergogenic aids", "Female sexual dysfunction drugs", "GSK plc brands", "Nicotinic antagonists", "Norepinephrine–dopamine reuptake inhibitors", "Phenethylamines", "Prodrugs", "Smoking cessation", "Stimulants", "Substituted amphetamines", "Tert-butyl compounds", "Wikipedia medicine articles ready to translate", "World Health Organization essential medicines" ]
Bupropion, sold under the brand name Wellbutrin among others, is an atypical antidepressant primarily used to treat major depressive disorder and to support smoking cessation. It is also popular as an add-on medication in cases of "incomplete response" to a first-line selective serotonin reuptake inhibitor (SSRI) antidepressant. Bupropion has several features that distinguish it from other antidepressants: it does not usually cause sexual dysfunction; it is not associated with weight gain or sleepiness; and it is more effective than SSRIs at improving symptoms of hypersomnia and fatigue. Bupropion, particularly the immediate-release formulation, carries a higher risk of seizure than many other antidepressants, hence caution is recommended in patients with a history of seizure disorder. Common adverse effects of bupropion with the greatest difference from placebo are dry mouth, nausea, constipation, insomnia, anxiety, tremor, and excessive sweating. Raised blood pressure is also notable. Rare but serious side effects include seizure, liver toxicity, psychosis, and risk of overdose. Bupropion use during pregnancy may be associated with increased odds of congenital heart defects. Bupropion acts as a norepinephrine–dopamine reuptake inhibitor (NDRI) and a nicotinic receptor antagonist. However, its effects on dopamine are weak. Chemically, bupropion is an aminoketone that belongs to the class of substituted cathinones and more generally to those of substituted amphetamines and substituted phenethylamines. Bupropion was invented by Nariman Mehta, who worked at Burroughs Wellcome, in 1969. It was first approved for medical use in the United States in 1985. Bupropion was originally called by the generic name amfebutamone, before being renamed in 2000. In 2020, it was the eighteenth most commonly prescribed medication in the United States, with more than 28 million prescriptions. It is on the World Health Organization's List of Essential Medicines. ## Medical uses ### Depression The evidence overall supports the efficacy of bupropion over placebo for the treatment of depression; however, the quality of the evidence is low. Most meta-analyses report that bupropion has, at most, a small effect size for depression. Only one meta-analysis reported a large effect size; however, it had methodological limitations, including using a subset of only five trials for the effect size calculation, substantial variability in effect sizes between the selected trials—which led the authors to state that their findings in this area should be interpreted with "extreme caution"—and a general lack of inclusion of unpublished trials. Unpublished trials are more likely to report negative findings, and other meta-analyses have included them. Evidence suggests that the efficacy of bupropion for depression is similar to that of other antidepressants. Over the fall and winter months, bupropion prevents the development of depression in those who have recurring seasonal affective disorder: 15% of participants on bupropion experienced a major depressive episode versus 27% of those on placebo. Bupropion also improves depression in bipolar disorder, with the efficacy and risk of affective switch being similar to those of other antidepressants. Bupropion has several features that distinguish it from other antidepressants: for instance, unlike the majority of antidepressants, it does not usually cause sexual dysfunction, and the occurrence of sexual side effects is no different from that with placebo. 
Bupropion treatment is not associated with weight gain; on the contrary, the majority of studies observed significant weight loss in bupropion-treated participants. Bupropion treatment also is not associated with the sleepiness that may be produced by other antidepressants. Bupropion is more effective than selective serotonin reuptake inhibitors (SSRIs) at improving symptoms of hypersomnia and fatigue in depressed patients. There appears to be a modest advantage for the SSRIs compared to bupropion in the treatment of depression with high anxiety; they are equivalent for depression with moderate or low anxiety. The addition of bupropion to a prescribed SSRI is a common strategy when people do not respond to the SSRI, and it is supported by clinical trials; however, it appears to be inferior to the addition of atypical antipsychotic aripiprazole. ### Smoking cessation Prescribed as an aid for smoking cessation, bupropion reduces the severity of craving for nicotine and withdrawal symptoms such as depressed mood, irritability, difficulty concentrating, and increased appetite. Initially, bupropion slows the weight gain that often occurs in the first weeks after quitting smoking. With time, however, this effect becomes negligible. The bupropion treatment course lasts for seven to twelve weeks, with the patient halting the use of tobacco about ten days into the course. After the course, the effectiveness of bupropion for maintaining abstinence from smoking declines over time, from 37% of tobacco abstinence at 3 months to 20% at one year. It is unclear whether extending bupropion treatment helps to prevent relapse of smoking. Overall, six months after the therapy, bupropion increases the likelihood of quitting smoking by approximately 1.6 fold as compared to placebo. In this respect, bupropion is as effective as nicotine replacement therapy but inferior to varenicline. Combining bupropion and nicotine replacement therapy does not improve the quitting rate. In children and adolescents, the use of bupropion for smoking cessation does not appear to offer any significant benefits. The evidence for its use to aid smoking cessation in pregnant women is insufficient. ### Attention deficit hyperactivity disorder In the United States, the treatment of attention deficit hyperactivity disorder (ADHD) is not an approved indication of bupropion, and it is not mentioned in the current (2019) guideline on the ADHD treatment from the American Academy of Pediatrics. Systematic reviews of bupropion for the treatment of ADHD in both adults and children note that bupropion may be effective for ADHD but warn that this conclusion has to be interpreted with caution, because clinical trials were of low quality due to small sizes and risk of bias. Similarly to atomoxetine, bupropion has a delayed onset of action for ADHD, and several weeks of treatment are required for therapeutic effects. This is in contrast to stimulants, such as amphetamine and methylphenidate, which have an immediate onset of effect in the condition. ### Sexual dysfunction Bupropion is less likely than other antidepressants to cause sexual dysfunction. A range of studies indicate that bupropion not only produces fewer sexual side effects than other antidepressants but can actually help to alleviate sexual dysfunction including sexual dysfunction induced by SSRI antidepressants. 
There have also been small studies suggesting that bupropion or a bupropion/trazodone combination may improve some measures of sexual function in women who have hypoactive sexual desire disorder (HSDD) and are not depressed. According to an expert consensus recommendation from the International Society for the Study of Women's Sexual Health, bupropion can be considered as an off-label treatment for HSDD despite limited safety and efficacy data. Likewise, a 2022 systematic review and meta-analysis of bupropion for sexual desire disorder in women reported that although data were limited, bupropion appeared to be dose-dependently effective for the condition. ### Obesity Bupropion, when used for treating obesity over a period of 6 to 12 months, results in an average weight loss of 2.7 kg (5.9 lb) over placebo. This is not much different from the weight loss produced by several other weight-loss medications such as sibutramine or orlistat. The combination drug naltrexone/bupropion has been approved by the U.S. Food and Drug Administration (FDA) for the treatment of obesity. ### Other uses Bupropion is not effective in the treatment of cocaine dependence, but it shows promise in reducing drug use among light methamphetamine users. Based on studies indicating that bupropion lowers the level of the inflammatory mediator TNF-alpha, there have been suggestions that it might be useful in treating inflammatory bowel disease, psoriasis, and other autoimmune conditions, but very little clinical evidence is available. Bupropion is not effective in treating chronic low back pain. ### Available forms Bupropion is available as an oral tablet in a number of different formulations. It is formulated mostly as the hydrochloride salt but also to a lesser extent as the hydrobromide salt. The available forms of bupropion hydrochloride include IR (immediate-release) tablets (50, 75, 100 mg), SR (sustained-release) tablets (50, 100, 150, 200 mg), and XL (extended-release) tablets (150, 300, 450 mg). The only marketed form of bupropion hydrobromide is Aplenzin, an extended-release oral tablet (174, 348, 522 mg). In addition to single-drug formulations, bupropion is formulated in combinations including naltrexone/bupropion (Contrave; 8 mg/90 mg extended-release tablets) and dextromethorphan/bupropion (Auvelity; 45 mg/105 mg tablets). ## Contraindications The drug label advises that bupropion should not be prescribed to individuals with epilepsy or other conditions that lower the seizure threshold, such as anorexia nervosa, bulimia nervosa, or benzodiazepine or alcohol withdrawal. It should be avoided in individuals who are taking monoamine oxidase inhibitors (MAOIs). The label recommends that caution should be exercised when treating people with liver damage, severe kidney disease, and severe hypertension, and in children, adolescents and young adults due to the increased risk of suicidal ideation. ## Side effects The common adverse effects of bupropion with the greatest difference from placebo are dry mouth, nausea, constipation, insomnia, anxiety, tremor, and excessive sweating. Bupropion has the highest incidence of insomnia of all second-generation antidepressants, apart from desvenlafaxine. It is also associated with about a 20% increased risk of headache. Bupropion raises blood pressure in some people. One study showed an average rise of 6 mm Hg in systolic blood pressure in 10% of patients. 
The prescribing information notes that hypertension, sometimes severe, is observed in some people taking bupropion, both with and without pre-existing hypertension. The safety of bupropion in people with cardiovascular conditions and its general cardiovascular safety profile remain unclear due to the lack of data. Seizure is a rare but serious adverse effect of bupropion. It is strongly dose-dependent: for the immediate-release preparation, the seizure incidence is 0.4% at doses of 300–450 mg per day; the incidence climbs almost ten-fold at the higher-than-recommended dose of 600 mg. For comparison, the incidence of unprovoked seizure in the general population is 0.07 to 0.09%, and the risk of seizure for a variety of other antidepressants is generally between 0 and 0.5% at the recommended doses. Cases of liver toxicity leading to death or liver transplantation have been reported for bupropion. It is considered to be one of several antidepressants with a greater risk of hepatotoxicity. The prescribing information warns about bupropion triggering an angle-closure glaucoma attack. On the other hand, bupropion may decrease the risk of development of open-angle glaucoma. Bupropion use by mothers in the first trimester of pregnancy is associated with a 23% increase in the odds of congenital heart defects in their children. Bupropion has rarely been associated with instances of Stevens–Johnson syndrome. ### Psychiatric The FDA requires all antidepressants, including bupropion, to carry a boxed warning stating that antidepressants may increase the risk of suicide in people younger than 25. This warning is based on a statistical analysis conducted by the FDA which found a 2-fold increase in suicidal thoughts and behavior in children and adolescents, and a 1.5-fold increase in the 18–24 age group. For this analysis the FDA combined the results of 295 trials of 11 antidepressants in order to obtain statistically significant results. Considered in isolation, bupropion was not statistically different from placebo. Bupropion prescribed for smoking cessation results in a 25% increase in the risk of psychiatric side effects, in particular anxiety (about a 40% increase) and insomnia (about an 80% increase). The evidence is insufficient to determine whether bupropion is associated with suicides or suicidal behavior. In rare cases, bupropion-induced psychosis may develop. It is associated with higher doses of bupropion; many of the described cases occurred at higher-than-recommended doses. Concurrent antipsychotic medication appears to be protective. In most cases the psychotic symptoms are eliminated by reducing the dose, ceasing treatment or adding antipsychotic medication. Although studies are lacking, a handful of case reports suggest that abrupt discontinuation of bupropion may cause antidepressant discontinuation syndrome. ## Overdose Bupropion is considered moderately dangerous in overdose. According to an analysis of the US National Poison Data System, adjusted for the number of prescriptions, bupropion and venlafaxine are the two new-generation antidepressants (that is, excluding tricyclic antidepressants) that result in the highest mortality and morbidity. For significant overdoses, seizures have been reported in about a third of all cases; other serious effects include hallucinations, loss of consciousness, and abnormal heart rhythms. When bupropion was one of several kinds of pills taken in an overdose, fever, muscle rigidity, muscle damage, hypertension or hypotension, stupor, coma, and respiratory failure have been reported. 
While most people recover, some people have died, having had multiple uncontrolled seizures and myocardial infarction. ## Interactions Since bupropion is metabolized to hydroxybupropion by the enzyme CYP2B6, drug interactions with CYP2B6 inhibitors are possible: this includes such medications as paroxetine, sertraline, norfluoxetine (active metabolite of fluoxetine), diazepam, clopidogrel, and orphenadrine. The expected result is the increase of bupropion and decrease of hydroxybupropion blood concentration. The reverse effect (decrease of bupropion and increase of hydroxybupropion) can be expected with CYP2B6 inducers such as carbamazepine, clotrimazole, rifampicin, ritonavir, St John's wort, and phenobarbital. Indeed, carbamazepine decreases exposure to bupropion by 90% and increases exposure to hydroxybupropion by 94%. Ritonavir, lopinavir/ritonavir, and efavirenz have been shown to decrease levels of bupropion and/or its metabolites. Ticlopidine and clopidogrel, both potent CYP2B6 inhibitors, have been found to considerably increase bupropion levels as well as decrease levels of its metabolite hydroxybupropion. Bupropion and its metabolites are inhibitors of CYP2D6, with hydroxybupropion responsible for most of the inhibition. Additionally, bupropion and its metabolites may decrease expression of CYP2D6 in the liver. The end effect is a significant slowing of the clearance of other drugs metabolized by this enzyme. For instance, bupropion has been found to increase area-under-the-curve of desipramine, a CYP2D6 substrate, by 5-fold. Bupropion has also been found to increase levels of atomoxetine by 5.1-fold, while decreasing the exposure to its main metabolite by 1.5-fold. As another example, the ratio of dextromethorphan (a drug that is mainly metabolized by CYP2D6) to its major metabolite dextrorphan increased approximately 35-fold when it was administered to people being treated with 300 mg/day bupropion. When people on bupropion are given MDMA, about 30% increase of exposure to both drugs is observed, with enhanced mood but decreased heart rate effects of MDMA. Interactions with other CYP2D6 substrates, such as metoprolol, imipramine, nortriptyline, venlafaxine, and nebivolol have also been reported. However, in a notable exception, bupropion does not seem to affect the concentrations of CYP2D6 substrates fluoxetine and paroxetine. Bupropion lowers the seizure threshold, and therefore can potentially interact with other medications that also lower it, such as antipsychotics, tricyclic antidepressants, theophylline, and systemic corticosteroids. The prescribing information recommends minimizing the use of alcohol, since in rare cases bupropion reduces alcohol tolerance. Caution should be observed when combining bupropion with a monoamine oxidase inhibitor (MAOI), as it may result in hypertensive crisis. ## Pharmacology ### Pharmacodynamics The mechanism of action of bupropion in the treatment of depression and for other indications is unclear. However, it is thought to be related to the fact that bupropion is a norepinephrine–dopamine reuptake inhibitor (NDRI) and antagonist of several nicotinic acetylcholine receptors. It is uncertain whether bupropion is a norepinephrine–dopamine releasing agent. Pharmacological actions of bupropion, to a substantial degree, are due to its active metabolites hydroxybupropion, threo-hydrobupropion, and erythro-hydrobupropion that are present in the blood plasma at comparable or much higher levels. 
In fact, bupropion could accurately be conceptualized as a prodrug of these metabolites. The overall action of these metabolites, and particularly of one enantiomer, S,S-hydroxybupropion, is also characterized by inhibition of norepinephrine and dopamine reuptake and nicotinic antagonism. Bupropion has no meaningful direct activity at a variety of receptors, including α- and β-adrenergic, dopamine, serotonin, histamine, and muscarinic acetylcholine receptors. The occupancy of the dopamine transporter (DAT) by bupropion (300 mg/day) and its metabolites in the human brain, as measured by several positron emission tomography (PET) studies, is approximately 20%, with occupancy estimates ranging from about 14 to 26%. For comparison, the NDRI methylphenidate at therapeutic doses is thought to occupy greater than 50% of DAT sites. In accordance with its low DAT occupancy, no measurable dopamine release in the human brain was detected with bupropion (one 150 mg dose) in a PET study. Bupropion has also been shown to increase striatal VMAT2, though it is unknown if this effect is more pronounced than with other DRIs. These findings raise questions about the role of dopamine reuptake inhibition in the pharmacology of bupropion, and suggest that other actions may be responsible for its therapeutic effects. More research is needed in this area. No data are available on occupancy of the norepinephrine transporter (NET) by bupropion and its metabolites. However, due to the greater exposure to hydroxybupropion than to bupropion itself, and hydroxybupropion's higher affinity for the NET than for the DAT, bupropion's overall pharmacological profile in humans may effectively be more that of a norepinephrine reuptake inhibitor than of a dopamine reuptake inhibitor. Accordingly, the clinical effects of bupropion are more consistent with noradrenergic activity than with dopaminergic actions. ### Pharmacokinetics After oral administration, bupropion is rapidly and completely absorbed, reaching the peak blood plasma concentration after 1.5 hours (t<sub>max</sub>). Sustained-release (SR) and extended-release (XL) formulations have been designed to slow absorption, resulting in a t<sub>max</sub> of 3 hours and 5 hours, respectively. The absolute bioavailability of bupropion is unknown but is presumed to be low, at 5–20%, due to first-pass metabolism. As for the relative bioavailability of the formulations, the XL formulation has lower bioavailability (68%) than the SR formulation and immediate-release bupropion. Bupropion is metabolized in the body by a variety of pathways. The oxidative pathways are mediated by the cytochrome P450 isoenzymes CYP2B6, leading to R,R- and S,S-hydroxybupropion, and, to a lesser degree, CYP2C19, leading to 4'-hydroxybupropion. The reductive pathways are mediated by 11β-hydroxysteroid dehydrogenase type 1 in the liver and AKR7A2/AKR7A3 in the intestine, leading to threo-hydrobupropion, and by a yet unidentified enzyme leading to erythro-hydrobupropion. The metabolism of bupropion is highly variable: the effective doses of bupropion received by persons who ingest the same amount of the drug may differ by as much as 5.5 times (with a half-life of 12–30 hours), while the effective doses of hydroxybupropion may differ by as much as 7.5 times (with a half-life of 15–25 hours). Based on this, some researchers have advocated monitoring of the blood level of bupropion and hydroxybupropion. 
## Chemistry Bupropion is an aminoketone that belongs to the class of substituted cathinones and the more general class of substituted phenethylamines. The clinically used bupropion is racemic, that is, a mixture of two enantiomers: S-bupropion and R-bupropion. Although the optical isomers of bupropion can be separated, they rapidly racemize under physiological conditions. There have been reported cases of false-positive urine amphetamine tests in persons taking bupropion. ### Synthesis It is synthesized in two chemical steps starting from 3'-chloro-propiophenone. The alpha position adjacent to the ketone is first brominated; nucleophilic displacement of the resulting alpha-bromoketone with t-butylamine, followed by treatment with hydrochloric acid, then gives bupropion as the hydrochloride salt in 75–85% overall yield. ## History Bupropion was invented by Nariman Mehta of Burroughs Wellcome (now GlaxoSmithKline) in 1969, and the US patent for it was granted in 1974. It was approved by the U.S. Food and Drug Administration (FDA) as an antidepressant on 30 December 1985, and marketed under the name Wellbutrin. However, a significant incidence of seizures at the originally recommended dosage (400–600 mg/day) caused the withdrawal of the drug in 1986. Subsequently, the risk of seizures was found to be highly dose-dependent, and bupropion was re-introduced to the market in 1989 with a lower maximum recommended daily dose of 450 mg/day. In 1996, the FDA approved a sustained-release formulation of bupropion called Wellbutrin SR, intended to be taken twice a day (as compared with three times a day for immediate-release Wellbutrin). In 2003, the FDA approved another sustained-release formulation called Wellbutrin XL, intended for once-daily dosing. Wellbutrin SR and XL are available in generic form in the United States and Canada. In 1997, bupropion was approved by the FDA for use as a smoking cessation aid under the name Zyban. In 2006, Wellbutrin XL was similarly approved as a treatment for seasonal affective disorder. In October 2007, two providers of consumer information on nutritional products and supplements, ConsumerLab.com and The People's Pharmacy, released the results of comparative tests of different brands of bupropion. The People's Pharmacy received multiple reports of increased side effects and decreased efficacy of generic bupropion, which prompted it to ask ConsumerLab.com to test the products in question. The tests showed that "one of a few generic versions of Wellbutrin XL 300 mg, sold as Budeprion XL 300 mg, didn't perform the same as the brand-name pill in the lab." The FDA investigated these complaints and concluded that Budeprion XL is equivalent to Wellbutrin XL in regard to bioavailability of bupropion and its main active metabolite hydroxybupropion. The FDA also said that coincidental natural mood variation is the most likely explanation for the apparent worsening of depression after the switch from Wellbutrin XL to Budeprion XL. On 3 October 2012, however, the FDA reversed this opinion, announcing that "Budeprion XL 300 mg fails to demonstrate therapeutic equivalence to Wellbutrin XL 300 mg." The FDA did not test the bioequivalence of any of the other generic versions of Wellbutrin XL 300 mg, but requested that the four manufacturers submit data on this question to the FDA by March 2013. As of October 2013, the FDA had determined that the formulations from some of these manufacturers were not bioequivalent. 
In April 2008, the FDA approved a formulation of bupropion as a hydrobromide salt instead of a hydrochloride salt, to be sold under the name Aplenzin by Sanofi-Aventis. In 2009, the FDA issued a health advisory warning that the prescription of bupropion for smoking cessation had been associated with reports of unusual behavior changes, agitation, and hostility. Some people, according to the advisory, have become depressed or have had their depression worsen, have had thoughts about suicide or dying, or have attempted suicide. This advisory was based on a review of anti-smoking products that identified 75 reports of "suicidal adverse events" for bupropion over ten years. Based on the results of follow-up trials, this warning was removed in 2016. In 2012, the U.S. Justice Department announced that GlaxoSmithKline had agreed to plead guilty and pay a \$3 billion fine, in part for promoting the unapproved use of Wellbutrin for weight loss and sexual dysfunction. In 2017, the European Medicines Agency recommended suspending a number of nationally approved medicines due to misrepresentation of bioequivalence study data by Micro Therapeutic Research Labs in India. The products recommended for suspension included several 300 mg modified-release bupropion tablets. ## Society and culture ### Recreational use While bupropion demonstrates some potential for misuse, this potential is less than that of other commonly used stimulants, being limited by features of its pharmacology. Case reports describe misuse of bupropion as producing a "high" similar to that from cocaine or amphetamine use, but with less intensity. Bupropion misuse is uncommon. There have been a number of anecdotal and case-study reports of bupropion abuse, but the bulk of evidence indicates that the subjective effects of bupropion when taken orally are markedly different from those of addictive stimulants such as cocaine or amphetamine. However, bupropion, by non-conventional routes of administration like injection or insufflation, has been reported to be misused in the United States and Canada, notably in prisons. ### Legal status In Russia, bupropion is banned as a narcotic drug – not per se, but as a derivative of methcathinone. In Australia, France, and the UK, smoking cessation is the only licensed use of bupropion, and no generics are marketed. ## Brand names Brand names include Wellbutrin, Aplenzin, Budeprion, Buproban, Forfivo, Zyban, Bupron, Bupisure, Bupep, Smoquite, Elontril, Buxon.
1,035,611
Battle of Marais des Cygnes
1,163,037,115
Battle of the American Civil War
[ "1864 in Kansas", "Battles of the American Civil War in Kansas", "Battles of the Trans-Mississippi Theater of the American Civil War", "Conflicts in 1864", "Linn County, Kansas", "October 1864 events", "Price's Missouri Expedition", "Union victories of the American Civil War" ]
The Battle of Marais des Cygnes (/ˌmɛər də ˈziːn, - ˈsiːn, ˈmɛər də ziːn/) took place on October 25, 1864, in Linn County, Kansas, as part of Price's Missouri Expedition during the American Civil War. It is also known as the Battle of Trading Post. In late 1864, Confederate Major General Sterling Price invaded the state of Missouri with a cavalry force, attempting to draw Union troops away from the primary theaters of fighting further east. After several victories early in the campaign, Price's Confederate troops were defeated at the Battle of Westport on October 23 near Kansas City, Missouri. The Confederates then withdrew into Kansas, camping along the banks of the Marais des Cygnes River on the night of October 24. Union cavalry pursuers under Brigadier General John B. Sanborn skirmished with Price's rearguard that night, but disengaged without participating in heavy combat. Overnight, Sanborn's troops were reinforced by cavalry under Lieutenant Colonel Frederick W. Benteen, bringing the total Union strength to 3,500. The battle began early the next morning as Sanborn drove Major General John S. Marmaduke's Confederate rearguard from its position north of the river. Union troops captured cannons, prisoners, and wagons during this stage of the fighting. Marmaduke attempted to make a stand at the river crossing, but his position was outflanked by a Union cavalry regiment, forcing him to abandon it. A rearguard action by Confederate Brigadier General John B. Clark Jr.'s 1,200-man brigade bought Price more time to retreat and disengage. Some of Price's men were caught near Mine Creek later that morning and were badly beaten in the Battle of Mine Creek. That evening, the Battle of Marmiton River became the day's third action, after which Price burned his supply train so that it no longer slowed the retreat. After another defeat at the Second Battle of Newtonia on October 28, Price's column retreated to Texas through Arkansas and the Indian Territory. Only 3,500 of the 12,000 men Price had brought into Missouri remained in his force. ## Background When the American Civil War began in 1861, Missouri was a slave state but did not secede, as it was politically divided. Governor Claiborne Fox Jackson and the Missouri State Guard (MSG) supported secession and the Confederate States of America, while Brigadier General Nathaniel Lyon and the portion of the Union Army under his command supported the United States and opposed secession. Under Major General Sterling Price, the MSG defeated Union armies at the battles of Wilson's Creek and Lexington in 1861, but by the end of the year, Price and the MSG were restricted to the southwestern portion of the state due to the arrival of Union reinforcements. Meanwhile, Jackson and a portion of the state legislature voted to secede and join the Confederate States of America, while another element voted to reject secession, essentially giving the state two governments. In March 1862, a Confederate defeat at the Battle of Pea Ridge in Arkansas gave the Union control of Missouri. For the rest of the year, and through 1863, Confederate activity in the state was largely restricted to guerrilla warfare and raids. By the beginning of September 1864, events in the eastern United States, especially the Confederate defeat in the Atlanta campaign, gave incumbent president Abraham Lincoln, who supported continuing the war, an edge in the 1864 United States presidential election over George B. McClellan, who promoted ending the war. 
At this point, the Confederacy had very little chance of winning the war. Meanwhile, in the Trans-Mississippi Theater, the Confederates had defeated Union attackers in the Red River campaign in Louisiana in March through May. As events east of the Mississippi River turned against the Confederates, General Edmund Kirby Smith, commander of the Confederate Trans-Mississippi Department, was ordered to transfer the infantry under his command to the fighting in the Eastern and Western Theaters. This proved to be impossible, as the Union Navy controlled the Mississippi River, preventing a large-scale crossing. Despite having limited resources for an offensive, Smith decided that an attack designed to divert Union troops from the principal theaters of combat would have the same effect as the proposed transfer of troops. Price and the new Confederate Governor of Missouri Thomas Caute Reynolds suggested that an invasion of Missouri would be an effective offensive; Smith approved the plan and appointed Price to command the campaign. Many of the Union troops previously defending Missouri had been transferred out of the state, leaving the Missouri State Militia as the state's primary defensive force. Price expected that the offensive would create a popular uprising against Union control of Missouri, divert Union troops away from principal theaters of combat, and aid McClellan's chance of defeating Lincoln. Price's column entered the state on September 19. This force was formally known as the Army of Missouri and contained three divisions, which were commanded by Major Generals James F. Fagan and John S. Marmaduke and Brigadier General Joseph O. Shelby. The Confederates had 13,000 cavalrymen and 14 small-bore cannons. ## Prelude By September 24, Price's column had reached Fredericktown, where he learned that the town of Pilot Knob and the St. Louis & Iron Mountain Railroad were held by Union forces under the command of Brigadier General Thomas Ewing Jr. Price had no interest in allowing an enemy force to operate in the rear of his army while he advanced to St. Louis, so he sent Marmaduke and Fagan's divisions to Pilot Knob; Shelby and his men operated north of the town. On September 26, Ewing's command fought off Fagan's division at Arcadia before withdrawing to the defenses of Fort Davidson. The next day, Price moved against the fort and offered Ewing surrender terms; the latter refused, as he was afraid of being executed for his unpopular issuance of General Order No. 11 the previous year. Holding out, the Union defenders repulsed multiple assaults, before slipping out of the fort at 03:00 on September 28. The Confederates suffered at least 800 casualties during the engagement and their morale decreased, leading Price to abandon the attempt against St. Louis. After abandoning the St. Louis thrust, Price's army headed for Jefferson City, although the Confederates were slowed by bringing along a large supply train. On October 7, the Confederates approached Jefferson City, which was held by about 7,000 men – mostly inexperienced militia – commanded by Brigadier General Egbert Brown. Faulty Confederate intelligence placed the Union strength at 15,000, and Price, fearing another defeat like Pilot Knob, decided not to attack the city, and began moving his army toward Boonville the next day. Boonville was in the pro-Confederate region of Little Dixie, and Price was able to recruit new soldiers. Estimates of the number of new recruits vary between writers: the historian Charles D. 
Collins states 1,200 men; Christopher Phillips, writing for the Kansas City Public Library, provides 2,000 men; and the historian Kyle Sinisi states that a minimum of 2,500 men joined the Confederates in the region. Price, needing weapons, authorized two raids away from his main body of troops: Brigadier General John B. Clark Jr. and 1,800 men were sent to Glasgow, and Brigadier General M. Jeff Thompson led Shelby's Iron Brigade to Sedalia. Both raids were successful. Price's army next fought a series of engagements as it moved westwards towards Kansas City, Missouri, culminating in the Battle of Westport on October 23. At Westport, the Confederates were soundly defeated by the commands of Major Generals James G. Blunt and Alfred Pleasonton. Shelby's men provided the Confederates with a rearguard, and the Army of Missouri retreated southwards. The Confederates still had a large supply train with them, slowing the retreat. By the evening of October 24, the Army of Missouri had entered Kansas; Confederate soldiers looted and burned as they went. That night, Price camped near Trading Post in Linn County, with the camp split into two segments by the Marais des Cygnes River. Price believed that the Union pursuers would attempt to swing around his flank and block his path of retreat and was not expecting a significant Union force to attack the Trading Post position. Meanwhile, the Union pursuers were at West Point, Missouri. Blunt suggested an ambitious flanking movement, but was overruled by Major General Samuel R. Curtis, commander of the Department of Kansas. The plan would have involved only using a token force to attack the Confederate position at the Marais des Cygnes and slipping most of the rest of the Union army around the Confederate flank to attack Price's army in the morning. Both the flanking movement and crossing a river at night posed risks, and Blunt's plan did not consider the fact that the terrain south of the Marais des Cygnes was not conducive to rapid movement. It also assumed that the Confederates would remain stationary. Instead, Curtis ordered Pleasonton to make a frontal attack against Price. Pleasonton, who was heavily fatigued, gave temporary control of his division to Brigadier General John B. Sanborn. Sanborn moved against Price with a cavalry force at Trading Post late on the night of the 24th. His line, which consisted of the 4th Iowa Cavalry Regiment and three companies of the 2nd Colorado Cavalry Regiment on the right and the 6th and 8th Missouri State Militia Cavalry Regiments on the left, made contact with Fagan's Confederates, who were now serving as the Confederate rearguard. A brief friendly fire incident involving the 4th Iowa Cavalry and the 2nd Colorado Cavalry ensued due to the Iowans being unaware of the presence of the Colorado unit in their front, as well as some light skirmishing with Fagan's forces. Sanborn was unsure of the Confederates' strength, but thought it might be as many as 10,000 men. With his men fatigued and operating in a thunderstorm, he withdrew most of his line, except for the 6th Missouri State Militia Cavalry, which continued skirmishing throughout the night. Fagan informed Price of the action, and the Confederates began retreating about midnight. ## Battle At around 01:00 the next morning, Curtis was informed that Sanborn had disengaged. Wishing to continue to press Price, he ordered Sanborn to attack at daybreak. 
Curtis had some of his staff officers assist Sanborn, who had been at least partially stymied by the lack of such support. An artillery battery was deployed at this time. Around 02:00, Fagan and Shelby withdrew their troops, and Marmaduke aligned his division to serve as a rearguard; it was over 2,000 strong. Marmaduke withdrew his main force south of the river, but left a skirmish line on a pair of mounds that were 140 feet (43 m) tall. During the night, part of the 2nd Colorado Cavalry broke through the Confederate skirmish line before withdrawing again. A Missouri State Militia unit and the 4th Iowa Cavalry also advanced under cover of darkness. During the night, Sanborn was reinforced by elements of Lieutenant Colonel Frederick W. Benteen's cavalry brigade. One of Benteen's regiments was detached to guard a river crossing to the north to prevent a Confederate flanking attack. At 04:00, Sanborn's artillery, six 3-inch ordnance rifles, opened fire on the Confederate line. At daybreak, the 4th Iowa Cavalry on the Union right attacked, using the broken ground as cover. Union artillery fired on the mounds, but despite aiming at a 15° elevation, overshot the elevated Confederate positions. Some of the misses struck the Confederate camp, accelerating its evacuation. Confederate marksmanship at that portion of the line was very poor, and the Iowans easily took the position, which consisted of one of the mounds. The 6th and 8th Missouri State Militia Cavalry Regiments attacked on the other end of the line. Again, the fire from the Confederate defenders was ineffective. Both sides were hampered by the rough terrain. The Confederate commander facing the two militia cavalry regiments feared being isolated from Marmaduke's main body on the other side of the river, so the mound was abandoned. The retreat was not detected until after the position had been completely abandoned. The historian Kyle Sinisi wrote that casualties during this stage of the fighting "appear to have been almost nonexistent". With Confederate resistance north of the river broken, Sanborn deployed the 3rd Iowa Cavalry Regiment and the 10th Missouri Cavalry Regiment, as well as the 2nd Arkansas Cavalry Regiment, to exploit the breakthrough. The 2nd Arkansas Cavalry spearheaded the pursuit. Union forces captured 100 Confederate soldiers, as well as two cannons and some wagons, north of the river, including around the Trading Post settlement. Large quantities of equipment, personal effects, and partially cooked food were found left in the camp, including the partially butchered carcasses of livestock. Marmaduke had positioned men and three cannons from Hynson's Texas Battery just south of the river crossing, and the pursuing Union troops were temporarily halted, as there were not enough Union soldiers on the field to challenge the Confederate line directly. The river crossing was obstructed with two downed trees and defended by some men from Colonel Thomas R. Freeman's command. Sanborn ordered the 7th Provisional Enrolled Missouri Militia to cross the river upstream from the Confederate position, successfully outflanking the Confederate line and opening a path across the river. As the 7th Provisional Enrolled Missouri Militia cleared the approach from the flank, the 2nd Arkansas also drove across the river. A tributary of the Marais des Cygnes, named Big Sugar Creek, presented another challenge to the crossing. An alternate crossing of the Marais des Cygnes bypassed this roadblock, but Sanborn was not aware of its existence. 
Serving as a rearguard, Clark aligned his brigade in the path of the Union advance. This line was spotted by Sanborn's men after they forced their way through some forest growth around the river. Sanborn drew up a line with two Provisional Enrolled Missouri Militia units thrown out as skirmishers, and the 2nd Arkansas Cavalry, 2nd Colorado Cavalry, and two additional militia units forming his main line. Confused as to what to do, Sanborn left to personally find Curtis for orders and left Colonel John E. Phelps, commander of the 2nd Arkansas Cavalry, in charge in his absence. Phelps's orders were not to attack unless reinforced, but he assaulted Clark's line with 200 men from his own unit and the two Missouri State Militia commands anyway. The attack was initially successful, but halted and was repulsed. Curtis and Pleasonton had joined Sanborn by this point, and observed the 2nd Arkansas' repulse. They attempted to bring more troops to Phelps' support, but Price's wagons had cut up the roads during their retreat, making maneuvers difficult. By 09:00, Pleasonton, who had regained command of his division from Sanborn, formed a line with the cavalry brigades commanded by Sanborn, Benteen, and Colonel John F. Philips. A small unit of Union artillery also joined the line. Sanborn's command outflanked the right of Clark's line and forced the Confederates to withdraw; another Confederate cannon was captured when Hynson's battery abandoned it during the retreat. Clark's brigade formed a new line containing around 1,200 men, but the weight of the 3,500 Union troopers now present was too much for the Confederates. After Philips's troops threatened his left, Clark ordered a retreat from the field around 10:00. Colonel Colton Greene and his 3rd Missouri Cavalry Regiment provided a rearguard for the Confederates. ## Aftermath Later that morning, Philips and Benteen's troops encountered some of Price's men at the crossing of Mine Creek. The Union troops quickly attacked, and the ensuing Battle of Mine Creek became one of the largest battles between mounted cavalry during the war. The Confederates suffered a serious defeat, as several cannons and about 600 men, including Marmaduke, were captured. Shelby's division served as a rearguard, fighting the Battle of Marmiton River that evening. By the end of October 25, Price's army was so shattered and demoralized that the historian Albert E. Castel described it as essentially an armed mob. That night, Price burned most of his wagon train near Deerfield, Missouri so that it was no longer an encumbrance. By October 28, the Confederates had reached Newtonia, Missouri, where they were defeated by the commands of Blunt and Sanborn in the Second Battle of Newtonia. Price's army began to disintegrate, and the Confederates retreated first into Arkansas and then into the Indian Territory and Texas. Price's Raid, the last major offensive in the Trans-Mississippi Theater, was a failure. By December, Price only had 3,500 men left in an army that had begun the campaign with 12,000. ## Legacy Over 937 acres (379 ha) of the battlefield are preserved by government agencies: 150 acres (61 ha) by the Kansas Department of Fish and Wildlife, and 787.25 acres (318.59 ha) by the United States Fish and Wildlife Service; the land under the control of the latter agency is within Marais des Cygnes National Wildlife Refuge. As of 2010, 92 percent of the battlefield retains historical integrity; of this, only 19 percent is included in the wildlife refuge. 
Since the land is preserved as a wildlife site instead of a historic site, the only public interpretation of the battle is some signage and trails present at a rest stop maintained by the Kansas Department of Transportation. U.S. Route 69 and Kansas State Highway 52 run through the northern portion of the battlefield, although the landscape is generally free from major development. The site of the battle is not listed on the National Register of Historic Places as of January 2021, although a 2010 survey performed by the American Battlefield Protection Program determined that it is likely eligible for listing. ## See also - List of battles fought in Kansas - Kansas in the American Civil War
5,687,218
Rodrigues parrot
1,165,736,735
Extinct species of parrot that was endemic to Rodrigues
[ "Bird extinctions since 1500", "Birds described in 1867", "Extinct birds of Indian Ocean islands", "Fauna of Rodrigues", "Parrots of Africa", "Psittacidae" ]
The Rodrigues parrot or Leguat's parrot (Necropsittacus rodricanus) is an extinct species of parrot that was endemic to the Mascarene island of Rodrigues. The species is known from subfossil bones and from mentions in contemporary accounts. It is unclear to which other species it is most closely related, but it is classified as a member of the tribe Psittaculini, along with other Mascarene parrots. The Rodrigues parrot bore similarities to the broad-billed parrot of Mauritius, and may have been related. Two additional species have been assigned to its genus (N. francicus and N. borbonicus), based on descriptions of parrots from the other Mascarene islands, but their identities and validity have been debated. The Rodrigues parrot was green, and had a proportionally large head and beak and a long tail. Its exact size is unknown, but it may have been around 50 cm (20 in) long. It was the largest parrot on Rodrigues, and it had the largest head of any Mascarene parrot. It may have looked similar to the great-billed parrot. By the time it was discovered, it frequented and nested on islets off southern Rodrigues, where introduced rats were absent, and fed on the seeds of the shrub Fernelia buxifolia. The species was last mentioned in 1761, and probably became extinct soon after, perhaps due to a combination of predation by introduced animals, deforestation, and hunting by humans. ## Taxonomy Birds thought to be the Rodrigues parrot were first mentioned by the French traveler François Leguat in his 1708 memoir, A New Voyage to the East Indies. Leguat was the leader of a group of nine French Huguenot refugees who colonised Rodrigues between 1691 and 1693 after they were marooned there. Subsequent accounts were written by the French sailor Julien Tafforet, who was marooned on the island in 1726, in his Relation de l'Île Rodrigue, and then by the French astronomer Alexandre Pingré, who travelled to Rodrigues to view the 1761 transit of Venus. In 1867, the French zoologist Alphonse Milne-Edwards described subfossil bird bones from Rodrigues he had received via the British ornithologist Alfred Newton, which had been excavated under the supervision of his brother, Colonial Secretary Edward Newton. Among the bones was a fragmentary front part of an upper beak that he identified as belonging to a parrot. Based on this beak, he scientifically described and named the new species Psittacus rodricanus. While he found the bone similar to the beaks of the lories in the genus Eclectus, he preferred to give it a less precise classification than assigning it to that genus, due to the scant remains. The specific name rodricanus refers to Rodrigues, which is itself named after the discoverer of the island, the Portuguese navigator Diogo Rodrigues. Milne-Edwards corrected the spelling of the specific name to rodericanus (in a footnote in an 1873 compilation of his articles about extinct birds that included the original description), a spelling which was used in the literature henceforward, but was changed back to rodricanus by the IOC World Bird List in 2014. After receiving a more complete upper and lower beak which he thought showed the bird to be close to the parrot genus Palaeornis, Milne-Edwards moved the species to its own genus Necropsittacus in 1873; the name is derived from the Greek words nekros, which means dead, and psittakos, parrot, in reference to the bird being extinct. In another footnote to his 1873 compilation, Milne-Edwards correlated the subfossil species with parrots mentioned by Leguat. 
In 1875, A. Newton analysed Tafforet's then newly rediscovered account, and identified a description of the Rodrigues parrot. In a footnote in an 1891 edition of Leguat's memoir, the British writer Samuel Pasfield Oliver doubted that the parrots mentioned were the Rodrigues parrot, due to their smaller size, and suggested they may have been Newton's parakeet. As Leguat mentioned both green and blue parrots in the same sentence, the British palaeontologist Julian Hume suggested in 2007 that these could either be interpreted as references to both the Rodrigues parrot and Newton's parakeet, or as two colour morphs of the latter. The current whereabouts of the holotype beak are unknown. It may be specimen UMZC 575, a rostrum that was sent from Milne-Edwards to A. Newton after 1880, which matches the drawing and description in Milne-Edwards's paper, but this cannot be confirmed. In 1879 the German ornithologist Albert Günther and E. Newton described more fossils of the Rodrigues parrot, including a skull and limb bones. Remains of the species are scarce, but subfossils have been discovered in caves on the Plaine Corail and in Caverne Tortue. ### Evolution Many endemic Mascarene birds, including the dodo, are derived from South Asian ancestors, and the British ecologist Anthony S. Cheke and Hume have proposed that this may be the case for all the parrots there as well. Sea levels were lower during the Pleistocene, so it was possible for species to colonise some of the then less isolated islands. Although most extinct parrot species of the Mascarenes are poorly known, subfossil remains show that they shared features such as enlarged heads and jaws, reduced pectoral bones, and robust leg bones. In 1893, E. Newton and the German ornithologist Hans Gadow found the Rodrigues parrot to be closely related to the broad-billed parrot due to their large jaws and other osteological features, but were unable to determine whether they both belonged in the same genus, since a head-crest was only known from the latter. The British ornithologist Graham S. Cowles instead found their skulls too dissimilar for them to be close relatives in 1987. Hume has suggested that the Mascarene parrots have a common origin in the radiation of the tribe Psittaculini, basing this theory on morphological features and the fact that parrots of that group have managed to colonise many isolated islands in the Indian Ocean. The Psittaculini may have invaded the area several times, as many of the species were so specialised that they may have evolved significantly on hotspot islands before the Mascarenes emerged from the sea. ### Hypothetical extinct relatives The British zoologist Walther Rothschild assigned two hypothetical parrot species from the other Mascarene Islands to the genus Necropsittacus; N. francicus in 1905 and N. borbonicus in 1907. Rothschild gave the original description of N. francicus as "head and tail fiery red, rest of body and tail green", and stated it was based on descriptions from voyages to Mauritius in the 17th and early 18th century. N. borbonicus (named for Bourbon, the original name of Réunion) was based on a single account by the French traveller Sieur Dubois, who mentioned "green parrots of the same size [presumably as the Réunion parakeet] with head, upper parts of the wings, and tail the colour of fire" on Réunion. Rothschild considered it to belong to Necropsittacus since Dubois compared it to related species. 
The two assigned Necropsittacus species have since become the source of much taxonomic confusion, and their identities have been debated. N. borbonicus later received common names such as Réunion red and green parakeet or Réunion parrot, and N. francicus has been called the Mauritian parrot. The Japanese ornithologist Masauji Hachisuka recognised N. borbonicus in 1953, and published a restoration of it with the colouration described by Dubois and the body-plan of the Rodrigues parrot. He did not find the naming of N. francicus to have been necessary, but expressed hope that more evidence would be found. In 1967, the American ornithologist James Greenway suggested that N. borbonicus may have been an escaped pet lory seen by Dubois, since 16th-century Dutch paintings show the somewhat similar East Indian chattering lory, presumably in captivity. However, Greenway was unable to find any references that matched those Rothschild had given for N. francicus. In 1987, Cheke found the described colour-pattern of N. borbonicus reminiscent of Psittacula parrots, but considered N. francicus to be based on confused reports. In 2001, the British writer Errol Fuller suggested Dubois's account of N. borbonicus could either have referred to an otherwise unrecorded species or have been misleading, and found N. francicus to be "one of the most dubious of all hypothetical species". In 2007, Hume suggested that Rothschild had associated N. borbonicus with the Rodrigues parrot because he had mistakenly incorporated Dubois's account into his description of the latter; he stated the Rodrigues parrot also had red plumage (though it was all-green), and had been mentioned by Dubois (who never visited Rodrigues). Rothschild also attributed the sighting of N. francicus to Dubois, repeating the colour-pattern he had described earlier for the Rodrigues parrot, and this led Hume to conclude that the name N. francicus was based solely on "the muddled imagination of Lord Rothschild". Hume added that if Dubois's description of N. borbonicus was based on a parrot endemic to Réunion, it may have been derived from the Alexandrine parakeet, which has a similar colouration, apart from the red tail.

## Description

The Rodrigues parrot was described as being the largest parrot species on the island, with a big head and a long tail. Its plumage was described as being of uniform green colouration. Its skull was flat and depressed compared to those of most other parrots, but similar to the genus Ara. The skull was 50 mm (2.0 in) long without the beak, 38 mm (1.5 in) wide, and 24 mm (0.94 in) deep. The coracoid (part of the shoulder) was 35 mm (1.4 in) long, the humerus (upper-arm bone) 53 mm (2.1 in), the ulna (lower-arm bone) 57 mm (2.2 in), the femur (thigh-bone) 49 mm (1.9 in), the tibia (lower-leg bone) 63 mm (2.5 in), and the metatarsus (foot bone) 22 mm (0.87 in). Its exact body length is unknown, but it may have been around 50 cm (20 in), comparable to the size of a large cockatoo. Its tibia was 32% smaller than that of a female broad-billed parrot, yet the pectoral bones were of similar size, and proportionally its head was the largest of any Mascarene species of parrot. The Rodrigues parrot was similar in skeletal structure to the parrot genera Tanygnathus and Psittacula. The pectoral and pelvic bones were similar in size to those of the New Zealand kaka, and it may have looked like the great-billed parrot in life, but with a larger head and tail.
It differed from other Mascarene parrots in several skeletal features, including having nostrils that faced upwards instead of forwards. No features of the skull suggest it had a crest like the broad-billed parrot, and there is not enough fossil evidence to determine whether it had pronounced sexual dimorphism. There are intermediate specimens between the longest and shortest examples of the known skeletal elements, which indicates there were no distinct size groups.

## Behaviour and ecology

Tafforet's 1726 description is the only detailed account of the Rodrigues parrot in life:

> The largest are larger than a pigeon, and have a tail very long, the head large as well as the beak. They mostly come on the islets which are to the south of the island, where they eat a small black seed, which produces a small shrub whose leaves have the smell of the orange tree, and come to the mainland to drink water ... they have their plumage green.

Tafforet also mentioned that the parrots ate the seeds of the shrub Fernelia buxifolia ("bois de buis"), which is endangered today, but was common all over Rodrigues and nearby islets during his visit. Leguat mentioned that the parrots of the island ate the nuts of the tree Cassine orientalis ("bois d'olive"). Due to a large population of introduced rats on Rodrigues, the parrots, the Rodrigues starling, and the Rodrigues pigeon frequented and nested on offshore islets, where the rats were absent.

Many of the other endemic species of Rodrigues became extinct after the arrival of humans, so the ecosystem of the island is heavily damaged. Before humans arrived, forests covered the island entirely, but very little remains today due to deforestation. The Rodrigues parrot lived alongside other recently extinct birds such as the Rodrigues solitaire, the Rodrigues rail, Newton's parakeet, the Rodrigues starling, the Rodrigues scops owl, the Rodrigues night heron, and the Rodrigues pigeon. Extinct reptiles include the domed Rodrigues giant tortoise, the saddle-backed Rodrigues giant tortoise, and the Rodrigues day gecko.

## Extinction

Of the eight or so parrot species endemic to the Mascarenes, only the echo parakeet of Mauritius has survived. The others were likely all made extinct by a combination of excessive hunting and deforestation by humans. Like mainland Rodrigues, the offshore islets were eventually infested by rats, which is believed to have caused the demise of the Rodrigues parrot and other birds there; the rats probably preyed on their eggs and chicks. Cats may also have hunted the remaining birds. Leguat mentioned use of local parrots as food, but it is uncertain whether the green species was the Rodrigues parrot or a green Newton's parakeet:

> There are abundance of green and blew Parrets, they are of a midling and equal bigness; when they are young, their Flesh is as good as young Pigeons.

Pingré indicated in 1761 that local species were popular game, and found that the Rodrigues parrot was rare:

> The perruche [Newton's parakeet] seemed to me much more delicate [than the flying-fox]. I would not have missed any game from France if this one had been commoner in Rodrigues; but it begins to become rare. There are even fewer perroquets [Rodrigues parrots], although there were once a big enough quantity according to François Leguat; indeed a little islet south of Rodrigues still retains the name Isle of Parrots [Isle Pierrot].

Pingré also reported that the island was becoming deforested by tortoise hunters who set fires to clear vegetation.
Along with direct hunting of the parrots, this likely led to a reduction in the population of Rodrigues parrots. Pingré's 1761 account is the last known mention of the species, and it probably became extinct soon after.
7,715,205
2005 Azores subtropical storm
1,170,526,078
Unnamed Atlantic subtropical storm
[ "2005 Atlantic hurricane season", "Hurricanes in the Azores", "Subtropical storms", "Tropical cyclones in 2005" ]
The 2005 Azores subtropical storm was the 19th nameable storm and only subtropical storm of the extremely active 2005 Atlantic hurricane season. It was not named by the National Hurricane Center as it was operationally classified as an extratropical low. It developed in the eastern Atlantic Ocean, an unusual region for late-season tropical cyclogenesis. Nonetheless, the system was able to develop a well-defined centre with convection around a warm core on 4 October. The system was short-lived, crossing over the Azores later on 4 October before becoming extratropical again on 5 October. No damage or fatalities were reported during that time. Its remnants were soon absorbed into a cold front; the absorbed system went on to become Hurricane Vince, which affected the Iberian Peninsula. The subtropical nature of this unnamed system was determined several months after the fact, while the National Hurricane Center was performing its annual review of the season. Upon reclassification, the storm was entered into HURDAT, the official hurricane database.

## Meteorological history

The system originated out of an upper-level low just west of the Canary Islands on 28 September. The low organized itself over the next few days, producing several bursts of convection. While remaining non-tropical with a cold core, it moved gradually west to northwest. On 3 October, it became a broad surface low about 400 nautical miles (740 kilometres; 460 miles) southwest of São Miguel Island in the Azores. Early on 4 October, convection increased as the surface low organized itself, and the system became a subtropical depression. Around the same time, the depression turned northeast into a warm sector ahead of an oncoming cold front and strengthened into a subtropical storm. The system continued to track northeast and strengthened slightly, reaching its peak intensity of 85 km/h (53 mph) as it approached the Azores that evening. After tracking through the area, the storm weakened slightly as it moved to the north-northeast. Through an interaction with the cold front early on 5 October, the subtropical storm became extratropical. The system was fully absorbed by the front later that day. The absorbed system would separate from the dissolving frontal system and become Subtropical Storm Vince on 8 October.

At the time, the system was not believed to have been subtropical. However, there were several post-season findings that confirmed that the system was indeed one. The first finding was the cloud pattern, which had deep convection around the centre and was better organized with a well-defined centre of circulation. In addition, the system had a warm core more typical of tropical cyclones as opposed to the cold core of extratropical cyclones. The warm-core nature also meant that there were no warm or cold fronts attached to the system, as temperatures did not change ahead of and behind the system, until an unrelated cold front passed the Azores. Satellite imagery suggested that the system was briefly a tropical storm as the warm core was found; however, the widespread wind field and the presence of an upper-level trough confirmed that it was only subtropical.

## Impact, classification and records

Tropical storm-force winds were reported across parts of the Azores, primarily on the eastern islands. The strongest winds were reported on Santa Maria Island, where 10-minute sustained winds reached 79 km/h (49 mph) with gusts to 94 km/h (58 mph).
Ponta Delgada faced 61 km/h (38 mph) winds, with the peak recorded gust being 85 km/h (53 mph). No damage or fatalities were reported.

The 2005 Azores storm was not classified as a subtropical storm until April 2006, after a reassessment by the National Hurricane Center. Had it been operationally classified as such, it would have been named Tammy. Every year, the National Hurricane Center (NHC) re-analyzes the systems of the past hurricane season and revises the storm history if there is new data that was operationally unavailable. This reanalysis revealed that the storm became a subtropical storm on 4 October, making it the earliest forming 19th Atlantic tropical or subtropical storm on record. The previous record holder was an unnamed 1933 tropical storm that developed on 26 October. It held this distinction until 2020, when Hurricane Teddy attained tropical storm strength on 14 September.

## See also

- Timeline of the 2005 Atlantic hurricane season
- List of Azores hurricanes
- List of unnamed tropical cyclones
2,385,041
HMS Erin
1,168,076,538
Royal Navy battleship
[ "1913 ships", "Battleships of the Ottoman Navy", "Battleships of the Royal Navy", "Ships built in Barrow-in-Furness", "World War I battleships of the United Kingdom" ]
HMS Erin was a dreadnought battleship of the Royal Navy, originally ordered by the Ottoman government from the British Vickers Company. The ship was to have been named Reşadiye when she entered service with the Ottoman Navy. The Reşadiye class was designed to be at least the equal of any other ship afloat or under construction. When the First World War began in August 1914, Reşadiye was nearly complete and was seized at the orders of Winston Churchill, the First Lord of the Admiralty, to keep her in British hands and prevent her from being used by Germany or German allies. There is no evidence that the seizure played any part in the Ottoman government declaring war on Britain and the Triple Entente. Aside from a minor role in the Battle of Jutland in May 1916 and the inconclusive Action of 19 August the same year, Erin's service during the war generally consisted of routine patrols and training in the North Sea. The ship was deemed obsolete after the war; she was reduced to reserve and used as a training ship. Erin served as the flagship of the reserve fleet at the Nore for most of 1920. She was sold for scrap in 1922 and broken up the following year. ## Design and description The design of the Reşadiye class was based on the King George V class, but employed the six-inch (152 mm) secondary armament of the later Iron Duke class. Erin had an overall length of 559 feet 6 inches (170.54 m), a beam of 91 feet 7 inches (27.9 m) and a draught of 28 feet 5 inches (8.7 m). She displaced 22,780 long tons (23,150 t) at normal load and 25,250 long tons (25,660 t) at deep load. In 1914 her crew numbered 976 officers and ratings and 1,064 a year later. Erin was powered by a pair of Parsons direct-drive steam turbine sets, each driving two shafts using steam from 15 Babcock & Wilcox boilers. The turbines, rated at 26,500 shaft horsepower (19,800 kW), were intended to give the ship a maximum speed of 21 knots (39 km/h; 24 mph). The ship carried enough coal and fuel oil for a maximum range of 5,300 nautical miles (9,800 km; 6,100 mi) at a cruising speed of 10 knots (19 km/h; 12 mph). This radius of action was somewhat less than that of contemporary British battleships, but was adequate for operations in the North Sea. ### Armament and armour The ship was armed with a main battery of ten BL 13.5 in (343 mm) Mk VI guns mounted in five twin-gun turrets, designated 'A', 'B', 'Q', 'X' and 'Y' from front to rear. They were arranged in two superfiring pairs, one forward and one aft of the superstructure; the fifth turret was amidships, between the funnels and the rear superstructure. Close-range defence against torpedo boats was provided by a secondary armament of sixteen BL 6-inch Mk XVI guns. The ship was also fitted with six quick-firing (QF) six-pounder (2.2 in (57 mm)) Hotchkiss guns. As was typical for British capital ships of the period, she was equipped with four submerged 21-inch (533 mm) torpedo tubes on the broadside. Erin was protected by a waterline armoured belt that was 12 inches (305 mm) thick over the ship's vitals. Her decks ranged in thickness from 1 to 3 inches (25 to 76 mm). The main gun turret armour was 11 inches (279 mm) thick and was supported by barbettes 9–10 inches (229–254 mm) thick. #### Wartime modifications Four of the six-pounder guns were removed in 1915–1916, and a QF three-inch (76 mm) 20-cwt anti-aircraft (AA) gun was installed on the former searchlight platform on the aft superstructure. 
A fire-control director for the main guns was installed on the tripod mast between May and December 1916. A pair of directors for the secondary armament were fitted to the legs of the tripod mast in 1916–1917 and another three-inch AA gun was added on the aft superstructure. In 1918, a high-angle rangefinder was fitted and flying-off platforms were installed on the roofs of 'B' and 'Q' turrets.

## Construction and career

Erin was originally ordered by the Ottoman Empire on 8 June 1911, at an estimated cost of £2,500,000, with the name of Reşad V in honour of Mehmed V Reşâd, the ruling Ottoman Sultan, but was renamed Reşadiye during construction. She was laid down at the Vickers shipyard in Barrow-in-Furness on 6 December 1911 with yard number 425, but construction was suspended in late 1912 during the Balkan Wars and resumed in May 1913. The ship was launched on 3 September and completed in August 1914. After the assassination of Archduke Franz Ferdinand on 28 June, the British postponed delivery of Reşadiye on 21 July, despite the completion of payments and the arrival of the Ottoman delegation to collect Reşadiye and another dreadnought battleship, Sultan Osman I, after their sea trials. Churchill ordered the Royal Navy to detain the ships on 29 July and prevent Ottoman naval personnel from boarding them; two days later, soldiers from the Sherwood Foresters Regiment formally seized them and Reşadiye was renamed Erin, a poetic name for Ireland. Churchill did this on his own initiative to augment the Royal Navy's margin of superiority over the German High Seas Fleet and to prevent the ships from being acquired by Germany or its allies.

The takeover caused considerable ill will in the Ottoman Empire, where public subscriptions had partially funded the ships. When the Ottoman government had been in a financial deadlock over the budget of the battleships, donations for the Ottoman Navy had come in from taverns, cafés, schools and markets, and large donations were rewarded with a "Navy Donation Medal". The seizure, and the gift of the German battlecruiser Goeben to the Ottomans, influenced public opinion in the Empire to turn away from Britain. Although there is no evidence that the seizure played any part in the Ottoman government declaring war on Britain and the Triple Entente, historian David Fromkin has speculated that the Turks promised to transfer Sultan Osman I to the Germans in exchange for signing a secret defensive alliance on 1 August. Regardless, the Ottoman government was intent on remaining neutral until Russian disasters during the invasion of East Prussia in September persuaded Enver Pasha and Djemal Pasha, the Ministers of War and of the Marine, respectively, that the time was ripe to exploit Russian weakness. Unbeknownst to any of the other members of the government, Enver and Djemal authorized Vice Admiral Wilhelm Souchon, the German commander-in-chief of the Ottoman Navy, to attack Russian ships in the Black Sea in late October under the pretext of defending Ottoman warships from Russian attacks. Souchon, frustrated with Ottoman neutrality, took matters into his own hands and bombarded Russian ports in the Black Sea on 29 October, providing unambiguous evidence of an Ottoman attack and forcing the government's hand into joining the war on Germany's side.

### 1914–1915

Captain Victor Stanley was appointed as Erin's first captain. On 5 September, she joined the Grand Fleet, commanded by Admiral John Jellicoe, at Scapa Flow in Orkney and was assigned to the Fourth Battle Squadron (4th BS).
Erin steamed with the ships of the Grand Fleet as they departed from Loch Ewe in Scotland on 17 September for gunnery practice west of the Orkney Islands the following day. After the exercise, they began a fruitless search for German ships in the North Sea that was hampered by bad weather. The Grand Fleet arrived at Scapa Flow on 24 September to refuel before departing the next day for more target practice west of Orkney. In early October the Grand Fleet sortied into the North Sea to provide distant cover for a large convoy transporting Canadian troops from Halifax, Nova Scotia, and returned to Scapa on 12 October. Reports of U-boats in Scapa Flow led Jellicoe to conclude that the defences there were inadequate, and on 16 October he ordered that the bulk of the Grand Fleet be dispersed to Lough Swilly, Ireland. Jellicoe took the Grand Fleet to sea on 3 November for gunnery training and battle exercises, and the 4th BS returned to Scapa six days later. On the evening of 22 November, the Grand Fleet conducted another abortive sweep in the southern half of the North Sea; Erin stood with the main body in support of Vice-Admiral David Beatty's 1st Battlecruiser Squadron. The fleet was back at Scapa Flow by 27 November. On 16 December, the Grand Fleet sortied during the German raid on Scarborough, Hartlepool and Whitby, but failed to intercept the High Seas Fleet. Erin and the rest of the Grand Fleet made another sweep of the North Sea on 25–27 December.

Jellicoe's ships, including Erin, practised gunnery drills on 10–13 January 1915 west of the Orkney and Shetland Islands. On the evening of 23 January, the bulk of the Grand Fleet sailed in support of Beatty's battlecruisers, but the fleet was too far away to participate in the Battle of Dogger Bank the following day. On 7–10 March, the fleet made a sweep in the northern North Sea, during which it conducted training manoeuvres. Another cruise took place on 16–19 March. On 11 April, the Grand Fleet conducted a patrol in the central North Sea and returned to port on 14 April; another patrol in the area took place on 17–19 April, followed by gunnery drills off Shetland on 20–21 April. The Grand Fleet conducted sweeps into the central North Sea on 17–19 May and 29–31 May without encountering German vessels. During 11–14 June, the fleet practised gunnery and battle exercises off Shetland, and it repeated this training from 11 July. On 2–5 September, the fleet went on another cruise in the northern North Sea and conducted gunnery drills. Throughout the rest of the month, the Grand Fleet conducted training exercises and then made another sweep into the North Sea from 13 to 15 October. Erin participated in another fleet training operation west of Orkney during 2–5 November. The ship was transferred to the Second Battle Squadron (2nd BS) sometime between September and December.

### 1916–1918

The fleet departed for a cruise in the North Sea on 26 February 1916; Jellicoe had intended to use the Harwich Force to sweep the Heligoland Bight but bad weather prevented operations in the southern North Sea, and the operation was confined to the northern end. Another sweep began on 6 March but was abandoned the following day as the weather grew too severe for the destroyer escorts. On the night of 25 March, Erin and the rest of the fleet sailed from Scapa Flow to support Beatty's battlecruisers and other light forces raiding the German Zeppelin base at Tondern.
By the time the Grand Fleet approached the area on 26 March, the British and German forces had already disengaged and a strong gale threatened the light craft, so the fleet was ordered to return to base. On 21 April, the Grand Fleet conducted a demonstration off Horns Reef to distract the Germans while the Russian Navy re-laid its defensive minefields in the Baltic Sea. The fleet returned to Scapa Flow on 24 April and refuelled before sailing south, over intelligence reports that the Germans were about to launch a raid on Lowestoft, but the Germans had withdrawn before the fleet arrived. On 2–4 May, the Grand Fleet conducted another demonstration off Horns Reef to keep German attention on the North Sea. #### Battle of Jutland To lure out and destroy a portion of the Grand Fleet, the High Seas Fleet (Admiral Reinhard Scheer) composed of 16 dreadnoughts, 6 pre-dreadnoughts and supporting ships, departed the Jade Bight early on the morning of 31 May. The fleet sailed in concert with Rear Admiral Franz von Hipper's five battlecruisers. Room 40 at the Admiralty had intercepted and decrypted German radio traffic containing plans of the operation. The Admiralty ordered the Grand Fleet, with 28 dreadnoughts and 9 battlecruisers, to sortie the night before, to cut off and destroy the High Seas Fleet. During the Battle of Jutland on 31 May, Beatty's battlecruisers managed to bait Scheer and Hipper into a pursuit as they fell back upon the main body of the Grand Fleet. After Jellicoe deployed his ships into line of battle, Erin was the fourth from the head of the line. Scheer's manoeuvres after spotting the Grand Fleet were generally away from Jellicoe's leading ships, and the poor visibility hindered their ability to close with the Germans before Scheer could disengage under the cover of darkness. Opportunities to shoot during the battle were rare, and she only fired 6 six-inch shells from her secondary armament. Erin was the only British battleship not to fire her main guns during the battle. #### Subsequent activity The Grand Fleet sortied on 18 August to ambush the High Seas Fleet while it advanced into the southern North Sea, but miscommunications and mistakes prevented Jellicoe from intercepting the German fleet before it returned to port. Two light cruisers were sunk by German U-boats during the operation, prompting Jellicoe to decide to not risk the major units of the fleet south of 55° 30' North due to the prevalence of German submarines and mines. The Admiralty concurred and stipulated that the Grand Fleet would not sortie unless the German fleet was attempting an invasion of Britain or that it could be forced into an engagement at a disadvantage. When Stanley was promoted to rear-admiral on 26 April 1917, he was replaced by Captain Walter Ellerton. In April 1918, the High Seas Fleet sortied against British convoys to Norway. Wireless silence was enforced, which prevented Room 40 cryptanalysts from warning the new commander of the Grand Fleet, Admiral Beatty. The British only learned of the operation after an accident aboard the battlecruiser SMS Moltke forced her to break radio silence and inform the German commander of her condition. Beatty ordered the Grand Fleet to sea to intercept the Germans, but he was not able to reach the High Seas Fleet before it turned back for Germany. The ship was at Rosyth, Scotland, when the surrendered High Seas Fleet arrived on 21 November and she remained part of the 2nd BS through 1 March 1919. 
### Postwar

Captain Herbert Richmond assumed command on 1 January 1919. By 1 May, Erin had been assigned to the 3rd Battle Squadron of the Home Fleet. In October, she was placed in reserve at the Nore but was stationed at Portland Harbour as of 18 November. Richmond was relieved by Captain Percival Hall-Thompson on 1 December. Erin had returned to the Nore by January 1920 and became a gunnery training ship there by February. By June, the ship had become flagship of Rear-Admiral Vivian Bernard, Rear-Admiral, Reserve Fleet, Nore. In July–August 1920, she underwent a refit at Devonport Dockyard. Through 18 December 1920, Erin remained Bernard's flagship and continued to serve as a gunnery training ship. The Royal Navy had originally intended that she should be retained as a training ship under the terms of the Washington Naval Treaty of 1922, but a change of plan meant that this role was filled by Thunderer, so the ship was listed for disposal in May 1922. Erin was sold to the ship-breaking firm of Cox and Danks on 19 December and broken up at Queenborough the following year.
20,206,108
Fifth Test, 1948 Ashes series
1,122,978,596
Final test in a cricket series between Australia and England
[ "1948 Ashes series", "1948 in English sport", "August 1948 sports events in the United Kingdom", "Test cricket matches", "The Invincibles (cricket)" ]
The Fifth Test of the 1948 Ashes series, held at The Oval in London, was the final Test in that cricket series between Australia and England. The match took place on 14–18 August, with a rest day on 15 August. Australia won the match by an innings and 149 runs to complete a 4–0 series win. It was the last Test in the career of Australian captain Donald Bradman, generally regarded as the best batsman in the history of the sport. Going into the match, if Australia batted only once, Bradman needed only four runs from his final innings to have a Test batting average of exactly 100, but he failed to score, bowled second ball for a duck by leg spinner Eric Hollies. With the series already lost, the England selectors continued to make many changes, on this occasion four. In all, they had used 21 players for the series and were severely criticised for failing to maintain continuity.

England captain Norman Yardley won the toss, and elected to bat on a pitch affected by rain. After a delayed start due to inclement weather, the Australian pace attack, led by Ray Lindwall, dismissed England within the first day for just 52. Lindwall was the main destroyer, taking six wickets for 20 runs (6/20). The English batsmen found it difficult to cope with his prodigious swing and pace; four of his wickets were either bowled or leg before wicket. Len Hutton was the only batsman to resist, making 30 before being the final man dismissed. In reply, Australia's opening pair of Arthur Morris and Sid Barnes passed England's score on the same afternoon with no loss of wickets. The opening stand ended at 117 when Barnes fell for 61 and Bradman came to the crease to a standing ovation and three cheers from his opponents. He fell second ball, but Australia reached 153/2 at stumps on the first day. On the second day, Australian batsmen fell regularly once Lindsay Hassett was dismissed at 226/3, most of them being troubled by Hollies, who had been selected after taking 8/107 against Australia for Warwickshire. Morris was an exception and he made 196, more than half his team's total, before being run out as Australia were dismissed for 389. Hollies took 5/131. England reached 54/1 at stumps and by lunch on the third day were 121/2, Hutton and Denis Compton batting steadily. However, they suffered a late collapse to be 178/7 when bad light and rain stopped the day's play. Hutton top-scored for the second time in the match for England, making 64. The next morning, Bill Johnston took the last three wickets as England were bowled out for 188, ending the match. Johnston ended with 4/40 and Lindwall 3/50. The match was followed by speeches from both captains, after which the crowd sang "For He's a Jolly Good Fellow" in Bradman's honour. Having been undefeated in their matches up to this point, the Australians maintained their streak in the remaining fixtures, gaining them the sobriquet of The Invincibles.

## Background

After the first four Tests, Australia led the series 3–0, having won all but the Third Test, which was rain-affected. They had taken an unlikely win in the Fourth Test at Headingley, scoring 404/3 in their second innings, the highest ever score in a successful Test run chase. Australia had been unbeaten throughout the tour. Between the Fourth and Fifth Test, they played five tour matches. They defeated Derbyshire by an innings, before having a washout against Glamorgan. The Australians then defeated Warwickshire by nine wickets, before drawing with Lancashire, who hung on with three wickets in hand on the final day.
Australia's final lead-in outing was a two-day non-first-class match against Durham, which was drawn after rain washed out the second day. With the series already lost, England made four changes to their team. John Dewes replaced Cyril Washbrook—who broke his thumb in a match for Lancashire against the Australians—at the top of the order. Dewes had gained attention after scoring 51 for Middlesex in the tour match against the Australians. In the three weeks between then and the Test, he had scored 105 and 89 against Lancashire and Sussex respectively. However, he had averaged less than 40 for the season and made three consecutive scores below 20 leading into the Tests. The journalist and former Australian Test cricketer Bill O'Reilly condemned the decision, claiming that aside from defending the ball, Dewes was too reliant on slogging towards the leg side with a horizontal bat. O'Reilly claimed Dewes was not ready for Test cricket and that asking him to face the rampant Australians could have psychologically scarred him. He said the selection "was tantamount to asking a young first-year medical student to carry out an intricate operation with a butcher's knife." Allan Watkins replaced Ken Cranston as the middle order batsman and pace bowler. Both Dewes and Watkins were making their Test debut, and the latter became the second Welshman to play in an Ashes Test. Watkins had scored 19 and taken 1/47 for Glamorgan in their match against Australia two weeks earlier, but had only scored 168 runs at 18.66 and taken 11 wickets in his last six matches. Cranston had made a duck and 10, and taken 1/79 on his debut in the previous Test. While acknowledging Cranston's poor performances and concluding that he had not been of international quality, O'Reilly said Watkins' performance in Glamorgan's match against the Australians "had not inspired anyone with his ability" to counter the tourists' bowling. England played two spinners; left arm orthodox spinner Jack Young replaced fellow finger spinner Jim Laker, while the leg spin of Eric Hollies replaced the pace bowling of Dick Pollard. Hollies was brought into the team because he had caused the Australian batsmen difficulty in the tour match against Warwickshire. He took 8/107 in the first innings, the best innings figures against the Australians for the summer. His performance included bowling Bradman with a topspinner that went between bat and pad. It was part of a month-long run in which he took 52 wickets in seven matches, including two ten-wicket match hauls. Young had taken 12 and 14 wickets in consecutive matches against Northamptonshire and Surrey since his omission following the Third Test, while Pollard and Laker had managed totals of only 2/159 and 3/206 respectively in the Headingley Test. Having made only 5 and 18 in the previous Test, Jack Crapp was originally dropped from the team but was reprieved by Washbrook's injury. The England selectors were widely condemned for their decisions, which were seen as an investment in youth rather than necessarily picking the best players available at the time. Their frequent changes meant the home team had used a total of 21 players for the five Tests. Australia made three changes. Having taken only seven wickets in the first four Tests at an average of 61.00, off spinner Ian Johnson was replaced by leg spinner Doug Ring. Australia's second change was forced on them; the injured medium pacer Ernie Toshack was replaced by the opening batsman Sid Barnes, who had missed the Fourth Test with a rib injury. 
This meant Australia were playing with one extra batsman and one less frontline bowler. The final change was the return of first-choice wicket-keeper Don Tallon from injury and the omission of his deputy Ron Saggers.

The two nations had last met at The Oval in the Fifth Test of the 1938 Ashes series, during Australia's previous tour of England. On that occasion, England made a Test world record score of 903/7 declared, and Len Hutton made 364, an individual Test world record. Australia batted in both innings with only nine men because of injuries sustained by Bradman and Jack Fingleton during Hutton's 13-hour marathon effort. They collapsed to the heaviest defeat in Test history, by an innings and 579 runs. It was Australia's last Test before World War II and they had not lost a Test since then. Hundreds of spectators had slept on wet pavements outside the stadium in rainy weather on the eve of the Test to queue for tickets. Bradman had announced his forthcoming retirement at the end of the season, so the public were anxious to witness his last appearance at Test level.

## Scorecard

### England innings

### Australia innings

## 14 August: Day One

English skipper Norman Yardley won the toss and elected to bat on a rain-affected pitch. Precipitation in the week leading up to the match meant the Test could not start until after midday. Yardley's decision was regarded as a surprise. Although The Oval had a reputation as a batting paradise, weather conditions suggested that bowlers would be at an advantage. Jack Fingleton, a former Test teammate of Bradman who was covering the tour as a journalist, thought the Australians would have bowled had they won the toss. However, O'Reilly disagreed, saying the pitch was so wet it should have favoured the batsmen because the ball would bounce slowly from the surface. He further thought the slippery run-up areas would have forced the faster bowlers to operate less vigorously to avoid injuring themselves. The damp conditions necessitated the addition of large amounts of sawdust to allow the bowlers to keep their footing, because parts of the pitch were muddy. The humidity, along with the rain, assisted the bowlers; Lindwall in particular managed to make the ball bounce at variable heights.

Dewes and Len Hutton opened for England, a move that attracted criticism of Yardley for exposing the debutant Dewes to the new ball bowling of Lindwall and Keith Miller. After Hutton opened the scoring with a single from the second ball of the day, Dewes was on strike. The single had almost turned into a five when Sam Loxton fired in a wide return, but Sid Barnes managed to prevent it from going for four overthrows. Dewes took a single from the opening over—bowled by Lindwall—and thus faced the start of the second over, which was delivered by Miller. Dewes had been troubled by Miller in the past. During the Victory Tests in 1945, Miller had repeatedly dismissed the batsman, and during a match for Cambridge University against the Australians earlier in the tour, Dewes had used towels to pad his torso against Miller's short balls. During his short innings, Dewes was also visibly nervous and kept on moving around, unable to stand still. Miller caused a stoppage after his first ball in order to sprinkle sawdust on the crease. With the second ball, he bowled Dewes—who was playing across the line—middle stump for one with an inswinger to leave England at 2/1. However, despite the early wicket, the bowlers appeared to lack confidence in their run-up on the soggy ground.
Bradman made an early bowling change and brought Johnston into the attack to replace Miller after the latter had bowled three overs for the concession of two runs. At this time, Bradman adopted relatively defensive field settings despite the early breakthrough. Bill Edrich joined Hutton and they played cautiously until the former attempted to hook a short ball from Johnston. He failed to get the ball in the middle of the bat and it looped up and travelled around 10 metres (33 ft). Lindsay Hassett took the catch just behind square leg, diving sideways and getting two hands to the ball. This left England at 10/2 as Denis Compton came to the crease. Lindwall bounced Compton, drawing an edge that flew towards the slips cordon. However, the ball continued to rise and cleared the ring of Australian fielders. Hutton called Compton for a run, but his surprised partner was watching the ball narrowly evade the slips catchers and dropped his bat in panic. Luckily for Compton, the ball went to Hassett at third man, who stopped the ball and waited for Compton to regain his bat and his composure before returning the ball, thereby forfeiting the opportunity to run him out. However, this sporting gesture did not cost Australia many runs because when Compton was on three, Lindwall bowled another bouncer. Compton went for a hook shot and Arthur Morris ran from his position at short square leg to take a difficult catch. Bradman later said he had remembered how Compton had been out in exactly the same position in the corresponding match at the same ground during the 1938 series. Fingleton described Morris's effort as "one of the catches of the season". England were 17/3, and Crapp came in to join Hutton. At this point, Bradman began to put in place more attacking field settings. Johnston then hit Hutton on the fingers with a ball that rose sharply after pitching. Bradman took Lindwall off after 50 minutes and replaced him with Miller, who then removed Crapp, caught behind from an outside edge for a 23-ball duck, leaving England at 23/4. When play was adjourned for lunch with England on 29/4, Hutton was 17 while Yardley was on four. According to Fingleton, Hutton "had never been in the slightest difficulty". He had played cautiously but did not seem hurried by the bowling. Miller had taken 2/3 from six overs. After the lunch break, England added six runs to be 35/4, before Lindwall bowled Yardley with a swinging yorker. The debutant Watkins came in, having earned a reputation in Glamorgan's match against Australia for hooking. He made several attempts at the shot in his innings of 16 balls. He attempted a hook shot from a short ball and missed before being hit on the shoulder by another Lindwall bouncer, having tried to hook the ball downwards in an unorthodox manner akin to a tennis serve. He was then dismissed without scoring after playing across the line and being trapped leg before wicket by Johnston for a duck to leave England at 42/6. For his troubles, Watkins also collected a bruise from the hit to the shoulder, which inhibited his bowling later in the match. Lindwall then removed Godfrey Evans, Alec Bedser and Young, all yorked by swinging deliveries in the space of two runs, as England fell from 45/6 to 47/9. This brought Hollies in at No. 11 to accompany Hutton, who then hit the only boundary of the innings, lofting Lindwall for a straight drive back over his head. The ball almost went for six, landing just short of the boundary. 
The innings ended at 52 when Hutton—who never appeared troubled by the bowling—leg glanced Lindwall and was caught by wicket-keeper Don Tallon, who caught the ball one-handed at full stretch to his left. Lindwall described the catch as one of the best he had ever seen, while O'Reilly called it "extraordinarily good". The match saw Lindwall at his best. In his post-lunch spell, Lindwall bowled 8.1 overs, taking five wickets for eight runs, and finishing with 6/20 from 16.1 overs. Bradman described the spell as "the most devastating and one of the fastest I ever saw in Test cricket". Fingleton, who played against the Bodyline attack in 1932–33, said "I was watching a man almost the equal of Larwood [the Bodyline spearhead] in pace ... Truly a great bowler". O'Reilly wrote Lindwall's "magnificent performance must go down as one of the greatest bowling efforts in Anglo-Australian Tests. He had two gruelingly long sessions in the innings and overcame each so well that he set the seal on his well-earned reputation as one of the best bowlers ever." Hutton was the only batsman to resist the Lindwall-led attack, scoring 30 in 124 minutes and surviving 147 deliveries. The next most resilient display was from Yardley, who scored seven runs in 31 minutes of resistance, facing 33 balls. Miller and Johnston took 2/5 and 2/20 respectively, and Australia's pace trio removed all the batsmen without Bradman having to call upon Ring's leg spin. In contrast, Australia batted with apparent ease, as the overcast skies cleared and sun came out. The debutant Watkins sent down four overs for 19 runs with his bruised shoulder and did not bowl again. He was in much pain and his limp bowling did little to trouble the Australian openers. Morris and Barnes batted comfortably and passed England's first innings total by themselves, taking less than an hour to push the Australians into the lead. O'Reilly felt the Australian openers wanted to prove "the pitch itself had nothing whatever to do with the English batting debacle". Australia reached 100 at 17:30 with Barnes on 52 and Morris on 47. The only chance came when Barnes powerfully square cut Bedser low to point, where Young spilled the catch. When Young came on to bowl, his finger spin was expected to trouble the batsmen on a rain-affected surface, but he delivered little variation in pace and trajectory and Barnes in particular hit him repeatedly through the off side field. The score had reached 117 after only 126 minutes, when Barnes was caught behind from Hollies for 61. The right-handed Australian opener stumbled forward to a fast-turning leg break that caught his outside edge. He had overbalanced and would have been stumped if he had failed to make contact with the leather. This brought Bradman to the crease shortly before 18:00, late on the first day. As Bradman had already announced the tour would be his last at international level, the innings would be his last at Test level if Australia batted only once. The crowd gave him a standing ovation as he walked out to bat; Yardley led the Englishmen and the crowd in giving his Australian counterpart three cheers, before shaking Bradman's hand. With 6,996 Test career runs, Bradman needed only four runs to average exactly 100 in Test cricket. Bradman took guard and played the first ball from Hollies, a leg break, from the back foot. The leg spinner pitched the next ball up, bowling Bradman for a duck with a googly that went between bat and pad as the Australian skipper leaned forward. 
Bradman appeared stunned by what had happened and slowly turned around and walked back to the pavilion, receiving another large round of applause. It was claimed by many, including Hollies, that Bradman became emotional and had tears in his eyes at the ovation given to him by the crowd and the English players, and that this hampered his ability to see and hit the ball. Bradman admitted to being moved by the applause, but always denied shedding tears, saying "to suggest I got out, as some people did, because I had tears in my eyes while I was looking at the bowler was quite untrue. Eric Hollies deceived me and he deserves full credit." Hassett came in at 117/2 and together with Morris saw Australia to the close at 153/2. Morris was unbeaten on 77, having hit two hook shots from Hollies for four. Hassett was on 10. ## 16 August: Day Two 15 August was a Sunday, and thus a rest day. Play resumed on Monday, the second morning, and Morris registered his third century of the Test series and his sixth in ten Ashes matches. Overall, it was his seventh century in 14 Tests. It had taken him 208 minutes and he had hit four fours. Hassett and Morris took the score to 226 before their 109-run stand was broken when Young trapped Hassett lbw for 37 after 134 minutes of batting. As the Australians had dismissed their hosts cheaply on the first day and were already well in the lead, they had plenty of time to complete a victory, so Hassett and Morris had no need to take undue risks and scored at a sedate pace. The following batsmen were unable to establish themselves at the crease. Miller came in and tried to attack, but made only five before overbalancing and stumbling forward out of his crease, allowing Evans to stump him from the bowling of Hollies. Harvey, the youngest player in the Australian squad at the age of 19, came to the crease at 243/4 and quickly displayed the exuberance of youth. He hit Young for a straight-driven four and then pulled him for another boundary, but the attacking strokeplay did not last. Harvey succumbed to Hollies, hitting him to Young. The young batsman was having trouble against the turning ball, so he decided to use his feet and step towards the pitch of the ball. The Warwickshire spinner noticed this, and delivered a topspinner that dipped more than usual, and the batsman mistimed his off-drive, which went in the air towards mid-off. Hollies' success against the middle-order prompted Yardley to opt to continue with the older ball even when a replacement was available, a move that was rarely made throughout the series as the pacemen dominated the bowling. Hollies did not spin the ball significantly but relied on variations in flight to defeat his opponents. Loxton came in with the score at 265/5 and accompanied Morris for 39 further runs before he fell to the new ball. He appeared uncomfortable with the outswingers and leg cutters of Bedser, and was beaten several times, before Edrich had him caught behind for 15. Lindwall came in and attacked immediately, scoring two fours before falling for nine. He played a cover drive from the bowling of Young, but hit the ball too early and thus launched it into the air, and it was caught by Edrich at cover point to leave the score at 332/7. Morris was then finally removed for 196, ending an innings noted for its numerous hooks and off-drives. 
It took a run out to remove Morris; he attempted a quick run to third man after being called through from the non-striker's end by Tallon, but was too slow for the substitute fielder Reg Simpson's arm. Tallon, who scored 31, put on another 30 runs with Ring, before both were out with Australia's score on 389, ending the tourists' innings. Both were caught by Crapp in slips from the bowling of Hollies and Bedser respectively. Morris had scored more than half the runs as the rest of the team struggled against the leg spin of Hollies, who took 5/131. England had relied heavily on spin bowling; Young took 2/118 and of the 158.2 overs bowled, 107 were delivered by the two slow men. Hollies pitched the ball up repeatedly, coaxing the Australians into playing front-foot shots from balls that spun after pitching on off stump. England started their second innings 337 runs in arrears. Dewes took strike and got off the mark from Lindwall when he aimed a hook shot and was credited with a boundary when the ball came off his shoulder. Lindwall's steepling bouncer rose over his bat and narrowly missed his head. Soon after, Lindwall made the early breakthrough, bowling Dewes—who offered no shot—for 10 to leave England 20/1. Dewes had often committed to playing the ball from the front foot before the bowler delivered the ball, thereby putting himself into difficulty. This was because of his habit of leaning his weight onto his back foot as the ball was being bowled, which meant that a forward lean would instinctively result. Edrich joined Hutton and the pair consolidated the England innings, which reached 54/1 at the close at the second day's play, which was hastened by bad light. ## 17 August: Day Three Early on the third day, Lindwall bowled Edrich—who was playing across the line—between bat and pad for 28, hitting the off stump with a ball that cut inwards, leaving the score at 64/2, before Compton and Hutton consolidated the innings and took the score to 121 at lunch without further loss. Hutton and Compton were 42 and 37 respectively. Compton started slowly but had accelerated as the adjournment approached. The morning's batting had been relatively slow, with only 67 runs scored in 100 minutes, of which Hutton added only 23. The morning session also featured a tight spell of 13 overs by Ring. The leg spinner did not bowl consistently or accurately, and although the batsman hit him regularly, they did not place their shots, which often went to the fielders. At the other end, Johnston bowled his finger spin from around the wicket with a well-protected off side. There were four men in the off side ring and they had much work to do as Hutton hit the ball there repeatedly. The English batsmen progressed steadily and both Johnston and Ring had one confident appeal for lbw against Compton, but there were no other scares. Towards the end of the morning session, the second new ball became available but Bradman decided to bide his time. He allowed Johnston to rest after his morning spell and used Lindwall and Miller—delivering off spinners—to bowl with the old ball for the last half-hour before lunch break so that the trio could use the adjournment to recuperate before attacking with the new ball. After lunch, Lindwall and Johnston took the new ball, and the partnership progressed only four further runs to 61 in 110 minutes. On 39, Compton aimed a hard cut shot from Johnston's bowling, which flew into Lindwall's left hand at second slip for a "freak slip catch". 
Hutton managed to continue resisting the Australians before Miller struck Crapp in the neck with a bouncer. The batsman did not react to the blow and did not bother to rub the point of impact. After hitting a series of cover drives for boundaries, Hutton edged Miller to Tallon and was out for 64, having top-scored in both innings. He had batted for over four hours and left England at 153/4. Thereafter, the home side collapsed. Crapp was bowled by Miller for nine, and two runs later, Ring dismissed debutant Watkins for two, his only wicket for the match. Watkins swung Ring to the leg side and the ball went straight into the hands of Hassett, who did not need to move from his position on the boundary, leaving England at 167/6. Lindwall returned and yorked Evans, who appeared to not detect the delivery in the poor light, for eight. The umpires thus called off play after Yardley appealed against the light. The ground was then hit by rain, resulting in a premature end to the day's play with England at 178/7, having lost 4/25. ## 18 August: Day Four England resumed on the fourth morning with only three wickets in hand and they were still 159 runs in arrears. Johnston quickly removed the last three wickets to seal an Australian victory by an innings and 149 runs. Only ten runs were added; the match ended when Hollies fell for a golden duck after skying a ball to Morris, immediately after Yardley's departure. Johnston ended with 4/40 from 27.3 overs while Lindwall took 3/50 from 25 overs. Miller claimed 2/22 while Ring bowled the most overs, 28, to finish with 1/44. Given the time lost to inclement weather on the first day, Australia had won the match in less than three days of playing time. ## Aftermath This result sealed the series 4–0 in favour of Australia. The match was followed by a series of congratulatory speeches. Bradman began with: > No matter what you may read to the contrary, this is definitely my last Test match ever. I am sorry my personal contribution has been so small ... It has been a great pleasure for me to come on this tour and I would like you all to know how much I have appreciated it ... We have played against a very lovable opening skipper ... It will not be my pleasure to play ever again on this Oval but I hope it will not be the last time I come to England. Yardley spoke after Bradman: > In saying good-bye to Don we are saying good-bye to the greatest cricketer of all time. He is not only a great cricketer but a great sportsman, on and off the field. I hope this is not the last time we see Don Bradman in this country. Bradman was then given three cheers and the crowd sang "For He's a Jolly Good Fellow" before dispersing. The win brought Australia closer to Bradman's aim of going through the tour undefeated. The Fifth Test was the last international match, and Australia only had seven further matches to negotiate. They secured three consecutive innings victories against Kent, the Gentlemen of England and Somerset. They then took first innings leads of more than 200 against the South of England and Leveson-Gower's XI, but both matches were washed out. The last two matches were two-day non-first-class matches against Scotland, both won by an innings. Bradman's men thus completed the tour undefeated, earning themselves the sobriquet The Invincibles.
1,569,045
Empires: Dawn of the Modern World
1,171,457,818
2003 video game
[ "2003 video games", "Activision games", "Japanese invasions of Korea (1592–1598) in fiction", "Multiplayer online games", "Panhistorical video games", "Real-time strategy video games", "Titan (game engine) games", "Video games developed in the United States", "Windows games", "Windows-only games" ]
Empires: Dawn of the Modern World is a 2003 real-time strategy video game developed by Stainless Steel Studios and published by Activision. Set in a world-historical period that extends from the Middle Ages to World War II, the game tasks players with guiding one of nine rival great civilizations to victory. Customer surveys from Stainless Steel's previous game, Empire Earth, were used as a starting point for Empires: these inspired the team to take a more minimalist design approach, and to include civilizations without overlapping styles of play. Empires was positively received by critics, who enjoyed its multiplayer component. However, certain reviewers disliked its single-player mode, and opinion clashed on the game's level of uniqueness compared to competitors such as Rise of Nations. The sales of Empires, when combined with those of Empire Earth, totaled 2.5 million units by 2004. ## Gameplay Empires: Dawn of the Modern World is a real-time strategy (RTS) game in which the player guides a civilization through five historical periods, from the Middle Ages to World War II. As in many RTS titles, the player collects natural resources, erects buildings, and trains and maintains a military. Players use a mouse cursor interface (or hotkeys) to direct their units, which range from crossbowmen to King Tiger tanks. A three-dimensional (3D) camera system allows the player to view the action from any perspective, including isometric and first-person angles. A mini-map is included as well. Each of the nine civilizations features a unique style of play: for example, the French and English have powerful defensive capabilities, while Chinese structures are mobile. During a match, the player must gather resources to progress their civilization to a new historical era, after which more advanced technologies and units (land-, sea- and airborne) become available. Four civilizations are playable from the medieval to the Imperial age; at the beginning of World War I, the player transitions their civilization to one of the remaining five. For example, a player of the premodern Franks must transition to modern Germany or France. The player wins a match by destroying all opponents' means of production, or by constructing and successfully defending a "Wonder", such as the Notre Dame de Paris or Brandenburg Gate. Empires allows up to eight players (or artificially intelligent opponents) to compete in two modes: the shorter, battle-oriented Action mode or the longer, defense-oriented Empire Builder mode. In addition, the game contains three single-player storylines called "campaigns", each of which depicts major events in a civilization's history. These follow Richard the Lionheart's medieval wars in France; Admiral Yi Sun-Sin's defense of Korea against Japanese invasion in the early modern period; and General George S. Patton's exploits during World War II. The editor used to create Empires is packaged with the game, which allows the player to create original levels and campaign scenarios. ## Development ### Conception Stainless Steel Studios started work on Empires in 2002. The project was led by company head Rick Goodman, designer of Ensemble Studios' Age of Empires and Stainless Steel's earlier Empire Earth. The Empires team began by studying their previous game for features that could be reused or improved. In addition, they mined history books for interesting "events, battle tactics, weapons, technologies and economic factors", according to Goodman. 
A list was drafted of 100 historical elements that excited the team, and it formed the basis of the project. Although a heavy focus was placed on historical accuracy, designer Richard Bishop explained that "fun always comes first." As it had with Empire Earth, Stainless Steel delegated separate teams to the multiplayer and single-player modes of Empires. Further inspiration came from surveys of Empire Earth players, conducted during 2002. For example, the team found that Empire Earth's medieval and World War II periods were the most popular, while its futuristic and prehistoric periods were the least. In response, the team reduced the span of Empires to 1,000 years, from the Middle Ages until World War II. Goodman believed that this could make the game many times deeper than Empire Earth. Also requested by players were fully unique civilizations, without overlapping units or styles of play—a feature that Goodman claimed to be a first for a history-based RTS game. The team discovered that those who favored the single-player mode in Empire Earth preferred slower, more management-based gameplay. However, multiplayer users were split, with half in favor of shorter matches filled with combat. To please both audiences, the Empire Builder and Action modes were included to offer "a rush-oriented game for the pro gamers and a more defensive game for the casual gamer", in Goodman's words. ### Production In December 2002, publisher Activision signed Stainless Steel to a multi-game contract, the first title of which was revealed to be Empires in February 2003. By April, the team estimated the game to be 60–70% finished. The engine used to create Empire Earth—later released under the name Titan 2.0—was retained and upgraded for Empires. Significantly more detail was added to the units' 3D models than had appeared in Empire Earth. Further additions included reflection mapping, environmental bump mapping and a new physics engine. According to Goodman, reusing the game engine enabled the team to place its full concentration on gameplay, without worrying about technological development. Another priority was storytelling, an element of the RTS game Warcraft III: Reign of Chaos (2002) particularly enjoyed by the Empires team. Empires was designed primarily for multiplayer gameplay: the multiplayer development team created and fine-tuned each civilization, which the single-player team then used in campaign levels. Because the civilizations do not overlap, Bishop considered game balance to be the most difficult aspect of the project. Previously, Stainless Steel had balanced its games in a microcosmic fashion: the "individual components" of each civilization—for example, the economic power of Germany versus that of England—were balanced against one another. Balance on this scale led to overarching balance. However, this technique hinged on a broad similarity between civilizations that is not present in Empires. Consequently, the company had to abandon its earlier practice and "develop an entirely new methodology", Goodman explained. The result was a macrocosmic system of balance, in which civilizations are inherently unbalanced but equally powerful overall. As with Empire Earth, each new build of Empires was given to "strike teams" of playtesters. By April, between six and eight months of playtesting had been performed by a group of six professional RTS players. 
GameSpy's Allen Rausch wrote that the process allows a game to be "consistently tested, evaluated, balanced, and tweaked" at every stage of development, which enables complex forms of balance. This let the Empires team create a looser version of the rock paper scissors system typical of RTS games, wherein one type of unit is either very strong or very weak against other types. In Empires, each unit's strengths and weaknesses were made subtle enough to curb "hopeless mismatches" and reward skillful micromanagement, according to Bishop. The duration of the average battle was increased to provide more opportunities to micromanage units. Empires went gold on October 7, 2003, and it was released on the 22nd of that month. ## Reception Empires was received positively by critics, according to review aggregators Metacritic and GameRankings. The game's sales, when combined with those of Empire Earth, surpassed 2.5 million units by May 2004. Game Informer's Adam Biessener called Empires "a good knockoff" of WarCraft III and Age of Mythology, worthwhile for fans of the RTS genre. He praised its Empire Builder and Action modes, and the uniqueness of its multiplayer mode; but he found its single-player campaigns to be lackluster. Jonah Jackson of X-Play, Ron Dulin of Computer Gaming World and Stephen Poole of PC Gamer US were similarly unimpressed by the game's single-player mode: the last critic highlighted its "stupendously loquacious cut-scenes and terrible voice-acting". However, Poole dubbed Empires a strong, streamlined and fully featured multiplayer game, which he recommended despite its flaws and lack of innovation. Jackson lauded the multiplayer component as well, and he believed that, while the game at first seems unoriginal, Empires is "the most mature and well-balanced of Goodman's titles". Regarding the single-player campaigns, PC Zone's writers noted strong level design and "voice acting of the highest calibre"; and they praised the multiplayer mode's "balance and diversity". However, they criticized the pathfinding, interface, unoriginality and inconsistent graphical quality of Empires, and they named it the inferior of Medieval: Total War and Rise of Nations. Conversely, Dulin agreed with Jackson that Empires is a deceptively conventional RTS, which introduces "great, if initially unapparent, changes to the standard formula." He summarized it as a well-made competitor to historical RTS titles like Rise of Nations, Age of Empires and Empire Earth. Writing for GameSpot, Sam Parker argued that Empires separated itself from rivals Age of Empires II: The Age of Kings and Age of Mythology, and he commented, "While it may not have the breadth of Rise of Nations' real-time empire building, the tight scope deals out dividends when it comes to fast-paced battles." Steve Butts of IGN, along with GameSpy's Rausch, called Empires a major improvement on the foundation of Empire Earth, thanks to its smaller scope and deeper gameplay. Like the staff of PC Zone, both writers enjoyed the single-player mode, although Rausch noted its middling writing and voice acting. Rausch considered the multiplayer mode to be Empires' best feature: he felt that its Empire Builder and Action modes were both balanced, and that each civilization "offers players a completely different experience". He noted the game's audiovisual presentation as a low point. Butts found fault with the game's camera system, but he summarized Empires as a unique RTS and "a good direction for the genre".
578,101
Scotland national football team
1,173,757,011
Men's national association football team representing Scotland
[ "1872 establishments in Scotland", "European national association football teams", "Scotland national football team" ]
The Scotland national football team represents Scotland in men's international football and is controlled by the Scottish Football Association. It competes in the three major professional tournaments: the FIFA World Cup, UEFA Nations League and the UEFA European Championship. Scotland, as a country of the United Kingdom, is not a member of the International Olympic Committee, and therefore the national team does not compete in the Olympic Games. The majority of Scotland's home matches are played at the national stadium, Hampden Park. Scotland is the joint oldest national football team in the world, alongside England, whom they played in the world's first international football match in 1872. Scotland has a long-standing rivalry with England, whom they played annually from 1872 until 1989. The teams have met only eight times since then, most recently in a group match during Euro 2020 in June 2021. Scotland have qualified for the FIFA World Cup on eight occasions, and the UEFA European Championship three times, but have never progressed beyond the first group stage of a finals tournament. The team have achieved some noteworthy results, such as beating the 1966 FIFA World Cup winners England 3–2 at Wembley Stadium in 1967. Archie Gemmill scored what has been described as one of the greatest World Cup goals ever in a 3–2 win during the 1978 World Cup against the Netherlands, who reached the final of the tournament. In their qualifying group for UEFA Euro 2008, Scotland defeated 2006 World Cup runners-up France 1–0 in both fixtures. Scotland supporters are collectively known as the Tartan Army. The Scottish Football Association operates a roll of honour for every player who has made more than 50 appearances for Scotland. Kenny Dalglish holds the record for Scotland appearances, having played 102 times between 1971 and 1986. Dalglish scored 30 goals for Scotland and shares the record for most goals scored with Denis Law. ## History ### Early history Scotland and England are the oldest national football teams in the world. Teams representing the two sides first competed at the Oval in five matches between 1870 and 1872. The two countries contested the first official international football match, at Hamilton Crescent in Partick, Scotland, on 30 November 1872. The match ended in a goalless draw. All eleven players who represented Scotland that day played for Glasgow amateur club Queen's Park. Over the next forty years, Scotland played matches exclusively against the other three Home Nations—England, Wales and Ireland. The British Home Championship began in 1883, making these games competitive. The encounters against England were particularly fierce and a rivalry quickly developed. Scotland lost just two of their first 43 international matches. It was not until a 2–0 home defeat by Ireland in 1903 that Scotland lost a match to a team other than England. This run of success meant that Scotland would have regularly topped the Elo ratings, which were calculated in 1998, between 1876 and 1904. Scotland won the British Home Championship outright on 24 occasions, and shared the title 17 times with at least one other team. A noteworthy victory for Scotland before the Second World War was the 5–1 victory over England in 1928, which led to that Scotland side being known as the "Wembley Wizards". Scotland played their first match outside the British Isles in 1929, beating Norway 7–3 in Bergen. 
Scotland continued to contest regular friendly matches against European opposition and enjoyed wins against Germany and France before losing to the Austrian "Wunderteam" and Italy in 1931. Scotland, like the other Home Nations, did not enter the three FIFA World Cups held during the 1930s. This was because the four associations had been excluded from FIFA due to a disagreement regarding the status of amateur players. The four associations, including Scotland, returned to the FIFA fold after the Second World War. A match between a United Kingdom team and a "Rest of the World" team was played at Hampden Park in 1947 to celebrate this reconciliation. ### 1950s The readmission of the Scottish Football Association to FIFA meant that Scotland were now eligible to enter the 1950 FIFA World Cup. FIFA advised that places would be awarded to the top two teams in the 1950 British Home Championship, but the SFA announced that Scotland would only attend the finals if Scotland won the competition. Scotland won their first two matches, but a 1–0 home defeat by England meant that the Scots finished as runners-up. This meant that the Scots had qualified by right for the World Cup, but had not met the demand of the SFA to win the Championship. The SFA stood by this proclamation, despite pleas to the contrary by the Scotland players, supported by England captain Billy Wright and the other England players. The SFA instead sent the Scots on a tour of North America. The same qualification rules were in place for the 1954 FIFA World Cup, with the 1954 British Home Championship acting as a qualifying group. Scotland again finished second, but this time the SFA allowed a team to participate in the Finals, held in Switzerland. To quote the SFA website, "The preparation was atrocious". The SFA only sent 13 players to the finals, even though FIFA allowed 22-man squads. Despite this self-imposed hardship in terms of players, the SFA dignitaries travelled in numbers, accompanied by their wives. Scotland lost 1–0 against Austria in their first game in the finals, which prompted the team manager Andy Beattie to resign hours before the game against Uruguay. Uruguay were reigning champions and had never before lost a game at the World Cup finals, and they defeated Scotland 7–0. The 1958 FIFA World Cup finals saw Scotland draw their first game against Yugoslavia 1–1, but they then lost to Paraguay and France and went out at the first stage. Matt Busby had been due to manage the team at the World Cup, but the severe injuries he suffered in the Munich air disaster meant that trainer Dawson Walker took charge of the team instead. ### 1960s Under the management of Ian McColl, Scotland enjoyed consecutive British Home Championship successes in 1962 and 1963. Jock Stein, John Prentice and Malky MacDonald all had brief spells as manager before Bobby Brown was appointed in 1967. Brown's first match as manager was against the newly crowned world champions England at Wembley Stadium. Despite being underdogs, Scotland won 3–2 thanks to goals from Denis Law, Bobby Lennox and Jim McCalliog. Having defeated the world champions on their own turf, the Scotland fans hailed their team as the "unofficial world champions". Despite this famous win, the Scots failed to qualify for any major competitions during the 1960s. ### 1970s After Tommy Docherty's brief spell as manager, Willie Ormond was hired in 1973. Ormond lost his first match in charge 5–0 to England, but recovered to steer Scotland to their first World Cup finals in 16 years in 1974. 
At the 1974 World Cup finals in West Germany, Scotland achieved their most impressive performance at a World Cup tournament. The team was unbeaten but failed to progress beyond the group stages on goal difference. After beating Zaïre, they drew with both Brazil and Yugoslavia, and went out because they had beaten Zaïre by the smallest margin. Scotland appointed Ally MacLeod as manager in 1977, with qualification for the 1978 World Cup in Argentina far from assured. The team made a strong start under MacLeod by winning the 1977 British Home Championship, largely thanks to a 2–1 victory over England at Wembley. The Scotland fans invaded the pitch after the match, ripping up the turf and breaking a crossbar. Scotland's form continued as they secured qualification for the World Cup with victories over Czechoslovakia and Wales. During the build-up to the 1978 FIFA World Cup, MacLeod fuelled the hopes of the nation by stating that Scotland would come home with a medal. As the squad left for the finals in Argentina, they were given an enthusiastic send-off as they were paraded around a packed Hampden Park. Thousands more fans lined the route to Prestwick Airport as the team set off for South America. Scotland lost their first game 3–1 against Peru in Córdoba, and drew the second 1–1 against newcomers Iran. The disconsolate mood of the nation was reflected by footage of MacLeod in the dugout with his head in his hands. These results meant Scotland had to defeat the Netherlands by three clear goals to progress. Despite the Dutch taking the lead, Scotland fought back to win 3–2 with a goal from Kenny Dalglish and two from Archie Gemmill, the second of which is considered one of the greatest World Cup goals ever; Gemmill beat three Dutch defenders before lifting the ball over goalkeeper Jan Jongbloed into the net. The victory was not sufficient to secure a place in the second round, and Scotland were eliminated on goal difference for the second successive World Cup. ### 1980s MacLeod resigned as manager shortly after the 1978 World Cup, and Jock Stein, who had won nine consecutive Scottish league titles and the European Cup as manager of Celtic, was appointed as his successor. After failing to qualify for the 1980 European Championship, Scotland qualified for the 1982 FIFA World Cup from a tough group including Sweden, Portugal, Israel and Northern Ireland, losing just one match in the process. They beat New Zealand 5–2 in their first game at the World Cup, but lost 4–1 to a Brazil team containing Sócrates, Zico, Eder and Falcão. Scotland were again eliminated on goal difference, after a 2–2 draw with the Soviet Union. Scotland qualified for the 1986 FIFA World Cup, their fourth in succession, in traumatic circumstances. The squad went into their last qualification match against Wales needing a point to progress to a qualifying playoff against Australia. With only nine minutes remaining and Wales leading 1–0, Scotland were awarded a penalty kick, which was calmly scored by Davie Cooper. The 1–1 draw meant that Scotland progressed, but as the players and fans celebrated, Stein suffered a heart attack and died shortly afterwards. His assistant Alex Ferguson took over. Scotland qualified by winning 2–0 against Australia in a two-leg playoff, but were eliminated from the tournament with just one point from their three matches, a goalless draw with Uruguay following defeats by Denmark and West Germany. In July 1986, Andy Roxburgh was the surprise appointment as the new manager of Scotland. 
Scotland did not succeed in qualifying for Euro 1988, but their 1–0 away win over Bulgaria in the final fixture in November 1987 helped Ireland to a surprise first-place finish and qualification for the finals in West Germany. ### 1990s Scotland qualified for their fifth consecutive World Cup in 1990 by finishing second in their qualifying group, ahead of France. Scotland were drawn in a group with Costa Rica, Sweden, and Brazil, but the Scots lost 1–0 to Costa Rica. While they recovered to beat Sweden 2–1 in their second game, they lost to Brazil in their third match 1–0 and were again eliminated after the first round. By a narrow margin, Scotland qualified for the UEFA European Championship for the first time in 1992. A 1–0 defeat by Romania away from home left qualification dependent upon other results, but a 1–1 draw between Bulgaria and Romania in the final group match saw Scotland squeeze through. Despite playing well in matches against the Netherlands and Germany and a fine win against the CIS, the team was knocked out at the group stage. Scotland failed to qualify for the 1994 FIFA World Cup. The team finished fourth in their qualifying group behind Italy, Switzerland and Portugal. When it became clear that Scotland could not qualify, Andy Roxburgh resigned from his position as team manager. New manager Craig Brown successfully guided Scotland to the 1996 European Championship tournament. The first game against the Netherlands ended 0–0, raising morale ahead of a much anticipated game against England at Wembley. Gary McAllister missed a penalty kick, and a goal by Paul Gascoigne led to a 2–0 defeat. Scotland recovered to beat Switzerland 1–0 with a goal by Ally McCoist. England taking a 4–0 lead in the other match briefly put both teams in a position to qualify, but a late goal for the Netherlands meant that Scotland were knocked out on goals scored. Brown again guided Scotland to qualification for a major tournament in 1998, and Scotland were drawn against Brazil in the opening game of the 1998 World Cup. John Collins equalised from the penalty spot to level the score at 1–1, but a Tom Boyd own goal led to a 2–1 defeat. Scotland drew their next game 1–1 with Norway in Bordeaux, but the final match against Morocco ended in an embarrassing 3–0 defeat. During the qualification for the 2000 European Championship, Scotland faced England in a two-legged playoff nicknamed the "Battle of Britain" by the media. Scotland won the second match 1–0 with a goal by Don Hutchison, but lost the tie 2–1 on aggregate. ### 2000s Scotland failed to qualify for the 2002 FIFA World Cup, finishing third in their qualifying group behind Croatia and Belgium. This second successive failure to qualify prompted Craig Brown to resign from his position after the final qualifying match. The SFA appointed former Germany manager Berti Vogts as Brown's successor. Scotland reached the qualification play-offs for Euro 2004, where they beat the Netherlands 1–0 at Hampden Park, but suffered a 6–0 defeat in the return leg. Poor results in friendly matches and a bad start to the 2006 World Cup qualification caused the team to drop to a record low of 77th in the FIFA World Rankings. Vogts announced his resignation in 2004, blaming the hostile media for his departure. Walter Smith, a former Rangers and Everton manager, was brought in to replace Vogts. Improved results meant that Scotland rose up the FIFA rankings and won the Kirin Cup, a friendly competition in Japan. 
Scotland failed to qualify for the 2006 FIFA World Cup, finishing third in their group behind Italy and Norway. Smith left the national side in January 2007 to return to Rangers, with Scotland leading their Euro 2008 qualification group. New manager Alex McLeish guided Scotland to wins against Georgia, the Faroe Islands, Lithuania, France and Ukraine, but defeats by Georgia and Italy ended their chances of qualification for Euro 2008. These improved results, particularly the wins against France, lifted Scotland into the top 20 of the FIFA world rankings. After the narrow failure to qualify for Euro 2008, McLeish left to join Premier League club Birmingham City. Southampton manager George Burley was hired as the new manager, but he came in for criticism from the media after the team lost their first qualifier against Macedonia. After Scotland lost their fourth match 3–0 to the Netherlands, captain Barry Ferguson and goalkeeper Allan McGregor were excluded from the starting lineup for the following match against Iceland due to a "breach of discipline". Despite winning 2–1 against Iceland, Scotland suffered a 4–0 defeat by Norway in the following qualifier, which left Scotland effectively needing to win their last two games to have a realistic chance of making the qualifying play-offs. Scotland defeated Macedonia 2–0 in the first of those two games, but were eliminated by a 1–0 loss to the Netherlands in the second game. Burley was allowed to continue in his post after a review by the SFA board, but a subsequent 3–0 friendly defeat by Wales led to his dismissal. ### 2010s The SFA appointed Craig Levein as head coach of the national team in December 2009. In UEFA Euro 2012 qualifying, Scotland were grouped with Lithuania, Liechtenstein, the Czech Republic and world champions Spain. They took just four points from the first four games, leaving the team needing three wins from their remaining four games to have a realistic chance of progression. They only managed two wins and a draw and were eliminated after a 3–1 defeat by Spain in their last match. Levein left his position as head coach following a poor start to 2014 FIFA World Cup qualification, having taken just two points from four games. Gordon Strachan was appointed Scotland manager in January 2013, but defeats in his first two competitive matches meant that Scotland were the first UEFA team to be eliminated from the 2014 World Cup. Scotland finished their qualification section by winning three of their last four matches, including two victories against Croatia. UEFA Euro 2016 expanded from 16 teams to 24. After losing their first qualifier in Germany, Scotland recorded home wins against Georgia, the Republic of Ireland and Gibraltar. Steven Fletcher scored the first hat-trick for Scotland since 1969 in the game with Gibraltar. Later in the group, Scotland produced an "insipid" performance as they lost 1–0 in Georgia. A home defeat by Germany and a late equalising goal by Poland eliminated Scotland from contention. After a win against Gibraltar in the last qualifier, Strachan agreed a new contract with the SFA. In qualification for the 2018 FIFA World Cup, Scotland were drawn in the same group as England, facing their rivals in a competitive fixture for the first time since 1999. On 11 November 2016, England beat Scotland 3–0 at Wembley. The return match saw Leigh Griffiths score two late free-kicks to give Scotland a 2–1 lead, but Harry Kane scored in added time to force a 2–2 draw. 
A draw in Slovenia in the final game of the group ended Scottish hopes of a play-off position, and Strachan subsequently left his position by mutual consent. In February 2018, Alex McLeish was appointed manager for the second time. The team won their group in the 2018–19 UEFA Nations League, but McLeish left in April 2019 after a poor start to UEFA Euro 2020 qualifying, including a 3–0 loss to 117th-ranked Kazakhstan. ### 2020s Steve Clarke was appointed Scotland manager in May 2019. The team failed to qualify automatically for UEFA Euro 2020, but consecutive victories in penalty shootouts in the playoffs against Israel and Serbia put Scotland into their first major tournament since 1998. Defeats by the Czech Republic and Croatia, either side of a goalless draw with England, meant that Scotland finished bottom of Group D. Six consecutive wins later that year meant that Scotland finished second in Group F of 2022 FIFA World Cup qualification. This progressed the team into the play-offs, where they were paired with Ukraine in a semi-final at Hampden; Scotland lost 3–1. Later that year, Scotland won their Nations League group and promotion to League A. ## Stadium Hampden Park in Glasgow is the traditional home of the Scotland team and is described by the SFA as the National Stadium. The present stadium is one of three stadiums to have used the name. Stadiums named Hampden Park have hosted international matches since 1878. The present site was opened in 1903 and became the primary home ground of the Scotland team from 1906. The attendance record of 149,415 was set by the Scotland v England match in 1937. Safety regulations reduced the capacity to 81,000 by 1977 and the stadium was completely redeveloped during the 1990s, giving the present capacity of 52,000. Hampden is rated as a category four (elite) stadium within the UEFA stadium categories, having previously held the five-star status under the old rating system. Some friendly matches are played at smaller venues. Pittodrie Stadium in Aberdeen and Easter Road in Edinburgh were both used as venues during 2017. Other stadiums were also used while Hampden was being redeveloped during the 1990s. Celtic Park, Ibrox Stadium, Pittodrie Stadium and Rugby Park all hosted matches during the 1998 World Cup qualifying campaign, while Tynecastle Stadium, Pittodrie, Celtic Park and Ibrox Stadium were used for Euro 2000 qualifying matches. Since the last redevelopment to Hampden was completed in 1999, Scotland have played most of their competitive matches there. The most recent exception to this rule was in 2014, when Hampden was temporarily converted into an athletics stadium for the 2014 Commonwealth Games. The SFA purchased Hampden from Queen's Park in 2020, and all of Scotland's home games have been played there since then. ## Media coverage Most matches played by Scotland are presently covered by the streaming service Viaplay, who have bought the rights for Scotland games between 2024 and 2028. The arrangements to show Scotland matches on subscription services were criticised in 2008 by the Scottish Government, who argued that all competitive internationals should be a Listed Event that can only be broadcast on free-to-air television. Live coverage is only restricted during major tournament finals, which are normally shown on BBC Scotland or STV. The SFA have argued that limiting the rights for other games, such as qualifying matches, would reduce the revenue from that source. 
The Scottish Affairs Committee of MPs in the British House of Commons published a report in 2023 calling for more co-operation between rights holders. They also pointed to the greater coverage given on free-to-air television for qualifying matches involving England and Wales. Sky Sports, BBC Scotland, STV, Setanta Sports, Channel 5, BT Sport and Premier Sports are among other networks that have previously shown Scotland fixtures. Sky Sports opted to show the Euro 2020 playoff against Serbia on their Pick channel, which was available on Freeview. All matches are broadcast with full commentary on BBC Radio Scotland and, when schedules allow, BBC Radio 5 Live also. ## Colours Scotland traditionally wear dark blue shirts with white shorts and dark blue socks, the colours of the Queen's Park team who represented Scotland in the first international. The blue Scotland shirt was earlier used in a February 1872 rugby international, with reports stating that "the scotch were easily distinguishable by their uniform of blue jerseys.... the jerseys having the thistle embroidered". The thistle had previously been worn to represent Scotland in the 1871 rugby international, but on brown shirts. The shirt is embroidered with a crest based upon the lion rampant of the Royal Standard of Scotland. Another style often used by Scotland comprises blue shirts, white shorts and red socks, whilst a number of kits have used navy shorts and socks. Navy is routinely used as alternative colours for the shorts and socks when Scotland faces a team who share the same colours for these items, but when the home shirt is still appropriate. Change colours vary, but are most commonly white or yellow shirts with blue shorts. In 2016–17, Scotland wore pink shirts with black shorts and socks as the away kit; the kit was additionally used in a single home match against Slovakia due to both Slovakia kits clashing with the Scotland home kit, which featured white sleeves. Third kits have been produced on two occasions. Amber shirts, navy shorts and navy socks were used in 2005–06, as the alternative sky blue shirts were unsuitable when Scotland travelled to teams wearing any shade of blue shirt, while an all 'cherry red' kit was used a single time against Georgia in the Euro 2008 qualifiers in 2007. From 1994 to 1996, a tartan kit was used; this kit was worn in all three of Scotland's matches at UEFA Euro 1996. Scotland have not always played in dark blue; on a number of occasions between 1881 and 1951 they played in the primrose and pink racing colours of Archibald Primrose, 5th Earl of Rosebery. A former Prime Minister, Lord Rosebery was an influential figure in Scottish football, serving as honorary President of the SFA and Edinburgh team Hearts. His colours were used most frequently in the first decade of the 20th century, but were discontinued in 1909. The colours were briefly reprised in 1949, and were last used against France in 1951. In 1900, when Scotland defeated England 4–1, Lord Rosebery remarked, "I have never seen my colours so well sported since Ladas won the Derby". Rosebery colours were revived as a change kit for the UEFA Euro 2016 qualifying matches. The current version of the crest is a roundel similar to the crest used from 1961 to 1988 enclosing a shield, with "Scotland" written on the top and "Est 1873" on the bottom. In the shield background there are 11 thistles, representing the national flower of Scotland, in addition to the lion rampant. 
Since 2005, the SFA have supported the use of Scottish Gaelic on the national team's strip in recognition of the language's status in Scotland. ## Supporters Scotland fans are collectively known as the Tartan Army. During the 1970s, Scotland fans became known for their hooliganism in England, particularly after they invaded the Wembley pitch and destroyed the goalposts after the England v Scotland match in 1977. Since then, the Tartan Army have won awards from UEFA for their combination of vocal support, friendly nature and charity work. The Tartan Army have been awarded a Fair Play prize by the Belgian Olympic Committee and were named as the best supporters during the 1992 European Championship. The fans were also presented with a trophy for non-violence in sport and were voted by journalists to be the best supporters for their sense of fair play and sporting spirit at the 1998 World Cup in France. ## Coaching staff The role of a team manager was first established in May 1954, as Andy Beattie took charge of six matches before and during the 1954 FIFA World Cup. Until then the team had been picked by a SFA selection committee, and after the tournament the selection committee resumed control of the team until the appointment of Matt Busby in 1958. Busby was initially unable to assume his duties due to the serious injuries he sustained in the Munich air disaster. Twenty-four men have occupied the post since its inception, with Beattie, Jock Stein and Alex McLeish occupying it in two spells. Six of those managers held the post on a caretaker basis. Craig Brown held the position for the longest to date; a tenure of 9 years, comprising two major tournaments and a total of 71 matches. Beattie (1954), Dawson Walker (1958), Willie Ormond (1974), Ally MacLeod (1978), Jock Stein (1982), Alex Ferguson (1986), Andy Roxburgh (1990 and 1992), Brown (1996 and 1998) and Steve Clarke (2020) have all managed the team at major competitions. Ian McColl, Ormond and MacLeod all won the British Home Championship outright. German coach Berti Vogts became the first foreign manager of the team in 2002, but his time in charge was generally seen as a failure and the FIFA World Ranking declined to an all-time low of 88 in March 2005. Walter Smith and Alex McLeish achieved better results, with the ranking improving to an all-time high of 13 in October 2007, but both were only briefly in charge before returning to club management. George Burley and Craig Levein both had worse results with the team and were eventually sacked. Results improved somewhat under Gordon Strachan, but he was unable to secure qualification for a tournament. After McLeish had a second spell as manager, Steve Clarke was appointed in May 2019. Clarke guided the team to qualification for Euro 2020, their first major competition since 1998. ### Current personnel ### Statistical record Statistically the most successful manager was Alex McLeish, who won seven of the ten games during his first spell as manager. Discounting managers who took charge of less than ten games, the least successful manager was George Burley, with just three wins in 14 games. Last updated: Scotland v Georgia, 20 June 2023. Statistics include official FIFA-recognised matches, five matches from the 1967 SFA tour that were reclassified as full internationals in 2021, and a match against a Hong Kong League XI played on 23 May 2002 that the Scottish Football Association includes in its statistical totals. 
## Players ### Current squad The following players were called up for the UEFA Euro 2024 qualifying match against Cyprus and friendly match against England in September 2023. Caps and goals updated as of 20 June 2023, after the match against Georgia. Clubs correct as of 27 August 2023. ### Recent call-ups The following players have also been selected by Scotland in the past twelve months. <sup>INJ</sup> Withdrew due to injury <sup>WD</sup> Withdrew from the squad due to non-injury issue <sup>SUS</sup> Serving suspension <sup>RET</sup> Retired from the national team <sup>PRE</sup> Preliminary squad / standby ### Honoured players The Scottish Football Association operates a roll of honour for every player who has made more than 50 appearances for Scotland. As of September 2022 there are 34 members of this roll, with John McGinn the most recent addition to the list. The qualifying mark of 50 appearances means that many notable Scotland players including Jim Baxter, Davie Cooper, Hughie Gallacher, John Greig, Jimmy Johnstone, Billy McNeill, Bobby Murdoch, Archie Gemmill and Lawrie Reilly are not on the roll of honour. The Scottish Football Museum operates a hall of fame which is open to players and managers involved in Scottish football. This means that membership is not restricted to people who have played for Scotland; inductees include Brian Laudrup and Henrik Larsson, as well as John McGovern who never played in Scotland or gained an international cap. Sportscotland operates the Scottish Sports Hall of Fame, which has inducted some footballers. ## Records Kenny Dalglish holds the record for Scotland appearances, having played 102 times between 1971 and 1986. He is the only Scotland player to have reached 100 caps. Jim Leighton is second, having played 91 times, a Scottish record for appearances by a goalkeeper. The title of Scotland's highest goalscorer is shared by two players. Denis Law scored 30 goals between 1958 and 1974, during which time he played for Scotland on 55 occasions. Kenny Dalglish scored an equal number from 102 appearances. Hughie Gallacher as well as being the third highest scorer is also the most prolific with his 24 goals coming from only 20 games (averaging 1.2 goals per game). The largest margin of victory achieved by a Scotland side is 11–0 against Ireland in the 1901 British Home Championship. The record defeat occurred during the 1954 FIFA World Cup, a 7–0 deficit against reigning world champions Uruguay. Scotland's 1937 British Home Championship match against England set a new world record for a football attendance. The Hampden Park crowd was officially recorded as 149,415, though the true figure is unknown as a large number of additional fans gained unauthorised entry. This attendance was surpassed 13 years later by the decisive match of the 1950 FIFA World Cup, but remains a European record. ## Competitive record ### FIFA World Cup Scotland did not compete in the first three World Cup competitions, held in 1930, 1934 and 1938. FIFA ruled that all its member associations must provide "broken-time" payments to cover the expenses of players who participated in football at the 1928 Summer Olympics. In response to what they considered to be unacceptable interference, the football associations of Scotland, England, Ireland and Wales held a meeting at which they agreed to resign from FIFA. The Scottish Football Association did not rejoin FIFA as a permanent member until 1946. 
The SFA declined to participate in 1950 although they had qualified, as Scotland were not the British champions. Scotland have since qualified for eight finals tournaments, including five consecutive tournaments from 1974 to 1990. Scotland have never advanced beyond the first round of the finals competition – no country has qualified for as many World Cup finals without progressing past the first round. They have missed out on progressing to the second round three times on goal difference: in 1974, when Brazil edged them out; in 1978, when the Netherlands progressed; and in 1982, when the Soviet Union went through. Draws include knockout matches decided via penalty shoot-out; correct as of 1 June 2022 after the match against Ukraine. ### UEFA European Championship Scotland have qualified for three European Championships, but have failed to advance beyond the first round. Their most recent participation was at UEFA Euro 2020, in which Hampden Park also hosted three group games and a last 16 match. Draws include knockout matches decided via penalty shoot-out; correct as of 22 June 2021 after the match against Croatia. ### UEFA Nations League When the UEFA Nations League was inaugurated in 2018–19, Scotland were allocated to League C. With a 3–2 win against Israel in their final match, Scotland won promotion to League B of the 2020–21 competition. Scotland won promotion to League A in their final match of the 2022–23 competition, a goalless draw against Ukraine in Kraków. Draws include knockout matches decided on penalty kicks; correct as of 27 September 2022 after the match against Ukraine. ### Other honours Continental - UEFA Nations League - League B (1): 2022–23 - League C (1): 2018–19 Sub-continental - British Home Championship - Winners (24): 1884, 1885, 1887, 1889, 1894, 1896, 1897, 1900, 1902, 1910, 1921, 1922, 1923, 1929, 1935, 1936, 1946, 1949, 1951, 1962, 1963, 1967, 1976, 1977 - Shared (17): 1886, 1890, 1903, 1906, 1908, 1912, 1927, 1931, 1935, 1939, 1953, 1956, 1960, 1964, 1970, 1972, 1974 - Rous Cup - Winners (1): 1985 Other - Kirin Cup - Winners (1): 2006 - Qatar Airways Cup - Winners (1): 2015 ## United Kingdom team Scotland has always participated in its own right in most of the major football tournaments, such as the FIFA World Cup and the UEFA European Championship. At the Olympic Games, the International Olympic Committee charter only permits a Great Britain Olympic football team, representing the whole of the United Kingdom, to compete. Teams of amateur players represented Great Britain at the Olympics from 1900 until 1972, but the FA stopped entering a team thereafter because the distinction between amateur and professional was abolished. The successful bid by London for the 2012 Summer Olympics prompted the FA to explore how a team could be entered. The SFA responded by stating that it would not participate, as it feared that this would threaten the independent status of the Scotland national team. FIFA President Sepp Blatter denied this, but the SFA expressed concern that a future President could take a different view. An agreement was reached in May 2009 whereby the FA would be permitted to organise a team using only England-qualified players, but this was successfully challenged by the British Olympic Association. Only English and Welsh players were selected for the men's squad, but two Scottish players were selected for the women's team.
72,469,603
Dish-bearers and butlers in Anglo-Saxon England
1,167,131,593
Royal officials in Anglo-Saxon England
[ "Anglo-Saxon kingdoms", "Anglo-Saxon society", "Serving and dining" ]
Dish-bearers (often called seneschals by historians) and butlers (or cup-bearers) were thegns who acted as personal attendants of kings in Anglo-Saxon England. Royal feasts played an important role in consolidating community and hierarchy among the elite, and dish-bearers and butlers served the food and drinks at these meals. Thegns were members of the aristocracy, leading landowners who occupied the third lay (non-religious) rank in English society after the king and ealdormen. Dish-bearers and butlers probably also carried out diverse military and administrative duties as required by the king. Some went on to have illustrious careers as ealdormen, but most never rose higher than thegn. ## Etymology The chief attendants at Anglo-Saxon royal feasts were dish-bearers and butlers or cup-bearers. Dish-bearer in Medieval Latin (ML) is discifer or dapifer, and in Old English (OE) discþegn, also discðegn and discþen (dish-thegn). The French medievalist Alban Gautier states: "Both discifer and dapifer literally mean 'dish-bearer', but in the first case 'dish' should be understood as the disc-shaped object (discus), whereas in the second it refers to the culinary preparation that was inside (dapes)." The Dictionary of Medieval Latin from British Sources (DMLBS) defines discifer as dish-bearer or sewer, and dapifer as an attendant at meals, a sewer or a steward. Historians often translate discifer as seneschal, but Gautier objects that the word seneschal is not recorded in England before the Norman Conquest. According to the twelfth-century chronicler John of Worcester, in 946 King Edmund I was killed trying to protect his dapifer from assault by an outlaw. The editors of John's chronicle and the historian Ann Williams translate dapifer differently. Tenth- and eleventh-century charters are sometimes attested by several dapiferi or disciferi, suggesting teams of officers, whereas the will of Eadred mentions one discðegn and several stigweard (stewards), who may have been the head and his deputies. Butler or cup-bearer in ML is pincerna, OE byrele (or birele, byrle, biriele). An officer in charge of drinks was generally described as a pincerna and one in charge of food as a discifer or dapifer, and Gautier calls them "officers of the mouth". ## Role Royal feasts played an important part in consolidating community and hierarchy in the Anglo-Saxon elite. Dish-bearers and cup-bearers (butlers), who served at the table, played a major role in helping to make them political successes. Some feasts were compulsory drinking parties, such as the dinner held by Bishop Æthelwold at Abingdon for King Eadred in about 954: the King ordered that the mead should flow plentifully, the doors were locked so that no one could leave, and Northumbrian thegns in the King's entourage got drunk. There may have been teams of dish-bearers and butlers, under the supervision of two of them. They were probably versatile servants of the king, who carried out diverse administrative and military duties as required. In the later Anglo-Saxon period, queens and æthelings (sons of kings) also had dish-bearers. In the early 990s, when King Æthelred the Unready had several infant children, Æfic was dish-bearer to the æthelings, suggesting that they jointly had a household with one dish-bearer. When they grew up, each would have had their own retinue with a dish-bearer, and probably a butler. In 1014 Æthelred's eldest son Æthelstan left eight hides of land and a horse to his discþene in his will.
The dish-bearer of Æthelstan's younger brother Edmund (the future King Edmund Ironside) attested a charter at a time when Æthelstan was still alive, showing that kings' younger sons also had dish-bearers. ## Status Dish-bearers and butlers had a high status in the hierarchy of the court. The offices were held by thegns, who were the third lay rank of the aristocracy. To be a thegn, a man had to at least be a substantial local landowner, and he could be a major magnate owning estates in several counties. He would be expected to perform military and administrative functions. A few were promoted to ealdorman, the top level of the lay aristocracy below the king. According to the historian Simon Keynes, "collectively, the thegns were the very fabric of social and political order". Kings and ealdormen could exploit their positions, "but in the final analysis it was the thegns who counted". The order of attestations in charters was an indication of status, and dish-bearers and butlers usually attested charters above ordinary thegns. In King Eadred's will, the discðegne, hræglðegne and biriele are listed immediately after the ealdormen and bishops. No dish-bearer or butler witnessed charters of two successive kings with mention of his office, suggesting that his position was a personal one which ended with the king's death. The butler and dish-bearer of Edith, wife of Edward the Confessor, remained close to her when the King died and did not move to serve the new queen. ## History The main evidence for the posts of dish-bearer and butler is provided by witness lists to charters. The offices may have been copied from the equivalent Frankish offices, but the sources for the early Anglo-Saxon period are few and problematic and the evidence is too limited to be certain. Between 741 and 809 pincernae attested charters of Kent, the Hwicce and Mercia, and in 785 Eatta attested a charter of Offa of Mercia as "dux et regis discifer" (ealdorman and king's dish-bearer), but all later attestations of dish-bearers and butlers are in West Saxon and English charters. In Wessex in the early ninth century, members of great families sought positions as dish-bearers and butlers, and Alfred the Great's maternal grandfather was a famous pincerna. In Alfred's own reign, the offices could be a step in an illustrious noble career. Alfred's pincerna in 892, Sigewulf, later became an ealdorman and died fighting against the Vikings at the Battle of the Holme in 902. In the tenth century, most dish-bearers and butlers were thegns of lesser status who never rose higher, but some members of leading families held the post before becoming ealdormen. Wulfgar and Odda were dish-bearers and leading thegns under King Æthelstan, and were promoted to ealdorman by his successor, Edmund. In 956/57, Ælfheah, who was later to be ealdorman of central Wessex, attested one charter as discifer and another as cyninges [king's] discðegn. Æthelmær, a leading magnate, founder of two abbeys and descendant of King Æthelred I, was discþen to Æthelred the Unready. Æthelmær's father was Æthelweard, Ealdorman of the Western Provinces, and when he died in 998 Æthelmær did not succeed as ealdorman, perhaps because he preferred to retain his influential position at court. Under Edward the Confessor, members of the families who held most of the earldoms, those of Godwin and Leofric, did not become dish-bearers or butlers, and the positions may have become less attractive to the greatest aristocrats when they were more powerful than the court.
In the 1060s a new rank of staller was created between thegns and earls, and men with this rank could hold the office of dish-bearer.
1,966,095
Voluntary Human Extinction Movement
1,171,882,832
American antinatalist association
[ "1991 establishments in Oregon", "Childfree", "Deep ecology", "Environmental organizations based in the United States", "Human extinction", "Organizations based in Portland, Oregon", "Population concern organizations" ]
The Voluntary Human Extinction Movement (VHEMT) is an environmental movement that calls for all people to abstain from reproduction in order to cause the gradual voluntary extinction of humankind. VHEMT supports human extinction primarily because, in the group's view, it would prevent environmental degradation. The group states that a decrease in the human population would prevent a significant amount of human-caused suffering. The extinctions of non-human species and the scarcity of resources caused by humans are frequently cited by the group as evidence of the harm caused by human overpopulation. VHEMT was founded in 1991 by Les U. Knight, an American activist who became involved in the American environmental movement in the 1970s and thereafter concluded that human extinction was the best solution to the problems facing the Earth's biosphere and humanity. Knight publishes the group's newsletter and serves as its spokesman. Although the group is promoted by a website and represented at some environmental events, it relies heavily on coverage from outside media to spread its message. Many commentators view its platform as unacceptably extreme, while endorsing the logic of reducing the rate of human reproduction. In response to VHEMT, some journalists and academics have argued that humans can develop sustainable lifestyles or can reduce their population to sustainable levels. Others maintain that, whatever the merits of the idea, the human reproductive drive will prevent humankind from ever voluntarily seeking extinction. ## History The Voluntary Human Extinction Movement was founded by Les U. Knight, a graduate of Western Oregon University and high school substitute teacher living in Portland, Oregon. After becoming involved in the environmental movement as a college student in the 1970s, Knight attributed most of the dangers faced by the planet to human overpopulation. He joined the Zero Population Growth organization, and chose to be vasectomised at age 25. He later concluded that the extinction of humanity would be the best solution to the Earth's environmental problems. He believes that this idea has also been held by some people throughout human history. In 1991, Knight began publishing VHEMT's newsletter, known as These Exit Times. In the newsletter, he asked readers to further human extinction by not procreating. VHEMT has also published cartoons, including a comic strip titled Bonobo Baby, featuring a woman who forgoes childbearing in favor of adopting a bonobo. In 1996, Knight created a website for VHEMT; it was available in 11 languages by 2010. VHEMT's logo features the letter "V" (for voluntary) and a picture of the Earth with north at the bottom. ## Organization and promotion VHEMT functions as a loose network rather than a formal organization, and does not compile a list of members. Daniel Metz of Willamette University stated in 1995 that VHEMT's mailing list had just under 400 subscribers. Six years later, Fox News said the list had only 230 subscribers. Knight says that anyone who agrees with his ideology is a member of the movement; and that this includes "millions of people". Knight serves as the spokesman for VHEMT. He attends environmental conferences and events, where he publicizes information about population growth. VHEMT's message has, however, primarily been spread through coverage by media outlets, rather than events and its newsletter. VHEMT sells buttons and T-shirts, as well as bumper stickers that read "Thank you for not breeding". 
In 2018, a supporter of the movement appeared on the popular YouTube channel LAHWF in a video called, "Chatting with a Supporter of the Voluntary Human Extinction Movement". ## Ideology Knight argues that the human population is far greater than the Earth can handle, and that the best thing for Earth's biosphere is for humans to voluntarily cease reproducing. He says that humans are "incompatible with the biosphere" and that human existence is causing environmental damage which will eventually bring about the extinction of humans (as well as other organisms). According to Knight, the vast majority of human societies have not lived sustainable lifestyles, and attempts to live environmentally friendly lifestyles do not change the fact that human existence has ultimately been destructive to the Earth and many of its non-human organisms. Voluntary human extinction is promoted on the grounds that it will prevent human suffering and the extinction of other species; Knight points out that many species are threatened by the increasing human population. James Ormrod, a psychologist who profiled the group in the journal Psychoanalysis, Culture & Society, notes that the "most fundamental belief" of VHEMT is that "human beings should stop reproducing", and that some people consider themselves members of the group but do not actually support human extinction. Knight, however, believes that even if humans become more environmentally friendly, they could still return to environmentally destructive lifestyles and hence should eliminate themselves. Residents of First World countries bear the most responsibility to change, according to Knight, as they consume the largest proportion of resources. Knight believes that Earth's non-human organisms have a higher overall value than humans and their accomplishments, such as art: "The plays of Shakespeare and the work of Einstein can't hold a candle to a tiger". He argues that species higher in the food chain are less important than lower species. His ideology is drawn in part from deep ecology, and he sometimes refers to the Earth as Gaia. He notes that human extinction is unavoidable, and that it is better to become extinct soon to avoid causing the extinction of other animals. The potential for evolution of other organisms is also cited as a benefit. Knight sees abstinence from reproduction as an altruistic choice – a way to prevent involuntary human suffering – and cites the deaths of children from preventable causes as an example of needless suffering. Knight claims that non-reproduction would eventually allow humans to lead idyllic lifestyles in an environment comparable to the Garden of Eden, and maintains that the last remaining humans would be proud of their accomplishment. Other benefits of ceasing human reproduction that he cites include the end of abortion, war, and starvation. Knight argues that "procreation today is de facto child abuse". He maintains that the standard of human life will worsen if resources are consumed by a growing population rather than spent solving existing issues. He speculates that if people ceased to reproduce, they would use their energy for other pursuits, and suggests adoption and foster care as outlets for people who desire children. VHEMT rejects government-mandated human population control programs in favor of voluntary population reduction, supporting only the use of birth control and willpower to prevent pregnancies. 
Knight states that coercive tactics are unlikely to permanently lower the human population, citing the fact that humanity has survived catastrophic wars, famines, and viruses. Though their newsletter's name recalls the suicide manual Final Exit, the idea of mass suicide is rejected, and they have adopted the slogan "May we live long and die out". A 1995 survey of VHEMT members found that a majority of them felt a strong moral obligation to protect the Earth, distrusted the ability of political processes to prevent harm to the environment, and were willing to surrender some of their rights for their cause. VHEMT members who strongly believed that "Civilization [is] headed for collapse" were most likely to embrace these views. However, VHEMT does not take any overt political stances. VHEMT promotes a more extreme ideology than Population Action International, which argues for population reduction but not extinction. However, the VHEMT platform is more moderate and serious than the Church of Euthanasia, which advocates population reduction by suicide and cannibalism. The 1995 survey found that 36% considered themselves members of Earth First! or had donated to the group in the last five years. ## Reception Knight states his group's ideology runs counter to contemporary society's natalism. He believes this pressure has stopped many people from supporting, or even discussing, population control. He admits that his group is unlikely to succeed, but contends that attempting to reduce the Earth's population is the only moral option. Reception of Knight's idea in the mainstream media has been mixed. Writing in the San Francisco Chronicle, Gregory Dicum states that there is an "undeniable logic" to VHEMT's arguments, but he doubts whether Knight's ideas can succeed, arguing that many people desire to have children and cannot be dissuaded. Stephen Jarvis echoes this skepticism in The Independent, noting that VHEMT faces great difficulty owing to the basic human reproductive drive. At The Guardian's website, Guy Dammann applauds the movement's aim as "in many ways laudable", but argues that it is absurd to believe that humans will voluntarily seek extinction. Freelance writer Abby O'Reilly writes that since having children is frequently viewed as a measure of success, VHEMT's goal is difficult to attain. Knight contends in response to these arguments that though sexual desire is natural, human desire for children is a product of enculturation. The Roman Catholic Archdiocese of New York has criticized Knight's platform, arguing that the existence of humanity is "divinely ordained". Ormrod claims that Knight "arguably abandons deep ecology in favour of straightforward misanthropy". He notes that Knight's claim that the last humans in an extinction scenario would have an abundance of resources promotes his cause based on "benefits accruing to humans". Ormrod sees this type of argument as counter-intuitive, arguing that it borrows the language of "late-modern consumer societies". He faults Knight for what he sees as a failure to develop a consistent and unambiguous ideology. The Economist characterizes Knight's claim that voluntary human extinction is advisable due to limited resources as "Malthusian bosh". The paper further states that compassion for the planet does not necessarily require the pursuit of human extinction. Sociologist Frank Furedi also deems VHEMT to be a Malthusian group, classifying them as a type of environmental organization that "[thinks] the worst about the human species". 
Writing in Spiked, Josie Appleton argues that the group is indifferent to humanity, rather than "anti-human". Brian Bethune writes in Maclean's that Knight's logic is "as absurd as it's unassailable". However, he doubts Knight's claim that the last survivors of the human race would have pleasant lives and suspects that a "collective loss of the will to live" would prevail. In response to Knight's platform, journalist Sheldon Richman argues that humans are "active agents" and can change their behavior. He contends that people are capable of solving the problems facing Earth. Alan Weisman, author of The World Without Us, suggests a limit of one child per family as a preferable alternative to abstinence from reproduction. Katharine Mieszkowski of Salon.com recommends that childless people adopt VHEMT's arguments when facing "probing questions" about their childlessness.

Writing in the Journal for Critical Animal Studies, Carmen Dell'Aversano notes that VHEMT seeks to renounce children as a symbol of perpetual human progress. She casts the movement as a form of "queer oppositional politics" because it rejects perpetual reproduction as a form of motivation. She argues that the movement seeks to come to a new definition of "civil order", as Lee Edelman suggested that queer theory should. Dell'Aversano believes that VHEMT fulfills Edelman's mandate because it embodies the death drive rather than ideas that focus on the reproduction of the past.

Although Knight's organization has been featured in a book titled Kooks: A Guide to the Outer Limits of Human Belief, The Guardian journalist Oliver Burkeman notes that in a phone conversation Knight seems "rather sane and self-deprecating". Weisman echoes this sentiment, characterizing Knight as "thoughtful, soft-spoken, articulate, and quite serious". Philosophers Steven Best and Douglas Kellner view VHEMT's stance as extreme, but they note that the movement formed in response to extreme stances found in "modern humanism".

## See also

- Antinatalism
- Carrying capacity
- Childfree
- Deep ecology
- Negative Population Growth
- Rejection of anthropocentrism
- The World Without Us
437,829
St. Johns River
1,169,577,356
The longest river in Florida, United States
[ "American Heritage Rivers", "Bodies of water of Clay County, Florida", "Bodies of water of Indian River County, Florida", "Bodies of water of Putnam County, Florida", "Inlets of Florida", "NJCAA athletics", "North Florida", "Rivers of Brevard County, Florida", "Rivers of Florida", "Rivers of Orange County, Florida", "Rivers of Osceola County, Florida", "Rivers of Polk County, Florida", "Rivers of Seminole County, Florida", "Rivers of Volusia County, Florida", "St. Johns River" ]
The St. Johns River (Spanish: Río San Juan) is the longest river in the U.S. state of Florida and the most significant one for commercial and recreational use. At 310 miles (500 km) long, it flows north and winds through or borders twelve counties. The drop in elevation from headwaters to mouth is less than 30 feet (9 m); like most Florida waterways, the St. Johns has a very slow flow speed of 0.3 mph (0.13 m/s), and is often described as "lazy". Numerous lakes are formed by the river or flow into it, but as a river its widest point is nearly 3 miles (5 km) across. The narrowest point is in the headwaters, an unnavigable marsh in Indian River County. The St. Johns drainage basin of 8,840 square miles (22,900 km<sup>2</sup>) includes some of Florida's major wetlands. It is separated into three major basins and two associated watersheds for Lake George and the Ocklawaha River, all managed by the St. Johns River Water Management District.

Although Florida was the location of the first permanent European settlement in what would become the United States, much of Florida remained an undeveloped frontier into the 20th century. With the growth of population, the St. Johns, like many Florida rivers, was altered to make way for agricultural and residential centers, suffering severe pollution and redirection that has diminished its ecosystem. The St. Johns, named one of 14 American Heritage Rivers in 1998, was number 6 on a list of America's Ten Most Endangered Rivers in 2008. Restoration efforts are underway for the basins around the St. Johns as Florida's population continues to increase.

Historically, a variety of people have lived on or near the St. Johns, including Paleo-Indians, Archaic people, Timucua, Mocama, French, Spanish, and British colonists, Seminoles, slaves and freemen, Florida crackers, land developers, tourists and retirees. It has been the subject of William Bartram's journals, Harriet Beecher Stowe's letters home, and Marjorie Kinnan Rawlings' books. In the year 2000, 3.5 million people lived within the various watersheds that feed into the St. Johns River.

## Geography and ecology

Starting in Brevard County and meeting the Atlantic Ocean at Duval County, the St. Johns is Florida's primary commercial and recreational waterway. It flows north from its headwaters, which lie in the direction of the Lake Wales Ridge at an elevation of only about 30 feet (9.1 m) above sea level. Because of this low elevation drop, the river has a long backwater. It ebbs and flows with tides that pass through the barrier islands and up the channel. Uniquely, it shares the same regional terrain as the parallel Kissimmee River, although the Kissimmee flows south.

### Upper basin

The St. Johns River is separated into three basins and two associated watersheds managed by the St. Johns River Water Management District. Because the river flows in a northerly direction, the upper basin is located in the headwaters of the river at its southernmost point. The river begins in Indian River County as a network of marshes west of Vero Beach in central Florida, at a point aptly named the St. Johns Marsh. The St. Johns River is a blackwater stream, meaning that it is fed primarily by swamps and marshes lying beneath it; water seeps through the sandy soil and collects in a slight valley. The upper basin measures approximately 2,000 square miles (5,200 km<sup>2</sup>); the St. Johns transforms into a navigable waterway in Brevard County.
The river touches on the borders of Osceola and Orange Counties, and flows through the southeast tip of Seminole County, transitioning into its middle basin a dozen miles (19 km) or so north of Titusville. The upper basin of the St. Johns was significantly lowered in the 1920s with the establishment of the Melbourne Tillman drainage project. This drained the St. Johns' headwaters eastward to the Indian River through canals dug across the Ten-Mile Ridge near Palm Bay. As of 2015, these past diversions are being partially reversed through the first phase of the Canal 1 Rediversion project.

The river is at its narrowest and most unpredictable in this basin. Channels are not apparent and are usually unmarked. The most efficient way to travel on this part of the river is by airboat. Approximately 3,500 lakes lie within the overall St. Johns watershed; all are shallow, with maximum depths between 3 and 10 feet (1 and 3 m). The river flows into many of the lakes, which further confuses navigation. Eight larger lakes and five smaller ones lie in the upper basin; one of the first is named Lake Hell 'n Blazes (a name sometimes polished to Lake Helen or Hellen Blazes), referencing oaths yelled by boatmen and fishermen in the early 19th century, frustrated when trying to navigate through floating islands of macrophytes, or muck and weeds, as the islands changed location with the creeping flow. Lakes Washington, Winder, and Poinsett are located further along this stretch of the river. The northernmost points of the upper basin contain the Tosohatchee Wildlife Management Area, created in 1977 to assist with filtration of waters flowing into the larger St. Johns.

Wetlands in the upper and middle basins are fed by rainwater, trapped by the structure of the surrounding land. It is an oxygen- and nutrient-poor environment; what grows usually does so in peat, which is created by centuries of decaying plant material. Water levels fluctuate with the subtropical wet and dry seasons. Rain in central and north Florida occurs seasonally during summer and winter, but farther south rain in winter is rare. All plants in these basins must tolerate water fluctuation, both flooding and drought. Sweetbay (Magnolia virginiana), cypress (Taxodium), and swamp tupelo (Nyssa biflora) trees often find great success in this region on raised land called hammocks. Trees that live in water for long periods usually have buttressed trunks, tangled, braided roots, or protrusions like cypress knees to obtain oxygen when under water, but the majority of plant life is aquatic. Wetland staples include the American white waterlily (Nymphaea odorata), pitcher plants, and Virginia iris (Iris virginica). In the southernmost points of the river, Cladium, or sawgrass, grows in vast swaths of wet prairie that at one time extended into the Everglades. These wetland flora are remarkably successful in filtering pollutants that otherwise find their way into the river.

### Middle basin

For 37 miles (60 km) the river passes through a 1,200-square-mile (3,100 km<sup>2</sup>) basin fed primarily by springs and stormwater runoff. This basin, spreading throughout Orange, Lake, Volusia, and Seminole Counties, is home to the greater Orlando metropolitan area, where two million people live and major tourist attractions are located. The topography of the middle basin varies between clearly distinguishable banks along the river and broad, shallow lakes. Two of the largest lakes in the middle basin are created by the river: Lake Harney and Lake Monroe.
The shallow 9-square-mile (23 km<sup>2</sup>) Lake Harney is fed by the long narrow Puzzle Lake; immediately north is the Econlockhatchee River, which joins the St. Johns and increases its volume to the point where navigation becomes easier for larger boats. The river veers west, touching on Lake Jesup before it empties into Lake Monroe, passing the city of Sanford. It is at this point that the St. Johns' navigable waterway, dredged and maintained by the U.S. Army Corps of Engineers with channel markers maintained by the U.S. Coast Guard, begins. Lake Monroe, a large lake at 15 square miles (39 km<sup>2</sup>) with an average depth of 8 feet (2.4 m), drains a surrounding watershed of 2,420 square miles (6,300 km<sup>2</sup>). Sanford has adapted to the lake by building some of its downtown area on the waterfront; citizens use boat transportation and Sanford's public dock to commute into town.

After leaving Lake Monroe, where the channel is optimally 8 feet (2.4 m) deep and about 100 yards (91 m) wide, the St. Johns meets its most significant tributary in the middle basin, the spring-fed Wekiva River, which discharges approximately 42,000,000 US gallons (160,000,000 L) a day into the St. Johns. Near this confluence are the towns of DeBary and Deltona. Forests surrounding the Wekiva River are home to the largest black bear (Ursus americanus floridanus) population in Florida; several troops of Rhesus monkeys (Macaca mulatta) have adapted to live near the river as well. The monkeys' introduction to Florida is unclear; they were reportedly brought either to serve in backdrop scenes of Tarzan movies filmed around the Silver River in the 1930s, or to lend an air of authenticity to "jungle cruises" provided by an enterprising boat operator around the same time.

Of most vital importance to marshes are invertebrate animals, the foundation of food webs. Amphibious invertebrates such as apple snails (Pomacea paludosa), crayfish, and grass shrimp consume plant material, hastening its decomposition and acting as a food source for fish and birds. Insect larvae use water for breeding, feeding upon smaller copepods and amphipods that live in microscopic algae and periphyton formations. Mosquitoes, which breed in water, are in turn the favorite food of 112 species of dragonflies and 44 species of damselflies in Florida. These animals are water-hardy and adaptable to dry conditions when water levels fluctuate from one season to the next or through drought and flood cycles. Of vertebrates, numerous species of frog, salamander, snake, turtle, and alligator (Alligator mississippiensis) proliferate in marsh waters. Most of these animals are active at night. Frog choruses are overwhelming; during alligator mating season the grunts of bulls join in.

The marshes around the St. Johns River upper basin teem with birds. One study counted 60,000 birds nesting or feeding in the upper basin in a single month. Wading and water birds like the white ibis (Eudocimus albus), wood stork (Mycteria americana), and purple gallinule (Porphyrio martinicus) depend on the water for raising their young: they prey upon small fish and tadpoles in shallow water and puddles in the dry season. In successful seasons, their colonies can number in the thousands, creating a cacophony of calls and fertilizing trees with their droppings.

### Lake George

The river turns north again as it rolls through a 46,000-acre (190 km<sup>2</sup>) basin spreading across Putnam, Lake, and Marion Counties, and the western part of Volusia County.
Slightly north of the Wekiva River is Blue Spring, the largest spring on the St. Johns, producing over 64,000,000 US gallons (240,000,000 L) a day. Florida springs stay at an even temperature of 72 °F (22 °C) throughout the year. Because of this, Blue Spring is the winter home for West Indian manatees (Trichechus manatus latirostris), and they are protected within Blue Spring State Park. Manatees are large, slow-moving herbivorous aquatic mammals whose primary threats are human development and collisions with swiftly moving watercraft. Many parts of the St. Johns and its tributaries are no-wake zones to protect manatees from being critically or fatally injured by boat propellers. Human interaction with manatees in Blue Spring State Park is forbidden.

Bordering to the north of Blue Spring State Park is Hontoon Island State Park, accessible only by boat. In 1955 an extremely rare Timucua totem representing an owl was found buried and preserved in the St. Johns muck off Hontoon Island. The figure may signify that its creators were part of the owl clan. Two more totems, shaped like a pelican and an otter and representing different Timucua clans, were found in 1978 after being snagged by a barge at the bottom of the river; together they are the only totems in North America to have been found outside the Pacific Northwest. River otters (Lutra canadensis) can be found through the length of the St. Johns and its tributaries, living in burrows or in the roots of trees bordering waterways. They eat crayfish, turtles, and small fish, and are active usually at night, playful but shy of human contact.

The St. Johns creeps into the southern tip of Lake George, the second largest lake in Florida at 72 square miles (190 km<sup>2</sup>), 6 miles (9.7 km) wide and 12 miles (19 km) long. The watershed surrounding Lake George expands through 3,590 square miles (9,300 km<sup>2</sup>), lying within Ocala National Forest and Lake George State Forest, which protect an ecosystem dominated by pine and scrub and measure more than 380,000 acres (1,500 km<sup>2</sup>) and 21,000 acres (85 km<sup>2</sup>) in size, respectively. Flatwoods forests dominate the Lake George watershed, with slash pines (Pinus elliottii), saw palmetto (Serenoa repens), and over 100 species of groundcover or herbal plants that grow in poor, sandy soil. Flatwoods pine forests stay relatively dry, but can withstand short periods of flooding. Larger land animals such as wild turkeys (Meleagris gallopavo), sandhill cranes (Grus canadensis), and the largest population of southern bald eagles (Haliaeetus leucocephalus leucocephalus) in the contiguous U.S. find it easier to live in the flatwoods. Typical mammals that live in these ecosystems, such as raccoons (Procyon lotor), opossums (Didelphis virginiana), bobcats (Lynx rufus), and white-tailed deer (Odocoileus virginianus), are ones that prefer dry, flat areas with good ground cover and available nesting sites.

### Ocklawaha River

The Ocklawaha River flows north and joins the St. Johns as its largest tributary, one of significant historical importance. The Ocklawaha (also printed as Oklawaha) drainage basin expands through Orange, Lake, Marion, and Alachua Counties, comprising a total of 2,769 square miles (7,170 km<sup>2</sup>). Ocala, Gainesville, and the northern suburbs of the Orlando metropolitan area are included in this basin.
There are two headwaters for the Ocklawaha: a chain of lakes, the largest of which is Lake Apopka in Lake County, and the Green Swamp near Haines City in Polk County, drained by the Palatlakaha River. The Silver River, fed by one of Florida's most productive springs expelling 54,000,000 US gallons (200,000,000 L) daily, is located about midway along the 96-mile (154 km) Ocklawaha.

During the American Civil War, Confederate Captain John William Pearson named his militia, the Ocklawaha Rangers, after the river. Before the war, Pearson had run a successful health resort in Orange Springs; afterward, the resort declined in popularity as attention shifted to nearby Silver Springs, the source of the Silver River, which at the turn of the 20th century popularized the Ocklawaha. Georgia-born poet Sidney Lanier called it "the sweetest waterlane in the world" in a travel guide he published in 1876. The river gave Marjorie Kinnan Rawlings access to the St. Johns from her homestead at Orange Lake. The region served as a major fishing attraction until a decline in water quality occurred in the 1940s, and since then further degradation of the river and its sources has occurred. In particular, Lake Apopka earned the designation of Florida's most polluted lake following a chemical spill in 1980 that dumped DDE in it. It has experienced chronic algal blooms caused by citrus farm fertilizer and wastewater runoff from nearby farms.

The proliferation of largemouth bass (Micropterus salmoides), black crappie (Pomoxis nigromaculatus), and bluegill (Lepomis macrochirus) in central Florida is a major attraction for fishermen from all over the country. The St. Johns is home to 183 species of fish, 55 of which appear in the main stem of the river. One, the southern tessellated darter (Etheostoma olmstedi), is found only in the Ocklawaha. Some are marine species that either migrate upriver to spawn or have found spring-fed habitats that are high in salinity, such as a colony of Atlantic stingrays (Dasyatis sabina) that live in Lake Washington in the upper basin. Ocean worms, snails, and white-fingered mud crabs (Rhithropanopeus harrisii) have also been found far upriver where tidal influences are rare. In contrast, American eels (Anguilla rostrata) live in the St. Johns and Ocklawaha and spawn in the Sargasso Sea in the middle of the Atlantic Ocean. After a year living in the ocean, many of them find their way back to the St. Johns to live, then, prompted by the phases of the moon, make the return journey to spawn and die.

### Lower basin

For the 101 miles (163 km) from its confluence with the Ocklawaha River to the Atlantic Ocean, the St. Johns lies within the lower basin, draining a total area of 2,600 square miles (6,700 km<sup>2</sup>) in Putnam, St. Johns, Clay, and Duval Counties. Twelve tributaries empty into the river in the lower basin. The St. Johns River widens considerably on the north end of Lake George; between Lake George and Palatka the river ranges between 600 and 2,640 feet (180 and 800 m) wide. Between Palatka and Jacksonville, the river widens further to between 1 and 3 miles (1.6 and 4.8 km). This portion of the river is the most navigable, and shipping is its primary use. The Army Corps of Engineers maintains shipping channels at least 12 feet (3.7 m) deep and 100 feet (30 m) wide. North of Jacksonville, the channels are expanded to 40 feet (12 m) deep and between 400 and 900 feet (120 and 270 m) wide.
The towns and cities along the lower basin of the river are some of the oldest in Florida, and their histories have centered on the river. Both Palatka and Green Cove Springs have been popular tourist destinations in the past. Several smaller locations along the river sprang up around ferry landings, but when rail lines and then Interstate highways were constructed closer to the Atlantic Coast, many of the towns experienced significant economic decline, and ferry landings were forgotten.

The final 35 miles (56 km) of the river's course run through Jacksonville, a city with a population of more than a million. Much of the economic base of Jacksonville depends on the river: 18,000,000 short tons (16,000,000 t) of goods are shipped in and out of Jacksonville annually. Exports include paper, phosphate, fertilizers, and citrus, while major imports include oil, coffee, limestone, cars, and lumber. The Port of Jacksonville generates \$1.38 billion for the local economy and supports 10,000 jobs. The U.S. Navy has two bases in the Jacksonville area: Naval Station Mayport, at the mouth of the river, serves as the second largest Atlantic Fleet operation and home port in the country. Naval Air Station Jacksonville is one of the service's largest air installations, home to two air wings and over 150 fixed-wing and rotary-wing aircraft, and the host for one of only two full-fledged Naval Hospitals remaining in Florida.

Jacksonville, which uses the unofficial nickname of "The River City", has a culture centered on the St. Johns. An annual footrace named the Gate River Run accepts 18,000 participants who travel a course along the river and cross it twice. The largest kingfishing tournament in the U.S. is held on a St. Johns tributary, where sport fishers concentrate on king mackerel (Scomberomorus cavalla), cobia (Rachycentron canadum), dolphin (Coryphaena hippurus), and wahoo (Acanthocybium solandri). The home stadium for the Jacksonville Jaguars faces the river, as does most of the commercial center of downtown. Seven bridges span the St. Johns at Jacksonville; all of them allow tall ships to pass, although some restrict passing times when train or automobile traffic is heavy.

Tides cause seawater to enter the mouth of the St. Johns River and can affect the river's level into the middle basin. As a result, much of the river in Jacksonville is part seawater, making it an estuarine ecosystem. The animals and plants in these systems can tolerate both fresh and salt water, and the fluctuations in saline content and temperatures associated with tidal surges and heavy rainfall discharge. Marine animals such as dolphins and sharks can be spotted at times in the St. Johns at Jacksonville, as can manatees. Fish such as mullet (Mullidae), flounder (Paralichthys lethostigma), and shad (Alosa sapidissima), as well as blue crabs (Callinectes sapidus), migrate from the ocean to freshwater springs upriver to spawn. Although freshwater invertebrates and the algae and periphyton they inhabit form the foundation of food webs in the middle and lower basins, zooplankton and phytoplankton take that role in the estuarine habitat. Mollusks gather at the St. Johns estuary in large numbers, feeding on the bottom of the river and ocean floors. The abundance and importance of oysters (Crassostrea virginica) are apparent in the many middens left by the Timucua, mounds many feet high. Oysters and other mollusks serve as the primary food source of shorebirds.
The large trees that line the river from its source to just south of Jacksonville give way to salt marshes east of the city. Mayport is home to approximately 20 shrimping vessels that use the mouth of the St. Johns to access the Atlantic Ocean.

## Formation and hydrology

### Geologic history

Lying within a coastal plain, the St. Johns River passes through an area that was at one time barrier islands, coastal dunes, and estuary marshes. The Florida Peninsula was created primarily by forces and minerals from the ocean. It lies so low that minor fluctuations in sea levels can have a dramatic effect on its geomorphology. Florida was once part of the supercontinent Gondwana. Lying underneath the visible rock formations is a basement of igneous granite and volcanic composition, beneath a sedimentary layer formed during the Paleozoic era 542 to 251 million years ago. During the Cretaceous period (145 to 66 million years ago), the basement and its sedimentary overlay were further covered by calcium carbonate and by formations, called evaporites, left by the evaporation of water. What covers the peninsula is the result of simultaneous processes of deposits of sands, shells, and coral, and erosion from water and weather. As ocean water has retreated and advanced, the peninsula has been covered with sea water at least seven times.

Waves compressed sands, calcium carbonate, and shells into limestone; at the ocean's edge, beach ridges were created by this deposition. Rivers on a north-south axis, such as the St. Johns, were created by past beach ridges, which were often divided by swales. As ocean water retreated, lagoons formed in the swales, which were further eroded by acidic water. Barrier islands then formed along the Atlantic Coast, enclosing the lagoon with land and creating a freshwater river.

From its origins to approximately the area of Sanford, the St. Johns flows north. Near Sanford it takes a sharp turn west for a few miles, a stretch referred to as the St. Johns River offset, but shortly changes direction to flow north again. Geologists hypothesize that the west-flowing offset may have formed earlier than the north-flowing portions, possibly during the late Tertiary or early Pleistocene, 66 to 12 million years ago. Some fracturing and faulting may also be responsible for the offset. Although seismic activity in Florida is mostly insignificant, several minor earthquakes have occurred near the St. Johns River, caused by the trough created by Pangaean rifting.

### Springs and aquifers

All of Florida's abundant fresh water is the result of precipitation, a portion of which returns to the atmosphere through evapotranspiration, the combined evaporation and transpiration of moisture from plants. As rains fall, most of the water is directed to lakes, streams, and rivers. However, a significant amount of fresh water is held underground but close to the surface in aquifers. A surficial aquifer consisting mostly of clay, shells, and sand lies over a confining layer of denser materials. Wells are drilled in the surficial aquifer, which supplies better-quality water in areas where the deepest aquifer has a high mineral content. Occasionally the confining layer is fractured, allowing water to percolate down and recharge the layer below. The Floridan Aquifer, underneath the confining layer, underlies the entire state and portions of Georgia, Alabama, Mississippi, and South Carolina.
It is particularly accessible in the northern part of Florida, serving as the fresh water source of metropolitan areas from St. Petersburg north to Jacksonville and Tallahassee. Acidic rainwater erodes the limestone and can form caverns. When the overlay of these caverns is particularly thin—less than 100 feet (30 m)—sinkholes can form. Where the limestone or sand-and-clay overlay dissolves over the aquifer and the pressure of the water pushes out, springs form. The upper and middle basins of the St. Johns River are located in a portion of the peninsula where the aquifer system is thinly confined, meaning springs and sinkholes are abundant.

Springs are classified by magnitude according to how much water they discharge, which depends on season and rainfall. The greatest discharge is from first magnitude springs, which emit at least 100 cubic feet (2.8 m<sup>3</sup>) of water per second. Four first magnitude springs feed the St. Johns River: Silver Springs in Marion County, emitting between 250 and 1,290 cubic feet (7.1 and 36.5 m<sup>3</sup>) per second; Silver Glen Spring, straddling Marion and Lake Counties, emitting between 38 and 245 cubic feet (1.1 and 6.9 m<sup>3</sup>) per second; Alexander Springs in Lake County, emitting between 56 and 202 cubic feet (1.6 and 5.7 m<sup>3</sup>) per second; and Blue Spring in Volusia County, emitting between 87 and 218 cubic feet (2.5 and 6.2 m<sup>3</sup>) per second.

### Rainfall and climate

The St. Johns River lies within a humid subtropical zone. In summer months, temperatures range between 74 and 92 °F (23 and 33 °C), and in winter between 50 and 72 °F (10 and 22 °C), although temperatures may drop below freezing approximately a dozen times in winter months. Water temperatures in the river correlate with air temperatures. The average range of water temperatures is between 50 and 95 °F (10 and 35 °C), rising in the summer months. Where the river widens between Palatka and Jacksonville, wind becomes a significant factor in navigation, and both whitecap waves and calm surface waters are common.

Rain occurs more frequently in late summer and early fall. Tropical storms and nor'easters are common occurrences along the Atlantic coast of Florida; the St. Johns River lies between 10 and 30 miles (16 and 48 km) inland, so any storm striking the counties from Indian River north to Duval produces rain that is drained by the St. Johns River. Tropical Storm Fay in 2008 deposited 16 inches (410 mm) of rain in a 5-day period, most of it located near Melbourne. The St. Johns near Geneva in Seminole County rose 7 feet (2.1 m) in four days, setting a record. The river near Sanford rose 3 feet (1 m) in 36 hours. Fay caused severe flooding in the middle basin due not only to the deluge but also to the river's flat gradient. Typically, however, the St. Johns basin receives between 50 and 54 inches (1,300 and 1,400 mm) of rain annually, half of it in summer months. The rate of evapotranspiration corresponds to rainfall, ranging between 27 and 57 inches (690 and 1,450 mm) a year, most of it occurring in the summer.

### Flow rates and water quality

The entire river lies within the nearly flat Pamlico terrace, giving it an overall gradient of about 0.8 inches (2.0 cm) per mile (1.6 km); it is one of the flattest major rivers on the continent.
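As a rough cross-check (illustrative arithmetic, not a figure given in the source), this gradient implies a total fall over the river's roughly 310-mile length of

$$
0.8\ \tfrac{\text{in}}{\text{mi}} \times 310\ \text{mi} = 248\ \text{in} \approx 20.7\ \text{ft} \approx 6.3\ \text{m},
$$

which is consistent with the drop of less than 30 feet (9 m) from headwaters to mouth stated in the lead.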
Its proximity to the ocean in the lower basin causes the river to rise and fall with the tides and affects its salinity. Tides regularly affect water levels as far south as Lake George; when combined with extreme winds, the river's tidal effects can extend to Lake Monroe, 161 miles (259 km) away, and have on occasion reached Lake Harney. Tides typically raise the river level about 1.2 feet (0.37 m) at Jacksonville, decreasing somewhat to 0.7 feet (0.21 m) at Orange Park where the river widens, and increasing back to 1.2 feet (0.37 m) at Palatka as it narrows. As a result of tidal effects, discharge measurements in the lower basin are often inaccurate. However, the estimated rate of discharge between the Ocklawaha River and the center of Jacksonville ranges from 4,000 to 8,300 cubic feet (110 to 240 m<sup>3</sup>) per second. The nontidal discharge at the mouth at Mayport averages 15,000 cubic feet (420 m<sup>3</sup>) per second, but with tides it exceeds 50,000 cubic feet (1,400 m<sup>3</sup>) per second, and following heavy rains combined with tides it can top 150,000 cubic feet (4,200 m<sup>3</sup>) per second. Farther upriver, the discharge rate ranges from 1,030 cubic feet (29 m<sup>3</sup>) per second near Lake Poinsett to 2,850 cubic feet (81 m<sup>3</sup>) per second near DeLand. The confluence of numerous springs, the Econlockhatchee River, and the Wekiva River causes the average discharge to increase by 940 cubic feet (27 m<sup>3</sup>) per second between Lake Harney and DeLand, representing the greatest annual average increase of streamflow along the St. Johns.

As distance from the mouth of the St. Johns increases toward the middle and upper basins, the salinity in the river gradually decreases. Marine water measures 35 parts per thousand (ppt) or more, while fresh water measures below 2 ppt. What ranges in between is characterized as brackish water. Near the center of Jacksonville, salinity averages 11.40 ppt. Farther south at the Buckman Bridge, which joins the south side of Jacksonville to Orange Park, it decreases to 2.9 ppt, and it falls again to 0.81 ppt at the Shands Bridge near Green Cove Springs.

Dissolved oxygen in fresh water is measured to indicate the health of plant and animal life. It enters water through the atmosphere and from aquatic plant photosynthesis, and is affected by water pressure and temperature. Rapid decomposition of organic materials will decrease the amount of dissolved oxygen in the river, as will nutrients added to the water artificially by wastewater treatment runoff or drainage from fertilized agricultural fields. The U.S. Environmental Protection Agency and the State of Florida recommend no less than 5 mg of oxygen per liter. Several locations on the St. Johns or its tributaries reported levels at or below this minimum in the 1990s, including the mouth of the Wekiva River and the St. Johns at the town of Christmas, and in the early 2000s at Blue Spring and Blackwater Creek. Sustained low levels of dissolved oxygen may create algal blooms, which may also cause a further decrease in dissolved oxygen. Like all blackwater streams in Florida, the color of most of the St. Johns is black, owing to the tannins in leaf litter and decaying aquatic plants. Spring-fed streams, on the other hand, are remarkably clear, and visibility is very high even when the river bottom is dozens of feet below.

## Human history

### Pre-Columbian people

Humans arrived on the Florida Peninsula about 12,000 years ago, when the ocean was about 350 feet (110 m) lower than today and the peninsula was double its current size. These earliest people are called Paleo-Indians.
They were primarily hunter–gatherers who followed large game, such as mastodons, horses, camels, and bison. Much of the land was far from water—most fresh water was contained in glaciers and polar ice caps. As a result, Florida was an arid landscape with few trees, dominated by grasslands and scrub vegetation. Around 9,000 years ago, the climate warmed, melting much of the polar ice caps and many glaciers, creating a wetter environment and submerging half the peninsular shelf. Because Paleo-Indians no longer had to travel as far to find water, their camps became more permanent, turning into villages. With evidence of a wide variety of tools constructed around this time, archeologists mark the transition to the Archaic people. The Archaic people made tools from bone, animal teeth, and antlers. They wove fibers from plants such as cabbage palms and saw palmettos. A few burial sites have been excavated—including the Windover Archaeological Site in Brevard County near Titusville—that provide evidence of burial rituals. Archaic peoples interred their dead in shallow peat marshes, which preserved much of the human tissue.

Further climate change between 5,000 and 3,000 years ago led to the Middle Archaic period; evidence suggests that human habitation near the St. Johns River first occurred during this era. Populations of indigenous people increased significantly at this time, and numerous settlements near the St. Johns have been recorded from this era; the banks of the St. Johns and its arteries are dotted with middens filled with thousands of shells, primarily those of Viviparus georgianus—a freshwater snail—and oysters. The advent of regional types of pottery and stone tools made of flint or limestone marked further advancements around 500 BCE. The Archaic people transitioned into settled groups around Florida. From the central part of the state northward along the Atlantic Coast lived the people of the St. Johns culture, named for the most significant nearby natural formation. Around 750 CE, the St. Johns culture learned to cultivate corn, adding to their diet of fish, game, and gourds. Archeologists and anthropologists date this agricultural advancement to coincide with a spread of archeological sites, suggesting that a population increase followed. When European explorers arrived in north Florida, they met the Timucua, numbering about 14,000, the largest group of indigenous people in the region.

The later Seminole people called the river Welaka or Ylacco. These forms may derive from the Creek wi-láko, "big water", a compound usually applied to large rivers that run through lakes; the St. Johns forms and borders numerous lakes. Alternatively, the Seminole name may derive from walaka (from wi-alaka, "water" and "coming"), perhaps a reference to the river's slow discharge and the tidal effects on it. The name is sometimes rendered as "Chain of Lakes" in English.

### Colonial era

Though the first European contact in Florida came in 1513, when Juan Ponce de León arrived near Cape Canaveral, not until 1562 did Europeans settle the north Atlantic coast of the peninsula. Early Spanish explorers named the river Rio de Corientes (River of Currents). The St. Johns River became the first place colonized in the region and its first battleground: when French explorer Jean Ribault erected a monument south of the river's mouth to make the French presence known, it alarmed the Spanish, who had been exploring the southern and western coasts of the peninsula for decades. Ribault was detained after he returned to Europe.
In 1564, René Goulaine de Laudonnière arrived to build Fort Caroline at the mouth of the St. Johns River; the French called the river Rivière de Mai because they had first reached it on May 1. An artist named Jacques LeMoyne documented what he saw among the Timucuan people in 1564, portraying them as physically powerful and not lacking for provisions. Fort Caroline did not last long, though relations with the local Timucua and Mocamas were friendly. The colony was unable to support itself; some of the French deserted. Those who remained were killed in 1565 by the Spanish, led by Pedro Menéndez, when they marched north from St. Augustine and captured Fort Caroline. The river was renamed San Mateo by the Spanish in honor of the Apostle Matthew, whose feast was the following day. Capturing Fort Caroline allowed the Spanish to maintain control of the river.

The French and Spanish continued to spar over who would control the natural resources and native peoples of the territory. The Timucua, who had initially befriended the French, had little incentive to ally with the Spanish because of colonial governor Pedro Menéndez de Avilés' abhorrence of French Protestantism and his view that Timucuan beliefs were "Satanic". By 1573, the Timucua were in outright rebellion, testing the governor's patience and forcing Spanish settlers to abandon farms and garrisons in more interior parts of Florida; the Spanish could not persuade the Timucua to stop attacking them. Over a hundred years later, missionaries had more success, setting up posts along the river. Spanish Franciscan missionaries gave the river its current name based on San Juan del Puerto (St. John of the Harbor), the mission established at the river's mouth following the demise of the French fort. The name first appeared on a Spanish map created between 1680 and 1700.

The Timucua, like other groups of indigenous people in Florida, began to lose cohesion and numbers by the 18th century. The Creeks, a tribe located in modern-day Georgia and Alabama, hastened this decline; in 1702, they joined with the Yamasee and attacked some of the Timucua, forcing the survivors to seek protection from the Spanish, who pressed them into slavery. The Creeks began assimilating other peoples and spread farther south until, by 1765, the British knew them as Seminoles, a term adapted from cimarrones, meaning "runaways" or "wild ones". The Seminoles employed a variety of languages from the peoples the Creeks had assimilated: Hitchiti and Muskogee, as well as Timucua. Between 1716 and 1767, the Seminoles gradually moved into Florida and began to break ties with the Creeks to become a cohesive tribe of their own. The St. Johns provided a natural boundary separating European colonies on the east bank from indigenous lands west of the river.

After Florida came under the Kingdom of Great Britain's jurisdiction in 1763, Quaker father and son naturalists John and William Bartram explored the length of the river while visiting the southeastern United States from 1765 to 1766. They published journals describing their experiences and the plants and animals they observed. Charged by King George III with finding the source of the river, which they called the Picolata or San Juan, they measured its widths and depths and took soil samples as they traveled southward. William returned to Florida from 1773 to 1777 and wrote another journal about his travels, while he collected plants and befriended the Seminoles, who called him "Puc Puggy" (flower hunter).
William's visit took him as far south as Blue Spring, where he remarked on the crystal clear views offered by the spring water: "The water is perfectly diaphanous, and here are continually a prodigious number and variety of fish; they appear as plain as though lying on a table before your eyes, although many feet deep in the water." Bartram's journals attracted the attention of such prominent Americans as James Madison and Alexander Hamilton. The success of these journals inspired other naturalists, such as André Michaux, to explore the St. Johns further; Michaux did so in 1788, sailing from Palatka south to Lake Monroe and giving names to some of the plants described in the Bartrams' journals. Michaux was followed by William Baldwin between 1811 and 1817. Subsequent explorers, including John James Audubon, have carried William's Travels Through North & South Carolina, Georgia, East & West Florida with them as a guide.

In 1795, Florida was transferred back to Spain, which lured Americans with cheap land. A former loyalist to Britain who had left South Carolina during the American Revolutionary War, a planter and slave trader named Zephaniah Kingsley seized the opportunity and built a plantation named Laurel Grove near what is now Doctors Lake, close to the west bank of the St. Johns River, south of where Orange Park is today. Three years later, Kingsley took a trip to Cuba and purchased a 13-year-old Wolof girl named Anna Madgigine Jai. She became his common-law wife and managed Laurel Grove while Kingsley traveled and conducted business. The plantation grew citrus and sea island cotton (Gossypium barbadense). In 1814, they moved to a larger plantation on Fort George Island, where they lived for 25 years, and owned several other plantations and homesteads in what is today Jacksonville, as well as another on Drayton Island at the north end of Lake George. Kingsley later married three other freed women in a polygamous relationship; Spanish-controlled Florida allowed interracial marriages, and white landowners such as James Erwin, George Clarke, Francisco Sánchez, John Fraser, and Francis Richard Jr.—early settlers along the river—were all married to or in extramarital relationships with African women.

### Territorial Florida and statehood

The first years following Florida's annexation to the United States in 1821 were marked by violent conflicts between white settlers and Seminoles, whose bands often included runaway African slaves. The clashes between American and Seminole forces during the establishment of the Florida territory are reflected in the towns and landmarks along the St. Johns named for those who were directly involved. In 1818, even before Florida was under U.S. jurisdiction, Major General Andrew Jackson was responsible for removing the Alachua Seminoles west of the Suwannee River, either killing them or forcing them farther south towards Lake County. Jackson's efforts became known as the First Seminole War and were rewarded when a cattle crossing over a wide portion of the St. Johns near the Georgia border, previously named Cowford, was renamed Jacksonville. The result of Jackson's offensive was the transfer of Florida to the U.S.

Following the Seminole Wars, a gradual increase in commerce and population occurred on the St. Johns, made possible by steamship travel. Steamboats heralded a heyday for the river, and before the advent of local railroads, they were the only way to reach interior portions of the state. They also afforded the citizens of Jacksonville a pastime: watching the boats race one another.
By the 1860s, weekly trips between Jacksonville, Charleston, and Savannah were made to transport tourists, lumber, cotton, and citrus. The soil along the St. Johns was considered especially well suited to producing sweeter oranges.

Florida's involvement in the U.S. Civil War was limited compared to that of other Confederate states, because it had only a fraction of the population of the more developed states. Florida provided materials to the Confederacy by way of steamboats on the St. Johns, although the river and the Atlantic coast were blockaded by the U.S. Navy. One action in Florida during the Civil War was the sinking of the USS Columbine, a Union paddle steamer that patrolled the St. Johns to keep materials from reaching the Confederate Army. In 1864, near Palatka, Confederate forces under the command of Capt. John Jackson Dickison captured, burned, and sank the USS Columbine, making her perhaps the only ship commandeered by the Confederacy. The same year and farther downriver, Confederates again sank a Union boat, the Maple Leaf, which struck a floating keg filled with explosives and settled into the muck near Julington Creek, south of Jacksonville. Part of the shipwreck was recovered in 1994, when it was discovered that many Civil War-era artifacts, including daguerreotypes and wooden matches, had been preserved in the river muck.

Although the Spanish had colonized Florida for two centuries, the state remained the last part of the east coast of the United States to be developed and explored. Following the Civil War, the State of Florida was too far in debt to build roads and rail lines to further its progress. In 1881, Florida Governor William Bloxham appealed directly to a Pennsylvania-based industrialist named Hamilton Disston, initially to build canals to improve steamboat passage through the Caloosahatchee River, and later to drain lands in the central part of the state for agriculture. Disston was also persuaded to purchase 4,000,000 acres (16,000 km<sup>2</sup>) of land in central Florida for \$1 million, which at the time was reported to be the largest purchase of land in human history. Disston was ultimately unsuccessful in his drainage attempts, but his investment sparked the tourist industry and made possible the efforts of railroad magnates Henry Flagler and Henry Plant to construct rail lines down the east coast of Florida and a rail link between Sanford and Tampa. Disston was responsible for creating the towns of Kissimmee and St. Cloud, as well as several others on the west coast of Florida. A New York Times story reporting on Disston's progress in 1883 stated that before Disston's purchase and the subsequent development, the only places worth seeing in Florida were Jacksonville and St. Augustine, with perhaps an overnight trip on the St. Johns River to Palatka; by 1883 tourist attractions had extended 250 miles (400 km) south.

More attention was paid to the St. Johns as the population increased. Florida was portrayed as an exotic wonderland able to cure failing health with its water and citrus, and the region began to be highlighted in travel writings. To relieve his bronchitis, Ralph Waldo Emerson stayed briefly in St. Augustine, calling north Florida "a grotesque region" that was being swarmed by land speculators. Emerson was especially disturbed by the public sale of slaves, which added to his overall distaste. Following the Civil War, however, famed author Harriet Beecher Stowe lived near Jacksonville and traveled up the St.
Johns, writing about it with affection: "The entrance of the St. Johns from the ocean is one of the most singular and impressive passages of scenery that we ever passed through: in fine weather the sight is magnificent." Her memoir Palmetto Leaves, published in 1873 as a series of her letters home, was very influential in luring northern residents to the state.

One unforeseen consequence of more people coming to Florida proved to be an overwhelming problem. Water hyacinths, possibly introduced in 1884 by Mrs. W. W. Fuller, who owned a winter home near Palatka, grow so densely that they are a serious invasive species. By the mid-1890s, the purple-flowered hyacinths had spread across 50,000,000 acres (200,000 km<sup>2</sup>) of the river and its arteries. The plants impede navigation and fishing and prevent sunlight from reaching the depths of the river, affecting both plant and animal life. The government of Florida found the plants so vexing that it spent almost \$600,000 between 1890 and 1930 in an unsuccessful bid to rid the creeks and rivers of north Florida of them.

### Land boom

An Englishman named Nelson Fell, persuaded by Disston's advertisements to make his fortune in Florida, arrived in the 1880s. An engineer by trade, Fell purchased 12,000 acres (49 km<sup>2</sup>) near Lake Tohopekaliga to create a town named Narcoossee, which had a population of more than 200 English immigrants by 1888. A spate of poor luck and tense British-American relations followed, prompting Fell to spend some years investing in infrastructure in Siberia, but he returned in 1909 with ideas of developing wetlands in central Florida. He was further encouraged by Governor Napoleon Bonaparte Broward's political promises, made during his 1904 campaign, to drain the Everglades. In 1910 Fell purchased 118,000 acres (480 km<sup>2</sup>) of land for \$1.35 an acre and started the Fellsmere Farms Company, which in 1911 began draining the St. Johns Marsh and sending the water into the Indian River Lagoon, promoting its engineered canals and other structures as wondrously efficient in providing land on which to build a massive metropolis. Some progress was made initially, including the establishment of the town of Fellsmere, where land was sold for \$100 an acre, but sales lagged because of a scandal involving land-sale fraud and faulty drainage reports from the Everglades. The company then found itself short of funds due to mismanagement. Torrential rains ruptured the newly constructed levees and dikes and forced the company into receivership by 1916. Fell left Florida for Virginia in 1917.

Marjorie Kinnan Rawlings used the St. Johns as a backdrop in her books South Moon Under and The Yearling, and in several short stories. In 1933 she took a boat trip along the St. Johns with a friend. In the upper basin, she remarked on the difficulty of determining direction due to the river's ambiguous flow, and wrote in a chapter titled "Hyacinth Drift" in her memoir Cross Creek that she had the best luck watching the way the hyacinths floated. Rawlings wrote, "If I could have, to hold forever, one brief place and time of beauty, I think I might choose the night on that high lonely bank above the St. Johns River."

Florida in the 20th century experienced a massive migration into the state. Undeveloped land sold well, and drainage to reclaim wetlands often went unchecked and was often encouraged by the government. The St. Johns headwaters decreased in size from 30 square miles (78 km<sup>2</sup>) to one square mile (2.6 km<sup>2</sup>) between 1900 and 1972.
Much of the land was reclaimed for urban use, but agricultural needs took their toll as fertilizers and runoff from cattle ranching washed into the St. Johns. Without wetlands to filter the pollutants, the chemicals stayed in the river and flushed into the Atlantic Ocean. Boaters destroyed the floating islands of muck and weeds in the upper basin with dynamite, causing the lakes to drain completely.

What could have been the most serious human impact on nature in central Florida was the Cross Florida Barge Canal, an attempt to connect the Gulf and Atlantic coasts of the state by channeling the Ocklawaha River, first authorized in 1933. The canal was intended to be 171 miles (275 km) long, 250 feet (76 m) wide, and 30 feet (9.1 m) deep. Canal construction ranked among the state's top engineering priorities, and by 1964 the U.S. Army Corps of Engineers had begun construction on the Cross Florida Barge Canal. Flood control was the primary impetus behind its construction, though the broader reasoning and feasibility of the project remained unclear. The Army Corps of Engineers was also constructing hundreds of miles of canals in the Everglades at the same time, and by the 1960s it was being accused of wasting tax money on unnecessary construction projects. In 1969 the Environmental Defense Fund filed suit in federal court to stop construction on the canal, citing irreparable harm that would be done to Florida waterways and the Floridan Aquifer, central and north Florida's fresh water source. A separate canal, the St. Johns-Indian River Barge Canal, was planned to link the river with the Intracoastal Waterway; the project never broke ground and was canceled soon after the Cross Florida Barge Canal was suspended.

### Restoration

When steamboats were superseded by the railroad, the river lost much of its significance to the state. Migrants to Florida settled primarily south of Orlando, adversely affecting the natural order of wetlands there. Within the past 50 years, however, urban areas in the northern and central parts of the state have grown considerably. In the upper basin, the population increased by 700 percent between 1950 and 2000, and it is expected to grow by another 1.5 million people by 2020. Nitrates and phosphorus used as lawn and crop fertilizers wash into the St. Johns. Broken septic systems and seepage from cattle grazing lands create pollution that also finds its way into the river. Storm water washes from street drains directly to the river and its tributaries: in the 1970s, the Econlockhatchee River received 8,000,000 US gallons (30,000,000 L) of treated wastewater every day. Wetlands were drained and paved over, leaving them unable to filter pollutants from the water, a problem made worse by the river's own slow discharge. Algal blooms, fish kills, and deformations and lesions on fish occur regularly in the river from Palatka to Jacksonville. Although most of the pollutants in the river wash in from its southern reaches, the Jacksonville area produces approximately 36 percent of those found in the lower basin.

The State of Florida implemented a program named Surface Water Improvement and Management (SWIM) in 1987 to assist with river cleanups, particularly with nonpoint source pollution, or chemicals that enter the river by soaking into the ground, as opposed to direct piped dumping. SWIM assists local jurisdictions with purchasing land for wetlands restoration. The St.
Johns River Water Management District (SJRWMD) is charged by the Florida Department of Environmental Protection (DEP) with restoring the river. The first step in restoration, particularly in the upper basin, is the public purchase of lands bordering the river; ten reserves and conservation areas have been established for this purpose around the St. Johns headwaters. Around Lake Griffin in the Ocklawaha Chain of Lakes, the SJRWMD has purchased 6,500 acres (26 km<sup>2</sup>) of land that was previously used for muck farming. More than 19,000 acres (77 km<sup>2</sup>) have been purchased along Lake Apopka to restore its wetlands, and the SJRWMD has removed nearly 15,000,000 pounds (6,800,000 kg) of gizzard shad (Dorosoma cepedianum), a fish species that stores phosphorus and adds to algae problems. The SJRWMD has also set minimum levels for the lakes and tributaries in the St. Johns watersheds to monitor permitted water withdrawals and declare water shortages when necessary.

To assist with river cleanup and secure the associated funds for improving water quality in the St. Johns, Mayor John Delaney of Jacksonville began a campaign in 1997 to have the river named an American Heritage River. The designation by the Environmental Protection Agency is intended to coordinate efforts among federal agencies to improve natural resource and environmental protection, economic revitalization, and historic and cultural preservation. The campaign was controversial, as the Republican mayor defended asking for federal government assistance, writing "Other rivers have relied heavily on federal help for massive environmental clean-ups. It's the St. Johns' turn now." Twenty-two towns along the St. Johns and environmental, sporting, recreation, boating, and educational organizations also supported its designation, but several prominent Republican politicians expressed concerns over increased federal regulations and restrictions on private property ownership along the river; the Florida House of Representatives passed a resolution asking President Bill Clinton not to include the St. Johns. Despite this, Clinton designated the St. Johns as one of only 14 American Heritage Rivers out of 126 nominated in 1998, citing its ecological, historic, economic, and cultural significance.

The continuing increase of population in Florida has caused urban planners to forecast that the Floridan Aquifer will no longer be able to sustain the people living in north Florida. By 2020, 7 million people are predicted to live in the St. Johns basins, double the number living there in 2008. Proposals to draw 155,000,000 US gallons (590,000,000 L) a day of fresh water from the St. Johns, and another 100,000,000 US gallons (380,000,000 L) from the Ocklawaha River, are controversial, prompting a private organization named St. Johns Riverkeeper to nominate it for the list of the Ten Most Endangered Rivers compiled by an environmental watchdog group named American Rivers. In 2008, it was listed as \#6, which was met with approval from Jacksonville's newspaper, The Florida Times-Union, and skepticism from the SJRWMD.

The St. Johns River is under consideration as an additional water source to meet growing public water needs. In 2008, the river's Water Management District undertook a Water Supply Impact Study of the proposed water withdrawals and asked the National Research Council to review the scientific aspects of the study as it progressed.
This resulted in a series of four reports that assessed the impact of water withdrawal on river level and flow, reviewed potential impacts on wetland ecosystems, and presented overall perspectives on the Water Management District study. The National Research Council found that, overall, the District performed a competent job in relating predicted environmental responses, including their magnitude and general degree of uncertainty, to the proposed range of water withdrawals. However, the report noted that the District's final report should acknowledge such critical issues as future sea-level rise, population growth, and urban development. Although the District predicted that changes in water management would produce increases in water levels and flows that exceed the effects of the proposed surface water withdrawals, these predictions carry high uncertainties. The report also noted concerns about the District's conclusion that the water withdrawals will have few deleterious ecological effects. This conclusion was based on the model findings that increased flows from upper basin projects and from changes in land use (increases in impervious areas) largely compensated for the impacts of water withdrawals on water flows and levels. Although the upper basin projects are positive insofar as they will return land to the basin (and water to the river), the same cannot be said about increased urban runoff, the poor quality of which is well known. ## See also - List of lakes of the St. Johns River - List of crossings of the St. Johns River - List of Florida rivers - List of rivers of the Americas by coastline - South Atlantic-Gulf Water Resource Region
61,676,750
Meteorological history of Hurricane Dorian
1,171,668,400
null
[ "Hurricane Dorian", "Meteorological histories of individual tropical cyclones" ]
Hurricane Dorian was the strongest hurricane to affect The Bahamas on record, causing catastrophic damage on the Abaco Islands and Grand Bahama in early September 2019. The cyclone's intensity, as well as its slow forward motion near The Bahamas, broke numerous records. The fifth tropical cyclone, fourth named storm, second hurricane, and first major hurricane of the 2019 Atlantic hurricane season, Dorian originated from a westward-traveling tropical wave that departed from the western coast of Africa on August 19. The system organized into a tropical depression and later a tropical storm, both on August 24. The newly formed Dorian strengthened only gradually over the next few days because of dry air and vertical wind shear. On August 27, Dorian made landfall in Barbados and St. Lucia before entering the Caribbean. Dorian's structure was seriously disrupted after encountering the mountains of St. Lucia, causing the system's center to reform north of its previous location. Moving farther north and east than anticipated, Dorian passed east of Puerto Rico on August 28. Simultaneously, relaxing wind shear and warm sea surface temperatures allowed Dorian to become a Category 1 hurricane as it moved over the United States Virgin Islands. Intensification temporarily stagnated on August 29 before a spurt of rapid deepening began on August 30. During this time, the hurricane turned west-northwestward, then westward, as a ridge built in the subtropics to the north. Dorian achieved Category 5 intensity – the highest classification on the Saffir–Simpson scale – on September 1. The system reached peak intensity later that day, with winds of 185 mph (295 km/h) and a central pressure of 910 mbar (hPa; 26.87 inHg) while making landfall on Elbow Cay in The Bahamas. Dorian weakened steadily throughout September 2; the storm's forward motion slowed to a crawl while it was crossing over Grand Bahama. The system fell below major hurricane status on September 3, as it began to accelerate northwards. On September 5, Dorian briefly reintensified into a Category 3 hurricane, as it traversed the warm waters of the Gulf Stream. Increasing wind shear weakened Dorian once again, as it turned northeast and approached the Outer Banks. On September 6, Dorian made landfall on Cape Hatteras as a low-end Category 2 hurricane. As Dorian became increasingly influenced by the westerlies, it transitioned into a post-tropical cyclone on September 7 just before passing over Nova Scotia. It then became fully extratropical the next day over the Gulf of Saint Lawrence and was absorbed by a larger extratropical cyclone on September 9. ## Origins and track through the Lesser Antilles Hurricane Dorian originated from a large tropical wave – an elongated trough of low air pressure – that departed from the western coast of Africa on August 19, 2019. Around that time, much of the wave's convection, or thunderstorm activity, was located inland near Guinea and Senegal rather than close to its center. Thunderstorms in the northern portion of the wave were limited by an abundance of Saharan dust in the region. While the wave traveled westward across the low latitudes of the Atlantic, it lost most of its convection before a low-pressure area developed on August 22. Despite being located in an area of moderate vertical wind shear, the system continued to increase in organization. The National Hurricane Center (NHC) initially predicted slow development on August 23 as the system continued westward. 
However, the system organized into a tropical depression at 06:00 UTC on August 24, while approximately 805 mi (1,295 km) east-southeast of the island of Barbados. Early in the depression's existence, its southeastern outflow was restricted due to moderate easterly wind shear. The system was upgraded into Tropical Storm Dorian at 18:00 UTC after developing a 10-mile (15 km) wide eye-like feature at its mid-levels and banding features that wrapped around it. Dorian initially intensified while it was located in an environment of warm sea surface temperatures. This trend all but ceased over the next few days as a result of wind shear, as well as abundant mid-level dry air. With little change in intensity, Dorian made landfall over Barbados around 01:30 UTC on August 27 with maximum sustained winds of 50 mph (85 km/h), entering the Caribbean. At that time, composite radar showed that the system lacked a strong inner core. The tropical storm made its second landfall on St. Lucia around 11:00 UTC at the same intensity; the mountainous terrain of the island disrupted the low-level circulation of the system. Soon after, Dorian's center re-developed farther north. Meanwhile, the storm's convection fluctuated as the system continued to be affected by dry air and interacted with the Leeward and Windward islands. Dorian briefly developed a 10-mile (15 km) wide eye on radar on a couple occasions, but it quickly eroded because of mid-level dry air incursions. Dorian soon took a more northwesterly direction as a result of a weakness in a ridge, caused by a mid- to upper-level low (cold-core cyclone) located to the north of the island of Hispaniola. Although Dorian was initially forecast to make landfall on Hispaniola and subsequently weaken or dissipate over the island, the system's track shifted east of Puerto Rico by August 28 as a result of northerly directional change, as well as the center reformation. Dorian's structure began to improve on the same day, with banding features becoming more prominent and a partial eyewall forming. As Dorian strengthened into a Category 1 hurricane on the Saffir–Simpson scale, it made landfall over St. Croix and St. Thomas in the United States Virgin Islands at 15:30 UTC and 18:00 UTC, respectively. Meanwhile, the cloud pattern of the tropical cyclone was continuing to increase in organization, with an eye becoming apparent on satellite imagery. The hurricane progressed northwestward under the influence of flow between an upper-level low over the Straits of Florida and the Atlantic subtropical ridge. Additional strengthening was forecast as the storm would remain in a favorable environment with an increasingly moist mid-level, with sea temperatures near or over 84 °F (29 °C), and low wind shear. Even though the storm had strengthened and developed a more defined eye, dry air and southwesterly wind shear caused Dorian's intensification to temporarily stagnate before decreasing again. Later, on August 29, a NOAA Hurricane Hunter aircraft reported the presence of concentric eyewalls, indicating that an eyewall replacement cycle had commenced. The aircraft also discovered that the central pressure had fallen. The plane did not find any stronger winds as Dorian continued to track towards the northwest. ## Rapid intensification and landfalls in The Bahamas By the morning of August 30, Dorian completed the eyewall replacement cycle and resumed its intensification trend. 
The upper-level low steering the hurricane retreated to the south, while the Atlantic subtropical ridge built westward. This caused Dorian to track west-northwest into a highly favorable environment characterized by low wind shear, high relative humidity, and sea surface temperatures of 84 °F (29 °C). The storm then moved west, straight towards the northwestern Bahamas. Dorian reached major hurricane intensity at 18:00 UTC on August 30 about 445 mi (715 km) east of the northwestern Bahamas. Later that evening, the eye of Dorian cleared out and became surrounded by a ring of deep convection. Meanwhile, a burst of lightning activity occurred in the northwest eyewall, heralding further intensification. A Hurricane Hunter aircraft revealed that Dorian rapidly intensified to Category 4 status at 00:00 UTC on August 31, with surface winds measured to be near 130 mph (215 km/h). The satellite presentation of Dorian continued to improve, with the eye increasing in definition and stabilizing at a diameter of about 14 miles (23 km). The eye also began to demonstrate a pronounced stadium effect, where the clouds of the eyewall curved outward from the surface with height; this is a feature at times seen in intense tropical cyclones. At 06:00 UTC on September 1, Dorian became a Category 5 hurricane with winds of 165 mph (270 km/h). Around 12:00 UTC, the NHC estimated the one-minute sustained winds to have reached 180 mph (285 km/h), making Dorian the strongest hurricane to impact The Bahamas on record. Just before 15:00 UTC, a Hurricane Hunter aircraft in Dorian's eyewall measured flight-level winds of 183 mph (294 km/h) and its stepped frequency microwave radiometer recorded winds of up to 196 mph (315 km/h), while a dropsonde released by the aircraft recorded a surface wind gust of 203 mph (326 km/h). Further deepening occurred over the next couple of hours, with maximum sustained winds reaching 185 mph (295 km/h) and the minimum central pressure dropping to 910 mbar (hPa; 26.87 inHg), representing Dorian's peak intensity. At this strength, Dorian made landfall over Elbow Cay of the Abaco Islands at 16:40 UTC September 1, becoming the strongest hurricane to make landfall in The Bahamas in modern records. As Dorian crossed the Abaco Islands, a ridge to the north of the hurricane weakened. The steering flow hence diminished, causing Dorian to decelerate. Creeping slowly westwards, Dorian weakened slightly before making landfall near South Riding Point, Grand Bahama with winds of 180 mph (285 km/h) at 02:15 UTC on September 2. The system moved off the north coast of Grand Bahama six hours later, still as a Category 5 hurricane, albeit with a larger eye and lower winds because of land interaction and an upwelling of cooler waters beneath the system. These factors, coupled with Dorian's slow motion, caused steady weakening over the next couple of days, with Dorian dropping to Category 3 status at 06:00 UTC September 3. Through this time, Dorian essentially stalled just north of Grand Bahama – from 12:00 UTC September 2 to 12:00 UTC September 3, Dorian traveled a distance of just 30 miles (48 km). As a result, the entire island of Grand Bahama experienced winds of at least tropical-storm-force for three consecutive days. Parts of Grand Bahama experienced the eyewall for more than 25 hours, including 5 hours while Dorian was at Category 5 intensity; meanwhile Halls Point spent over 11 hours within Dorian's eye. 
During this period, Dorian brought an estimated 3.0 ft (0.9 m) of rain to The Bahamas, along with a storm surge of over 20 ft (6.1 m). ## Northward turn, extratropical transition, and dissipation Dorian began a northwestward motion late on September 3 towards the eastern coast of the United States, as an eastward-moving mid-level trough over the Eastern United States pulled Dorian to the north. At the same time, the eye had become cloud-filled, larger, and more ragged. Data collected by reconnaissance aircraft and buoys indicated that the wind field of the storm was expanding. Dorian slowly weakened over the next day after entering an area of cooler sea surface temperatures and high wind shear. The distinctness of the storm's eye waxed and waned as the system continued to pick up forward speed. Soon after, clouds in the eyewall began to cool as Dorian began to restrengthen, facilitated by the warm waters of the Gulf Stream. After the eye cleared and became surrounded by deep convection, the hurricane reached its secondary peak intensity as a 115-mile-per-hour (185 km/h) Category 3 hurricane at 00:00 UTC on September 5, while located off the coast of Georgia. After about 12 hours, Dorian began to gradually weaken as it traveled along the South Carolina coast. Rapidly accelerating northeastward, Dorian made landfall over Cape Hatteras, North Carolina, as a Category 2 hurricane, at approximately 12:30 UTC on September 6, with 100 mph (155 km/h) winds and a pressure of 957 mbar (28.26 inHg); North Carolina mainly experienced Category 1 winds as the peak winds occurred primarily offshore. Despite being significantly weaker, the storm still possessed a well-defined eye surrounded by deep convection. Early on September 7, Dorian began to undergo the transition to an extratropical cyclone. The eye completely disappeared from satellite imagery as the storm began to take on a more asymmetric structure. The hurricane's structure degraded due to strong southwesterly shear, with most of its convection displaced to the north and east of the center. Soon after, cold air clouds began to entrain on Dorian's southwestern side, as the storm connected with a warm front that was developing to the northeast. Dorian became a post-tropical cyclone around 18:00 UTC on September 7 after losing much of its tropical characteristics. Despite this, the NHC opted to continue issuing advisories on the system, due to the threat it posed to Atlantic Canada. At the time, an ASCAT pass showed a region of 90–100 mph (150–155 km/h) winds in the storm. Dorian made landfall near Sambro Creek in Nova Scotia, Canada, at approximately 22:00 UTC. The storm gradually turned towards the east as it became embedded within the extratropical westerly flow. The storm was deemed to have fully completed its extratropical transition over the Gulf of Saint Lawrence by 06:00 UTC on September 8. Dorian's cloud pattern gradually decayed, with the storm weakening to tropical storm-force by 18:00 UTC. By September 9, virtually no significant convection existed near the center of Dorian. Soon after, the cyclone passed the Strait of Belle Isle and entered the northern Atlantic; sea surface temperatures were less than 50 °F (10 °C) in the nearby Labrador Sea. Dorian was ultimately absorbed by a larger extratropical cyclone at 06:00 UTC on September 9, while located northeast of Newfoundland. ## Records Dorian broke numerous intensity records after it reached its peak intensity. 
Dorian is tied with the 1935 Labor Day hurricane, Gilbert, and Wilma for the second-highest wind speed of an Atlantic hurricane at 185 mph (295 km/h), just below Allen's record wind speed of 190 mph (305 km/h). This intensity made Dorian the strongest Atlantic hurricane on record outside of the tropics. The cyclone's 185 mph (295 km/h) landfall on the Abaco Islands was the strongest on record for The Bahamas. Dorian was the second Category 5 hurricane to make landfall on the Abaco Islands on record, the other having occurred in 1932. Additionally, Dorian is tied with the 1935 Labor Day hurricane for the highest sustained winds at landfall in an Atlantic hurricane. With Dorian, 2019 became the fourth consecutive year to produce at least one Category 5 hurricane. Dorian's slow forward motion near The Bahamas also set several records. During one 24-hour period, Dorian moved slower than any other major hurricane since Hurricane Betsy in 1965. The storm also impacted a single land area as a Category 5 hurricane for the longest duration recorded in the Atlantic basin, with portions of Dorian's eyewall striking Great Abaco Island and Grand Bahama for about 22 hours. ## See also - Tropical cyclones in 2019 - List of Category 5 Atlantic hurricanes - 1932 Bahamas hurricane – A Category 5 hurricane that caused damage throughout The Bahamas - Hurricane Joaquin (2015) – Another slow-moving hurricane that devastated The Bahamas at Category 4 strength - Hurricane Matthew (2016) – A hurricane that made landfall on Grand Bahama at Category 4 status, causing extensive damage
14,482,360
Keith Johnson (cricket administrator)
1,121,237,012
Australian cricket administrator (1894–1972)
[ "1894 births", "1972 deaths", "Australian Army soldiers", "Australian military personnel of World War I", "Cricket managers", "Members of the Order of the British Empire", "Royal Australian Air Force officers", "Royal Australian Air Force personnel of World War II", "The Invincibles (cricket)" ]
Keith Ormond Edley Johnson MBE (28 December 1894 – 19 October 1972) was an Australian cricket administrator. He was the manager of the Australian Services cricket team in England, India and Australia immediately after World War II, and of the Australian team that toured England in 1948. The 1948 Australian cricket team earned the sobriquet The Invincibles by being the first side to complete a tour of England without losing a single match. Johnson joined the Australian Board of Control for International Cricket in 1935 as a delegate for New South Wales and served in the Royal Australian Air Force during World War II, performing public relations work in London. With the Allied victory in Europe, first-class cricket resumed and Johnson was appointed to manage the Australian Services team, which played England in a series of celebratory matches known as the Victory Tests to usher in the post-war era. The series was highly successful, with unprecedented crowds raising large amounts for war charities. As a result, further matches were scheduled and Johnson's men toured British India and Australia before being demobilised. Johnson's administration was regarded as a major factor in the success of the tour. In 1948, Johnson managed the Australian tour of England, which again brought record profits and attendances, in spite of Australia's overwhelming dominance. Johnson's management of the tour—which generated large amounts of media attention—was again lauded. However, in 1951–52, the Australian Board of Control excluded Sid Barnes from the team for "reasons other than cricket". Barnes took the matter to court, and in the ensuing trial, his lawyer embarrassed Johnson, who contradicted himself several times under cross-examination. Following the trial, Johnson resigned from the board and took no further part in cricket administration. ## Early years and pre-World War II career Johnson was born on 28 December 1894 in the inner-Sydney suburb of Paddington. He later moved to the north shore suburb of Mosman, where he worked as a mechanic before serving briefly in the 3rd Field Company Army Engineers. On 8 October 1916, in the middle of World War I, Johnson enlisted in the First Australian Imperial Force as a gunner in the 5th Field Artillery Brigade. His unit left Sydney on 10 February 1917 and headed for Europe. He returned to Australia on 1 July 1919. After the end of World War I, Johnson married Margaret. Johnson joined the Australian Board of Control for International Cricket in 1935 as a delegate for the New South Wales Cricket Association, having been affiliated with the Mosman Cricket Club in Sydney Grade Cricket. He had attended the annual general meeting in September 1934 as a proxy for Billy Bull, who was travelling back to Australia with the national team, which had been touring England. ## Managerial career ### Australian Services During World War II, Johnson served in the Royal Australian Air Force (RAAF). He enlisted in the RAAF on 13 April 1942 in Sydney. Johnson rose to the rank of flight lieutenant and was deployed to London, where he did public relations work at the RAAF's overseas headquarters. In June 1945, he was appointed as manager of the Australian Services cricket team on its tour of Britain for the Victory Tests, India and Australia from mid-1945 to early 1946. Officially a military unit, the team was commanded by Squadron Leader Stan Sismey of the RAAF, although the on-field captain was Warrant Officer Lindsay Hassett of the Second Australian Imperial Force. 
Wisden Cricketers' Almanack praised Johnson's organisational work in arranging the Services' tour: "A stranger to this country, he found the programme in only skeleton form; and that the tour proved such a success from every point of view was due to his hard work and courtesy." The almanack reprinted Johnson's message of thanks to the English cricket community in full before the team sailed to India. The Victory Tests started in May 1945 between the Australian Services and England in celebration of the Allied victory in Europe. In previous seasons, the English cricket administrator and former captain Pelham Warner had organised matches between the RAAF and various English military teams as an expression of defiance against Nazi air raids, and the Victory Tests were a continuation of this, although the matches were only three days long and did not have Test status. In the First Victory Test, the Australian Servicemen scraped home by six wickets with only two balls and minutes to spare. The last pre-war series between England and Australia in 1938 had been an attritional and hard-nosed contest, but in the afterglow of the war victory, the cricketers played flamboyantly with abandon in front of packed crowds. The attractive, attacking style of play was widely praised by commentators and the match raised £1,935 for war relief charities. England then levelled the series by winning the Second Victory Test at Bramall Lane, Sheffield, in a hard-fought battle, by 41 runs. Australia won the Third Victory Test by four wickets late on the final day and drew the Fourth Victory Test at Lord's. That would have been the end of the series, but because of the record attendance of 93,000 at Lord's, another match was added. England drew the series by winning the Fifth Victory Test in front of another capacity crowd. The Victory Tests were regarded as an outstanding success, with a total attendance of 367,000 and bright and attacking play. Due to the unexpectedly strong success of the Victory Tests, the government of Australia, acting at the urging of Foreign Minister H.V. "Doc" Evatt, ordered the Australian Services to delay their demobilisation. With the team raising so much money for war charities, the government directed them to travel home via India and Ceylon to play further matches, in order to raise more funds for the Red Cross. Johnson found himself in a difficult situation during the Indian leg of the tour. The team—mostly made up of RAAF personnel—had been ill with food poisoning and dysentery, and travelled across the Indian subcontinent by long train journeys. The airmen wanted to travel by air, and threatened to abandon the tour or replace Hassett, an AIF member, with either Keith Carmody or Keith Miller, who were RAAF fighter pilots. However, the standoff was ended when Sismey arranged for a RAAF plane to transport the team. On the playing arena, it was not a happy tour for Johnson and his men. They lost the three-match series against India 1–0 and recorded only one victory, against South Zone, in their nine matches. Johnson's team arrived in Australia towards the end of 1945, but the armed services and Australian Board of Control ordered them to play another series against the various Australian states. Johnson had sought fixtures for his team in Australia, but this was before Evatt had added the matches in the subcontinent. He implored the administrators to recognise that the players were already overworked, but was ignored. 
The Services performed poorly; after playing out consecutive draws against Western Australia and South Australia, they were crushed by an innings by both Victoria and New South Wales, before drawing against Queensland and Tasmania, the smallest state in the country. Johnson was involved in another administrative dispute during the Australian leg of the campaign. Cec Pepper—whom teammates Miller and Dick Whitington regarded as one of the best all rounders in the world and a certainty for Australian Test selection—appealed for leg before wicket against Australian captain Don Bradman in the match against South Australia. The appeal was turned down and Pepper complained to the umpire Jack Scott, prompting Bradman—who was also a member of the Australian Board of Control and the board of the South Australian Cricket Association—to ask Scott whether Pepper's behaviour was acceptable. As he was an employee of the SACA, Scott answered to Bradman, and he lodged a complaint about Pepper to the Australian Board of Control. Pepper was never selected for Australia. Cricket historian Gideon Haigh said that "Johnson was clearly upset by the affair, and also by the failure of the [national] selection panel [Bradman among them] ... to send Pepper, second only to Miller as a cricketer in the Services XI, to New Zealand". Johnson tried to intercede on Pepper's behalf, to no avail, although the other board members claimed that no pressure had been placed on the selectors to exclude Pepper. The home leg of the tour was a poor end to the long and taxing Australian Services campaign. As the military men played poorly in Australia, the national selectors concluded that their achievements against England must have been against weak opposition, and only Hassett and Miller were selected for the Australian tour of New Zealand. Johnson then helped to arrange England's first post-war tour of Australia, in 1946–47. ### 1948 tour Johnson was a late appointment as manager for the 1948 tour of England, taking over from his New South Wales colleague Bill Jeanes, who was secretary of the Australian Board of Control and had managed the previous Australian tour of England in 1938. Jeanes had become increasingly unpopular among the players because of an approach that Haigh called "increasingly officious and liverish". Led by Bradman—widely regarded as the greatest batsman in history—the Australians went through their 34 matches without defeat, earning the sobriquet The Invincibles. They won 25 of their matches, 17 of these by an innings, and crushed England 4–0 in the five Tests, winning most games heavily. Despite the Australians' domination of the local teams, the English public showed unprecedented levels of interest in the cricket. Record gate takings were registered at most venues, even when rain affected the matches, and the record attendance for a Test match in England was broken twice, in the Second Test at Lord's and the Fourth Test at Headingley. The 158,000 spectators that watched the proceedings at Headingley remain a record for a Test on English soil. As a result, Australia made £82,671 from the tour, resulting in a profit of £54,172. The popularity of the team meant that they were inundated with invitations for social appointments with government officials and members of the royal family, and they had to juggle a plethora of off-field engagements, with 103 days of scheduled cricket in the space of 144 days. 
As a result, Johnson was flooded with phone calls and letters, which he had to attend to by himself, as he was the only administrator among the touring party. Bradman later said he was worried that Johnson's tireless work would cause health problems because he "worked like a slave day and night" and that "it was the tribute to a bulldog determination to see the job through". The journalist Andy Flanagan said that Johnson was "'on the ball' every minute of the waking day, and it would be safe to say half the night too." Johnson was again praised by Wisden in its report on the 1948 tour. "Indebtedness for the smooth running of the tour and general harmony of the team was due largely to the manager, Mr Keith Johnson, hard-working and always genial," it said. "Paying tribute to the loyalty of the players, Mr Johnson said there had not been a discordant note in the party throughout the tour." Flanagan labelled Johnson as "conscientious, reserved, dignified, extraordinarily industrious and scrupulously trustworthy". He went on to say that "No organization, no body corporate, no individual could ever hope to have a more loyal, a more devoted, or a more conscientious officer...Although to the world in general all the praise and glory for the unequalled triumph the tour proved to be goes to Sir Donald Bradman, only those who travelled with the team will ever have a proper conception of the part played in that triumph by Keith Johnson." Bradman said that Johnson "created friends and goodwill everywhere both for himself and the team, and no side could have wished for a better Manager". On the journey back to Australia, the players presented Johnson with a silver Georgian salver, with their signatures engraved on the memento. In a "farewell message" to England quoted in Wisden, Johnson said that the "most lasting memory" would be the team's visit to Balmoral Castle. Johnson said "We felt we were going into an Englishman's home and into his family heart". "It was difficult to believe that we were being entertained by Royalty. My personal wish would be for everybody in the Empire to spend an hour or so with the King and Queen. It would do them a tremendous amount of good." ### Barnes libel case Johnson's claim of tour harmony and player loyalty in 1948 was thrown into a different light by events less than four years later. The opening batsman Sid Barnes—a core member of the 1948 team—was seeking a return to Test cricket. Barnes was known for being a somewhat eccentric self-promoter. During the 1948 tour, Barnes organised a multitude of business deals while not playing cricket, and avoided paying customs duties on the enormous amount of goods he acquired in Britain by disembarking at Melbourne instead of Sydney. Barnes then made himself unavailable for first-class cricket, preferring to pursue business interests instead, and ridiculed the fee paid for the 1949–50 tour of South Africa. He wrote a column for Sydney's The Daily Telegraph, titled "Like It or Lump It", in which he often lampooned the administration of the game. However, in 1951–52, Barnes made a return to cricket, and sought selection in the national team to play the West Indies during the 1951–52 Australian season. Australia had been unable to find a reliable opener to accompany Barnes's former partner Arthur Morris. Australia's batsmen struggled in the first two Tests, and before the Third Test, Barnes scored 107 against Victoria, putting on 210 in partnership with Morris for New South Wales. 
The Sporting Globe in Melbourne had presciently predicted that the board would object if the selectors chose Barnes. Barnes was duly selected for the Third Test by a panel of three, chaired by his former captain Bradman, but the choice was vetoed by the Australian Board of Control "for reasons outside of cricket". Bradman was one of four board members to support Barnes's selection, while 10 objected, including Johnson. The matches took place and Barnes did not play. He was unable to find out why he had been excluded and was resigned to making an appearance before the board at its next meeting in September 1952 to ask for an explanation. In the meantime, the team was not announced at the scheduled time due to the delay caused by the veto. Journalists deduced the story and Barnes became a cause célèbre for many weeks, missing all the remaining Tests. Speculation abounded as to the nature of his supposed misdeeds. These included jumping the turnstile at a ground when he forgot his player's pass, insulting the royal family, theft from team-mates, drunkenness, stealing a car, parking his car in someone else's space, or that Barnes had lampooned the board in the narration accompanying the home movies he made of the 1948 tour. In later years, a file of unknown authorship regarding Barnes's behaviour was deposited in the NSWCA library. It accused Barnes of allowing young spectators to enter the playing arena to field the ball instead of doing so himself, and of denigrating umpires by cupping his hands over his eyes and showing dissent by implying that they were blind. Barnes continued to score heavily, and during one match, he crossed paths with Johnson, who reportedly apologised to him for the exclusion from the team. However, Johnson advised Barnes to "keep quiet and say nothing" and added that "You will come out of it all right, you will be a certainty for the 1953 trip [to England]". On 24 April 1952, a letter appeared in Sydney's Daily Mirror from Jacob Raith, a baker from Stanmore in Sydney, in response to a letter from Barnes's friend Stacy Atkin, which condemned the board for vetoing Barnes's selection. Raith's letter said that the board must have had good reason to exclude Barnes. > The board is an impartial body of cricket administrators made up of men who have given outstanding service to the game. It must be abundantly clear to all that they would not have excluded Mr Barnes from an Australian XI capriciously and only for some matter of a sufficiently serious nature. In declining to meet his request to publish reasons, the board may well be acting kindly towards him. The Board of Control had previously granted itself the power to exclude a player from the national team "on grounds other than cricket ability" following the poor behaviour of some members of the 1912 team that toured England. Instead, Barnes sued Raith for libel and engaged a top Sydney lawyer, Jack Shand—described by Haigh as "the foremost trial lawyer of his day"— to represent him, with the aim of uncovering the reasons for his exclusion. Bradman felt that Raith's letter was a premeditated set-up to give Barnes a pretext to instigate a trial. The libel trial, held in August 1952, was a sensation, and Johnson, still a member of the board, was the central figure. According to Haigh, "it was effectively the Board, not Raith, in the dock". It emerged very quickly during the trial that Raith had no particular knowledge of the workings of the board. 
A series of administrators came forward to say that Barnes had reportedly misbehaved on the 1948 tour, even though Johnson's official report as manager had made no mention of any disharmony. Aubrey Oxlade, the chairman of the board and one of the four board members who voted to ratify Barnes's selection, said that the batsman's indiscretions were "childish things" and "not serious at all". Later, Frank Cush, another board member who had supported Barnes's inclusion, replied "none at all" when asked if there were any legitimate reasons for excluding Barnes. Selector Chappie Dwyer said "I have a very high opinion of him as a cricketer ... and I have no objection to him as a man". Johnson was called as a witness, and, under questioning from Shand, a different story came out. Johnson agreed that his written report of the 1948 tour had said that the team had behaved "in a manner befitting worthy representatives of Australia" and that "on and off the field their conduct was exemplary". However, in a verbal report, Johnson said he had drawn the board's attention to various misdemeanours by Barnes that, in his opinion, were sufficiently serious to warrant the player's exclusion from future Australian Test sides. Johnson said that Barnes had shown a "general reluctance for anything savouring of authority". The misdeeds included taking pictures as the Australian team was presented to the royal family on the playing arena during the Test match at Lord's, asking permission to travel alone in England (Barnes' family was living in Scotland at the time), and "abducting" twelfth man Ernie Toshack to play tennis during the match at Northampton on a court "300 yards from the pavilion". Under cross-examination, Johnson said that Barnes's taking pictures of the royal family at Lord's was the most serious of these misdemeanours. He admitted he had not known that Barnes had received permission from the MCC and the royal family's protocol chief to take the photos. Shand also established that Barnes had then shown the films to raise money for various charities. He further showed that Barnes had not agitated when reminded of the policy against players meeting with family members on tour. However, Johnson believed that the cumulative effect of the misdeeds "warranted omission from the team" and he saw no problem in the fact that his verbal advice to the board recommending Barnes' exclusion was at odds with the written report on the 1948 tour. Under pressure from Shand, Johnson admitted that "I don't always write what I think". According to Haigh, "Shand effortlessly twisted him [Johnson] inside out". - Shand: Will you admit one of two things: either your report to the Board or your statement that that was your state of mind, that you thought his conduct was so serious as to be left out of the team—one or other is a deliberate lie? - Johnson: No. - Shand: Think what you are saying. One or other was a deliberate and wicked lie. - Johnson: No. - Shand: How could you really have thought that he had behaved in a manner befitting worthy representatives of Australia and on and off the field his conduct was exemplary if you thought his conduct was so serious as to warrant his omission from the team? How do you reconcile those two? - Johnson: After a lot of thought. - Shand: I am not asking for a lot of thought. - Johnson: I have to report on the team as a whole, and that is my opinion. - Shand: You admit that applies to every one of them. - Johnson: Yes. 
- Shand: And yet one of them was guilty of conduct so serious in your opinion as to warrant his omission ... These reports are circulated. - Johnson: Yes. ... - Shand: And it would be a misleading report? - Johnson: Yes. - Shand: You still claim to be a responsible person? - Johnson: Yes. - Shand: You consider it is reasonable to mislead the various state associations? - Johnson: In a case like that, yes. - Shand: Depending on the circumstances you have no misgivings about misleading people? - Johnson: I would not mislead people. - Shand: You did, didn't you—the state bodies, about Barnes? - Johnson: Yes. Shand had nearly reduced Johnson to tears, having previously caused a senior police officer to cry during the hearing of a royal commission the year before. A colleague of Shand's recalled his "capacity of insinuation in tone which could annoy and bring a witness into antagonism". Shand then asked Johnson about a hypothetical case of Barnes being selected in the future. Johnson evaded the matter of whether he would select the player until Shand asked him to assume that "Barnes is in the best possible form and he is the best cricketer in Australia", to which Johnson replied that he would not. On the trial's second day, soon after Barnes had taken to the witness stand, Raith's counsel withdrew his client's defence. It had been his task, he said, to prove a plea that the allegation in Raith's letter was true, and that Barnes had not been excluded capriciously. Counsel remarked that "seldom in the history of libel actions has such a plea failed so completely and utterly." The case ended at that point; Barnes was vindicated, and Raith's counsel issued him with a public apology. According to fellow administrator Alan Barnes, Johnson was a "wide-open type of bloke" who was vulnerable to manoeuvring. Colleagues advised Johnson to hire a lawyer before the case started, but he refused, saying that the libel case was merely a sporting matter, not a legal one. ### Aftermath After the libel trial, Johnson resigned from the board on 9 February 1953, and played no further part in cricket administration in Australia. He also resigned from the NSWCA and withdrew his candidature for the manager's position for the 1953 tour of England. Philip Derriman wrote in his history of the NSWCA that Johnson "may be said to have been a victim of that affair no less than Barnes". Johnson wrote to the Australian Board of Control, ostensibly tendering his resignation on the grounds of difficulties in travelling to meetings, and thanking the other members for their "courtesy, cooperation and help". He was not mentioned in the board minutes again until his death was noted almost two decades later. Johnson retained the support of many of the players: six Victorian members of the 1948 team, including Lindsay Hassett, Ian Johnson and Neil Harvey, wrote to the Herald newspaper in Melbourne expressing their "confidence, respect and affection" for the tour manager. Derriman described Johnson as a "considerate, patient, inoffensive man for whom nothing was too much trouble. As a cricket official, he was efficient and dedicated." Johnson was appointed a Member of the Order of the British Empire for services to cricket in the 1964 Queen's Birthday Honours, and died in 1972 after collapsing when rising to make a speech at a charity lunch in Sydney.
38,463,601
1969 Curaçao uprising
1,170,225,653
Series of riots and protests
[ "1960s in Curaçao", "1969 in the Netherlands Antilles", "1969 labor disputes and strikes", "1969 protests", "1969 riots", "20th-century rebellions", "Articles containing video clips", "June 1969 events in North America", "May 1969 events in North America", "Rebellions in North America", "Willemstad" ]
The 1969 Curaçao uprising (known as Trinta di Mei, "Thirtieth of May", in Papiamentu, the local language) was a series of riots on the Caribbean island of Curaçao, then part of the Netherlands Antilles, a semi-independent country in the Kingdom of the Netherlands. The uprising took place mainly on May 30, but continued into the night of May 31 – June 1, 1969. The riots arose from a strike by workers in the oil industry. A protest rally during the strike turned violent, leading to widespread looting and destruction of buildings and vehicles in the central business district of Curaçao's capital, Willemstad. Several causes for the uprising have been cited. The island's economy, after decades of prosperity brought about by the oil industry, particularly a Shell refinery, was in decline and unemployment was rising. Curaçao, a former colony of the Netherlands, became part of the semi-independent Netherlands Antilles under a 1954 charter, which redefined the relationship between the Netherlands and its former colonies. Under this arrangement, Curaçao was still part of the Kingdom of the Netherlands. Anti-colonial activists decried this status as a continuation of colonial rule but others were satisfied the political situation was beneficial to the island. After slavery was abolished in 1863, black Curaçaoans continued to face racism and discrimination. They did not participate fully in the riches resulting from Curaçao's economic prosperity and were disproportionately affected by the rise in unemployment. Black Power sentiments in Curaçao were spreading, mirroring developments in the United States and across the Caribbean, of which Curaçaoans were very much aware. The Democratic Party dominated local politics but could not fulfill its promise to maintain prosperity. Radical and socialist ideas became popular in the 1960s. In 1969, a labor dispute arose between a Shell sub-contractor and its employees. This dispute escalated and became increasingly political. A demonstration by workers and labor activists on May 30 became violent, sparking the uprising. The riots left two people dead and much of central Willemstad destroyed, and hundreds of people were arrested. The protesters achieved most of their immediate demands: higher wages for workers and the Netherlands Antillean government's resignation. It was a pivotal moment in the history of Curaçao and of the vestigial Dutch Empire. New parliamentary elections in September gave the uprising's leaders seats in parliament, the Estates of the Netherlands Antilles. A commission investigated the riots; it blamed economic issues, racial tensions, and police and government misconduct. The uprising prompted the Dutch government to undertake new efforts to fully decolonize the remains of its colonial empire. Suriname became independent in 1975 but leaders of the Netherlands Antilles resisted independence, fearing the economic repercussions. The uprising stoked long-standing distrust of Curaçao in nearby Aruba, which seceded from the Netherlands Antilles in 1986. Papiamentu gained social prestige and more widespread use after the uprising. It was followed by a renewal in Curaçaoan literature, much of which dealt with local social issues and sparked discussions about Curaçao's national identity. ## Background and causes Curaçao is an island in the Caribbean Sea. It is a country (Dutch: land) within the Kingdom of the Netherlands. In 1969, Curaçao had a population of around 141,000, of whom 65,000 lived in the capital, Willemstad. 
Until 2010, Curaçao was the most populous island and seat of government of the Netherlands Antilles, a country and former Dutch colony composed of six Caribbean islands, which in 1969 had a combined population of around 225,000. In the 19th century the island's economy was in poor shape. It had few industries other than the production of dyewood, salt, and straw hats. After the Panama Canal was built and oil was discovered in Venezuela's Maracaibo Basin, Curaçao's economic situation improved. Shell opened an oil refinery on the island in 1918; the refinery was continually expanded until 1930. The plant's production peaked in 1952, when it employed around 11,000 people. This economic boom made Curaçao one of the wealthiest islands in the region and raised living standards there above even those in the Netherlands. This wealth attracted immigrants, particularly from other Caribbean islands, Suriname, Madeira, and the Netherlands. In the 1960s, the number of people working in the oil industry fell and by 1969, Shell's workforce in Curaçao had dropped to around 4,000. This was a result both of automation and of sub-contracting. Employees of sub-contractors typically received lower wages than Shell workers. Unemployment on the island rose from 5,000 in 1961 to 8,000 in 1966, with nonwhite, unskilled workers particularly affected. The government's focus on attracting tourism brought some economic growth but did little to reduce unemployment. The rise of the oil industry led to the importation of civil servants, mostly from the Netherlands. This led to a segmentation of white, Protestant Curaçaoan society into landskinderen—those whose families had been in Curaçao for generations—and makamba, immigrants from Europe who had closer ties to the Netherlands. Dutch immigrants undermined native white Curaçaoans' political and economic hegemony. As a result, the latter began to emphasize their Antillean identity and use of Papiamentu, the local Creole language. Dutch cultural dominance in Curaçao was a source of conflict; for example, the island's official language was Dutch, which was used in schools, creating difficulties for many students. Another issue that would come to the fore in the uprising was the Netherlands Antilles', and specifically Curaçao's, relationship with the Netherlands. The Netherlands Antilles' status had been changed in 1954 by the Charter for the Kingdom of the Netherlands. Under the Charter, the Netherlands Antilles, like Suriname until 1975, was part of the Kingdom of the Netherlands but not of the Netherlands itself. Foreign policy and national defense were Kingdom matters, presided over by the Council of Ministers of the Kingdom of the Netherlands, which consisted of the full Council of Ministers of the Netherlands plus one minister plenipotentiary each for the Netherlands Antilles and Suriname. Other issues were governed at the country or island level. Although this system had its proponents, who pointed to the fact that managing its own foreign relations and national defense would be too costly for a small country like the Netherlands Antilles, many Antilleans saw it as a continuation of the area's subaltern colonial status. There was no strong pro-independence movement in the Antilles as most local identity discourses centered around insular loyalty. The Dutch began importing African slaves to Curaçao in 1641, and in 1654 the island became the Caribbean's main slave depot. 
Only in 1863, much later than Britain or France, did the Netherlands abolish slavery in its colonies. A government scholarship program allowed some Afro-Curaçaoans to attain social mobility, but the racial hierarchy from the colonial era remained largely intact and blacks continued to face discrimination and were disproportionately affected by poverty. Although 90% of Curaçao's population was of African descent, the spoils of the economic prosperity that began in the 1920s benefited whites and recent immigrants much more than black native Curaçaoans. Like the rest of the Netherlands Antilles, Curaçao was formally democratic but political power was mostly in the hands of white elites. The situation of black Curaçaoans was similar to that of blacks in the United States and Caribbean countries such as Jamaica, Trinidad and Tobago, and Barbados. The movement leading up to the 1969 uprising used many of the same symbols and rhetoric as Black Power and civil rights movements in those countries. A high Antillean government official would later claim that the island's wide-reaching mass media was one of the uprising's causes. People in Curaçao were aware of events in the US, Europe, and Latin America. Many Antilleans, including students, traveled abroad; many Dutch and American tourists visited Curaçao; and many foreigners worked in the island's oil industry. The uprising would parallel anti-colonial, anti-capitalist, and anti-racist movements throughout the world. It was particularly influenced by the Cuban Revolution. Government officials in Curaçao falsely alleged that Cuban communists were directly involved in sparking the uprising, but the revolution did have an indirect influence in that it inspired many of the uprising's participants. Many of the uprising's leaders wore khaki uniforms similar to those worn by Fidel Castro. Black Power movements were emerging throughout the Caribbean and in the US at the time; foreign Black Power figures were not directly involved in the 1969 uprising but they inspired many of its participants. Local politics also contributed to the uprising. The center-left Democratic Party (DP) had been in power in the Netherlands Antilles since 1954. The DP was more closely connected to the labor movement than its major rival, the National People's Party (NVP). This relationship was strained by the DP's inability to satisfy expectations that it would improve workers' conditions. The DP was mainly associated with the white segments of the working class and blacks criticized it for primarily advancing white interests. The 1960s also saw the rise of radicalism in Curaçao. Many students went to the Netherlands to study and some returned with radical left-wing ideas and founded the Union Reformista Antillano (URA) in 1965. The URA established itself as a socialist alternative to the established parties, although it was more reformist than revolutionary in outlook. Beyond parliamentary politics, Vitó, a weekly magazine at the center of a movement aiming to end the economic and political exploitation of the masses thought to be a result of neo-colonialism, published analyses of local economic, political, and social conditions. Vitó started being published in Papiamentu rather than in Dutch in 1967, and gained a mass following. It had close ties with radical elements in the labor movement. Papa Godett, a leader in the dock workers' union, worked with Stanley Brown, the editor of Vitó. 
## Labor dispute Although the progressive priest Amado Römer had warned that "great changes still need to come through a peaceful revolution, because, if this doesn't happen peacefully, the day is not far off when the oppressed [...] will rise up", Curaçao was thought an unlikely site for political turmoil despite low wages, high unemployment, and economic disparities between blacks and whites. The relative tranquility was attributed by the island's government to the strength of family ties. In a 1965 pitch to investors, the government ascribed the absence of a communist party and labor unions' restraint to the fact that "Antillean families are bound together by unusually strong ties and therefore extremist elements have little chance to interfere in labor relations". Labor relations, including those between Shell and the refinery's workers, had indeed generally been peaceful. After two minor strikes in the 1920s and another in 1936, a contract committee for Shell workers was established. In 1942, workers of Dutch nationality gained the right to elect representatives to this committee. In 1955, the Puerto Rican section of the American labor federation Congress of Industrial Organizations (CIO) aided workers in launching the Petroleum Workers' Federation of Curaçao (PWFC). In 1957, the Federation reached a collective bargaining agreement with Shell for workers at the refinery. The PWFC was part of the General Conference of Trade Unions (AVVC), the island's largest labor confederation. The AVVC generally took a moderate stance in labor negotiations and was often criticized for this, and for its close relationship to the Democratic Party, by the more radical parts of the Curaçaoan labor movement. Close relations between unions and political leaders were widespread in Curaçao, though few unions were explicitly allied with a particular party and the labor movement was starting to gain independence. The Curaçao Federation of Workers (CFW), another union in the AVVC, represented construction workers employed by the Werkspoor Caribbean Company, a Shell sub-contractor. The CFW was to play an important role in the events that led to the uprising. Among the unions criticizing the AVVC was the General Dock Workers Union (AHU), which was led by Papa Godett and Amador Nita and was guided by a revolutionary ideology seeking to overthrow the remnants of Dutch colonialism, especially discrimination against blacks. Godett was closely allied with Stanley Brown, the editor of Vitó. The labor movement before the 1969 uprising was very fragmented and personal animosity between labor leaders further exacerbated this situation. In May 1969, there was a labor dispute between the CFW and Werkspoor. It revolved around two central issues. First, Antillean Werkspoor employees received lower wages than workers from the Netherlands or other Caribbean islands, as the latter were compensated for working away from home. Second, Werkspoor employees performed the same work as Shell employees but received lower wages. Werkspoor's response pointed to the fact that it could not afford to pay higher wages under its contract with Shell. Vitó was heavily involved in the dispute, helping to keep the conflict in the public consciousness. Though the dispute between the CFW and Werkspoor received the most attention, significant labor unrest occurred throughout the Netherlands Antilles that month. On May 6, around 400 Werkspoor employees went on strike. 
The Antillean Werkspoor workers received support and solidarity from non-Antilleans at Werkspoor and from other Curaçaoan unions. On May 8, this strike ended with an agreement to negotiate a new contract with government mediation. These negotiations failed, leading to a second strike that began on May 27. The dispute became increasingly political as labor leaders felt the government should intervene on their behalf. The Democratic Party was in a dilemma, as it did not wish to lose support from the labor movement and was wary of drawn-out and disruptive labor disputes, but also felt that giving in to excessive demands by labor would undermine its strategy to attract investments in industry. As the conflict progressed, radical leaders including Amador Nita and Papa Godett gained influence. On May 29, as a moderate labor figure was about to announce a compromise and postpone a strike, Nita took that man's notes and read a declaration of his own. He demanded the government resign and threatened a general strike. The same day, between thirty and forty workers marched to Fort Amsterdam, the Antillean government's seat, contending that the government, which had refused to intervene in the dispute, was contributing to the repression of wages. While the strike was led by the CFW, the PWFC under pressure from its members, showed solidarity with the strikers and decided to call for a strike to support the Werkspoor workers. ## Uprising On the morning of May 30, more unions announced strikes in support of the CFW's dispute with Werkspoor. Between three and four thousand workers gathered at a strike post. While the CFW emphasized that this was merely an economic dispute, Papa Godett, the dock workers' leader and Vitó activist, advocated a political struggle in his speech to the strikers. He criticized the government's handling of the labor dispute and demanded its removal. He called for another march to Fort Amsterdam, which was seven miles (11 km) away in Punda in downtown Willemstad. "If we don't succeed without force, then we have to use force. [...] The people is the government. The present government is no good and we will replace it", he proclaimed. The march was five thousand people strong when it started moving towards the city center. As it progressed through the city, people who were not associated with the strike joined, most of them young, black, and male, some oil workers, some unemployed. There were no protest marshals and leaders had little control over the crowd's actions. They had not anticipated any escalation. Among the slogans the crowd chanted were "Pan y rekonosimiento" ("Bread and recognition"), "Ta kos di kapitalista, kibra nan numa" ("These are possessions of capitalists, just destroy them"), and "Tira piedra. Mata e kachónan di Gobièrnu. Nos mester bai Punda, Fòrti. Mata e makambanan" ("Throw stones. Death to the government dogs. Let's go to Punda, to the fort. Death to the makamba"). The march became increasingly violent. A pick-up truck with a white driver was set on fire and two stores were looted. Then, large commercial buildings including a Coca-Cola bottling plant and a Texas Instruments factory were attacked, and marchers entered the buildings to halt production. Texas Instruments had a poor reputation because it had prevented unionization among its employees. Housing and public buildings were generally spared. Once it became aware, the police moved to stop the rioting and called for assistance from the local volunteer militia and from Dutch troops stationed in Curaçao. 
The police, with only sixty officers at the scene, were unable to halt the march and ended up enveloped by the demonstration, with car drivers attempting to hit them. The police moved to secure a hill on the march route and were pelted with rocks. Papa Godett was shot in the back by the police; he later said the police had orders to kill him, while law enforcement said officers acted only to save their own lives. Godett was taken to the hospital by members of the demonstration and parts of the march broke away to follow them. One of two fire trucks dispatched to assist the police was set on fire and pushed towards the police lines. The striker steering it was shot and killed. The main part of the march moved to Punda, Willemstad's central business district where it separated into smaller groups. The protesters chanted "Awe yu di Korsou a bira koño" ("Now the people of Curaçao are really fed up") and "Nos lo siña nan respeta nos" ("We will teach them to respect us"). Some protesters crossed a bridge to the other side of Sint Anna Bay, an area known as Otrabanda. The first building burned in Otrabanda was a shop Vitó had criticized for having particularly poor working conditions. From this store, flames spread to other buildings. Stores on both sides of the bay were looted and subsequently set on fire, as were an old theater and the bishop's palace. Women took looted goods home in shopping carts. There was an attempt to damage the bridge that crossed the bay. The government imposed a curfew and a ban on liquor sales. The Prime Minister of the Netherlands Antilles Ciro Domenico Kroon went into hiding during the riots while Governor Cola Debrot and the Deputy Governor Wem Lampe were also absent. Minister of Justice Ronchi Isa requested the assistance of elements of the Netherlands Marine Corps stationed in Curaçao. While constitutionally required to honor this request under the Charter, the Kingdom's Council of Ministers did not officially approve it until later. The soldiers, however, immediately joined police, local volunteers and firemen as they fought to stop the rioting, put out fires in looted buildings, and guarded banks and other key buildings while thick plumes of smoke emanated from the city center. Many of the buildings in this part of Willemstad were old and therefore vulnerable to fire while the compact nature of the central business district further hampered firefighting efforts. In the afternoon, clergymen made a statement via radio urging the looters to stop. Meanwhile, union leaders announced that they had reached a compromise with Werkspoor. Shell workers would receive equal wages regardless of whether they were employed by contractors and regardless of their national origin. Although the protesters achieved their economic aims, rioting continued throughout the night and slowly abated on May 31. The uprising's focus shifted from economic demands to political goals. Union leaders, both radical and moderate, demanded the government's resignation and threatened a general strike. Workers broke into a radio station, forcing it to broadcast this demand; they argued that failed economic and social policies had led to the grievances and the uprising. On May 31, Curaçaoan labor leaders met with union representatives from Aruba, which was then also part of the Netherlands Antilles. The Aruban delegates agreed with the demand for the government's resignation, announcing Aruban workers would also go on a general strike if it was ignored. 
By the night of May 31 to June 1, the violence had ceased. Another 300 Dutch marines arrived from the Netherlands on June 1 to maintain order. The uprising cost two lives—the dead were identified as A. Gutierrez and A. Doran—and 22 police officers and 57 others were injured. The riots led to 322 arrests, including the leaders Papa Godett and Amador Nita of the dock workers' union, and Stanley Brown, the editor of Vitó. Godett was kept under police surveillance while he recovered from his bullet wound, in the hospital. During the disturbances, 43 businesses and 10 other buildings were burned and 190 buildings were damaged or looted. Thirty vehicles were destroyed by fire. The damage caused by the uprising was valued at around million. The looting was highly selective, mainly targeting businesses owned by whites while avoiding tourists. In some cases rioters led tourists out of the disturbance to their hotels to protect them. Nevertheless, the riots drove away most tourists and damaged the island's reputation as a tourist destination. On May 31, Amigoe di Curaçao, a local newspaper, declared that with the uprising, "the leaden mask of a carefree, untroubled life in the Caribbean Sea was ripped from part of Curaçao, perhaps forever". The riots evoked a wide range of emotions among the island's population; "Everyone was crying" when it ended, said one observer. There was pride that Curaçaoans had finally stood up for themselves. Some were ashamed it had come to a riot or of having taken part. Others were angry at the rioters, the police, or at the social wrongs that had given rise to the riots. The uprising achieved both its economic and political demands. On June 2 all parties in the Estates of the Netherlands Antilles, pressured by the Chamber of Commerce that feared further strikes and violence, agreed to dissolve that body. On June 5, the Prime Minister Ciro Domenico Kroon submitted his resignation to the Governor. Elections for the Estates were set for September 5. On June 26, an interim government headed by new Prime Minister Gerald Sprockel took charge of the Netherlands Antilles. ## Aftermath Trinta di Mei (Thirtieth of May in Papiamentu) became a pivotal moment in the history of Curaçao, contributing to the end of white political dominance. While Peter Verton as well as William Averette Anderson and Russell Rowe Dynes characterize the events as a revolt, historian Gert Oostindie considers this term too broad. All of these writers agree revolution was never a possibility. Anderson, Dynes, and Verton regard the uprising as part of a broader movement, the May Movement or May 30 Movement, which began with the strikes in early 1969 and continued in electoral politics and with another wave of strikes in December 1969. ### Political effects The uprising's leaders, Godett, Nita, and Brown, formed a new political party, the May 30 Labor and Liberation Front (Frente Obrero Liberashon 30 Di Mei, FOL), in June 1969. Brown was still in prison when the party was founded. The FOL fielded candidates in the September election against the Democratic Party, the National People's Party, and the URA with Godett as its top candidate. The FOL campaigned on the populist, anti-colonial, and anti-Dutch messages voiced during the uprising, espousing black pride and a positive Antillean identity. One of its campaign posters depicted Kroon, the former Prime Minister and the Democratic Party's main candidate, shooting protesters. 
The FOL received 22% of the vote in Curaçao and won three of the island's twelve seats in the Estates, which had a total of twenty-two seats. The three FOL leaders took those seats. In December, Ernesto Petronia of the Democratic Party became the Netherlands Antilles' first black Prime Minister and FOL formed part of the coalition government. In 1970, the Dutch government appointed Ben Leito as the first black Governor of the Netherlands Antilles. In October 1969, a commission similar to the Kerner Commission in the United States had been established to investigate the uprising. Five of its members were Antillean and three were Dutch. It released its report in May 1970 after gathering data, conducting interviews, and holding hearings. It deemed the uprising unexpected, finding no evidence it had been pre-planned. The report concluded that the primary causes of the riots were racial tensions and disappointed economic expectations. The report was critical of the conduct of the police and, on its recommendation, a Lieutenant Governor with police experience was appointed. Patronage appointments were reduced in keeping with the commission's recommendations, but most of its suggestions, and its criticism of government and police conduct, were ignored. The commission also pointed to a contradiction between the demands for national independence and economic prosperity: according to the report, independence would almost certainly lead to economic decline. On June 1, 1969, in The Hague, the seat of the Dutch government, between 300 and 500 people, including some Antillean students, marched in support of the uprising in Curaçao and clashed with police. The protesters denounced the deployment of Dutch troops and called for Antillean independence. The 1969 uprising became a watershed moment in the decolonization of Dutch possessions in the Americas. The Dutch parliament discussed the events in Curaçao on June 3. The parties in government and the opposition agreed that no other response to the riots was possible under the Kingdom's charter. The Dutch press was more critical. Images of Dutch soldiers patrolling the streets of Willemstad with machine guns were shown around the world. Much of the international press viewed Dutch involvement as a neo-colonial intervention. The Indonesian War of Independence, in which the former Dutch East Indies broke away from the Netherlands in the 1940s and in which some 150,000 Indonesians and 5,000 Dutch died, was still on the Dutch public's mind. In January 1970, consultations about independence began between Dutch Minister for Surinamese and Antillean Affairs Joop Bakker, Surinamese Prime Minister Jules Sedney, and Petronia. The Dutch government, fearing that, after Trinta di Mei, it could again be forced into a military intervention, wanted to release the Antilles and Suriname into independence; according to Bakker, "It would be preferably today rather than tomorrow that the Netherlands would get rid of the Antilles and Suriname". Yet the Netherlands insisted it did not wish to force independence on the two countries. Deliberations over the next years revealed that independence would be a difficult task, as the Antilleans and the Surinamese were concerned about losing Dutch nationality and Dutch development aid. In 1973, both countries rejected a Dutch proposal for a path to independence. 
In Suriname's case, this impasse was overcome suddenly in 1974 when new administrations took power both in the Netherlands and in Suriname, and rapid negotiations resulted in Surinamese independence on November 25, 1975. The Netherlands Antilles resisted any swift move to independence. It insisted that national sovereignty would only be an option once it had "attained a reasonable level of economic development", as its Prime Minister Juancho Evertsz put it in 1975. Surveys in the 1970s and 1980s showed most of Curaçao's inhabitants agreed with this reluctance to pursue independence: clear majorities favored continuing the Antilles' ties to the Netherlands, but many were in favor of loosening them. By the end of the 1980s, the Netherlands accepted that the Antilles would not be fully decolonized in the near future. The 1969 uprising in Curaçao encouraged separatist sentiments in Aruba that had existed since the 1930s. Unlike the black-majority Curaçao, most Arubans were of mixed European and Native descent. Though Aruba is just 73 miles (117 km) away from Curaçao, there was a long-standing resentment with significant racial undertones about being ruled from Willemstad. Aruban distrust of Curaçao was further stoked by the uprising's Black Power sentiments. The Aruban island government started working towards separation from the Antilles in 1975 and in 1986, Aruba became a separate country within the Kingdom of the Netherlands. Eventually, in 2010, insular nationalism led to the Netherlands Antilles being completely dissolved and Curaçao becoming a country as well. Trinta di Mei also reshaped Curaçao's labor movement. A strike wave swept Curaçao in December 1969. Around 3,500 workers participated in eight wildcat strikes that took place within ten days. New, more radical leaders were able to gain influence in the labor movement. As a result of unions' involvement in Trinta di Mei and the December strikes, Curaçaoans had considerably more favorable views of labor leaders than of politicians, as a poll in August 1971 found. In the following years, unions built their power and gained considerable wage increases for their members, forcing even the notoriously anti-union Texas Instruments to negotiate with them. Their membership also grew; the CFW, for instance, went from a pre-May 1969 membership of 1,200 to around 3,500 members in July 1970. The atmosphere after the uprising led to the formation of four new unions. The labor movement's relationship to politics was changed by Trinta di Mei. Unions had been close to political parties and the government for several reasons: They had not existed for very long and were still gaining their footing. Secondly, the government played an important role in economic development and, finally, workers' and unions' position vis-à-vis employers was comparatively weak and they relied on the government's help. The events of 1969 both expressed and hastened the development of a more distant relationship between labor and the state. Government and unions became more distinct entities, although they continued to try to influence one another. Labor was now willing to take a militant position against the state and both parties realized that labor was a force in Curaçaoan society. The government was accused of letting workers down and of using force to suppress their struggle. Unions' relationship with employers changed in a similar way; employers were now compelled to recognize labor as an important force. 
### Social and cultural effects The 1969 uprising put an end to white dominance in politics and administration in Curaçao, and led to the ascendance of a new black political elite. Nearly all of the governors, prime ministers, and ministers in the Netherlands Antilles and Curaçao since 1969 have been black. Although there has been no corresponding change in the island's business elite, upward social mobility increased considerably for well-educated Afro-Curaçaoans and led to improved conditions for the black middle class. The rise of black political elites was controversial from the start. Many FOL supporters were wary of the party entering into government with the Democratic Party, which they had previously denounced as corrupt. The effects of the emergence of new elites for lower-class black Curaçaoans have been limited. Although workers received some new legal protections, their living standards stagnated. In a 1971 survey, three quarters of the respondents said their economic situation had remained the same or worsened. This is mostly the result of difficult conditions that hamper most Caribbean economies, but critics have also blamed mismanagement and corruption by the new political elites. Among the lasting effects of the uprising was an increase in the prestige of Papiamentu, which became more widely used in official contexts. Papiamentu was spoken by most Curaçaoans but its use was shunned; children who spoke it on school playgrounds were punished. According to Frank Martinus Arion, a Curaçaoan writer, "Trinta di Mei allowed us to recognize the subversive treasure we had in our language". It empowered Papiamentu speakers and sparked discussions about the use of the language. Vitó, the magazine that had played a large part in the build-up to the uprising, had long called for Papiamentu becoming Curaçao's official language once it became independent of the Netherlands. It was recognized as an official language on the island, along with English and Dutch, in 2007. Curaçaoan parliamentary debate is now conducted in Papiamentu and most radio and television broadcasts are in this language. Primary schools teach in Papiamentu but secondary schools still teach in Dutch. Trinta di Mei also accelerated the standardization and formalization of Papiamentu orthography, a process that had begun in the 1940s. The events of May 30, 1969, and the situation that caused them were reflected in local literature. Papiamentu was considered by many devoid of any artistic quality, but after the uprising literature in the language blossomed. According to Igma M. G. van Putte-de Windt, it was only in the 1970s after the May 30 uprising that an "Antillean dramatic expression in its own right" emerged. Days before the uprising, Stanley Bonofacio premiered Kondená na morto ("Sentenced to death"), a play about the justice system in the Netherlands Antilles. It was banned for a time after the riots. In 1970, Edward A. de Jongh, who watched the riots as he walked the streets, published the novel 30 di Mei 1969: E dia di mas historiko ("May 30, 1969: The Most Historic Day") describing what he perceived as the underlying causes of the uprising: unemployment, the lack of workers' rights, and racial discrimination. In 1971, Pacheco Domacassé wrote the play Tula about a 1795 slave revolt in Curaçao and in 1973 he wrote Konsenshi di un pueblo (A People's Conscience), which deals with government corruption and ends in a revolt reminiscent of the May 30 uprising. 
Curaçaoan poetry after Trinta di Mei, too, was rife with calls for independence, national sovereignty, and social justice. The 1969 uprising opened up questions concerning Curaçaoan national identity. Prior to Trinta di Mei, one's place in society was determined largely by race; afterward these hierarchies and classifications were called into question. This led to debates about whether Afro-Curaçaoans were the only true Curaçaoans and to what extent Sephardic Jews and the Dutch, who had been present throughout Curaçao's colonial period, and more recent immigrants belonged. In the 1970s, there were formal attempts at nation-building; an island anthem was introduced in 1979, an island flag and Flag Day were instituted in 1984, and resources were devoted to promoting the island's culture. Papiamentu became central to Curaçaoan identity. More recently, civic values, rights of participation, and common political knowledge are said to have become important issues in determining national identity.
12,149,763
Ordinances of 1311
1,115,549,078
Regulations imposed upon King Edward II of England
[ "1310s in law", "1311 in England", "14th century in England", "Edward II of England", "Medieval English law", "Political history of medieval England" ]
The Ordinances of 1311 were a series of regulations imposed upon King Edward II by the peerage and clergy of the Kingdom of England to restrict the power of the English monarch. The twenty-one signatories of the Ordinances are referred to as the Lords Ordainers, or simply the Ordainers. English setbacks in the Scottish war, combined with perceived extortionate royal fiscal policies, set the background for the writing of the Ordinances in which the administrative prerogatives of the king were largely appropriated by a baronial council. The Ordinances reflect the Provisions of Oxford and the Provisions of Westminster from the late 1250s, but unlike the Provisions, the Ordinances featured a new concern with fiscal reform, specifically redirecting revenues from the king's household to the exchequer. Just as instrumental to their conception were other issues, particularly discontent with the king's favourite, Piers Gaveston, whom the barons subsequently banished from the realm. Edward II accepted the Ordinances only under coercion, and a long struggle for their repeal ensued that did not end until Earl Thomas of Lancaster, the leader of the Ordainers, was executed in 1322. ## Background ### Early problems When Edward II succeeded his father Edward I on 7 July 1307, the attitude of his subjects was generally one of goodwill towards their new king. However, discontent was brewing beneath the surface. Some of this was due to existing problems left behind by the late king, while much was due to the new king's inadequacies. The problems were threefold. First there was discontent with the royal policy for financing wars. To finance the war in Scotland, Edward I had increasingly resorted to so-called prises – or purveyance – to provision the troops with victuals. The peers felt that the purveyance had become far too burdensome and compensation was in many cases inadequate or missing entirely. In addition, they did not like the fact that Edward II took prises for his household without continuing the war effort against Scotland, causing the second problem. While Edward I had spent the last decade of his reign relentlessly campaigning against the Scots, his son abandoned the war almost entirely. In this situation, the Scottish king Robert Bruce soon took the opportunity to regain what had been lost. This not only exposed the north of England to Scottish attacks, but also jeopardized the possessions of the English baronage in Scotland. The third and most serious problem concerned the king's favourite, Piers Gaveston. Gaveston was a Gascon of relatively humble origins, with whom the king had developed a particularly close relationship. Among the honours Edward heaped upon Gaveston was the earldom of Cornwall, a title which had previously only been conferred on members of the royal family. The preferential treatment of an upstart like Gaveston, in combination with his behaviour that was seen as arrogant, led to resentment among the established peers of the realm. This resentment first came to the surface in a declaration written in Boulogne by a group of magnates who were with the king when he was in France for his marriage ceremony to the French king's daughter. The so-called Boulogne agreement was vague, but it expressed clear concern over the state of the royal court. On 25 February 1308, the new king was crowned. 
The oath he was made to take at the coronation differed from that of previous kings in the fourth clause; here Edward was required to promise to maintain the laws that the community "shall have chosen" ("aura eslu"). Though it is unclear what exactly was meant by this wording at the time, this oath was later used in the struggle between the king and his earls. ### Gaveston’s exile In the parliament of April 1308, it was decided that Gaveston should be banned from the realm upon threat of excommunication. The king had no choice but to comply, and on 24 June, Gaveston left the country on appointment as Lieutenant of Ireland. The king immediately started plotting for his favourite's return. At the parliament of April 1309, he suggested a compromise in which certain of the earls' petitions would be met in exchange for Gaveston's return. The plan came to nothing, but Edward had strengthened his hand for the Stamford parliament in July later that year by receiving a papal annulment of the threat of excommunication. The king agreed to the so-called "Statute of Stamford" (which in essence was a reissue of the Articuli super Cartas that his father had signed in 1300), and Gaveston was allowed to return. The earls who agreed to the compromise were hoping that Gaveston had learned his lesson. Yet upon his return, he was more arrogant than ever, conferring insulting nicknames on some of the greater nobles. When the king summoned a great council in October, several of the earls refused to meet due to Gaveston's presence. At the parliament of February in the following year, Gaveston was ordered not to attend. The earls disobeyed a royal order not to carry arms to parliament, and in full military attire presented a demand to the king for the appointment of a commission of reform. On 16 March 1310, the king agreed to the appointment of Ordainers, who were to be in charge of the reform of the royal household. ## Lords Ordainers The Ordainers were elected by an assembly of magnates, without representation from the commons. They were a diverse group, consisting of eight earls, seven bishops and six barons – twenty-one in all. There were faithful royalists represented as well as fierce opponents of the king. Among the Ordainers considered loyal to Edward II was John of Brittany, Earl of Richmond who was also by this time one of the older remaining earls. John had served Edward I, his uncle, and was Edward II's first cousin. The natural leader of the group was Henry Lacy, Earl of Lincoln. One of the wealthiest men in the country, he was also the oldest of the earls and had proved his loyalty and ableness through long service to Edward I. Lincoln had a moderating influence on the more extreme members of the group, but with his death in February 1311, leadership passed to his son-in-law and heir Thomas of Lancaster. Lancaster – the king's cousin – was now in possession of five earldoms which made him by far the wealthiest man in the country, save the king. There is no evidence that Lancaster was in opposition to the king in the early years of the king's reign, but by the time of the Ordinances it is clear that something had negatively affected his opinion of King Edward. Lancaster's main ally was Guy Beauchamp, Earl of Warwick. Warwick was the most fervently and consistently antagonistic of the earls, and remained so until his early death in 1315. Other earls were more amenable. Gilbert de Clare, Earl of Gloucester, was Gaveston's brother-in-law and stayed loyal to the king. 
Aymer de Valence, Earl of Pembroke, would later be one of the king's most central supporters, yet at this point he found the most prudent course of action was to go along with the reformers. Of the barons, at least Robert Clifford and William Marshall seemed to have royalist leanings. Among the bishops, only two stood out as significant political figures, the more prominent of whom was Robert Winchelsey, Archbishop of Canterbury. Long a formidable presence in English public life, Winchelsey had led the struggle against Edward I to uphold the autonomy of the church, and for this he had paid with suspension and exile. One of Edward II's first acts as king had been to reinstate Winchelsey, but rather than responding with grateful loyalty, the archbishop soon reassumed a leadership role in the fight against the king. Although he was trying to appease Winchelsey, the king carried an old grudge against another prelate, Walter Langton, Bishop of Lichfield. Edward had Langton dismissed from his position as treasurer of the Exchequer and had his temporal possessions confiscated. Langton had been an opponent of Winchelsey during the previous reign, but Edward II's move against Langton drew the two Ordainers together. ## Ordinances Six preliminary ordinances were released immediately upon the appointment of the Ordainers – on 19 March 1310 – but it was not until August 1311 that the committee had finished its work. In the meanwhile Edward had been in Scotland on an aborted campaign, but on 16 August, Parliament met in London, and the king was presented with the Ordinances. The document containing the Ordinances is dated 5 October, and contains forty-one articles. In the preamble, the Ordainers voiced their concern over what they perceived as the evil councilors of the king, the precariousness of the military situation abroad, and the danger of rebellion at home over the oppressive prises. The articles can be divided into different groups, the largest of which deals with limitations on the powers of the king and his officials, and the substitution of these powers with baronial control. It was ordained that the king should appoint his officers only "by the counsel and assent of the baronage, and that in parliament." Furthermore, the king could no longer go to war without the consent of the baronage, nor could he make reforms of the coinage. Additionally, it was decided that parliament should be held at least once a year. Parallel to these decisions were reforms of the royal finances. The Ordinances banned what was seen as extortionate prises and customs, and at the same time declared that revenues were to be paid directly into the Exchequer. This was a reaction to the rising trend of receiving revenues directly into the royal household; making all royal finances accountable to the exchequer allowed greater public scrutiny. Other articles dealt with punishing specific persons, foremost among these, Piers Gaveston. Article 20 describes at length the offences committed by Gaveston; he was once more condemned to exile and was to abjure the realm by 1 November. The bankers of the Italian Frescobaldi company were arrested, and their goods seized. It was held that the king's great financial dependence on the Italians was politically unfortunate. The last individuals to be singled out for punishment were Henry de Beaumont and his sister, Isabella de Vesci, two foreigners associated with the king's household. 
Though it is difficult to say why these two received particular mention, it could be related to the central position of their possessions in the Scottish war. The Ordainers also took care to confirm and elaborate on existing statutes, and reforms were made to the criminal law. The liberties of the church were confirmed as well. To ensure that none of the Ordainers should be swayed in their decisions by bribes from the king, restrictions were made on what royal gifts and offices they were allowed to receive during their tenure. ## Aftermath The Ordinances were published widely on 11 October, with the intention of obtaining maximum popular support. The decade following their publication saw a constant struggle over their repeal or continued existence. Although they were not finally repealed until May 1322, the vigour with which they were enforced depended on who was in control of government. Before the end of the year, Gaveston had returned to England, and civil war appeared imminent. In May 1312, Gaveston was taken captive by the Earl of Pembroke, but Warwick and Lancaster had him abducted and executed after a mock trial. This affront to Pembroke's honour drove him irrevocably into the camp of the king, and thereby split the opposition. The brutality of the act initially drove Lancaster and his adherents away from the centre of power, but the Battle of Bannockburn, in June 1314, returned the initiative to them. Edward was humiliated by his disastrous defeat, while Lancaster and Warwick had not taken part in the campaign, claiming that it was carried out without the consent of the baronage, and as such in defiance of the Ordinances. What followed was a period of virtual control of the government by Lancaster, yet increasingly – particularly after the death of Warwick in 1315 – he found himself isolated. In August 1318, the so-called "Treaty of Leake" established a modus vivendi between the parties, whereby the king was restored to power while promising to uphold the Ordinances. Lancaster still had differences with the king, though, particularly over the conduct of the new favourite, Hugh Despenser the younger, and Hugh's father. In 1322, full rebellion broke out, ending with Lancaster's defeat at the Battle of Boroughbridge and his execution shortly afterwards in March of that year. At the parliament of May in the same year, the Ordinances were repealed by the Statute of York. However, six clauses that concerned such issues as household jurisdiction and the appointment of sheriffs were retained; any restrictions on royal power were unequivocally annulled. The Ordinances were never again reissued, and therefore hold no permanent position in the legal history of England in the way that Magna Carta, for instance, does. Criticism of the Ordinances has centred on the barons' conservative focus on their own role in national politics, ignoring the ascendancy of the commons. Yet the document, and the movement behind it, reflected new political developments in its emphasis on how assent was to be obtained by the barons in parliament. It was only a matter of time before it was generally acknowledged that the Commons were an integral part of that institution. ## See also - History of the British constitution - Royal Prerogative
64,662,351
1st Missouri Field Battery
1,125,656,159
Unit of the Confederate States Army
[ "1862 establishments in Arkansas", "1865 disestablishments in Louisiana", "Artillery units and formations of the American Civil War", "Military units and formations disestablished in 1865", "Military units and formations established in 1862", "Missouri in the American Civil War", "Units and formations of the Confederate States Army from Missouri" ]
The 1st Missouri Field Battery was a field artillery battery that served in the Confederate States Army during the American Civil War. The battery was formed by Captain Westley F. Roberts in Arkansas in September 1862 as Roberts' Missouri Battery and was originally armed with two 12-pounder James rifles and two 6-pounder smoothbore guns. The unit fought in the Battle of Prairie Grove on December 7, as part of a Confederate offensive. Roberts' Battery withdrew after the battle and transferred to Little Rock, Arkansas, where Roberts resigned and was replaced by Lieutenant Samuel T. Ruffner. During the middle of 1863, the unit, as Ruffner's Missouri Battery, was part of a force sent to the Mississippi River under the command of Colonel John Bullock Clark Jr., with the intent of harassing Union shipping. Clark's force was eventually recalled to Little Rock, which was being threatened by the Union Army of Arkansas under Major General Frederick Steele. The Confederates abandoned Little Rock on September 10, and Ruffner's Battery saw action during the retreat as part of the rear guard. After the retreat from Little Rock, Ruffner's Battery was temporarily assigned to Brigadier General John S. Marmaduke's cavalry division. The battery accompanied Marmaduke in an expedition against the Union garrison of Pine Bluff, Arkansas, seeing action at the Battle of Pine Bluff on October 25. The unit's assignment to Marmaduke's division ended in December, after which it received a new set of cannons: two 10-pounder Parrott rifles and two 12-pounder howitzers. In early 1864, it became part of Brigadier General Mosby M. Parsons's division, which was ordered into Louisiana in April to counter a Union thrust up the Red River. While Parsons's infantry fought at the Battle of Pleasant Hill on April 9, Ruffner's Battery served in a reserve role and was not engaged. The Union troops present at Pleasant Hill continued to retreat back down the river, so Parsons was returned to Arkansas to move against Steele's Camden expedition. Supply issues forced Steele to retreat from Camden, Arkansas, and the Union troops were pursued to the Saline River. On April 30, the Confederates caught up with Steele at the river crossing and attacked, starting the Battle of Jenkins' Ferry. Ruffner's Battery, along with Lesueur's Missouri Battery, supported an infantry assault, but moved to an exposed position in the process. A Union counterattack captured several of Ruffner's Battery's cannons and Steele's men escaped across the river that night. The battery was then rearmed with four 6-pounder smoothbores. In November 1864, the unit was given the official designation of the 1st Missouri Field Battery. It spent the remainder of the war in Louisiana and Arkansas and was paroled on June 7, 1865, at Alexandria, Louisiana, after General Edmund Kirby Smith signed surrender terms for the Confederate Trans-Mississippi Department on June 2. ## Background and formation When the American Civil War began in early 1861, the state of Missouri did not secede despite being a slave state, as both secessionist and Unionist viewpoints had substantial support among the state's populace. The Governor of Missouri, Claiborne Fox Jackson, mobilized pro-secession state militia, which encamped near St. Louis, where a federal arsenal was located. Brigadier General Nathaniel Lyon of the Union Army, commander of the arsenal, dispersed the militiamen on May 10, in the Camp Jackson affair. Lyon's action was followed by a pro-secession riot in St. Louis. 
In response, Jackson formed the pro-secession Missouri State Guard, a militia unit; Major General Sterling Price was appointed as its commander on May 12. On June 15, Lyon drove Jackson and the secessionists from the state capital of Jefferson City; Jackson then went to Boonville. Two days later, the secessionists were forced from there, and Jackson and Price fell back to southwestern Missouri, pursued by Lyon. In early August, Price and the Missouri State Guard were joined by Confederate States Army troops commanded by Brigadier General Ben McCulloch. On August 10, Lyon attacked the combined camp of Price and McCulloch. Lyon was killed, and the battle ended in a Union defeat. Price and the Missouri State Guard then headed north towards the Missouri River in a campaign that culminated in the successful Siege of Lexington in September. In October, Union forces commanded by Major General John C. Frémont concentrated against Price, who retreated southwards to Neosho, where he was joined by Jackson. On November 3, Jackson and the pro-secession legislators voted to secede and join the Confederate States of America, functioning as a government-in-exile, first from Arkansas and later from Marshall, Texas. The remaining portion of the state legislature had previously voted to remain in the Union. In February 1862, pressure from Brigadier General Samuel R. Curtis's Union Army of the Southwest led Price to abandon Missouri for Arkansas. In March, Price, McCulloch, and Major General Earl Van Dorn joined forces to form the Army of the West. Van Dorn moved against Curtis, and the two foes fought the Battle of Pea Ridge on March 7 and 8. McCulloch was killed and the Confederates and Missouri State Guardsmen were defeated. After Pea Ridge, the Army of the West retreated to Van Buren, Arkansas. Eventually, many of the members of the Missouri State Guard transferred to official Confederate States Army formations. Around September 7, while located at Van Buren, Captain Westley F. Roberts formed a field artillery battery that would bear his name. The battery was armed with horse-drawn cannons in October: two 12-pounder James rifles taken from Union forces at the Battle of Lone Jack and two obsolescent 6-pounder smoothbores. Unlike the smoothbores, the captured James rifles had a series of spiral grooves engraved along the inside of the gun barrel, which spun the projectile when it was fired, giving the cannon greater effective range and accuracy. The James rifles had a range of 1,700 yards (1.6 km), and the 6-pounder smoothbores had a range of 1,500 yards (1.4 km). Both of these cannons were field guns designed to fire solid shot at a flat trajectory over a long range. Later in the war, the battery was equipped with two 10-pounder Parrott rifles and two 12-pounder howitzers. Parrott rifles had a range of between 2,970 yards (2,720 m) and 3,200 yards (2,900 m) depending on the variant, while the howitzers only had a maximum range of 1,072 yards (980 m). The howitzers fired at a higher trajectory, which was useful where rough terrain made projectiles fired with a flat trajectory ineffective. Confederate artillerymen were hampered by problems with gunpowder and artillery fuze quality, which often resulted in premature detonation of shells, sometimes while still in the cannon. All of the pieces used by the battery required a crew of four to six men. ## Service history ### 1862 On December 7, the battery was engaged during the Battle of Prairie Grove in Arkansas. 
During the fight, Roberts' Battery was part of Colonel Robert G. Shaver's brigade, along with several infantry regiments from Arkansas. Shaver's brigade was initially held in reserve, but it was ordered from the Confederate left flank to the right flank by Army of the Trans-Mississippi commander Major General Thomas C. Hindman. Roberts' Battery then moved forward onto a ridge. The battery's new position gave it a clear field of fire against Brigadier General Francis J. Herron's Union division. Of the battery's four cannons, only the two James rifles could be deployed due to the terrain, although the two 6-pounders were still with the battery. The James rifles were the only rifled cannons available to the Confederates at Prairie Grove. After deploying, Roberts' Battery came under heavy Union fire. In turn, the Missourians took up a new position further down the ridge. Even in the new position, heavy Union artillery fire rendered the battery's position untenable, and the guns were withdrawn up the hill. Eventually, Roberts decided that the battery could not hold its position, and the gunners abandoned the pieces and took shelter in some nearby woods. The battery had participated in the fighting at Prairie Grove for two hours, and damaged two cannons of Battery L, 1st Missouri Light Artillery; one shot from the battery wounded a man riding near Herron. The battle ended when night fell, and the Confederates retreated from the field. In order to mask the noises of retreat, the wheels of Roberts' Battery's cannons and caissons were padded with blankets. The retreat continued until the Confederates reached Van Buren, a process that took two days. ### 1863 By January 6, 1863, the battery had been transferred to Little Rock, Arkansas, where Roberts resigned. Lieutenant Samuel T. Ruffner became commander of the battery, which adopted his name. In February, the unit boarded a steamboat for transport to the vicinity of Pine Bluff, Arkansas, which it reached on February 22. While near Pine Bluff, the battery was stationed at a position named Fort Pleasant under the authority of Brigadier General Daniel M. Frost. In June, the battery, as part of a formation commanded by Colonel John Bullock Clark Jr., moved to the area around the Mississippi River with the intent of interfering with Union shipping. Ruffner's Battery, which was armed with four 6-pounders at this time, was positioned in the vicinity of Gaines' Landing, along with the 8th and 9th Missouri Infantry Regiments. After firing on Union Navy shipping on June 22 and 27, Ruffner's Battery, along with the two infantry regiments, skirmished with the 25th Wisconsin Infantry Regiment, the 4th Ohio Battery, and elements of the 5th Illinois Cavalry Regiment on June 28, in the Gaines' Landing vicinity. In late July, Clark's force was transferred back to Little Rock, as the city was threatened by the Union Army of Arkansas under Major General Frederick Steele. The Confederates abandoned Little Rock on September 10, without a fight. On September 11, Ruffner's Battery was engaged during the Confederate withdrawal. Union cavalry were pursuing the Confederates, and encountered elements from the 11th and 12th Missouri Cavalry Regiments. Ruffner's Battery then fired at the pursuers with the unit's four cannons, inflicting casualties. After additional fighting between the Union cavalry and the 5th Missouri Cavalry Regiment and Elliott's Missouri Cavalry Battalion, the retreat continued without further pursuit. 
After the retreat from the city, Ruffner's Battery was temporarily assigned to Marmaduke's cavalry division. After capturing Little Rock, Union troops occupied several points on the Arkansas River. Pine Bluff was occupied by the 5th Kansas and 1st Indiana Cavalry Regiments; the garrison was commanded by Colonel Powell Clayton. On October 25, Marmaduke attacked Pine Bluff. The Union cavalrymen barricaded the town square, which was then assaulted by Marmaduke's cavalry. The attack quickly bogged down and Ruffner's Battery, which had remained in reserve with other Confederate artillery, was called into action. The unit served on the right of the Confederate line, and opened fire with three cannons on the Union position (near the local courthouse) from the grounds of a church. While the artillery fire forced the defenders from some of their positions, the main Union line held up under fire. Further Confederate cavalry charges failed to carry the makeshift defensive position, and Marmaduke's men withdrew after engaging in some looting. On December 2, the battery's assignment to Marmaduke's division ended, and the unit left Marmaduke on the 5th. Ruffner's Battery returned to Fort Pleasant without its cannons, which were given to Joseph Bledsoe's Missouri Battery. Once the fort was reached, Ruffner's Battery was assigned the cannons of a defunct artillery unit known as Von Puhl's Missouri Battery: two 10-pounder Parrott rifles and two 12-pounder howitzers. ### 1864–1865 The battery was later assigned to a new brigade commanded by Clark, which was part of Brigadier General Mosby M. Parsons's division. In March 1864, Parsons's division was transferred to Louisiana, where Major General Richard Taylor and his District of West Louisiana were confronting a Union thrust up the Red River. On April 9, Parsons's division, as part of Taylor's army, engaged the Union force at the Battle of Pleasant Hill, although Ruffner's Battery was in a reserve role and was unengaged. While Confederate assaults at Pleasant Hill were repulsed, the Union army, commanded by Major General Nathaniel Banks, continued a retreat that had begun several days earlier. After Pleasant Hill, General Edmund Kirby Smith, who was in overall command of the Confederate forces, moved his men back into Arkansas, where Steele had occupied Camden. Steele's supply line was tenuous, and he had suffered defeats at the battles of Poison Spring and Marks' Mills. Running low on food, the Union troops abandoned Camden on April 26, with hopes of retreating to Little Rock. The Confederates pursued, and caught up with Steele at the crossing of the Saline River on April 30. That morning, as part of the Battle of Jenkins' Ferry, Ruffner's Battery, along with Lesueur's Missouri Battery, positioned themselves to provide supporting fire for an attack by Parsons's Division. When Clark's brigade, along with Colonel Lucien C. Gause's brigade of Brigadier General Thomas J. Churchill's division, attacked the Union line, Ruffner's and Lesueur's Batteries moved forward in support. Clark and Gause were repulsed, exposing the two batteries' positions. Visibility on the battlefield was poor, and Ruffner's Battery stumbled into the 2nd Kansas Colored Infantry Regiment, under the erroneous perception that the Kansans were a Confederate regiment. The error allowed the 2nd Kansas Colored Infantry to capture either two or three of the battery's cannons, which were moved to the Union lines. After the attack miscarried, Churchill's and Parsons's men were withdrawn. 
The battery suffered 17 casualties at Jenkins' Ferry; seven of the losses were prisoners of war, some of whom were executed by African American soldiers as revenge for African American troops who had been killed by Confederate cavalry while trying to surrender at Poison Spring. Later that day, Steele's men escaped across the Saline River via a pontoon bridge; they arrived in Little Rock on May 2. Ruffner's Battery was assigned four new cannons, all 6-pounder smoothbores. After Jenkins' Ferry, the unit saw no further action, and spent the rest of the war stationed at various points in Arkansas and Louisiana. On November 19, the battery, which had previously borne the name of its commander, was officially designated the 1st Missouri Field Battery and was assigned to Major William D. Blocher's artillery organization. Smith signed surrender terms for the Trans-Mississippi Department on June 2, 1865; the men of the 1st Missouri Field Battery were paroled five days later, while stationed at Alexandria, Louisiana, ending their combat experience. Over the course of the unit's existence, roughly 170 men served in it at some time or another. At least six of them were killed in battle, and at least four more died of illnesses. ## See also - Bibliography of the American Civil War - List of American Civil War battles
2,211,037
Flowing Hair dollar
1,122,164,294
Coin minted by the United States from 1794 to 1795
[ "1794 introductions", "Eagles on coins", "Goddess of Liberty on coins", "Silver coins", "United States dollar coins" ]
The Flowing Hair dollar was the first dollar coin issued by the United States federal government. The coin was minted in 1794 and 1795; its size and weight were based on the Spanish dollar, which was popular in trade throughout the Americas. In 1791, following a study by Alexander Hamilton, Congress passed a joint resolution calling for the establishment of a national mint. Later that year, in his third State of the Union address, President George Washington urged Congress to provide for a mint, which was officially authorized by the Coinage Act of 1792. Despite the authorization, silver and gold coins were not struck until 1794. The Flowing Hair dollar, designed by Robert Scot, was initially produced in 1794, and again in 1795. In October 1795 the design was replaced by the Draped Bust dollar. ## Background Beginning in the 1780s, a large number of prominent Americans called for the establishment of a central mint to supply the United States with official coinage; all such proposals failed due in large part to lack of funds and opposition from individuals and groups who preferred that coins be struck by the individual states. Since there were no federal coins issued, the needs of the states were fulfilled by a variety of domestic and foreign coins and tokens, including Spanish peso, eight-real coins (popularly known as Spanish dollars or pieces of eight). In 1789, the United States Constitution, which granted Congress the power "to coin Money, regulate the Value thereof, and of foreign Coin, and fix the Standard of Weights and Measures", was ratified and came into force. The following year, Congress began deliberating on the state of the nation's monetary system and coinage. On January 28, 1791, Treasury Secretary Alexander Hamilton presented a report to Congress detailing the findings of a study he had conducted on the monetary system and the potential of a United States mint. As part of his study, Hamilton had a series of assay tests of Spanish dollars performed, as that was the coin upon which the United States monetary system would be based. After viewing the results, the secretary recommended that the silver content of the United States dollar be based on the average silver content of the pesos tested. Hamilton's recommendation was that the dollar should contain 371.25 grains of silver and have a gross weight of 416 grains, the balance being copper. On March 3, 1791, after reviewing Hamilton's report, Congress passed a joint resolution authorizing a federal mint; the resolution, however, gave no specifics or appropriations. ### Establishment of the Mint In his third annual address to Congress, later known as the State of the Union address, delivered on October 25, 1791, in Philadelphia, President George Washington urged members of Congress to put the joint resolution approved earlier that year into immediate effect: > The disorders in the existing currency, and especially the scarcity of small change, a scarcity so peculiarly distressing to the poorer classes, strongly recommend the carrying into immediate effect the resolution already entered into concerning the establishment of a mint. Measures have been taken pursuant to that resolution for procuring some of the most necessary artists, together with the requisite apparatus. In response, the Senate appointed a committee chaired by Robert Morris to draft the necessary specifications and legislation that would officially create a federal mint and coinage. 
The committee presented a bill before Congress on December 21, 1791, which stated in part that the new dollar coin (which was to form the basis of the United States monetary system) should contain 371 grains of silver and a total weight of 416 grains, as Hamilton had earlier recommended. The new silver coins were to be struck in an alloy containing 1,485 parts out of 1,664 (about 89.24 percent) fine silver, with the remainder copper, intended to equal the silver in Spanish dollars. However, an assay of the Spanish dollars was in error—they were in fact 65/72 silver (about 90.28 percent) with the remainder copper. One provision in Morris' legislation called for President Washington to be depicted on the obverse side of every coin struck by the new mint. The bill passed the Senate after debate, but it was altered in the House of Representatives to instead call for the head of an allegorical figure representing Liberty to appear. Upon returning to the Senate, the upper house insisted on its version of the design provision. The House rejected the provision for the second time and passed another version of the bill, after which the Senate concurred. The law, known as the Coinage Act of 1792, was signed into law on April 2, 1792, by President Washington. The Act provided for the creation of the United States Mint, and appropriated money to meet the cost of construction of an appropriate facility, and for salaries for employees and officials. The denominations sanctioned under the Act were half cents, cents, half dimes, dimes, quarter dollars, half dollars, dollars, quarter eagles, half eagles and eagles. On July 31, 1792, the foundation stone of the Philadelphia Mint was laid by newly appointed Mint Director David Rittenhouse. Machinery and personnel began occupying the new building by September 1792, and production began on cents in February 1793. In the first year of production at the Mint, only copper coins were minted, as the prospective assayer could not raise the required \$10,000 surety to officially assume the position; the 1792 Coinage Act stated that both the chief coiner and assayer were to "become bound to the United States of America, with one or more sureties to the satisfaction of the Secretary of the Treasury, in the sum of ten thousand dollars". Later that year, Secretary of State Thomas Jefferson appealed to Congress that the amount of the bonds be lowered. On March 3, 1794, Congress lowered the bonds to \$5,000 and \$1,000 for chief coiner and assayer, respectively. ## Production ### Design creation Early in 1794, engraver Robert Scot began preparing designs for the silver dollar. Scot's initial design depicted a bust of Liberty, while his reverse featured an eagle, both required by the 1792 Coinage Act. Scot's design closely followed his design for the cent, but with the Phrygian cap removed. Government officials later instructed Scot to include a wreath around the eagle and to move the denomination from the reverse face to the edge of the coin. After receiving approval, Scot began engraving the hubs for the new silver dollar. Extra care was taken during the engraving of this denomination, because the dollar would be the largest American coin, and would thus receive the most scrutiny from foreign nations. The lettering was executed by Frederick Geiger, who had worked as a typographer for various books and newspapers. After the dies were created, several copper test pieces were struck. 
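For readers who want to check the fractions quoted above, the conversions are plain arithmetic. The sketch below is purely illustrative and uses only the figures already cited (Hamilton's 371.25 grains of silver in a 416-grain dollar, the statutory alloy of 1,485 parts in 1,664, and the actual Spanish-dollar fineness of 65/72):

```latex
% Statutory alloy under the Coinage Act of 1792
\frac{1485}{1664} \approx 0.8924 \qquad (\text{about } 89.24\% \text{ fine silver})

% Hamilton's recommended weights give the same ratio:
% 371.25 grains of silver in a 416-grain dollar
\frac{371.25}{416} = \frac{4 \times 371.25}{4 \times 416} = \frac{1485}{1664}

% Actual fineness of the assayed Spanish dollars
\frac{65}{72} \approx 0.9028 \qquad (\text{about } 90.28\% \text{ fine silver})
```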
Officials decided to add fifteen stars around the periphery of the right-facing Liberty on the obverse, representing the fifteen states that had ratified the Constitution to that point. ### Minting Now that mintage of the silver denominations could begin, the Mint began seeking depositors to bring in silver and gold bullion to be coined. After receiving several deposits, assayer Albion Cox notified Rittenhouse of his belief that the .892 standard approved for silver coinage was difficult to produce and that it would darken if put into circulation. Instead, Cox recommended that the purity be modified to .900 fine, but also that the weight be kept at 416 grains. This meant that the new alloy was contrary to statute and that all depositors would be overcharged for their silver bullion deposits, as there was a higher silver content in the coins than was allowed by the Coinage Act of 1792. The Mint's action cost suppliers of silver about one percent of their deposit; the largest depositor, John Vaughan, reckoned his loss at \$2,260. Congress approved his petition for reimbursement in 1800, after several delays. Before the coins could be struck, the edge lettering and devices had to be impressed on the edge of the planchets. This action was performed with a device known as the Castaing machine; the machine stamped the edge with the words "Hundred Cents One Dollar or Unit" along with ornamentation. As production was inexact, many planchets intended for silver dollars were overweight. This was remedied by filing the face of the planchets; for this reason, the coins vary in weight more dramatically than later issues, which were minted with more precise equipment. The first silver dollars were struck on October 15, 1794. The silver used for the 1794 dollars came solely from silver ingots deposited with the Mint by Mint Director David Rittenhouse on August 22, 1794. Per a handwritten coin transfer warrant issued by Director Rittenhouse on October 15, 1794, 1,758 silver dollars were transferred from the custody of Chief Coiner Henry Voigt to the custody of Mint Treasurer Dr. Nicholas Way. Also on October 15, per a handwritten coin return warrant issued by Director Rittenhouse, the 1,758 silver dollars were transferred from the custody of Mint Treasurer Dr. Nicholas Way to David Rittenhouse, as a partial coin return towards his August 22 silver deposits. The 1,758 coins that were struck by Chief Coiner Henry Voigt, though acceptable, were poorly struck due to issues with the coining press that was used during early production at the Mint. It was a man-powered screw press intended for use on coins no larger than a half dollar. On October 16, 1794, after receiving a silver dollar from David Rittenhouse, Secretary of State Edmund Randolph forwarded the dollar coin to President Washington for his inspection. In an attempt to help circulate the coins, Rittenhouse spent many of the new coins and traded them for foreign coins to market the new products of the Mint. Others were distributed to VIPs and distinguished visitors to the Mint. After the initial production, Rittenhouse ordered all dollar coin production to end until Mint personnel could build a more powerful press that would be capable of better striking the coins. The Columbian Centinel (Boston, MA) first wrote an article about the new dollar coins on November 26, 1794: > Some of the new dollars now coining at the Mint of the United States have found their way to this town. A correspondent put one into the editor's hands yesterday. 
Its weight is equal to that of a Spanish dollar, but the metal appears finer ... The tout ensemble (entire design) has a pleasing effect to a connoisseur, but the touches of the [en]graver are too delicate, and there is a want of that boldness of execution which is necessary to durability and currency

The new coinage press was completed in early 1795, and the first group of dollars, totalling 3,810 coins, was delivered on May 6. The coins struck on May 8 may have borne a 1794 date, but there is no document or evidence to support such a statement. A number of 1795 dollars (along with one 1794 issue) are known to have been struck with a silver plug set into the center, measuring approximately 8 millimetres (0.31 in). It is believed that this was done to correct the weight of underweight planchets. The total mintage for the second and final year of production is estimated at 160,295. In total, 203,033 silver dollars were struck in 1795, but it is unknown exactly how many of those were of the Flowing Hair type, as the Draped Bust dollar succeeded it in October 1795; the Draped Bust dollar was designed by portraitist Gilbert Stuart at the behest of Rittenhouse's successor as Mint Director, Henry DeSaussure. ## Collecting Throughout its history, the 1794 dollar has widely been considered one of the rarest and most valuable of all United States coins. In a September 1880 issue of The Coin Journal, the author noted that a good quality specimen of the 1794 dollar was valued at fifty dollars. In the early 1990s, numismatic historian Jack Collins estimated the surviving number of the coins to be between 120 and 130. In 2013, the finest known example, which was among the earliest coins struck and was prepared with special care, was sold at auction for \$10,016,875, the highest selling price of any coin in history. The dollar was graded Specimen-66 by the Professional Coin Grading Service, which noted the special conditions under which it was struck. The coin, which had previously been owned by Colonel E.H.R. Green, was sold by Stack's Bowers Galleries in a public auction in January 2013. It had previously been sold in 2010, for what was then a record sum of \$7.85 million, to the Cardinal Collection Educational Foundation. Steven Contursi, a former owner of the coin, said that it was a "national treasure" and that he was proud to have been its "custodian" from 2003 until its sale in 2010. Martin Logies, representative of the foundation that purchased the coin, said that of all the rarities he had seen, he believed this one was the "single most important of all".
8,096,037
Thomas Playford IV
1,166,333,392
20th-century Australian politician and fruit grower
[ "1896 births", "1981 deaths", "20th-century Australian politicians", "Attorneys-General of South Australia", "Australian Army officers", "Australian Freemasons", "Australian Knights Grand Cross of the Order of St Michael and St George", "Australian military personnel of World War I", "Australian people of English descent", "Leaders of the Opposition in South Australia", "Liberal and Country League politicians", "Members of the South Australian House of Assembly", "Premiers of South Australia", "Treasurers of South Australia" ]
Sir Thomas Playford (5 July 1896 – 16 June 1981) was an Australian politician from the state of South Australia. He served continuously as Premier of South Australia and leader of the Liberal and Country League (LCL) from 5 November 1938 to 10 March 1965. Though controversial, it was the longest term of any elected government leader in Australian history. His tenure as premier was marked by a period of population and economic growth unmatched by any other Australian state. He was known for his parochial style in pushing South Australia's interests, for his ability to secure a disproportionate share of federal funding for the state, and for his shameless haranguing of federal leaders. His string of election wins was enabled by a system of malapportionment and gerrymander later dubbed the "Playmander". Born into the Playford family, an old political family, he was the fifth Thomas Playford and the fourth to have lived in South Australia; his grandfather Thomas Playford II had served as premier in the 19th century. He grew up on the family farm in Norton Summit before enlisting in the Australian Imperial Force in World War I, fighting in Gallipoli and Western Europe. After serving, he continued farming until his election as a representative for Murray at the 1933 state election. In his early years in politics, Playford was an outspoken backbencher who often lambasted LCL ministers and their policies, and had a maverick streak, often defying party norms, advocating unadulterated laissez faire economics, and opposing protectionism and government investment, in stark contrast to his later actions as premier. With the resignation of the LCL's leader, Richard Layton Butler, Playford became premier in 1938, having been made a minister just months earlier in an attempt to dampen his insubordination. Playford inherited a minority government and many independents to deal with, and instability was expected; he was seen as a transitional leader. However, Playford dealt with the independents adroitly and went on to secure a one-seat majority at the next election. In office, Playford turned his back on laissez faire economics and used his negotiating skills to encourage industry to relocate to South Australia during World War II, as the state was far from the battlefield. He built upon this in the post-war boom years, particularly in automotive manufacturing; although a liberal conservative, his approach to economics was pragmatic, and he was derided by his colleagues for his socialism as he nationalised electricity companies and used state enterprises to drive economic growth. Generally, Playford faced more dissent from within his own party than from the opposition centre-left Labor Party; the main obstructions to his initiatives came from the upper house, where the restriction of suffrage to landowners resulted in a chamber dominated by the conservative landed gentry. Labor leader Mick O'Halloran worked cooperatively with Playford and was known to be happy being out of power, quipping that Playford could serve his left-wing constituents better than he himself could. Playford's policies allowed for the supply of cheap electricity to factories, minimal business taxes, and low wages to make the state more attractive to industrial investment. He kept salaries low by using the South Australian Housing Trust to build public housing and by using government price controls to attract workers and migrants, angering the landlord class.
Implemented in the 1940s, these policies were seen as dangerous to Playford's control of his party, but they proved successful and he cemented his position within the LCL. During the 1950s, Playford and the LCL's share of the vote declined continually despite economic growth, and they clung to power mainly due to the Playmander. Playford became less assured in parliament as Labor became more aggressive, their leading debater Don Dunstan combatively disrupting the previously collaborative style of politics, targeting the injustice of the Playmander in particular. Playford's successful economic policies had fuelled a rapid expansion of the middle class, which wanted more government attention to education, public healthcare, the arts, the environment, and heritage protection; however, Playford was an unrelenting utilitarian, and was unmoved by calls to broaden policy focus beyond economic development. This was exacerbated by Playford and his party's failure to adapt to changing social mores, remaining adamantly committed to restrictive laws on alcohol, gambling and police powers. A turning point in Playford's tenure was the Max Stuart case in the 1950s, when Playford came under heavy scrutiny for his hesitation to grant clemency to a murderer on death row amid claims of judicial wrongdoing. Although Playford eventually commuted the sentence, the controversy was seen as responsible for his government losing its assurance, and he eventually lost office in the 1965 election. He relinquished the party leadership to Steele Hall and retired at the next election, serving on various South Australian company boards until his death in 1981. ## Family The Playford family heritage can be traced back to 1759, when a baby boy was left at the door of a house in Barnby Dun, Yorkshire, England, with a note to christen the child 'Thomas Playford'. The occupants of the house, who were to raise the child, were given instructions to receive money from a bank account for the deed. The child grew up to be a simple farmer in the village, and had a son in 1795 whom he christened 'Thomas Playford'. The tradition of naming the firstborn son in the family in this way has continued since. The second Playford was something of a loner, but at the age of 15 he developed a relationship with a girl five years his senior with whom he fathered a child. In order to avoid the social stigma of the situation, and on the advice of his parents, Playford enlisted in the British Army in 1810. While three years under the acceptable age, Playford's height (6 ft 2 in) enabled him to pass as eighteen. He spent 24 years in the service of the Life Guards, fighting all over Europe in Portugal, Spain and France, including the Battle of Waterloo at the age of 20. While a soldier, Playford became a devout Christian; he journeyed to many churches and listened to a great variety of sermons. He was sceptical of many pastors and church men, dismissing their "high sounding barren words". He left the Life Guards in 1834, received a land grant in Canada for his service, and journeyed there with his wife and family. His wife and a child died in the country, so he and his remaining kin returned to England. He worked as a historian for the Life Guards until 1844 when he migrated to the then-province of South Australia. Playford became a pastor there, built a property at Mitcham, and preached regularly for his own 'Christian Church', which was essentially Baptist in character. 
The third Playford, Thomas Playford II, was born at Bethnal Green, London in 1837, to the second wife of Pastor Playford. He was raised on the Mitcham property in South Australia, was intellectual and bookish, and wished to attend the prestigious Anglican St Peter's College to study law. He was rebuked by his father and subsequently became a farmer like his predecessors, buying property at Norton Summit and growing vegetables, plums and apples. He was elected to the local East Torrens Council in 1863 at the age of 27; and then to the State Parliament in 1868 as a 'liberal' (parties had not yet formed), representing the constituency of Onkaparinga. He became known as 'Honest Tom' for his straightforward and blunt ways. He lost his seat in 1871 and regained it in 1875 only to lose it again until he was re-elected in 1887, upon which he became Premier of South Australia. He subsequently lost the premiership in 1889, regained it in 1890, and then spent a great deal of his term absent in India. After losing an election, he relocated to London to represent South Australia as Agent General to the United Kingdom. While in England, Playford was thrice offered a knighthood, but declined it each time. He returned to South Australia to assist Charles Kingston in his government, but ultimately crossed the floor to bring down Kingston over his plans to lessen the power of the Legislative Council. With the advent of Australian Federation, Playford became a Senator for South Australia. He was leader of the Senate and the 7th Minister for Defence. After one term as a Senator, Playford was defeated. He ran again in 1910, was unsuccessful, and retired to Kent Town, where he died in 1915, aged 78. The fourth Playford, father of Sir Thomas, was born in 1861. Unlike his own father and grandfather, who had led lives as soldiers, churchmen and politicians, he became a simple farmer at the Norton Summit property and was dominated by his wife, Elizabeth. He was, like his forebears, a regular churchgoer, and only once was involved in politics with a short stint on the East Torrens District Council. In comparison, Elizabeth was the local correspondent of The Advertiser, treasurer and chief member of the local Baptist Church, and a teacher. Four children were born to the couple; three daughters and one son, Sir Thomas. ## Early life Thomas Playford was the third child born to the family, with two sisters before him and one following. He started school at the age of six, going to the local Norton Summit School. The school had one room, one teacher, two assistants and 60 students, and taught children aged six to twelve. Playford, while an adept learner, frequently argued with his teacher, and was the first child to have been caned there. While learning, he accompanied his father down to the East End Markets with their farming produce. It was the influence of Playford's mother, Elizabeth, that contributed to his relative Puritanism and social habits. She was a devout Baptist Christian, and it was primarily because of her that he publicly abstained from alcohol, smoking, and gambling throughout his lifetime. However, despite her influence on his social habits, he did not regularly attend church like his family. His father suffered a fall and a broken leg when Playford was thirteen. He requested permission to leave school and take over the family farm; this was granted, and the boy, even after his father had recovered, dominated the management of the farm. 
While out of school, Playford continued to learn; he joined the local Norton Summit Society, and took part in classes and debates in Adelaide. He won a public speaking award for a speech he made to an Adelaide literary society. World War I broke out in 1914, and Playford wished to join the Australian Imperial Force. His parents persuaded him to assist them on the farm until close to his 19th birthday. He entered Keswick Barracks on 17 May 1915, was enlisted as a private and placed in the 27th Battalion, 2nd Division. Playford was one of those who left Adelaide on HMAT Geelong on 31 May. The Geelong picked up more soldiers at Perth, and then sailed to Suez, Egypt. The Australian soldiers received training in Egypt, but during the evenings left their camps to indulge themselves in the Egyptian towns and cities. Frequent fights broke out between the Australian troops and the locals, with responsible soldiers left to take the rest back to camp. Playford assisted in this and dragged Australian soldiers from the beds of Egyptian prostitutes. Training was completed after two months and Playford landed at Anzac Cove on 12 September 1915. After taking part in the Gallipoli Campaign, Playford and his battalion left for France on 15 March 1916. He fought on the Western Front and was shot and wounded on 20 October, evacuated to London, and kept out of action for a year. Playford endured many operations during this time to remove the shrapnel that had penetrated his body, although some of it remained within him, and his hearing was permanently damaged. Turning down an offer for a staff job in India, Playford returned to his battalion in October 1917 and continued fighting in Belgium and France. With the end of the Great War, Playford returned to South Australia with his battalion, disembarking at Outer Harbor, Adelaide on 2 July 1919. He had received no decorations, but had been commissioned from the ranks as an officer and was honourably discharged in October with the rank of lieutenant. Despite Playford's intellectual capability, he shunned the Government's offer of free university education for soldiers and returned to his orchard. He continued growing cherries on the property, and engaged in his hobby of horticulture. His involvement in various organisations and clubs was renewed. Through relatives Playford met his future wife Lorna Clark (1906–1986), who lived with her family in Nailsworth. Although both families were religiously devout, the Clarks were even more so than the Playfords, and a long courtship ensued. Taking her out on his Harley Davidson motorcycle at night, the two were forced to leave the theatre halfway through performances so as to not raise the ire of the Clarks. Before their wedding on 1 January 1928, they were engaged for three years. During their engagement, Playford built their new house on his property, mostly by his own hands and indented in the hills themselves; it remained their home throughout their lives. Two years later, on Christmas Day, 1930, the family's first daughter was born, Margaret. Two more children were born to the family; Patricia in 1936, and Thomas Playford V in 1945. All three of them attended private schools: Patricia attended the Presbyterian Girls' College, becoming a teacher; and Margaret attended Methodist Ladies' College, later training as a child psychiatrist. The sixth Thomas wanted to attend university, but, like his forebears, was rebuked and worked on the orchard. Like a Playford before him, he became a minister of religion in his later life. 
## Political career Among the organisations that Playford belonged to was the local branch of the Liberal Federation, yet until the months preceding his eventual election, he never talked of holding political office. The Liberal Federation was considering a merger with the Country Party to avoid Labor retaining office during the Great Depression. Archie Cameron, an old wartime friend of Playford's and a federal Coalition MP, influenced Playford to run for office when he heard of the merger. In 1932 the Liberal and Country League (LCL) was created, and Playford ran for the multi-member constituency of Murray at the 1933 election. Along with the other LCL hopefuls, Playford journeyed around the electorate advocating his platform. The constituency had a considerable German element, descendants of refugees who had escaped persecution in the German Empire. Grateful for the past help of Playford's grandfather, they swung their strong support behind him and he was comfortably elected to the South Australian House of Assembly. With a split in the Labor vote, the first LCL government was formed with Richard Layton Butler as Premier. For the next five years Playford was to remain a backbencher, and to involve himself relatively little in government matters. His speeches were short, but to the point, and, running against the norm, he often attacked the government itself when he saw fit. The historian Peter Howell said that Playford was "an unusually insolent and disloyal backbencher, always concerned to cut a figure and ridicule his party's leader". A new member's maiden speech is traditionally heard politely without the interruption and heckling prevalent in Australian politics, but Playford's aggressive debut in parliament was not accorded this privilege as "a casual visitor could have mistaken him" for an opposition member. At one point, a visibly angry Premier Butler interjected after Playford attacked the members of the Employment Promotion Council. In his opening address, Playford individually mocked the bureaucrats who comprised various government bodies, and then condemned public transport monopolies, as well as declaring "It is not our business to worry whether people go broke or not". This comment provoked interjections from both government and opposition members—in the midst of the Great Depression, Playford's unashamed and aggressive promotion of his unbridled laissez faire philosophy stood out amidst the increasing prevalence of government intervention. During his first term in parliament, Playford also gained attention for his unconvincing command of the English language; he developed a reputation for mispronouncing common words, using bad syntax, and speaking in a monotone. He continued to attack his ministers, and complaints from the likes of Public Works Minister Herbert Hudd only encouraged Playford to further mock him. He consistently opposed the liberalisation of liquor trading, having been unimpressed by the drunken behaviour he had witnessed while in the military. He continued to stridently support economic rationalism, something he would later renege on as premier. He opposed government investment in capital works as a means of generating employment and stimulating the economy during the Depression, and called for a decrease in dairy production within the state on the basis that it was more efficient to import from interstate, where rainfall was higher and grazing was more effective. 
Playford further criticised government subsidies to work farms designed to alleviate unemployment among Indigenous Australians, claiming that the cost exceeded that of the standard jobless payment. He also endorsed the privatisation of unprofitable state railways and denounced tariff protection as rewarding inefficiency and non-innovation. In 1936, Playford defied his party by voting against the formation of the South Australian Housing Trust. Nevertheless, despite his refusal to toe the party line, Playford was well regarded for his studious attitude to research and the preparation of his speeches. Around Playford, much activity was occurring. Legislation provided for the tools that he was to inherit later as Premier: aggressive economic initiatives, a malapportioned electoral system and a staid internal party organisation. The state had been persistently in deficit in recent times, and as an agriculture-dominant state, had been at the mercy of commodity prices, so a strategy of industrialisation was initiated under the guidance of senior politicians, public servants and industrialists. The creation of the LCL was dependent on the implementation of various policies to ensure the strength of the party's country faction. There had been an electoral bias in favour of rural areas since the Constitution Act of 1857, but it was now to increase dramatically. In 1936, legislation was brought in that stipulated that electoral districts were to be malapportioned to a ratio of at least 2:1 in favour of country areas. In addition, the multi-member districts, which had together returned 46 members, were replaced with 39 single-member districts—13 in Adelaide and 26 in the country. Over the next three decades, Adelaide's population grew until it had triple the population of the country areas, but the distribution of seats in the legislature gave rural voters a disproportionate influence by a factor of six (see the illustrative calculation below). The desired long-term effect was to lock the opposition Labor Party out of power; the unexpected short-term effect was a large number of dissatisfied rural independents in the 1938 election. Although Playford played no part in its development or implementation, the system was later christened the 'Playmander', as a result of the benefit it gave him and his failure to take action towards reforming it. After the Liberals won the 1938 election, with Playford having transferred to Gumeracha, Butler sought to tame Playford's aggressive oratory against the LCL cabinet by offering him a ministry. Playford entered the cabinet in March 1938 as the Commissioner of Crown Lands, and held portfolios in Irrigation and Repatriation. The new frontbencher subsequently adopted a more moderate style of parliamentary conduct. Butler abandoned the Premiership in November to seek election for the federal seat of Wakefield, a Liberal stronghold that had been vacated by the death of sitting member Charles Hawker in an aviation accident. Despite having been in cabinet for only a few months, Playford was unanimously elected as the new leader of the LCL by his peers, and thus became the 33rd Premier of South Australia. Like Butler, he also served as Treasurer of South Australia. Regarded as a compromise candidate who was able to appeal to both urban and rural voters, Playford was expected to be only a transitional leader before someone else took over the Liberal leadership, but he was to remain for almost 27 years.
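The factor-of-six figure mentioned above can be checked with a rough, illustrative calculation based on the seat split and population ratio described in this article (a sketch, not a figure taken from the legislation). Writing P for the country population and 3P for Adelaide's:

$$\frac{\text{Adelaide voters per seat}}{\text{country voters per seat}} = \frac{3P/13}{P/26} = \frac{3 \times 26}{13} = 6$$

That is, with two-thirds of the seats allocated to roughly a quarter of the population, a country vote carried about six times the weight of an Adelaide vote.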
Playford's tenure was the premiership in which the title was officially applied; until that point, the Treasurer was the head of the government, although Premier had been in de facto use for several years. Upon his ascension, Playford headed a minority government; the LCL only held 15 of the 39 seats in the lower house. The balance of power was held by 13 mostly conservative independents. Many had gained from discontent over Butler's relatively liberal social stances, so Playford sought to assuage them by having his LCL colleagues refrain from upsetting social conservatives. He also used the threat of an early election to deter the independents from stalling his initiatives—with their lack of party infrastructure and funding, they would be the most vulnerable to election campaigns. ## World War II Playford became a wartime Premier in 1939 when Australia, as part of the British Empire, entered World War II. Later in the war, cut off from traditional suppliers of manufactures, the country was forced to create its own. Armaments and munitions factories needed to be created to supply the war effort, and Playford was vociferous in advocating South Australia as the perfect location for these. It was far from the battlegrounds and had the most efficient labour force in the nation. British Tube Mills opened a mill in the inner-northern suburbs. Ammunition factories were built in the northern and western suburbs of Adelaide, as well as in some smaller installations in regional centres, and construction on a shipyard began in Whyalla. Having strenuously opposed a construction of a pipeline to pump water from Morgan in the Murray River to Whyalla for the Whyalla Steelworks and blast furnace there before his ascension to the premiership, Playford oversaw approval of the Morgan-Whyalla pipeline in 1940 and its completion in 1944. He also reversed his previous opposition to Butler's pine plantation and sawmill program, authorising an expansion of the program in the state's southeast. Salisbury, then a dormitory town to the north of Adelaide, became a defence centre; the shipyards at Whyalla began launching corvettes in 1941 just as Japan entered the war. All of these developments were done under Playford's watch, with most of the factories being built by the Department of Manpower and the South Australian Housing Trust. In Woodville in western Adelaide, a large plant for Actil cotton was built. The explosives factory at Salisbury was converted into an aerospace research facility after the war, as various companies worked on matters related to rocket testing at Woomera in the state's far north; the Salisbury complex became the second largest employer of South Australians for a period after the war. The munitions factory in the western suburb of Hendon was later converted into a plant for the electrical appliance firm Philips and at its peak employed more than three thousand people. In order for these developments to occur, Playford personally had to attend to the bureaucracy that stood in the way. He confronted public service workers, and successfully negotiated with the heads of private companies. But it was negotiations with the Federal Government that were to prove the hardest. In his time as Premier, Playford was to confront seven different Prime Ministers: Lyons, Page, Menzies, Fadden, Curtin, Forde and Chifley. Strangely, he enjoyed best relations with the Laborite Chifley, and had a poor rapport with his fellow conservative, Menzies. 
During the wartime years, Menzies' reluctance to meet with Playford initially hampered industrial efforts, but Playford's other federal colleagues made sure that deals could be made. To Playford's advantage there was usually a disproportionate number of South Australians in federal cabinets, both Liberal and Labor. This clout, combined with his own intensive and unconventional negotiating tactics, made sure that South Australia regularly received more federal funds than it would have been allocated otherwise. This was much to the chagrin of Robert Menzies, who said: "Tom [Playford] wouldn't know intellectual honesty if he met it on the end of a pitch fork but he does it all for South Australia, not for himself, so I forgive him." By the time of his departure from power, Playford had gained the reputation of being "a good South Australian but a very bad Australian", and of using "threats to bully recalcitrant Prime Ministers". For his part, Playford remained unrepentant, claiming that federal authorities had infringed the constitution of Australia and had consistently exercised powers over the states that were not rightfully theirs. Playford accused the High Court of Australia of helping the federal parliament under Curtin to legislate to give itself a monopoly on the acquisition of income tax, which he claimed was contrary to the intention of the constitution to prevent excessive centralisation of power in the federal government. In 1958, he threatened to take the federal government to the High Court, which led to South Australia being given more compensation under the River Murray Waters Agreement for the loss of water from the Snowy River. Three years later he went to the High Court in an attempt to have Canberra pay for the standardisation of the gauge on the Broken Hill-Port Pirie railway. During the war, two state elections were held, in 1941 and 1944. In the 1941 election, there was a significant decrease in the independent vote, and both the Labor Party and the LCL made gains, with Playford forming the LCL's first majority government. This was in large part due to the LCL's shift to the right on social issues to usurp the independents' appeal. In 1942, compulsory voting (but not enrolment) was introduced, and first took effect at the 1944 election, with an increase in voter turnout from 51% to 89%. Again Playford won with a one-seat LCL majority, hanging on with the help of the malapportioned electoral system. Power and water schemes were expanded to be able to cope with the industrial development occurring. The state was at a disadvantage in that it was completely reliant on imports for its fuel supply. South Australia's near-monopoly electricity supplier, the Adelaide Electricity Supply Company (AESC), was reluctant to build up coal reserves in case of a transportation problem. It ran on coal that was shipped over from New South Wales (NSW), where the mines were inefficient and plagued by communist-agitated industrial strife. Playford demanded that supplies be built up so the factories could keep producing; he managed to secure eight months' worth of coal reserves from NSW, but even that began to dwindle due to the continued industrial action. Coal supplies were ordered from South Africa in desperation, at Playford's behest. The frustration he experienced while dealing with the AESC would later prove disastrous to the company as the Premier took action against it. ## Industrialisation The AESC continued to snub the government.
Playford advocated the use of brown coal from the South Australian Leigh Creek mine to avoid supply complications, and even had a bill encouraging its use made into law. He also championed the development of the town and the expansion of the mine, which had been dormant for several decades, to ease the state's dependency on imported coal. Much state and federal government money was invested in the scheme, the town infrastructure was built, and production started in February 1944. Shortly afterwards, the AESC responded by buying new boilers which could only use the more productive black coal. With more conflicts ensuing, and even with the company slowly relenting, Playford did not stop his struggle. A Royal Commission was appointed in March 1945 to find a solution between the two parties, and it presented its report in August with a recommendation that the AESC be nationalised. A few months later, Playford's stance received a boost when heavy strikes in New South Wales forced shutdowns in South Australia that saw thousands of labourers out of work. Playford was by then heading the only conservative government in the nation, and when he requested commonwealth funds to assist in the nationalisation of the AESC, Prime Minister Chifley responded with glee and enthusiasm. On 11 October, Playford presented a bill to Parliament to nationalise the AESC and create the Electricity Trust of South Australia. Labor, astonished that such an action was to come from a Liberal Premier, resolutely supported the bill, guaranteeing it passage through the House of Assembly 29–6, the only dissenters being LCL members. However, the Legislative Council was dominated by economic conservatives, fierce adherents of free enterprise and opponents of what they considered to be undue government intervention in the economy. The LCL councillors tried to have the bill watered down to allow merely for government control of the AESC for a brief period. In the Council, where suffrage was reliant upon wage and property requirements, the ALP only held four seats out of twenty, and only five LCL members supported the nationalisation. Thus, on 7 November, the bill failed to pass and it was not put to the Parliament again until 1946. On 6 April, after months of campaigning, Playford managed to change the mind of MLC Jack Bice, and the bill passed. The Electricity Trust of South Australia was formed, and was to become a major aid to post-war industrialisation. The decision to nationalise the AESC and develop Leigh Creek proved to be prescient. In early 1947, mines in New South Wales were again crippled by strikes. The worst strike came in 1949, forcing Chifley to send in the armed forces to extract coal. While the other states had to suffer industrial power rationing and thus reduced manufacturing output and more unemployment, South Australia managed to escape as the miners at Leigh Creek worked around the clock. Within four years the mine was operating at a surplus and the town was further rewarded with federal funding. From 1947 until the end of Playford's leadership in 1965, the output of the mine increased tenfold to almost two million tons a year. Transport infrastructure was improved, European immigrant workers were recruited, and twin power plants at Port Augusta were completed in 1960 and named after the premier. The new plants exclusively used Leigh Creek coal, and by 1970 the whole state was self-sufficient in electricity.
ETSA and the mine were generating enough revenue to maintain the town and mine of Leigh Creek—the town was sometimes dubbed Uncle Tom's Baby—and to make a profit as well. From 1946 to 1965, the proportion of South Australians connected to electricity increased from 70 to 96%. The nationalisation of the AESC was the most prominent manifestation of Playford's economic pragmatism; although ideologically a supporter of free enterprise like his colleagues, he saw ideology as secondary if it stood in the way of his objectives. He had little time for those who objected to plans that were for the betterment of South Australia, despite these plans being contrary to particular interpretations of party ideology. The struggle for Leigh Creek was seen as a critical point in Playford's premiership; a second legislative failure was seen as being potentially fatal for Playford's leadership of his party, but the successful passage of the bill enhanced his image and gave him enduring control over his party for the rest of his career, although it angered some of the staunch LCL conservatives in the upper house; a significant number of them refused to talk to Playford for a substantial period thereafter. During the post-war boom, the methods used to set up business in South Australia were unique. Playford's government would charge little to no business tax, supply cheap electricity, land and water, and have the Housing Trust build the factories and workers' homes. Consumer goods and automotive factories were created in the northern and western suburbs of Adelaide; mining, steel and shipbuilding industries appeared in the 'Iron Triangle' towns of Whyalla, Port Pirie and Port Augusta. Prices and wages were kept relatively low to enable continued investment, and South Australia was slower than the other states to abolish these wartime measures in order to increase its industrial competitiveness. The government initiatives managed to overcome the large logistic burden, as Adelaide and South Australia were far from the markets where the goods would be sold. The Housing Trust was a key plank in Playford's campaign to keep costs low and promote investment. By providing cheap housing, workers could be persuaded to accept lower salaries, therefore keeping production costs down. In 1940, Playford introduced the Housing Improvement Act to parliament, having seen the benefits of the Housing Trust's activities. The main aims of the legislation were "to improve the adverse housing conditions" by replacing "insanitary, old, crowded, or obsolete dwelling houses" with better-quality buildings—at the time many older residences in the city centre were made of corrugated iron and many areas were slum-like. The law forced landlords to provide a minimum standard of housing and enacted rent controls, setting a maximum rent for various houses; at the time many landlords bought large numbers of low-quality dwellings and charged tenants exorbitant prices. It also expanded the role of the Housing Trust, potentially undercutting the rentier class. Labor were taken aback by Playford's move, as this was the start of a trend whereby the nominally conservative government pursued policies that were more left-wing than those of Labor governments across the country. After expressing shock at Playford's "loving kindness to the poor and distressed", Labor helped to get the legislation—which threatened the interests of the landlord class that traditionally supported the LCL—passed into law.
During one 15-year period, Housing Trust rents were not increased once despite steady inflation. Many of the methods that Playford used were described by economic conservatives as 'socialism', drawing opposition from within his own party, especially in the Legislative Council. It is even said that the Liberal leader in that chamber—Sir Collier Cudmore—once referred to Playford as a 'Bolshevik'. The unique economic intervention earned Playford scorn from his own colleagues, but the Labor movement was much more receptive. Indeed, Labor leader Mick O'Halloran would dine with Playford on a weekly basis to discuss the development of the state, and the pair were on close personal terms. At a dinner party, O'Halloran remarked: "I wouldn't want to be Premier even if I could be. Tom Playford can often do more for my own voters than I could if I were in his shoes." O'Halloran's lack of ambition was mocked in a political cartoon, but the Labor leader took the piece as a compliment and had it framed and put on display. As Playford had more opposition from his LCL colleagues in the upper house than from Labor, O'Halloran was often described as the premier's 'junior partner'. Playford called Labor "our Opposition", in comparison to opponents in his own party, whom he decried as being "critical without being helpful". This cooperative nature of party politics would not change until Don Dunstan's prominence in the late 1950s, when Playford would be assailed not for his economics, but for his government's comparatively low expenditure on public services such as education and healthcare. Large projects were commenced. The city of Elizabeth was built by the Housing Trust in Adelaide's north, for the production of Holden motor vehicles. Populated mainly by working-class English migrants, it was, before its eventual economic and social decline, a showcase of successful city planning. Playford also successfully coaxed Chrysler to stay in Adelaide and expand its operations. The Housing Trust sold the Tonsley Park site where the car manufacturing plant was set up, and helped to install railyards, electricity and water infrastructure there, as it had done at Elizabeth. By the time Playford left office, Holden and Chrysler employed around 11,000 workers, 11% of the state's manufacturing employees. After earlier failed attempts to bring a tyre factory to Adelaide, the plans in the early 1960s to build the Port Stanvac Refinery, which would produce hydrocarbons used in synthetic rubber, were enough to convince both a Dunlop Rubber-Olympic joint venture and SA Rubber Mills (later Bridgestone Australia) to start manufacturing operations. Playford also sought to involve South Australia in uranium mining, which he saw both as a means of providing electricity to power industrial development and as a means of ensconcing the state in the anti-communist alliance in the midst of the Cold War. He was supported in this venture by federal subsidies and concessions. After the deposits at Mount Painter were deemed to be unsuitable, the focus turned to Radium Hill and significant state government money was invested into research. State and federal laws were changed to allow for mining at Radium Hill and exportation of uranium; Playford also publicly advocated for nuclear power. Rewards were offered for the discovery of uranium deposits, but no suitable reserves were found, so Radium Hill was the only project to proceed. The Korean War had just erupted, and the American government was anxious to secure uranium for nuclear weapons.
Playford was able to exploit this to secure "the easiest and most generous [deal] in the history of uranium negotiations". It was the largest purchase of uranium the Americans made during the Cold War, and they contributed £4m for infrastructure development. Mining started in November 1954, and lasted for the seven-year period of the contract with the Americans. Almost a million tonnes of ore had been mined, amounting to nearly £16m in contracts. Radium Hill had made a profit but was closed as higher-grade alternatives were discovered elsewhere and a new buyer could not be found. Playford also attempted to have the Australian Atomic Energy Commission based in the state, but failed; the nation's only nuclear reactor was built at Lucas Heights on the outskirts of Sydney. When Playford left office in 1965, South Australia's population had nearly doubled from 600,000 in the late 1930s to 1.1 million, the highest proportionate growth rate among the states. The economy had done likewise, and personal wealth had increased at the same rate, second only to Victoria. During Playford's 27 years in power, employment in manufacturing in South Australia had increased by 173%; Western Australia was in second place with 155% growth, while the national average during the period was 129%. The state's share of Australia's manufacturing sector increased from 7.7 to 9.2%. However, there was criticism that Playford had not diversified secondary industries enough, that industrial growth was beginning to lag behind the other states in the last decade of his leadership, and that the reliance on automotive production—Holden and Chrysler were 15% of the economy—made the economy more vulnerable to shocks in the future. Playford was also criticised for his informal style and tendency to rely on a small circle of public servants, sidelining much of his cabinet and not leaving a legacy of industrial infrastructure. Blewett and Jaensch said that Playford's "ad hoc methods and personalised administration" had worked well but that he needed a "more sophisticated" approach in later years, and was unable to adapt. ## Don Dunstan At the 1953 election, the young lawyer Don Dunstan was elected to the House of Assembly as the Labor member for Norwood, ousting the LCL incumbent. Playford had landed unexpectedly in his role as the undisputed leader of his party, while Dunstan was, from the start of his parliamentary career, a stand-out among his own ranks and an excellent orator in parliament. Dunstan and Playford were each other's principal antagonists. Playford, used to cooperating with Labor leaders more than attacking them, sensed Dunstan's promise and, predicting that one day Dunstan would be at the helm, attempted to establish bonds. So, after a late session of parliament at night, Playford would give Dunstan a lift home in his car. As Dunstan's home was situated on George Street, Norwood, it was only a small deviation from Playford's normal route to his home in Norton Summit. The topics that the two discussed were never fully revealed, yet Playford, according to Dunstan, would talk to him in a paternalistic manner. The two built up somewhat of a relationship and developed a respect for each other but, due to the strength of their respective views (Playford was a liberal conservative, Dunstan a libertarian socialist), they did not establish the same type of bond that Playford had with earlier Laborites. Facing an opposition that was becoming uncooperative was not something Playford had expected, or could satisfactorily handle.
Before Dunstan made his presence felt in Parliament, Playford would meet with Labor leaders to discuss bills and ensure bipartisan support for them in the House of Assembly; there was little discord. Previously, the only belligerents had been rural independent members. Even while the economic boom continued, the LCL vote gradually declined after 1941. The LCL never held more than 23 seats during Playford's tenure due to being almost nonexistent in Adelaide. With few exceptions, its support in the capital was limited to the eastern crescent and the Holdfast Bay area even at the height of Playford's power. It relied on favourable preferences from minor parties and independents and the malapportioned electoral system in order to stay in office. It did, however, win a majority of actual votes, barring 1944 and 1953, on a two-party-preferred basis until 1962. Knowing that the Playmander made a statewide campaign fruitless, Labor had begun to combat it by directing its efforts at individual seats. Slowly, seats were whittled away—the loss of Norwood in 1953 was followed by the losses of Murray, Millicent and Frome in 1956, and Mt Gambier and Wallaroo in 1957–8 by-elections. Playford's dominance over the party and his ignorance of the wishes of its broad membership base brought about a degree of disillusionment, and the party machine began to decay. The dominance stopped the emergence of a new generation of political talent, and had a "stultifying" effect. Although the Playmander ensured his ongoing electoral success, and Playford was credited with South Australia's economic success, the LCL polled a lower percentage than the corresponding Liberal government at federal level. During this period, Prime Minister Menzies recommended that Playford be granted an honour. Playford's wish was to be made a privy councillor but, while this was entirely possible, granting it would have led to demands from other state Premiers. Playford's grandfather had declined a KCMG, and Playford himself initially did likewise, but under the influence of Menzies he eventually accepted the honour and was knighted in 1957, though with a class above the KCMG: the GCMG. ## Max Stuart trial In December 1958 an event occurred that initially had nothing to do with Playford, but it eventually intensified into a debacle that was regarded as a turning point in his premiership and marked the beginning of the end of his rule. A young girl was found raped and murdered, and Max Stuart, an Aborigine, was convicted and sentenced to be executed only a month later, on the basis of a confession gained during interrogation, although he had protested his innocence in pidgin English. Stuart's lawyer claimed that the confession was forced, and appeals to the Supreme and High Courts were dismissed. A linguist who investigated the case thought that the style of English in the confession was inconsistent with Stuart's background and speech. This aroused disquiet and objections to the fairness of the trial among an increasing number of legal academics and judges, and The News brought much attention to Stuart's plight with an aggressive, tabloid-style campaign. Soon, the case attracted international attention, some of it on the assumption that the legal system was racist. The former High Court Justice Sir John Latham also spoke out. During this time, Stuart's execution had been delayed on multiple occasions.
On 6 July, Playford and the Executive Council decided not to reprieve Stuart, and he was due to be executed the next day, but an appeal to the Privy Council in London stalled proceedings again. However, this also failed. Labor then tried to introduce legislation to stall the hanging. Amid loud outcry, Playford started a Royal Commission to review the case. However, two of the Commissioners appointed, Chief Justice Mellis Napier and Justice Geoffrey Reed, had already been involved, Napier as presiding judge in the Full Court appeal and Reed as the trial judge. This provoked worldwide controversy with claims of bias from the likes of the President of the Indian Bar Council, the British judge Norman Birkett, the leader of the United Kingdom Liberal Party Jo Grimond and former British Prime Minister Clement Attlee. Years later, Playford admitted that he erred in his appointments of Reed and Napier and that it could have shaken public confidence on the fairness of the hearing. The Royal Commission began its work and the proceedings were followed closely and eagerly debated by the public. As Playford had not shown an inclination to commute Stuart's sentence, Dunstan introduced a bill to abolish capital punishment. The vote was split along party lines and was thus defeated, but Dunstan used the opportunity to attack the Playmander with much effect in the media, portraying the failed legislation as an unjust triumph of a malapportioned minority who had a vengeance mentality over an electorally repressed majority who wanted a humane outcome. Amid the continuing uproar, Playford decided to grant clemency. He gave no reason for his decision. The Royal Commission continued its work and concluded that the guilty verdict was sound. Although a majority of those who spoke out against the handling of the matter thought that Stuart was probably guilty, the events provoked heated and bitter debate in South Australian society and destabilised Playford's administration. According to Ken Inglis, "most of the responsibility for letting the ... general controversy ... [lies with] Sir Thomas Playford and his ministers ... [Theirs] was the response of men who were convinced that the affairs of the society were in good hands, and that only the naive and the mischievous would either doubt this general truth or challenge any particular application of it." Blewett and Jaensch said that the "clumsy handling" of the case was a manifestation of "the inevitable hubris of men too long in power". ## Political decline Playford was confronted with an economic recession when he went into the election of 1962. Earlier, in late 1961, the federal Liberal-Country coalition suffered a 13-seat swing and barely held onto government. Menzies' majority was slashed from 32 to 2. In the 1962 election, the Labor Party gained 54.3% of the two-party-preferred vote to the LCL's 45.7 percent. However, due to the Playmander, this was only enough to net Labor 19 seats to the LCL's 18. The balance of power rested with two independents, Tom Stott and Percy Quirke. On election night, it was thought that Playford's long tenure was over, but he did not concede. There was speculation that Playford would let an inexperienced Labor form a minority government as the economic difficulties might make it a poisoned chalice. After a week of silence he said he would not resign, and would see how the independents lined up when parliament reconvened. 
Labor needed the support of only one of the independents to make its leader, Frank Walsh, premier, while the LCL needed them both for a 10th term in government. They swung their support behind Playford and allowed his government to continue for another term; in return Quirke joined the LCL and was appointed to cabinet, while Stott was appointed speaker. Nonetheless, much media fanfare was made of the result, and of the detrimental effects of the 'Playmander'. Walsh declared the result "a travesty of electoral injustice" and lobbied the governor not to invite Playford to form government, to no avail. The election showed just how distorted the Playmander had become. Adelaide now accounted for two-thirds of the state's population, but a country vote was effectively worth two to ten times a vote in Adelaide. Electoral legislation remained unchanged. Labor introduced bills for reform, but these were defeated in both houses of Parliament. The premier introduced electoral legislation that would have entrenched his government further than under the Playmander. As electoral legislation was part of the South Australian constitution, it required an absolute parliamentary majority (20 seats, under the current system) to be changed. The LCL relied on Stott in the house, so Labor could obstruct changes by keeping members away and forcing a pair. While the political situation was becoming increasingly untenable, Playford himself continued with his job of building the state. Plans for Adelaide's future development, including a road transport plan, were commissioned. Playford saw a modern road transport system as crucial to continuing the industrialisation of the state, and motor vehicle registrations, which had increased by a factor of 50 since the end of the war, required road expansion. The Metropolitan Plan, a 1962 publication of the Town Planning Committee, called for the construction of 56 km of freeways and speculated that three times as much would be needed in future. However, most of this never materialised; only the South Eastern Freeway was approved during Playford's term, and construction began just after he left office. A more ambitious plan for a freeway system was commissioned, but the study was not completed until after Playford's departure and was scrapped by later governments due to widespread public objections to the proposed demolition of entire suburbs for interchanges. Playford was criticised for seeing roads only from an engineering and utilitarian standpoint and neglecting the social and community effects of such building. The state's population hit the one million mark in 1963 and the Port Stanvac oil refinery was completed. Adelaide's water supply was increased and the pipeline from Morgan to Whyalla was duplicated. ### Changing policy expectations The economic success of Playford's administration also fuelled the rapid growth of an immigrant, working and middle class whose social expectations differed markedly from his traditionalist stance, loosening his grip on power. The demographic changes brought on by Playford's successful economic policies increased the number of people who had rather different views to his on matters such as education, health, arts, the environment, gambling and alcohol. Blewett and Jaensch said "it can be argued that the development he fostered ultimately brought about his own political demise." The state's social fabric became more complex, but Playford was unable or unwilling to adapt to the more complicated political desires of its people.
Playford was known for his lack of funding for education, regarding it as a distraction from the industrialisation of the state. During this period, only the financial elite could afford a university education, and less than one percent of the population had a degree by the time Playford left office. Despite this, university attendance more than tripled, and secondary and technical school enrolments more than quintupled, far outstripping the 77% population growth during his time in office, as incomes—and hence access to education—rose steadily and the need for teenagers to find a job to help support the family declined. Although government expenditure on education increased from 10 to 17% from 1945 to 1959, the number of teachers had only doubled by the time he left office, so class sizes increased. The premier's education policy was criticised for being too conservative and lacking in innovation. Playford also did not allow the teaching of languages other than English in schools on the grounds that "English is good enough". Howell said that Playford's "prejudices...served to limit the capacity of many able South Australians to participate in trade negotiations or diplomatic work." University academics and the Public Examinations Board called for the inclusion of biology and a broadening of the senior high school curriculum to better prepare students for tertiary education, but were rebuffed. In 1963 the minimum school leaving age was raised to 15, but this was still lower than in most other Australian states. The premier was also known for his suspicious attitude towards the University of Adelaide and tertiary education in general; many of its graduates moved interstate and he thought that scientific research done within the state was not sufficiently focussed on practical applications. The antipathy was mutual and originated from Playford's days as a backbencher, when he formally complained to the university about a lecture given by a political science professor about Marxism. Playford saw the discussion of such a topic as misuse of public funds for the promotion of socialism, and his continued outspokenness about political curricula angered academics, who saw it as an attempt to curtail intellectual freedom. One vice-chancellor was angered to the point of telling a senior public servant that Playford was "an uneducated country colonial". Playford also opposed the establishment of a second university in the state as the population increased. While academics thought that another institution would bring more academic diversity, Playford thought this would increase competition for resources, so he allowed only a new campus of the University of Adelaide, which became Flinders University after his departure from power. In his defence, Playford pointed out that he had never rejected a funding request since the state took responsibility for universities in 1951, and that his proportional expenditure on tertiary education matched that of other states. During Playford's rule, hospitals were overcrowded and the Royal Adelaide Hospital's beds were crammed together at twice the density of developed world standards. After a media exposé and criticism from health sector professionals, two more hospitals were built, in the western and northern suburbs of Adelaide respectively. Playford's attitude to social welfare was also criticised. He said that it was up to charity, not the government, to support orphans and disadvantaged sectors of the community so that they could enjoy a better standard of living.
Spending on social welfare lagged behind that in other states, and legislative reforms on this front were non-existent. The arts, in which Playford showed no personal interest and which he regarded as "frills not fundamentals" and "non-productive", became a more prominent issue among the emerging middle class. Playford was often mocked by opponents and critics for this "philistinism". The Nation derisorily quipped that "It is axiomatic that the Premier draws his orchard spray gun at the mention of the word 'culture'". Sir Arthur Rymill, an LCL member of the upper house, criticised the demolition of the Theatre Royal and lobbied Playford for increased funding, without success, pointing out that world-class performing arts venues were generally subsidised by the government. Hurtle Morphett, a former State President of the LCL, quipped that if Playford "had wanted to convert the Art Gallery on North Terrace into a power house he would have done it without hesitation". In the 1960s, the Adelaide Festival started, while the Australian Dance Theatre and the State Theatre Company of South Australia were founded in the capital, with minimal assistance from Playford's government. The festival was well received despite the effect of censorship in a state well known for social conservatism. With the success of the festival, public interest in the arts increased, and with increasing calls for government funding, particularly from Dunstan, Playford finally agreed in 1963 to support this "non-productive area" by allocating funding for the eventual building of the Festival Centre. Playford's focus on development above all also led to controversy over heritage preservation. In 1955, the City of Adelaide legislated to rezone much of the city centre from residential to commercial land for office blocks. Many older houses, as well as the Exhibition Building, were demolished, sparking calls by many parliamentarians, Dunstan prominent among them, for Playford to intercede to preserve the historic character of the city. The premier was unmoved, backing the redevelopment and claiming that many of the demolished structures were "substandard". While Playford was known for his use of price controls to restrain the cost of living and therefore attract blue-collar workers to settle in the state and fuel industrialisation, South Australia was slow to introduce consumer protection laws in regard to quality control. It was believed that he was opposed to compulsory pasteurisation and other quality standards on milk to avoid offending his rural support base. Playford's reluctance to introduce regulations for tradesmen such as builders, electricians and plumbers was often seen as resulting from his being a keen do-it-yourself handyman. The conservatism of the Liberal and Country League did not keep up with the expectations of a modernising society. There was dissatisfaction with the restrictive drinking laws; environmentalists campaigned for more natural parks and more 'green' practices; police powers remained broad and 'no loitering' legislation stayed in place; gambling remained almost completely restricted. The constituents who loudly demanded changes were mostly immigrants and their offspring, accustomed to more permissive conditions in their countries of origin. Their homes, usually built by the Housing Trust, sprawled into 'rural' electoral districts that were controlled by the League. Labor pledged to introduce social legislation to meet their demands; Playford, who did not drink, smoke or gamble, had no interest in doing so.
His own candidates knew that the 1965 election would be unwinnable if Playford did not budge. But the economy was still going strong and incomes were still increasing, so the premier did not change his position on social reform.

## Fall from power

Playford went into the 1965 election confident that he would build upon his previous result. Labor was continuing its practice of concentrating on individual seats: this time the effort was invested in the electorates of Barossa and Glenelg. In Barossa, northern Adelaide urban sprawl was overflowing into an otherwise rural and conservative electorate; in Glenelg, a younger generation of professionals and their families were settling. On election day, 6 March, both seats fell to Labor with substantial swings. The LCL lost power for the first time in 35 years. In seats that were contested by both parties, Labor led on the primary vote with 52.7% to 43.3%. Playford stayed up on the night to see the result, and conceded defeat at midnight. He appeared calm when announcing the loss to the public, but wept when he told his family of it. Playford had been premier for 26 years and 126 days. After the loss, there were calls for Playford to be offered the post of Governor of South Australia or Governor-General of Australia, but nothing came of them. Playford continued to lead the LCL opposition for a further one and a half years until he relinquished the leadership. In the subsequent ballot, Steele Hall, a small farmer like Playford, won and led the LCL to victory at the following election with the Playmander still in place. Contrary to perceptions, Playford was loath to favour or groom a successor, and he did not publicly hint at whom he had voted for in the leadership ballot; there was speculation that the former premier may have been one of those who abstained from the vote. Playford retired from politics at the same time, presumably for reasons of age, but stated that "I couldn't cope with the change in the attitudes of some MPs, even some in the highest places... I found I could no longer cope with the change... I can't handle a liar who doesn't turn a hair while he's lying... I decided I couldn't take it any longer".

## Retirement and death

Playford retired from Parliament with a pension of \$72 a week; he had resisted giving higher pensions to Ministers or longer-serving MPs throughout his tenure. Regardless of what people thought of the Playmander, Playford was held in high regard for his integrity; during his premiership, there were no complaints of corruption or government largesse. Playford also prohibited his ministers from sitting on the boards of directors of public companies or owning shares, lest they become conflicted in their decision-making. He returned to his orchard at Norton Summit and took a continued interest in South Australian politics, but rarely aired his opinions publicly; Liberals nevertheless continued to consult him in private until his death. Nor did his closeness to Labor figures end: he offered advice to the new South Australian Labor ministers and assisted with a memorial to the former Labor Prime Minister John Curtin. In line with his reputation for promoting his state, Playford also privately lobbied the Liberal government in Canberra on behalf of the state Labor administration for more infrastructure funding. In 1977, when Don Dunstan held his 50th birthday party, Playford was the only Liberal invited.
There he socialised with former and future Labor Prime Ministers Gough Whitlam and Bob Hawke, Dunstan, and other Laborites. He served on the boards of the Electricity Trust and the Housing Trust, among others. There, unused to not being in absolute control, and having little specific scientific knowledge, he occasionally stumbled in his decisions. This also created difficulties with the other board members, who were reluctant to disagree with their former boss, regardless of their expertise. But his thrift, a theme throughout his premiership, did not abate; he constantly forced the trusts to use cost-saving methods and old vehicles for their work. This extended to his family property; he vigorously opposed his son's desire to install a new irrigation system in the orchard. Playford began experiencing serious health problems with his first heart attack in June 1971, and underwent treatment and procedures over the following ten years. On 16 June 1981, he suffered a massive heart attack and died. Two days later his memorial service was held at the Flinders Street Baptist Church. The funeral procession carried his coffin from the city, along Magill and Old Norton Summit Roads, where thousands turned out to pay their respects, to the Norton Summit cemetery where his forebears had been buried. There his gravestone was emblazoned with the phrase: 'a good man who did good things'.

## Select bibliography
20,907,274
Boydell Shakespeare Gallery
1,173,481,689
Art museum in London
[ "1789 establishments in England", "1803 books", "1805 disestablishments in England", "18th-century prints", "British books", "Defunct art galleries in London", "Museums established in 1789", "Paintings based on works by William Shakespeare", "William Shakespeare" ]
The Boydell Shakespeare Gallery in London, England, was the first stage of a three-part project initiated in November 1786 by engraver and publisher John Boydell in an effort to foster a school of British history painting. In addition to the establishment of the gallery, Boydell planned to produce an illustrated edition of William Shakespeare's plays and a folio of prints based upon a series of paintings by different contemporary painters. During the 1790s the London gallery that showed the original paintings emerged as the project's most popular element. The works of William Shakespeare enjoyed a renewed popularity in 18th-century Britain. Several new editions of his works were published, his plays were revived in the theatre and numerous works of art were created illustrating the plays and specific productions of them. Capitalising on this interest, Boydell decided to publish a grand illustrated edition of Shakespeare's plays that would showcase the talents of British painters and engravers. He chose the noted scholar and Shakespeare editor George Steevens to oversee the edition, which was released between 1791 and 1803. The press reported weekly on the building of Boydell's gallery, designed by George Dance the Younger, on a site in Pall Mall. Boydell commissioned works from famous painters of the day, such as Joshua Reynolds, and the folio of engravings proved the enterprise's most lasting legacy. However, the long delay in publishing the prints and the illustrated edition prompted criticism. Because they were hurried, and many illustrations had to be done by lesser artists, the final products of Boydell's venture were judged to be disappointing. The project left the Boydell firm insolvent, and it was forced to sell off the gallery by lottery.

## Shakespeare in the 18th century

In the 18th century, Shakespeare became associated with rising British nationalism, and Boydell tapped into the same mood that many other entrepreneurs were exploiting. Shakespeare appealed not only to a social elite who prided themselves on their artistic taste, but also to the emerging middle class who saw in Shakespeare's works a vision of a diversified society. The mid-century Shakespearean theatrical revival was probably most responsible for reintroducing the British public to Shakespeare. Shakespeare's plays were integral to the theatre's resurgence at this time. Despite the upsurge in theatre-going, writing tragedies was not profitable, and thus few good tragedies were written. Shakespeare's works filled the gap in the repertoire, and his reputation grew as a result. By the end of the 18th century, one out of every six plays performed in London was by Shakespeare. The actor, director, and producer David Garrick was a key figure in Shakespeare's theatrical renaissance. His reportedly superb acting, unrivalled productions, numerous and important Shakespearean portraits, and his spectacular 1769 Shakespeare Jubilee helped promote Shakespeare as a marketable product and the national playwright. Garrick's Drury Lane theatre was the centre of the Shakespeare mania which swept the nation. The visual arts also played a significant role in expanding Shakespeare's popular appeal. In particular, the conversation pieces designed chiefly for homes generated a wide audience for literary art, especially Shakespearean art.
This tradition began with William Hogarth (whose prints reached all levels of society) and attained its peak in the Royal Academy exhibitions, which displayed paintings, drawings, and sculptures. The exhibitions became important public events: thousands flocked to see them, and newspapers reported in detail on the works displayed. They became a fashionable place to be seen (as did Boydell's Shakespeare Gallery, later in the century). In the process, the public was refamiliarized with Shakespeare's works.

### Shakespeare editions

The rise in Shakespeare's popularity coincided with Britain's accelerating change from an oral to a print culture. Towards the end of the century, the basis of Shakespeare's high reputation changed. He had originally been respected as a playwright, but once the theatre became associated with the masses, Shakespeare's status as a "great writer" shifted. Two strands of Shakespearean print culture emerged: bourgeois popular editions and scholarly critical editions. In order to turn a profit, booksellers chose well-known authors, such as Alexander Pope and Samuel Johnson, to edit Shakespeare editions. According to Shakespeare scholar Gary Taylor, Shakespearean criticism became so "associated with the dramatis personae of 18th-century English literature ... [that] he could not be extracted without uprooting a century and a half of the national canon". The 18th century's first Shakespeare edition, which was also the first illustrated edition of the plays, was published in 1709 by Jacob Tonson and edited by Nicholas Rowe. The plays appeared in "pleasant and readable books in small format" which "were supposed ... to have been taken for common or garden use, domestic rather than library sets". Shakespeare became "domesticated" in the 18th century, particularly with the publication of family editions such as Bell's in 1773 and 1785–86, which advertised themselves as "more instructive and intelligible; especially to the young ladies and to youth; glaring indecencies being removed". Scholarly editions also proliferated. At first, these were edited by author-scholars such as Pope (1725) and Johnson (1765), but later in the century this changed. Editors such as George Steevens (1773, 1785) and Edmond Malone (1790) produced meticulous editions with extensive footnotes. The early editions appealed to both the middle class and to those interested in Shakespeare scholarship, but the later editions appealed almost exclusively to the latter. Boydell's edition, at the end of the century, tried to reunite these two strands. It included illustrations but was edited by George Steevens, one of the foremost Shakespeare scholars of the day.

## Boydell's Shakespeare venture

Boydell's Shakespeare project contained three parts: an illustrated edition of Shakespeare's plays; a folio of prints from the gallery (originally intended to be a folio of prints from the edition of Shakespeare's plays); and a public gallery where the original paintings for the prints would hang. The idea of a grand Shakespeare edition was conceived during a dinner at the home of Josiah Boydell (John's nephew) in late 1786. Five important accounts of the occasion survive. From these, a guest list and a reconstruction of the conversation have been assembled.
The guest list reflects the range of Boydell's contacts in the artistic world: it included Benjamin West, painter to King George III; George Romney, a renowned portrait painter; George Nicol, bookseller to the king; William Hayley, a poet; John Hoole, a scholar and translator of Tasso and Ariosto; and Daniel Braithwaite, secretary to the postmaster general and a patron of artists such as Romney and Angelica Kauffman. Most accounts also place the painter Paul Sandby at the gathering. Boydell wanted to use the edition to help stimulate a British school of history painting. He wrote in the "Preface" to the folio that he wanted "to advance that art towards maturity, and establish an English School of Historical Painting". A court document used by Josiah to collect debts from customers after Boydell's death relates the story of the dinner and Boydell's motivations:

> [Boydell said] he should like to wipe away the stigma that all foreign critics threw on this nation—that they had no genius for historical painting. He said he was certain from his success in encouraging engraving that Englishmen wanted nothing but proper encouragement and a proper subject to excel in historical painting. The encouragement he would endeavor to find if a proper subject were pointed out. Mr. Nicol replied that there was one great National subject concerning which there could be no second opinion, and mentioned Shakespeare. The proposition was received with acclaim by the Alderman [John Boydell] and the whole company.

However, as Frederick Burwick argues in his introduction to a collection of essays on the Boydell Gallery, "[w]hatever claims Boydell might make about furthering the cause of history painting in England, the actual rallying force that brought the artists together to create the Shakespeare Gallery was the promise of engraved publication and distribution of their works." After the initial success of the Shakespeare Gallery, many wanted to take credit. Henry Fuseli long claimed that his planned Shakespeare ceiling (in imitation of the Sistine Chapel ceiling) had given Boydell the idea for the gallery. James Northcote claimed that his Death of Wat Tyler and Murder of the Princes in the Tower had motivated Boydell to start the project. However, according to Winifred Friedman, who has researched the Boydell Gallery, it was probably Joshua Reynolds's Royal Academy lectures on the superiority of history painting that influenced Boydell the most. The logistics of the enterprise were difficult to organise. Boydell and Nicol wanted to produce a multi-volume illustrated edition of Shakespeare's works and intended to bind and sell the 72 large prints separately in a folio. A gallery was required to exhibit the paintings from which the prints were drawn. The edition was to be financed through a subscription campaign, during which the buyers would pay part of the price up front and the remainder on delivery. This unusual practice was necessitated by the fact that over £350,000—an enormous sum at the time, worth about £ today—was eventually spent. The gallery opened in 1789 with 34 paintings and added 33 more in 1790 when the first engravings were published. The last volume of the edition and the Collection of Prints were published in 1803. In the middle of the project, Boydell decided that he could make more money if he published different prints in the folio than in the illustrated edition; as a result, the two sets of images are not identical. Advertisements were issued and placed in newspapers.
When a subscription was circulated for a medal to be struck, the copy read: "The encouragers of this great national undertaking will also have the satisfaction to know, that their names will be handed down to Posterity, as the Patrons of Native Genius, enrolled with their own hands, in the same book, with the best of Sovereigns." The language of both the advertisement and the medal emphasised the role each subscriber played in the patronage of the arts. The subscribers were primarily middle-class Londoners, not aristocrats. Edmond Malone, himself an editor of a rival Shakespeare edition, wrote that "before the scheme was well-formed, or the proposals entirely printed off, near six hundred persons eagerly set down their names, and paid their subscriptions to a set of books and prints that will cost each person, I think, about ninety guineas; and on looking over the list, there were not above twenty names among them that anybody knew".

## Illustrated Shakespeare edition and folio

The "magnificent and accurate" Shakespeare edition which Boydell began in 1786 was to be the focus of his enterprise—he viewed the print folio and the gallery as offshoots of the main project. In an advertisement prefacing the first volume of the edition, Nicol wrote that "splendor and magnificence, united with correctness of text were the great objects of this Edition". The volumes themselves were handsome, with gilded pages that, unlike those in previous scholarly editions, were unencumbered by footnotes. Each play had its own title page followed by a list of "Persons in the Drama". Boydell spared no expense. He hired the typography experts William Bulmer and William Martin to develop and cut a new typeface specifically for the edition. Nicol explains in the preface that they "established a printing-house ... [and] a foundry to cast the types; and even a manufactory to make the ink". Boydell also chose to use high-quality wove Whatman paper. The illustrations were printed independently and could be inserted and removed as the purchaser desired. The first volumes of the Dramatic Works were published in 1791 and the last in 1805. Boydell was responsible for the "splendor", and George Steevens, the general editor, was responsible for the "correctness of text". Steevens, according to Evelyn Wenner, who has studied the history of the Boydell edition, was "at first an ardent advocate of the plan" but "soon realized that the editor of this text must in the very scheme of things give way to painters, publishers and engravers". He was also ultimately disappointed in the quality of the prints, but he said nothing to jeopardize the edition's sales. Steevens, who had already edited two complete Shakespeare editions, was not asked to edit the text anew; instead, he picked which version of the text to reprint. Wenner describes the resulting hybrid edition:

> The thirty-six plays, printed from the texts of Reed and Malone, divide into the following three groups: (1) five plays of the first three numbers printed from Reed's edition of 1785 with many changes adopted from the Malone text of 1790 (2) King Lear and the six plays of the next three numbers printed from Malone's edition of 1790 but exhibiting conspicuous deviations from his basic text (3) twenty-four plays of the last twelve numbers also printed from Malone's text but made to conform to Steevens's own edition of 1793.

Throughout the edition, modern (i.e. 18th-century) spelling was preferred, as were First Folio readings.
Boydell sought out the most eminent painters and engravers of the day to contribute paintings for the gallery, engravings for the folio, and illustrations for the edition. Artists included Richard Westall, Thomas Stothard, George Romney, Henry Fuseli, Benjamin West, Angelica Kauffman, Robert Smirke, James Durno, John Opie, Francesco Bartolozzi, Thomas Kirk, Henry Thomson, and Boydell's nephew and business partner, Josiah Boydell. The folio and the illustrated Shakespeare edition were "by far the largest single engraving enterprise ever undertaken in England". As print collector and dealer Christopher Lennox-Boyd explains, "had there not been a market for such engravings, not one of the paintings would have been commissioned, and few, if any, of the artists would have risked painting such elaborate compositions". Scholars believe that a variety of engraving methods were employed and that line engraving was the "preferred medium" because it was "clear and hardwearing" and because it had a high reputation. Stipple engraving, which was quicker and often used to produce shading effects, wore out quicker and was valued less. Many plates were a mixture of both. Several scholars have suggested that mezzotint and aquatint were also used. Lennox-Boyd, however, claims that "close examination of the plates confirms" that these two methods were not used and argues that they were "totally unsuitable": mezzotint wore quickly and aquatint was too new (there would not have been enough artists capable of executing it). Most of Boydell's engravers were also trained artists; for example, Bartolozzi was renowned for his stippling technique. Boydell's relationships with his illustrators were generally congenial. One of them, James Northcote, praised Boydell's liberal payments. He wrote in an 1821 letter that Boydell "did more for the advancement of the arts in England than the whole mass of the nobility put together! He paid me more nobly than any other person has done; and his memory I shall ever hold in reverence". Boydell typically paid the painters between £105 and £210, and the engravers between £262 and £315. Joshua Reynolds at first declined Boydell's offer to work on the project, but he agreed when pressed. Boydell offered Reynolds carte blanche for his paintings, giving him a down payment of £500, an extraordinary amount for an artist who had not even agreed to do a specific work. Boydell eventually paid him a total of £1,500. There are 96 illustrations in the nine volumes of the illustrated edition and each play has at least one. Approximately two-thirds of the plays, 23 out of 36, are each illustrated by a single artist. Approximately two-thirds of the total number of illustrations, or 65, were completed by three artists: William Hamilton, Richard Westall, and Robert Smirke. The primary illustrators of the edition were known as book illustrators, whereas a majority of the artists included in the folio were known for their paintings. Lennox-Boyd argues that the illustrations in the edition have a "uniformity and cohesiveness" that the folio lacks because the artists and engravers working on them understood book illustration while those working on the folio were working in an unfamiliar medium. The print folio, A Collection of Prints, From Pictures Painted for the Purpose of Illustrating the Dramatic Works of Shakspeare, by the Artists of Great-Britain (1805), was originally intended to be a collection of the illustrations from the edition, but a few years into the project, Boydell altered his plan. 
He guessed that he could sell more folios and editions if the pictures were different. Of the 97 prints made from paintings, two-thirds were made by ten of the artists, and three artists alone account for one-third of the paintings. In all, 31 artists contributed works.

## Gallery building

In June 1788, Boydell and his nephew secured the lease on a site at 52 Pall Mall to build the gallery and engaged George Dance, then the Clerk of the City Works, as the architect for the project. Pall Mall at that time had a mix of expensive residences and commercial operations, such as bookshops and gentlemen's clubs, popular with fashionable London society. The area also contained some less genteel establishments: King's Place (now Pall Mall Place), an alley running to the east and behind Boydell's gallery, was the site of Charlotte Hayes's high-class brothel. Across King's Place, immediately to the east of Boydell's building, 51 Pall Mall had been purchased on 26 February 1787 by George Nicol, bookseller and future husband of Josiah's elder sister, Mary Boydell. As an indication of the changing character of the area, this property had been the home of Goostree's gentlemen's club from 1773 to 1787. Begun as a gambling establishment for wealthy young men, it had later become a reformist political club that counted William Pitt and William Wilberforce as members. Dance's Shakespeare Gallery building had a monumental, neoclassical stone front, and a full-length exhibition hall on the ground floor. Three interconnecting exhibition rooms occupied the upper floor, with a total of more than 4,000 square feet (370 m²) of wall space for displaying pictures. The two-storey façade was not especially large for the street, but its solid classicism had an imposing effect. Some reports describe the exterior as "sheathed in copper". The lower storey of the façade was dominated by a large, round-arched doorway in the centre. The unmoulded arch rested on wide piers, each broken by a narrow window, above which ran a simple cornice. Dance placed a transom across the doorway at the level of the cornice bearing the inscription "Shakespeare Gallery". Below the transom were the main entry doors, with glazed panels and side lights matching the flanking windows. A radial fanlight filled the lunette above the transom. In each of the spandrels to the left and right of the arch, Dance set a carving of a lyre inside a ribboned wreath. Above all this ran a panelled band course dividing the lower storey from the upper. The upper façade contained paired pilasters on either side, and a thick entablature and triangular pediment. The architect Sir John Soane criticised Dance's combination of slender pilasters and a heavy entablature as a "strange and extravagant absurdity". The capitals topping the pilasters sported volutes in the shape of ammonite fossils. Dance invented this neo-classical feature, which became known as the Ammonite Order, specifically for the gallery. In a recess between the pilasters, Dance placed Thomas Banks's sculpture Shakespeare attended by Painting and Poetry, for which the artist was paid 500 guineas. The sculpture depicted Shakespeare, reclining against a rock, between the Dramatic Muse and the Genius of Painting. Beneath it was a panelled pedestal inscribed with a quotation from Hamlet: "He was a Man, take him for all in all, I shall not look upon his like again".
## Reaction

The Shakespeare Gallery, when it opened on 4 May 1789, contained 34 paintings, and by the end of its run it had between 167 and 170. (The exact inventory is uncertain and most of the paintings have disappeared; only around 40 paintings can be identified with any certainty.) According to Frederick Burwick, during its sixteen-year operation, the Gallery reflected the transition from Neoclassicism to Romanticism. Works by artists such as James Northcote represent the conservative, neoclassical elements of the gallery, while those of Henry Fuseli represent the newly emerging Romantic movement. William Hazlitt praised Northcote in an essay entitled "On the Old Age of Artists", writing "I conceive any person would be more struck with Mr. Fuseli at first sight, but would wish to visit Mr. Northcote oftener." The gallery itself was a fashionable hit with the public. Newspapers carried updates on the construction of the gallery, down to drawings for the proposed façade. The Daily Advertiser featured a weekly column on the gallery from May through August (exhibition season). Artists who had influence with the press, and Boydell himself, published anonymous articles to heighten interest in the gallery, which they hoped would increase sales of the edition. At the beginning of the enterprise, reactions were generally positive. The Public Advertiser wrote on 6 May 1789: "the pictures in general give a mirror of the poet ... [The Shakespeare Gallery] bids fair to form such an epoch in the History of the Fine Arts, as will establish and confirm the superiority of the English School". The Times wrote a day later:

> This establishment may be considered with great truth, as the first stone of an English School of Painting; and it is peculiarly honourable to a great commercial country, that it is indebted for such a distinguished circumstance to a commercial character—such an institution—will place, in the Calendar of Arts, the name of Boydell in the same rank with the Medici of Italy.

Fuseli himself may have written the review in the Analytical Review, which praised the general plan of the gallery while at the same time hesitating: "such a variety of subjects, it may be supposed, must exhibit a variety of powers; all cannot be the first; while some must soar, others must skim the meadow, and others content themselves to walk with dignity". However, according to Frederick Burwick, critics in Germany "responded to the Shakespeare Gallery with far more thorough and meticulous attention than did the critics in England". Criticism increased as the project dragged on: the first volume did not appear until 1791. James Gillray published a cartoon labelled "Boydell sacrificing the Works of Shakespeare to the Devil of Money-Bags". The essayist Charles Lamb, soon to be co-author of the children's book Tales from Shakespeare (1807), criticised the venture from the outset:

> What injury did not Boydell's Shakespeare Gallery do me with Shakespeare. To have Opie's Shakespeare, Northcote's Shakespeare, light headed Fuseli's Shakespeare, wooden-headed West's Shakespeare, deaf-headed Reynolds' Shakespeare, instead of my and everybody's Shakespeare. To be tied down to an authentic face of Juliet! To have Imogen's portrait! To confine the illimitable!
Northcote, while appreciating Boydell's largesse, also criticised the results of the project: "With the exception of a few pictures by Joshua [Reynolds] and [John] Opie, and—I hope I may add—myself, it was such a collection of slip-slop imbecility as was dreadful to look at, and turned out, as I had expected it would, in the ruin of poor Boydell's affairs".

## Collapse

By 1796, subscriptions to the edition had dropped by two-thirds. The painter and diarist Joseph Farington recorded that this was a result of the poor engravings:

> West said He looked over the Shakespeare prints and was sorry to see them of such inferior quality. He said that excepting that from His Lear by Sharpe, that from Northcote's children in the Tower, and some small ones, there were few that could be approved. Such a mixture of dotting and engraving, and such a general deficiency in respect of drawing which He observed the Engravers seemed to know little of, that the volumes presented a mass of works which He did not wonder many subscribers had declined to continue their subscription.

The mix of engraving styles was criticised; line engraving was considered the superior form and artists and subscribers disliked the mixture of lesser forms with it. Moreover, Boydell's engravers fell behind schedule, delaying the entire project. He was forced to engage lesser artists, such as Hamilton and Smirke, at a lower price to finish the volumes as his business started to fail. Modern art historians have generally concurred that the quality of the engravings, particularly in the folio, was poor. In addition, the use of so many different artists and engravers led to a lack of stylistic cohesion. Although the Boydells ended with 1,384 subscriptions, the rate of new subscriptions dropped, and the remaining subscriptions were also increasingly in doubt. Like many businesses at the time, the Boydell firm kept few records. Only the customers knew what they had purchased. This caused numerous difficulties with debtors who claimed they had never subscribed or had subscribed for less. Many subscribers also defaulted, and Josiah Boydell spent years after John's death attempting to force them to pay. The Boydells focused all their attention on the Shakespeare edition and other large projects, such as The History of the River Thames and The Complete Works of John Milton, rather than on lesser, more profitable ventures. When both the Shakespeare enterprise and the Thames book failed, the firm had no capital to fall back upon. Beginning in 1789, with the onset of the French Revolution, John Boydell's export business to Europe was cut off. By the late 1790s and early 19th century, the two-thirds of his business that depended upon the export trade was in serious financial difficulty. In 1804, John Boydell decided to appeal to Parliament for a private bill to authorise a lottery to dispose of everything in his business. The bill received royal assent on 23 March, and by November the Boydells were ready to sell tickets. John Boydell died before the lottery was drawn on 28 January 1805, but lived long enough to see each of the 22,000 tickets purchased at three guineas apiece (£ each in modern terms). To encourage ticket sales and reduce unsold inventory, every purchaser was guaranteed to receive a print worth one guinea from the Boydell company's stock. There were 64 winning tickets for major prizes, the highest being the Gallery itself and its collection of paintings.
This went to William Tassie, a gem engraver and cameo modeller, of Leicester Fields (now Leicester Square). Josiah offered to buy the gallery and its paintings back from Tassie for £10,000 (worth about £ now), but Tassie refused and auctioned the paintings at Christie's. The painting collection and two reliefs by Anne Damer fetched a total of £6,181 18s. 6d. The Banks sculpture group from the façade was initially intended to be kept as a monument for Boydell's tomb. Instead, it remained part of the façade of the building in its new guise as the British Institution until the building was torn down in 1868–69. The Banks sculpture was then moved to Stratford-upon-Avon and re-erected in New Place Garden between June and November 1870. The lottery saved Josiah from bankruptcy and earned him £45,000, enabling him to begin business again as a printer.

## Legacy

From the outset, Boydell's project inspired imitators. In April 1788, after the announcement of the Shakespeare Gallery, but a year before its opening, Thomas Macklin opened a Gallery of the Poets in the former Royal Academy building on the south side of Pall Mall. The first exhibition featured one work from each of 19 artists, including Fuseli, Reynolds, and Thomas Gainsborough. The gallery added new paintings of subjects from poetry each year, and from 1790 supplemented these with scenes from the Bible. The Gallery of the Poets closed in 1797, and its contents were offered by lottery. This did not deter Henry Fuseli from opening a Milton Gallery in the same building in 1799. Another such venture was the Historic Gallery opened by Robert Bowyer in Schomberg House at 87 Pall Mall in about 1793. The gallery accumulated 60 paintings (many by the same artists who worked for Boydell) commissioned to illustrate a new edition of David Hume's The History of Great Britain. Ultimately, Bowyer had to seek parliamentary approval for a sale by lottery in 1805, and the other ventures, like Boydell's, also ended in financial failure. The building in Pall Mall was purchased in 1805 by the British Institution, a private club of connoisseurs founded that year to hold exhibitions. It remained an important part of the London art scene until disbanded in 1867, typically holding a spring exhibition of new works for sale from the start of February to the first week of May, and a loan exhibition of old masters, generally not for sale, from the first week of June to the end of August. The paintings and engravings that were part of the Boydell Gallery affected the way Shakespeare's plays were staged, acted, and illustrated in the 19th century. They also became the subject of criticism in important works such as Romantic poet and essayist Samuel Taylor Coleridge's "Lectures on Shakespeare" and William Hazlitt's dramatic criticism. Despite Charles Lamb's criticism of the Gallery's productions, Charles and Mary Lamb's children's book, Tales from Shakespeare (1807), was illustrated using plates from the project. The Boydell enterprise's most enduring legacy was the folio. It was reissued throughout the 19th century, and in 1867, "by the aid of photography the whole series, excepting the portraits of their Majesties George III. and Queen Charlotte, is now presented in a handy form, suitable for ordinary libraries or the drawing-room table, and offered as an appropriate memorial of the tercentenary celebration of the poet's birth". Scholars have described Boydell's folio as a precursor to the modern coffee table book.
## List of art works

The Folio and Illustrated Edition lists were taken from Friedman's Boydell's Shakespeare Gallery.

### Sculptures

- Shakespeare attended by Painting and Poetry by Thomas Banks (on façade of gallery building)
  - Present location: New Place Gardens, Stratford-upon-Avon
- Coriolanus by Anne Seymour Damer (bas relief)
- Antony and Cleopatra by Anne Seymour Damer (bas relief)

### Paintings

The Paintings list is derived from the numbered catalogue The exhibition of the Shakspeare gallery, Pall-Mall: being the last time the pictures can ever be seen as an entire collection (London: W. Bulmer & Co., 1805), The Boydell Shakespeare Gallery edited by Walter Pape and Frederick Burwick (Bottrop: Peter Pomp, 1996), and "What Jane Saw".

### Folio engravings

### Illustrated edition
196,999
Isidor Isaac Rabi
1,173,791,448
American physicist (1898–1988)
[ "1898 births", "1988 deaths", "20th-century American physicists", "20th-century atheists", "American Nobel laureates", "American nuclear physicists", "American people of Polish-Jewish descent", "Atoms for Peace Award recipients", "Austro-Hungarian Jews", "Columbia University alumni", "Columbia University faculty", "Cornell University alumni", "Emigrants from Austria-Hungary to the United States", "Fellows of the American Physical Society", "Institute for Advanced Study visiting scholars", "J. Robert Oppenheimer", "Jewish American atheists", "Jewish American scientists", "Jewish physicists", "Jews from Galicia (Eastern Europe)", "Manhattan Project people", "Medal for Merit recipients", "Niels Bohr International Gold Medal recipients", "Nobel laureates from Austria-Hungary", "Nobel laureates in Physics", "Office of Science and Technology Policy officials", "People associated with CERN", "People from Brownsville, Brooklyn", "People from Rymanów", "People from the Lower East Side", "People from the Upper West Side", "Polish atheists", "Presidents of the American Physical Society", "Recipients of the Four Freedoms Award", "Recipients of the King's Medal for Service in the Cause of Freedom", "Recipients of the Legion of Honour", "Scientists from New York City", "Spectroscopists", "Theoretical physicists", "Vannevar Bush Award recipients", "Yiddish-speaking people" ]
Isidor Isaac Rabi (/ˈrɑːbi/; born Israel Isaac Rabi, July 29, 1898 – January 11, 1988) was an American physicist who won the Nobel Prize in Physics in 1944 for his discovery of nuclear magnetic resonance, which is used in magnetic resonance imaging. He was also one of the first scientists in the United States to work on the cavity magnetron, which is used in microwave radar and microwave ovens. Born into a traditional Polish-Jewish family in Rymanów, Galicia, Rabi came to the United States as an infant and was raised in New York's Lower East Side. He entered Cornell University as an electrical engineering student in 1916, but soon switched to chemistry. Later, he became interested in physics. He continued his studies at Columbia University, where he was awarded his doctorate for a thesis on the magnetic susceptibility of certain crystals. In 1927, he headed for Europe, where he met and worked with many of the finest physicists of the time. In 1929, Rabi returned to the United States, where Columbia offered him a faculty position. In collaboration with Gregory Breit, he developed the Breit–Rabi equation and predicted that the Stern–Gerlach experiment could be modified to confirm the properties of the atomic nucleus. His techniques for using nuclear magnetic resonance to discern the magnetic moment and nuclear spin of atoms earned him the Nobel Prize in Physics in 1944. Nuclear magnetic resonance became an important tool for nuclear physics and chemistry, and the subsequent development of magnetic resonance imaging from it has also made it important to the field of medicine. During World War II he worked on radar at the Massachusetts Institute of Technology (MIT) Radiation Laboratory (RadLab) and on the Manhattan Project. After the war, he served on the General Advisory Committee (GAC) of the Atomic Energy Commission, and was chairman from 1952 to 1956. He also served on the Science Advisory Committees (SACs) of the Office of Defense Mobilization and the Army's Ballistic Research Laboratory, and was Science Advisor to President Dwight D. Eisenhower. He was involved with the establishment of the Brookhaven National Laboratory in 1946, and later, as United States delegate to UNESCO, with the creation of CERN in 1952. When Columbia created the rank of university professor in 1964, Rabi was the first to receive that position. A special chair was named after him in 1985. He retired from teaching in 1967, but remained active in the department and held the title of University Professor Emeritus and Special Lecturer until his death.

## Early years

Israel Isaac Rabi was born on July 29, 1898, into a Polish-Jewish Orthodox family in Rymanów, Galicia, in what was then part of Austria-Hungary but is now Poland. Soon after he was born, his father, David Rabi, emigrated to the United States. The younger Rabi and his mother, Sheindel, joined David there a few months later, and the family moved into a two-room apartment on the Lower East Side of Manhattan. At home the family spoke Yiddish. When Rabi was enrolled in school, Sheindel said his name was Izzy, and a school official, thinking it was short for Isidor, put that down as his name. Henceforth, that became his official name. Later, in response to anti-Semitism, he started writing his name as Isidor Isaac Rabi, and was known professionally as I.I. Rabi. To most of his friends and family, including his sister Gertrude, who was born in 1903, he was known simply by his last name.
In 1907, the family moved to Brownsville, Brooklyn, where they ran a grocery store. As a boy, Rabi was interested in science. He read science books borrowed from the public library and built his own radio set. His first scientific paper, on the design of a radio condenser, was published in Modern Electrics when he was in elementary school. After reading about Copernican heliocentrism, he became an atheist. "It's all very simple", he told his parents, adding, "Who needs God?" As a compromise with his parents, for his Bar Mitzvah, which was held at home, he gave a speech in Yiddish about how an electric light works. He attended the Manual Training High School in Brooklyn, from which he graduated in 1916. Later that year, he entered Cornell University as an electrical engineering student, but soon switched to chemistry. After the American entry into World War I in 1917, he joined the Student Army Training Corps at Cornell. For his senior thesis, he investigated the oxidation states of manganese. He was awarded his Bachelor of Science degree in June 1919, but since at the time Jews were largely excluded from employment in the chemical industry and academia, he did not receive any job offers. He worked briefly at the Lederle Laboratories, and then as a bookkeeper.

## Education

In 1922 Rabi returned to Cornell as a graduate chemistry student, and began studying physics. In 1923 he met, and began courting, Helen Newmark, a summer-semester student at Hunter College. To be near her when she returned home, he continued his studies at Columbia University, where his supervisor was Albert Wills. In June 1924 Rabi landed a job as a part-time tutor at the City College of New York. Wills, whose specialty was magnetism, suggested that Rabi write his doctoral thesis on the magnetic susceptibility of sodium vapor. The topic did not appeal to Rabi, but after William Lawrence Bragg gave a seminar at Columbia about the electric susceptibility of certain crystals called Tutton's salts, Rabi decided to research their magnetic susceptibility, and Wills agreed to be his supervisor. Measuring the magnetic susceptibility of crystals first involved growing the crystals, a simple procedure often done by elementary school students. The crystals then had to be prepared by skillfully cutting them into sections with facets that had an orientation different from the internal structure of the crystal, and the response to a magnetic field had to be painstakingly measured. While his crystals were growing, Rabi read James Clerk Maxwell's 1873 A Treatise on Electricity and Magnetism, which inspired an easier method. He suspended a crystal from a torsion balance by a glass fiber and lowered it into a solution, placed between the two poles of a magnet, whose magnetic susceptibility could be adjusted. When the solution's susceptibility matched that of the crystal, the magnet could be turned on and off without disturbing the crystal. The new method not only required much less work, it also produced a more accurate result. Rabi sent his thesis, entitled On the Principal Magnetic Susceptibilities of Crystals, to Physical Review on July 16, 1926. He married Helen the next day. The paper attracted little fanfare in academic circles, although it was read by Kariamanickam Srinivasa Krishnan, who used the method in his own investigations of crystals. Rabi concluded that he needed to promote his work as well as publish it. Like many other young physicists, Rabi was closely following momentous events in Europe.
He was astounded by the Stern–Gerlach experiment, which convinced him of the validity of quantum mechanics. With Ralph Kronig, Francis Bitter, Mark Zemansky and others, he set out to extend the Schrödinger equation to symmetric top molecules and find the energy states of such a mechanical system. The problem was that none of them could solve the resulting equation, a second-order partial differential equation. Rabi found the answer in Ludwig Schlesinger's Einführung in die Theorie der Differentialgleichungen, which describes a method originally developed by Carl Gustav Jacob Jacobi. The equation had the form of a hypergeometric equation to which Jacobi had found a solution. Kronig and Rabi wrote up their result and sent it to Physical Review, which published it in 1927.

## Europe

In May 1927, Rabi was appointed a Barnard Fellow. This came with a stipend of \$1,500 (\$ in dollars) for the period from September 1927 to June 1928. He immediately applied for a year's leave of absence from the City College of New York so he could study in Europe. When this was refused, he resigned. On reaching Zürich, where he hoped to work for Erwin Schrödinger, he met two fellow Americans, Julius Adams Stratton and Linus Pauling. They found that Schrödinger was leaving, as he had been appointed head of the Theoretical Institute at Friedrich Wilhelm University in Berlin. Rabi therefore decided to seek a position with Arnold Sommerfeld at the University of Munich instead. In Munich, he found two more Americans, Howard Percy Robertson and Edward Condon. Sommerfeld accepted Rabi as a postdoctoral researcher. German physicists Rudolf Peierls and Hans Bethe were also working with Sommerfeld at the time, but the three Americans became especially close. On Wills' advice, Rabi traveled to Leeds for the 97th annual meeting of the British Association for the Advancement of Science, where he heard Werner Heisenberg present a paper on quantum mechanics. Afterwards, Rabi moved to Copenhagen, where he volunteered to work for Niels Bohr. Bohr was on vacation, but Rabi went straight to work on calculating the magnetic susceptibility of molecular hydrogen. After Bohr returned in October, he arranged for Rabi and Yoshio Nishina to continue their work with Wolfgang Pauli at the University of Hamburg. Although he came to Hamburg to work with Pauli, Rabi found Otto Stern working there with two English-speaking postdoctoral fellows, Ronald Fraser and John Bradshaw Taylor. Rabi soon made friends with them, and became interested in their molecular beam experiments, for which Stern would receive the Nobel Prize in Physics in 1943. Their research involved non-uniform magnetic fields, which were difficult to manipulate and hard to measure accurately. Rabi devised a method of using a uniform field instead, with the molecular beam at a glancing angle, so the atoms would be deflected like light through a prism. This would be easier to use, and produce more accurate results. Encouraged by Stern, and greatly assisted by Taylor, Rabi managed to get his idea to work. On Stern's advice, Rabi wrote a letter about his results to Nature, which published it in February 1929, followed by a paper entitled Zur Methode der Ablenkung von Molekularstrahlen ("On the method of deflection of molecular beams") to Zeitschrift für Physik, where it was published in April. By this time the Barnard Fellowship had expired, and Rabi and Helen were living on a \$182 (\$ in dollars) per month stipend from the Rockefeller Foundation.
They left Hamburg for Leipzig, where he hoped to work with Heisenberg. In Leipzig, he found Robert Oppenheimer, a fellow New Yorker. It would be the start of a long friendship. Heisenberg departed for a tour of the United States in March 1929, so Rabi and Oppenheimer decided to go to the ETH Zurich, where Pauli was now the professor of physics. Rabi's education in physics was enriched by the leaders in the field he met there, which included Paul Dirac, Walter Heitler, Fritz London, Francis Wheeler Loomis, John von Neumann, John Slater, Leó Szilárd and Eugene Wigner.

## Molecular Beam Laboratory

On March 26, 1929, Rabi received an offer of a lectureship from Columbia, with an annual salary of \$3,000. The head of Columbia's physics department, George B. Pegram, was looking for a theoretical physicist to teach statistical mechanics and an advanced course in the new subject of quantum mechanics, and Heisenberg had recommended Rabi. Helen was now pregnant, so Rabi needed a regular job, and this job was in New York. He accepted, and returned to the United States in August on the SS President Roosevelt. At the time, Rabi was the only Jewish member of Columbia's physics faculty. Rabi was a poor instructor. Leon Lederman recalled that after a lecture, students would head to the library to try to work out what Rabi had been talking about. Irving Kaplan rated Rabi and Harold Urey as "the worst teachers I ever had". Norman Ramsey considered Rabi's lectures "pretty dreadful", while William Nierenberg felt that he was "simply an awful lecturer". Despite his shortcomings as a lecturer, his influence was great. He inspired many of his students to pursue careers in physics, and some became famous. Rabi's first daughter, Helen Elizabeth, was born in September 1929. A second girl, Margaret Joella, followed in 1934. Between his teaching duties and his family, he had little time for research, and published no papers in his first year at Columbia, but was nonetheless promoted to assistant professor at its conclusion. He became a professor in 1937. In 1931 Rabi returned to particle beam experiments. In collaboration with Gregory Breit, he developed the Breit–Rabi equation, and predicted that the Stern–Gerlach experiment could be modified to confirm the properties of the atomic nucleus. The next step was to do so. With the help of Victor W. Cohen, Rabi built a molecular beam apparatus at Columbia. Their idea was to employ a weak magnetic field instead of a strong one, with which they hoped to detect the nuclear spin of sodium. When the experiment was conducted, four beamlets were found, from which they deduced a nuclear spin of 3⁄2. Rabi's Molecular Beam Laboratory began to attract others, including Sidney Millman, a graduate student who studied lithium for his doctorate. Another was Jerrold Zacharias who, believing that the sodium nucleus would be too difficult to understand, proposed studying the simplest of the elements, hydrogen. Its deuterium isotope had only recently been discovered at Columbia in 1931 by Urey, who received the 1934 Nobel Prize in Chemistry for this work. Urey was able to supply them with both heavy water and gaseous deuterium for their experiments. Despite its simplicity, Stern's group in Hamburg had observed that hydrogen did not behave as predicted. Urey also helped in another way; he gave Rabi half his prize money to fund the Molecular Beam Laboratory.
Other scientists whose careers began at the Molecular Beam Laboratory included Norman Ramsey, Julian Schwinger, Jerome Kellogg and Polykarp Kusch. All were men; Rabi did not believe that women could be physicists. He never had a woman as a doctoral or postdoctoral student, and generally opposed women as candidates for faculty positions. At the suggestion of C. J. Gorter, the team attempted to use an oscillating field. This became the basis for the nuclear magnetic resonance method. In 1937, Rabi, Kusch, Millman and Zacharias used it to measure the magnetic moment of several lithium compounds with molecular beams, including lithium chloride, lithium fluoride and dilithium. Applying the method to hydrogen, they found that the moment of a proton was 2.785±0.02 nuclear magnetons, and not 1 as predicted by the then-current theory, while that of a deuteron was 0.855±0.006 nuclear magnetons. This provided more accurate measurements of what Stern's team had found, and Rabi's team had confirmed, in 1934. Since a deuteron is composed of a proton and a neutron with aligned spins, the neutron's magnetic moment could be inferred by subtracting the proton's magnetic moment from the deuteron's (a worked version of this subtraction is sketched below). The resulting value was not zero, and had a sign opposite to that of the proton. Based on curious artifacts of these more accurate measurements, Rabi suggested that the deuteron had an electric quadrupole moment. This discovery meant that the physical shape of the deuteron was not symmetric, which provided valuable insight into the nature of the nuclear force binding nucleons. For the creation of the molecular-beam magnetic-resonance detection method, Rabi was awarded the Nobel Prize in Physics in 1944.

## World War II

In September 1940, Rabi became a member of the Scientific Advisory Committee of the U.S. Army's Ballistic Research Laboratory. That month, the British Tizard Mission brought a number of new technologies to the United States, including a cavity magnetron, a high-powered device that generates microwaves using the interaction of a stream of electrons with a magnetic field. This device promised to revolutionize radar, so Alfred Lee Loomis of the National Defense Research Committee decided to establish a new laboratory at the Massachusetts Institute of Technology (MIT) to develop this radar technology. The name Radiation Laboratory was chosen as both unremarkable and a tribute to the Berkeley Radiation Laboratory. Loomis recruited Lee DuBridge to run it. Loomis and DuBridge recruited physicists for the new laboratory at an Applied Nuclear Physics conference at MIT in October 1940. Among those who volunteered was Rabi. His assignment was to study the magnetron, which was so secret that it had to be kept in a safe. The Radiation Laboratory scientists set their sights on producing a microwave radar set by January 6, 1941, and having a prototype installed in a Douglas A-20 Havoc by March. This was done; the technological obstacles were gradually overcome, and a working US microwave radar set was produced. The magnetron was further developed on both sides of the Atlantic to permit a reduction in wavelength from 150 cm to 10 cm, and then to 3 cm. The laboratory went on to develop air-to-surface radar to detect submarines, the SCR-584 radar for fire control, and LORAN, a long-range radio navigation system. At Rabi's instigation, a branch of the Radiation Laboratory was located at Columbia, with Rabi in charge.
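As a worked illustration of the moment subtraction described above (a sketch using the 1937 figures quoted in this article, not a reconstruction of the team's actual calculation; $\mu_N$ denotes the nuclear magneton):

$$\mu_n \;\approx\; \mu_d - \mu_p \;=\; 0.855\,\mu_N - 2.785\,\mu_N \;\approx\; -1.93\,\mu_N$$

The negative sign is the point: the neutron, although electrically neutral, carries a magnetic moment directed opposite to the proton's, and this estimate is close to the modern accepted value of roughly $-1.913\,\mu_N$.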
In 1942 Oppenheimer attempted to recruit Rabi and Robert Bacher to work at the Los Alamos Laboratory on a new secret project. They convinced Oppenheimer that his plan for a military laboratory would not work, since a scientific effort would need to be a civilian affair. The plan was modified, and the new laboratory would be a civilian one, run by the University of California under contract from the War Department. In the end, Rabi still did not go west, but did agree to serve as a consultant to the Manhattan Project. Rabi attended the Trinity test in July 1945. The scientists working on Trinity set up a betting pool on the yield of the test, with predictions ranging from total dud to 45 kilotons of TNT equivalent (kt). Rabi arrived late and found the only entry left was for 18 kilotons, which he purchased. Wearing welding goggles, he waited for the result with Ramsey and Enrico Fermi. The blast was rated at 18.6 kilotons, and Rabi won the pool. ## Later life In 1945, Rabi delivered the Richtmyer Memorial Lecture, held by the American Association of Physics Teachers in honor of Floyd K. Richtmyer, wherein he proposed that the magnetic resonance of atoms might be used as the basis of a clock. William L. Laurence wrote it up for The New York Times, under the headline "'Cosmic pendulum' for clock planned". Before long Zacharias and Ramsey had built such atomic clocks. Rabi actively pursued his research into magnetic resonance until about 1960, but he continued to make appearances at conferences and seminars until his death. Rabi chaired Columbia's physics department from 1945 to 1949, during which time it was home to two Nobel laureates (Rabi and Enrico Fermi) and eleven future laureates, including seven faculty (Polykarp Kusch, Willis Lamb, Maria Goeppert-Mayer, James Rainwater, Norman Ramsey, Charles Townes and Hideki Yukawa), a research scientist (Aage Bohr), a visiting professor (Hans Bethe), a doctoral student (Leon Lederman) and an undergraduate (Leon Cooper). Martin L. Perl, a doctoral student of Rabi's, won the Nobel Prize in 1995. Rabi was the Eugene Higgins professor of physics at Columbia but when Columbia created the rank of university professor in 1964, Rabi was the first to receive such a chair. This meant that he was free to research or teach whatever he chose. He retired from teaching in 1967 but remained active in the department and held the title of University Professor Emeritus until his death. A special chair was named after him in 1985. A legacy of the Manhattan Project was the network of national laboratories, but none was located on the East Coast. Rabi and Ramsey assembled a group of universities in the New York area to lobby for their own national laboratory. When Zacharias, who was now at MIT, heard about it, he set up a rival group at MIT and Harvard. Rabi had discussions with Major General Leslie R. Groves, Jr., the director of the Manhattan Project, who was willing to go along with a new national laboratory, but only one. Moreover, while the Manhattan Project still had funds, the wartime organization was expected to be phased out when a new authority came into existence. After some bargaining and lobbying by Rabi and others, the two groups came together in January 1946. 
Eventually nine universities (Columbia, Cornell, Harvard, Johns Hopkins, MIT, Princeton, Pennsylvania, Rochester and Yale) came together, and on January 31, 1947, a contract was signed with the Atomic Energy Commission (AEC), which had replaced the Manhattan Project, that established the Brookhaven National Laboratory. Rabi suggested to Edoardo Amaldi that Brookhaven might be a model that Europeans could emulate. Rabi saw science as a way of inspiring and uniting a Europe that was still recovering from the war. An opportunity came in 1950 when he was named the United States Delegate to the United Nations Educational, Scientific and Cultural Organization (UNESCO). At a UNESCO meeting at the Palazzo Vecchio in Florence in June 1950, he called for the establishment of regional laboratories. These efforts bore fruit; in 1952, representatives of eleven countries came together to create the Conseil Européen pour la Recherche Nucléaire (CERN). Rabi received a letter from Bohr, Heisenberg, Amaldi and others congratulating him on the success of his efforts. He had the letter framed and hung it on the wall of his home office. ### Military matters The Atomic Energy Act of 1946 that created the Atomic Energy Commission provided for a nine-man General Advisory Committee (GAC) to advise the Commission on scientific and technical matters. Rabi was one of those appointed in December 1946. The GAC was enormously influential throughout the late 1940s, but in 1950 the GAC unanimously opposed the development of the hydrogen bomb. Rabi went further than most of the other members, and joined Fermi in opposing the hydrogen bomb on moral as well as technical grounds. However, President Harry S. Truman overrode the GAC's advice, and ordered development to proceed. Rabi later said: > I never forgave Truman for buckling under the pressure. He simply did not understand what it was about. As a matter of fact, after he stopped being President he still didn't believe that the Russians had a bomb in 1949. He said so. So for him to have alerted the world that we were going to make a hydrogen bomb at a time when we didn't even know how to make one was one of the worst things he could have done. It shows the dangers of this sort of thing. Oppenheimer was not reappointed to the GAC when his term expired in 1952, and Rabi succeeded him as chairman, serving until 1956. Rabi later testified on Oppenheimer's behalf at the Atomic Energy Commission's controversial security hearing in 1954 that led to Oppenheimer being stripped of his security clearance. Many witnesses supported Oppenheimer, but none more forcefully than Rabi: > So it didn't seem to me the sort of thing that called for this kind of proceeding... against a man who has accomplished what Dr. Oppenheimer has accomplished. There is a real positive record... We have an A-bomb and a whole series of it, and we have a whole series of super bombs, and what more do you want, mermaids? Rabi was appointed a member of the Science Advisory Committee (SAC) of the Office of Defense Mobilization in 1952, serving as its chairman from 1956 to 1957. This coincided with the Sputnik crisis. President Dwight Eisenhower met with the SAC on October 15, 1957, to seek advice on possible US responses to the Soviet satellite success. Rabi, who knew Eisenhower from the latter's time as president of Columbia, was the first to speak, and put forward a series of proposals, one of which was to strengthen the committee so it could provide the President with timely advice. 
This was done, and the SAC became the President's Science Advisory Committee a few weeks later. Rabi also became Eisenhower's science advisor. In 1956 Rabi attended the Project Nobska anti-submarine warfare conference, where discussion ranged from oceanography to nuclear weapons. He served as the US Representative to the NATO Science Committee at the time that the term "software engineering" was coined. While serving in that capacity, he bemoaned the fact that many large software projects were delayed. This prompted discussions that led to the formation of a study group that organized the first conference on software engineering.

### Honors

In the course of his life, Rabi received many honors in addition to the Nobel Prize. These included the Elliott Cresson Medal from the Franklin Institute in 1942, the Medal for Merit and the King's Medal for Service in the Cause of Freedom from Great Britain in 1948, appointment as an officer of the French Legion of Honour in 1956, Columbia University's Barnard Medal for Meritorious Service to Science in 1960, the Niels Bohr International Gold Medal and the Atoms for Peace Award in 1967, the Oersted Medal from the American Association of Physics Teachers in 1982, the Four Freedoms Award from the Franklin and Eleanor Roosevelt Institute and the Public Welfare Medal from the National Academy of Sciences in 1985, and the Golden Plate Award of the American Academy of Achievement and the Vannevar Bush Award from the National Science Foundation in 1986. He was a Fellow (elected 1931) of the American Physical Society, serving as its president in 1950, and a member of the National Academy of Sciences, the American Philosophical Society, and the American Academy of Arts and Sciences. He was internationally recognized with membership in the Japan Academy and the Brazilian Academy of Sciences, and in 1959 was appointed a member of the board of governors of the Weizmann Institute of Science in Israel. The most valuable of Columbia University's undergraduate research scholarships, designed to motivate and support promising young scientists, is named after him, as is a street, Route Rabi, on CERN's Prévessin site in France. Columbia University's I. I. Rabi Scholars program assists "some of Columbia College's most promising science students at the point of admission into the College."

### Death

Rabi died at his home on Riverside Drive in Manhattan from cancer on January 11, 1988. His wife, Helen, survived him and died at the age of 102 on June 18, 2005. In his last days, he was reminded of his greatest achievement when his physicians examined him using magnetic resonance imaging, a technology that had been developed from his ground-breaking research on magnetic resonance. The machine happened to have a reflective inner surface, and he remarked: "I saw myself in that machine... I never thought my work would come to this."

### In popular culture

Rabi was portrayed by Barry Dennen in the 1980 TV miniseries Oppenheimer, and by David Krumholtz in the 2023 film Oppenheimer.

## Books

## See also

- List of Jewish Nobel laureates
39,024,575
Hasan al-Kharrat
1,057,639,617
Leader of the Great Syrian Revolt against the French Mandate of Syria
[ "1861 births", "1925 deaths", "Guerrillas killed in action", "Ottoman Arab nationalists", "People from Damascus", "People of the Great Syrian Revolt", "Syrian Arab nationalists", "Syrian Sunni Muslims" ]
Abu Muhammad Hasan al-Kharrat (Arabic: حسن الخراط Ḥassan al-Kharrāṭ; 1861 – 25 December 1925) was one of the principal Syrian rebel commanders of the Great Syrian Revolt against the French Mandate. His main area of operations was in Damascus and its Ghouta countryside. He was killed in the struggle and is considered a hero by Syrians. As the qabaday (local youths boss) of the al-Shaghour quarter of Damascus, al-Kharrat was connected with Nasib al-Bakri, a nationalist from the quarter's most influential family. At al-Bakri's invitation, al-Kharrat joined the revolt in August 1925 and formed a group of fighters from al-Shaghour and other neighborhoods in the vicinity. He led the rebel assault against Damascus, briefly capturing the residence of French High Commissioner of the Levant Maurice Sarrail before withdrawing amid heavy French bombardment. Towards the end of 1925, relations grew tense between al-Kharrat and other rebel leaders, particularly Sa'id al-'As and Ramadan al-Shallash, as they traded accusations of plundering villages or extorting local inhabitants. Al-Kharrat continued to lead operations in the Ghouta, where he was ultimately killed in a French ambush. The revolt dissipated by 1927, but he gained a lasting reputation as a martyr of the Syrian resistance to French rule. ## Early life and career Al-Kharrat was born to a Sunni Muslim family in Damascus in 1861, during Ottoman rule in Syria. He served as the night watchman of the city's al-Shaghour quarter and as a guard for the neighborhood's orchards. Damascus was captured by Arab rebels during World War I in October 1918. Afterward, the Arab Club, an Arab nationalist organization, emerged in the city to raise support for the rebels. The club assisted the rebels' leader, Emir Faisal, who formed a rudimentary government. Al-Kharrat became an affiliate of the Arab Club and raised support for Faisal in al-Shaghour. In July 1920, Faisal's government collapsed after its motley forces were defeated by the French at the Battle of Maysalun. Afterward, the French ruled Syria under the aegis of their League of Nations mandate. In the early years of French rule, al-Kharrat was al-Shaghour's qabaday (pl. qabadayat), the traditional leader of a neighborhood's local toughs. The qabaday was informally charged with redressing grievances and defending a neighborhood's honor against local criminals or the encroachments of qabadayat from other neighborhoods. He was popularly characterized as an honorable man, noted for his personal strength, and protection of minorities and the poor. The qabaday was considered an "upholder of Arab traditions and customs, the guardian of popular culture", according to historian Philip S. Khoury. Khoury asserts that al-Kharrat was "probably the most respected and esteemed qabaday of his day". Qabadayat normally shunned formal education, and historian Michael Provence maintains that al-Kharrat was likely illiterate. Qabadayat were normally linked with particular city notables and could secure them political support in their neighborhoods. Al-Kharrat was allied with Nasib al-Bakri, a Damascene politician and landowner. The al-Bakri family was the most influential in al-Shaghour, and al-Kharrat served as the family's principal connection and enforcer in the quarter. ## Commander in the Great Syrian Revolt ### Recruitment and early confrontations A revolt against French rule was launched in mid-1925 by the Druze sheikh (chieftain), Sultan Pasha al-Atrash, in the southern mountains of Jabal al-Druze. 
As al-Atrash's men scored decisive victories against the French Army of the Levant, Syrian nationalists were inspired and the revolt spread northward to the countryside of Damascus and beyond. Al-Bakri was the chief liaison between al-Atrash and the emerging rebel movement in Damascus and the Ghouta. The Ghouta is the fertile plain surrounding Damascus, and its orchard groves and extensive waterways provided cover for the rebels and a base from which they could raid Damascus. In August, al-Bakri convinced al-Kharrat to join the uprising. According to Provence, al-Kharrat was "ideal" for the job, possessing "a local following of young men, notoriety outside the quarter, good connections and a reputation for toughness". The group of fighters he commanded was known as ′isabat al-Shawaghirah (the band of al-Shaghour). Though named after al-Kharrat's quarter, the band included twenty qabadayat and their armed retinues from other Damascus neighborhoods and nearby villages. His main areas of operation were in the vicinity of al-Shaghour and the al-Zur forest in the eastern Ghouta. Through his alliance with a Sufi religious leader, al-Kharrat brought an Islamic holy war dimension to the largely secular revolt, something that was not welcomed by some involved. Al-Kharrat commenced guerrilla operations in September, targeting French forces posted in the eastern and southern Ghouta. His prominence rose as he led nighttime raids against the French in Damascus, during which he disarmed army patrols and took soldiers hostage. In al-Shaghour, Souk Saruja and Jazmatiyya, al-Kharrat and his band burnt down all French-held buildings. In the first week of October, sixty French gendarmes were dispatched to the Ghouta to apprehend al-Kharrat and his fighters. The gendarmes were quartered in the home of al-Malihah's mukhtar (village headman). In the evening, the rebels attacked the residence, killing one gendarme and capturing the rest; the prisoners were eventually all returned unharmed. On 12 October, French troops backed by tanks, artillery and aerial support launched an operation to surround and eliminate al-Kharrat's rebels in the al-Zur forest. Al-Kharrat's men were forewarned of the French deployment by the peasants of al-Malihah. Positioned among the trees, the rebels used sniper fire against the French troops. The latter were unable to lure the rebels out and retreated. As the French withdrew toward al-Malihah, they looted the village and set it on fire. French intelligence officials justified the collective punishment of al-Malihah as retaliation for the rebels' capture and humiliation of the gendarmes during the previous week; the French claimed a young boy from al-Malihah had notified al-Kharrat's men of the French presence in the village. Though they were unable to engage al-Kharrat and his forces directly, French troops executed around 100 civilians from Ghouta villages. Their corpses were brought to Damascus, and the bodies of sixteen men described by the French as "brigands" were put on display. ### Battle of Damascus and operations in Ghouta Spurred by French army actions in the Ghouta, al-Bakri planned to capture the Citadel of Damascus, where French forces were concentrated, and the Azm Palace, where General Maurice Sarrail, the French high commissioner of Syria, would be residing on 17–18 October (Sarrail was typically headquartered in Beirut). The high commissioner functioned as the overall administrator of Syria on behalf of France and exercised practically absolute power. 
The rebel units active in Damascus at the time were al-Kharrat's ′isabat and a mixed force of Druze fighters and rebels from the al-Midan quarter and the Ghouta. To compensate for the lack of rebel strength, al-Bakri sent a letter to Sultan al-Atrash requesting reinforcements. Al-Atrash replied that he was currently occupied with operations in the Hauran, but would dispatch his entire force to back the Damascus rebels as soon as affairs there were settled. Before he received al-Atrash's reply, al-Bakri decided to move ahead with the operation. On 18 October, al-Kharrat led forty rebels into al-Shaghour from the old cemeteries adjacent to the southern gate of Damascus, announcing that the Druze had arrived to relieve the city from French occupation. Crowds of residents enthusiastically welcomed the rebels, and many took up arms alongside them. Al-Kharrat's men captured the quarter's police station, disarming its garrison. They were joined by Ramadan al-Shallash, a rebel commander from Deir ez-Zor, and twenty of his Bedouin fighters. The joint forces proceeded to the Hamidiyya Market and captured the Azm Palace, but Sarrail was not present, having already left to attend a meeting in the Hauran town of Daraa. The rebels plundered the palace and set it on fire. Provence asserts that capturing the palace without Sarrail "held no tactical importance" but was a highly symbolic achievement for the rebels because of the Azm Palace's "importance as the historical seat of economic and political power in Damascus, now usurped by the French and totally undefended". While al-Kharrat captured the Azm Palace, al-Bakri and 200 rebels under his command rode through the city and were joined by civilians in increasing numbers. After sealing the Old City to prevent the entry of enemy reinforcements, al-Kharrat issued an order to kill anyone linked to the French army. About 180 French soldiers were killed. Sarrail ordered the shelling and aerial bombardment of the city, which lasted two days and killed about 1,500 people. Chaos and scattered fighting ensued as whole neighborhoods, mosques and churches were leveled, French forces moved in, and hundreds of leading figures in the Syrian national movement were arrested, including al-Kharrat's son Fakhri. The latter was captured on 22 October during a botched nighttime raid by the rebels against the French, who had by then retaken Damascus. Al-Kharrat was offered the release of his son in exchange for his own surrender, but refused. The rebels withdrew from Damascus as a meeting was held between French army commander Maurice Gamelin and a delegation of Damascene notables. As a result of the meeting, the French agreed to end their bombardment in return for a payment of 100,000 Turkish gold liras by 24 October. The fine was not paid by the French deadline, but the bombardment was not renewed, likely as a result of orders from the French government in Paris. International condemnation of Sarrail's bombardment of Damascus and growing criticism in France of his mishandling of the revolt led to his dismissal on 30 October. He was replaced by politician Henry de Jouvenel, who arrived in Syria in December. On 22 November, al-Kharrat commanded 700 rebels in a battle with about 500 French soldiers outside of Damascus. Al-Kharrat's men inflicted "trifling" losses on the French, but experienced heavy casualties themselves, with thirty dead and forty wounded according to Reuters. 
On 5 December, al-Kharrat was one of the commanders of a 2,000-strong force uniting rebels from disparate backgrounds, which assaulted the French Army barracks in al-Qadam, south of Damascus. The French claimed to have inflicted significant casualties, but rebel activity continued. ### Tensions with rebel leaders Centralized order and oversight among the revolt's armed participants was difficult to establish because of the diversity and independence of the rebel factions. A meeting of rebel leaders was held in the Ghouta village of Saqba on 26 November. Sa'id al-'As accused al-Kharrat and others of plundering in the Ghouta, while al-Kharrat alleged that al-Shallash extorted the residents of al-Midan and the Ghouta town of Douma. The meeting concluded with an agreement to elect a government to replace the French authorities, increase recruitment of the Ghouta's inhabitants, coordinate military operations under a central command, and establish a revolutionary court to execute spies. The meeting also designated the area between the village of Zabdin and north of the Douma-Damascus road as being part of al-Kharrat's zone of operations. Despite his leading role in the rebels' military efforts, al-Kharrat was not included in the newly formed rebel leadership council, nor were any of al-Bakri's allies. Instead, al-'As served as the rebels' overall head. Sharp divisions among rebel factions became apparent during a second meeting in Saqba on 5 December. According to Syrian journalist Munir al-Rayyes, hostility between al-Kharrat and al-Shallash was well known among the rebels. Because al-Shallash had levied war taxes on the major landlords and city elites of the Ghouta, al-Kharrat's benefactor al-Bakri viewed him as a threat to the traditional landowning class to which al-Bakri belonged. Al-Rayyis claimed the meeting was called for by al-Kharrat, who ordered his fighters to capture and bring al-Shallash to Saqba. However, according to al-'As, the summit was called by al-Shallash, and once the latter arrived in the village, al-Kharrat personally detained him and confiscated his horse, weapons and money. After his detention, al-Shallash was given a brief trial during which al-Kharrat accused him of making "impositions and ransoms and financial collections in the name of the revolt", while al-Bakri condemned him specifically for extorting the residents of Douma for 1,000 giney (Ottoman pounds), and imposing large fines on the inhabitants of Harran al-Awamid, al-Qisa and Maydaa for his own personal enrichment. Al-Kharrat and al-Bakri decided al-Shallash's verdict, and dismissed him from the revolt. While many rebels with officer backgrounds similar to al-Shallash disapproved of the judgement, they did not intervene. In his account of the meeting, al-Rayyis condemned the rebel commanders for complacency in the "ridiculous trial" and accused al-Kharrat of being motivated solely by personal animosity. Al-Shallash was able to escape—or was released by al-'As—when French planes bombed the meeting. Al-Shallash would later surrender to Jouvenel and collaborate with French authorities. ## Death and legacy Al-Kharrat was killed in an ambush by French troops in the Ghouta on 25 December 1925. He was succeeded as qabaday of al-Shaghour and commander of the ′isabat al-Shawaghirah by Mahmud Khaddam al-Srija. Al-Kharrat's men continued to fight the French until the revolt ended in 1927, though historian Thomas Philipp states that al-Kharrat's group dissipated after his death. 
In January 1926, al-Kharrat's son Fakhri was sentenced to death and publicly executed with two other rebels in Marjeh Square, Damascus. The French had previously implored Fakhri to persuade his father to surrender in return for his release, but Fakhri refused. Abd al-Rahman Shahbandar, a prominent Syrian nationalist leader, described al-Kharrat as having played "the preeminent role" in the battle against the French in the Ghouta and Damascus. Historian Daniel Neep wrote that al-Kharrat was the "best-known" of all of the Damascus-based rebel leaders, although other leaders of the rebel movement attributed the publicity and praise of al-Kharrat to the efforts of the Cairo-based Syrian-Palestinian Committee, with which al-Bakri was closely affiliated. Al-Kharrat and his son Fakhri are today considered "martyred heroes" by Syrians for their nationalist efforts and their deaths in the Syrian struggle for independence from France.

## See also

- Ayyash Al-Haj
- Ibrahim Hananu
- Saleh Al-Ali
195,797
New York Dolls (album)
1,172,746,872
null
[ "1973 debut albums", "Albums produced by Todd Rundgren", "Albums recorded at Record Plant (New York City)", "Mercury Records albums", "New York Dolls albums" ]
New York Dolls is the debut album by the American hard rock band New York Dolls. It was released on July 27, 1973, by Mercury Records. In the years leading up to the album, the Dolls had developed a local fanbase by playing regularly in lower Manhattan after forming in 1971. However, most music producers and record companies were reluctant to work with them because of their vulgarity and onstage fashion as well as homophobia in New York; the group later appeared in exaggerated drag on the album cover for shock value. After signing a contract with Mercury, the Dolls recorded their first album at The Record Plant in New York City with producer Todd Rundgren, who was known for his sophisticated pop tastes and held a lukewarm opinion of the band. Despite stories of conflicts during the recording sessions, lead singer David Johansen and guitarist Sylvain Sylvain later said Rundgren successfully captured how the band sounded live. The resulting music on the album – a mix of carefree rock and roll, influences from Brill Building pop, and campy sensibilities – explores themes of urban youth, teen alienation, adolescent romance, and authenticity, as rendered in Johansen's colloquial and ambiguous lyrics. New York Dolls was met with widespread critical acclaim but sold poorly and polarized listeners. The band proved difficult to market outside their native New York and developed a reputation for rock-star excesses while touring the United States in support of the album. Despite its commercial failure, New York Dolls was an influential precursor to the 1970s punk rock movement as the group's crude musicianship and youthful attitude on the album challenged the prevailing trend of musical sophistication in popular music, particularly progressive rock. Among the most acclaimed albums in history, it has since been named in various publications as one of the best debut records in rock music and one of the greatest albums of all time. ## Background In 1971, vocalist David Johansen formed the New York Dolls with guitarists Johnny Thunders and Rick Rivets, bassist Arthur Kane, and drummer Billy Murcia; Rivets was replaced by Sylvain Sylvain in 1972. The band was meant to be a temporary project for the members, who were club-going youths that had gone to New York City with different career pursuits. As Sylvain recalled, "We just said 'Hey, maybe this will get us some chicks.' That seemed like a good enough reason." He and Murcia originally planned to work in the clothing business and opened a boutique on Lexington Avenue that was across the street from a toy repair shop called the New York Doll Hospital, which gave them the idea for their name. The group soon began playing regularly in lower Manhattan and earned a cult following within a few months with their reckless style of rock music. Nonetheless, record companies were hesitant to sign them because of their onstage cross-dressing and blatant vulgarity. In October 1972, the group garnered the interest of critics when they opened for English rock band the Faces at the Empire Pool in Wembley. However, on the New York Dolls' first tour of England that year, Murcia died after consuming a lethal combination of alcohol and methaqualone. They enlisted Jerry Nolan as his replacement, while managers Marty Thau, Steve Leber, and David Krebs still struggled to find the band a record deal. 
After returning to New York, the Dolls played to capacity crowds at venues such as Max's Kansas City and the Mercer Arts Center in what Sylvain called a determined effort to "fake it until they could make it": "We had to make ourselves feel famous before we could actually become famous. We acted like we were already rock stars. Arthur even called his bass 'Excalibur' after King Arthur. It was crazy." Their performance at the Mercer Arts Center was attended by journalist and Mercury Records publicity director Bud Scoppa, and Paul Nelson, an A&R executive for the label. Scoppa initially viewed them as an amusing but inferior version of the Rolling Stones: "I split after the first set. Paul stuck around for the second set, though, and after the show he called me and said, 'You should have stayed. I think they're really special.' Then, after that, I fell in love with them anyway." In March 1973, the group signed a two-album deal with a US \$25,000 advance from Mercury. According to Sylvain, some of the members' parents had to sign for them because they were not old enough to sign themselves. ### Hiring of Todd Rundgren For the New York Dolls' debut album, Mercury wanted to find a record producer who could make the most out of the group's sound and the hype they had received from critics and fans in New York. At the band's first board meeting in Chicago, Johansen fell asleep in Mercury's conference room while record executives discussed potential producers. He awoke when they mentioned Todd Rundgren, a musician and producer who by 1972 had achieved unexpected rock stardom with his double album Something/Anything? and its hit singles "I Saw the Light" and "Hello It's Me". Rundgren had socialized at venues such as Max's Kansas City and first saw the Dolls when his girlfriend at the time, model Bebe Buell, brought him there to see them play. Known for having refined pop tastes and technologically savvy productions, Rundgren had become increasingly interested in progressive rock sounds by the time he was enlisted to produce the New York Dolls' debut album. Consequently, his initial impression of the group was that of a humorous live act who were technically competent only by the standards of other unsophisticated New York bands. "The Dolls weren't out to expand any musical horizons", said Rundgren, although he enjoyed Thunders' "attitude" and Johansen's charismatic antics onstage. Johansen had thought of Rundgren as "an expert on second rate rock 'n' roll", but also said the band was "kind of persona non grata, at the time, with most producers. They were afraid of us, I don't know why, but Todd wasn't. We all liked him from Max's ... Todd was cool and he was a producer." Sylvain, on the other hand, felt the decision to enlist him was based on availability, time, and money: "It wasn't a long list. Todd was in New York and seemed like he could handle the pace." Upon being hired, Rundgren declared that "the only person who can produce a New York record is someone who lives in New York". ## Recording and production Mercury booked the Dolls at The Record Plant in New York City for recording sessions in April 1973. Rundgren was originally concerned that they had taken "the worst sounding studio in the city at that time" because it was the only one available to them with the short time given to record and release the album. 
He later said that expectations for the band and the festive atmosphere of the recording sessions proved to be more of a problem: "The Dolls were critics' darlings and the press had kind of adopted them. Plus, there were lots of extra people around, socializing, which made it hard to concentrate." New York Dolls was recorded there in eight days on a budget of \$17,000. With a short amount of studio time and no concept in mind for the album, the band chose which songs to record based on how well they had been received at their live shows. In Johansen's own words, "we went into a room and just recorded. It wasn't like these people who conceptualize things. It was just a document of what was going on at the time."

In the studio, the New York Dolls dressed in their usual flashy clothes. Rundgren, who did not approve of their raucous sound, at one point yelled at them during the sessions to "get the glitter out of your asses and play". Sylvain recalled Rundgren inviting Buell and their Chihuahua to the studio and putting the latter atop an expensive mixing console, while Johansen acknowledged that his recollections of the sessions have since been distorted by what he has read about them: "It was like the 1920s, with palm tree décor and stuff. Well, that's how I remember it, anyway." He also said Rundgren directed the band from the control room with engineer Jack Douglas and hardly spoke to them while they recorded the album. According to Scoppa, the group's carefree lifestyle probably conflicted with Rundgren's professional work ethic and schedule: "He doesn't put up with bullshit. I mean, [the band] rarely started their live sets before midnight, so who knows? Todd was very much in charge in the studio, however, and I got the impression that everybody was looking to him." Although Sylvain said Rundgren was not an interfering producer, he occasionally involved himself to improve a take. Sylvain recalled moments when Rundgren went into the isolation booth with Nolan, who was struggling to keep a beat, and drummed out beats on a cowbell for him to use as a click track. During another session, he stopped a take and walked out of the control room to plug in Kane's bass cabinet. Scoppa, who paid afternoon visits to the studio, overheard Rundgren say, "Yeah, that's all you needed. Okay, let's try it again!", and ultimately found the exchange funny and indicative of Rundgren's opinion of the band: "Todd was such a 'musician' while they were just getting by on attitude and energy. But as disdainful as he appeared to be at some points he got the job done really well." Rundgren felt Johansen's wild singing often sounded screamed or drunken but also eloquent in the sense that Johansen demonstrated a "propensity to incorporate certain cultural references into the music", particularly on "Personality Crisis". While recording the song, Johansen walked back into the control room and asked Rundgren if his vocals sounded "ludicrous enough".

Because the Dolls had little money, Sylvain and Thunders played the austerely designed and affordable Gibson Les Paul Junior guitars on the record. They jokingly referred to them as "automatic guitars" due to their limited sound-shaping features. To amplify their guitars, they ran a Marshall Plexi standalone amplifier through the speaker cabinets of a Fender Dual Showman, and occasionally used a Fender Twin Reverb. Some songs were embellished with additional instruments, including Buddy Bowser's brassy saxophone on "Lonely Planet Boy".
Johansen sang into distorted guitar pickups for additional vocals and overdubbed them into the song. He also played an Asian gong for "Vietnamese Baby" and harmonica on "Pills". For "Personality Crisis", Sylvain originally played on The Record Plant's Yamaha grand piano before Rundgren added his own piano flourishes to both that song and "Private World". Rundgren also contributed to the background vocals heard on "Trash" and played synthesizers on "Vietnamese Baby" and "Frankenstein (Orig.)", which Sylvain recalled: "I remember him getting those weird sounds from this beautiful old Moog synthesizer he brought in. He said it was a model that only he and the Beatles had." New York Dolls was mixed in less than half a day. Rundgren felt the band seemed distracted and disinterested at that point, so he tried unsuccessfully to ban them from the mixing session. For the final mix, he minimized the sound of Nolan's drumming. In retrospect, Rundgren said the quality of the mix was poor because the band had hurried and questioned him while mixing the record: "It's too easy for it to become a free-for-all, with every musician only hearing their own part and not the whole. They all had other places to be, so rather than split, they rushed the thing and if that wasn't enough they took it to the crappy mastering lab that Mercury had put them in." Thunders famously complained to a journalist that Rundgren "fucked up the mix" on New York Dolls, adding to stories that the two had clashed during the album's recording. Both Johansen and Scoppa later said they did not see any conflict between the two and that Thunders' typically foolish behavior was misinterpreted. Johansen later praised Rundgren for how he enhanced and equalized each instrument, giving listeners the impression that "[they're] in a room and there's a band playing", while Sylvain said his mix accurately captured how the band sounded live. ## Music and lyrics New York Dolls features ten original songs and one cover – the 1963 Bo Diddley song "Pills". Johansen describes the album as "a little jewel of urban folk art". Rundgren, on the other hand, says the band's sensibilities were different from "the urban New York thing" because they had been raised outside Manhattan and drew on carefree rock and roll and Brill Building pop influences such as the Shangri-Las: "Their songs, as punky as they were, usually had a lot to do with the same old boy-girl thing but in a much more inebriated way." Johansen quotes the lyric "when I say I'm in love, you'd best believe I'm in love L-U-V" from the Shangri-Las' "Give Him a Great Big Kiss" (1964) when opening "Looking for a Kiss", which tells a story of adolescent romantic desire hampered by peers who use drugs. On "Subway Train", he uses lyrics from the American folk standard "Someone's in the Kitchen with Dinah". In the opinion of critic Stephen Thomas Erlewine, the album's rowdy hard rock songs also revamp riffs from Chuck Berry and the Rolling Stones, resulting in music that sounds edgy and threatening in spite of the New York Dolls' wittingly kitsch and camp sensibilities. "Personality Crisis" features raunchy dual guitars, boogie-woogie piano, and a histrionic pause, while "Trash" is a punky pop rock song with brassy singing. Several songs on New York Dolls function as what Robert Hilburn deems to be "colorful, if exaggerated, expressions of teen alienation". 
According to Robert Christgau, because many of Manhattan's white youths at the time were wealthy and somewhat artsy, only ill-behaved young people from the outer boroughs like the band could "capture the oppressive excitement Manhattan holds for a half-formed human being". "Private World", an escapist plea for stability, was co-written by Kane, who rarely contributed as a songwriter and felt overwhelmed as a young adult in the music business. Sylvain jokingly says "Frankenstein (Orig.)" was titled with the parenthetical qualifier because rock musician Edgar Winter had released his song of the same name before the band could record their own: "Our song 'Frankenstein' was a big hit in our live show ... Now, his thing didn't sound at all like ours, but I'm sure he stole our title." Johansen, the band's main lyricist, says "Frankenstein (Orig.)" is about "how kids come to Manhattan from all over, they're kind of like whipped dogs, they're very repressed. Their bodies and brains are disoriented from each other ... it's a love song." In interpreting the song's titular monster, Frank Kogan writes that it serves as a personification for New York and its ethos, while Johansen asking listeners if they "could make it with Frankenstein" involves more than sexual slang: > Frankenstein wasn't just a creature to have sex with, he represented the whole funky New Yorkiness of New York, the ostentation and the terror, the dreams and the fear ... David was asking if you – if I – could make it with the monster of life, whether I could embrace life in all its pain and dreams and disaster. Although the Dolls exhibit tongue-in-cheek qualities, Gary Graff observes a streetwise realism in the album's songs. In Christgau's opinion, Johansen's colloquial and morally superior lyrics are imbued with humor and a sense of human limits in songs whose fundamental theme is authenticity. This theme is explored in stories about lost youths, as on "Subway Train", or in a study of a specific subject, such as the "schizy imagemonger" on "Personality Crisis". He argues that beneath the band's decadent and campy surface are lyrics about "the modern world ... one nuclear bomb could blow it all away. Pills and personality crises weren't evils – easy, necessary, or whatever. They were strategies and tropisms and positive pleasures". According to journalist Steve Taylor, "Vietnamese Baby" deals with the impact of the Vietnam War at the time on everyday activities for people, whose fun is undermined by thoughts of collective guilt. On songs such as "Subway Train" and "Trash", Johansen uses ambiguity as a lyrical mode. In Kogan's opinion, Johansen sings in an occasionally unintelligible manner and writes in a perplexing, fictional style that is lazy yet ingenious, as it provides his lyrics an abundance of "emotional meaning" and interpretation: "David never provides an objective framework, he's always jumping from voice to voice, so you're hearing a character addressing another character, or the narrator addressing the character, or the character or the narrator addressing us, all jammed up together so you're hearing bits of conversation and bits of subjective description in no kind of chronological order. But as someone says in 'Vietnamese Baby': 'Everything connects.'" On "Trash", Johansen undercuts his vaguely pansexual beliefs with the possibility of going to "fairyland" if he takes a "lover's leap" with the song's subject. 
## Marketing and sales New York Dolls was released on July 27, 1973, in the United States and on October 19 in the United Kingdom. Its controversial cover featured the band dressed in exaggerated drag, including high wigs, messy make-up, high heels, and garters. The photo was used for shock value, and on the back of the album, the band is photographed in their usual stage wear. To announce the album's release, Mercury published an advertisement slogan that read "Introducing The New York Dolls: A Band You're Gonna Like, Whether You Like It Or Not", while other ads called them "The Band You Love to Hate". Two double A-sided, 7-inch singles were released – "Trash" / "Personality Crisis" in July and "Jet Boy" / "Vietnamese Baby" in November 1973 – neither of which charted. New York Dolls was commercially unsuccessful and only reached number 116 on the American Top LPs while in the UK it failed to chart altogether. The record sold over 100,000 copies at the time and fell well short of expectations in the press. According to Rolling Stone in 2003, it ended up selling fewer than 500,000 copies. Music journalist Phil Strongman said its commercial failure could be attributed to the New York Dolls' divisive effect on listeners, including writers from the same magazine. In a feature story on the band for Melody Maker prior to the album, Mark Plummer had dismissed their playing as the poorest he had ever seen, while the magazine's reporter Michael Watts viewed them as an encouraging albeit momentary presence in what he felt was a lifeless rock and roll scene at the time. In Creem's readers poll, the album earned the band awards in the categories of "Best New Group of the Year" and "Worst New Group of the Year". After the album's release, the Dolls toured the US as a supporting act for English rock band Mott the Hoople. Reviews complimented their songwriting, Thunders and Sylvain's guitar interplay, and noted their campy fashion and the resemblance of Johansen and Thunders to Mick Jagger and Keith Richards. However, some critics panned them as an unserious group of amateurs who could not play or sing. During their appearance on The Old Grey Whistle Test in the UK, the show's host Bob Harris dismissed their music as "mock rock" in his on-air comments. They also developed a reputation for rock-star excesses, including drugs, groupies, trashed hotel rooms, and public disturbances, and according to Ben Edmonds of Creem, became "the most walked-out-on band in the history of show business". Strongman wrote that the band and the album were difficult to market because of their kitschy style and how Murcia's death had exacerbated their association with hard drugs, which "wasn't altogether true in the early days". They remained the most popular band in New York City, where their Halloween night concert at the Waldorf Astoria in 1973 drew hundreds of young fans and local television coverage. ## Critical reception and legacy New York Dolls received widespread acclaim from contemporary reviewers. In a rave review for NME, published in August 1973, Nick Kent said the band's raunchy style of rock and roll had been vividly recorded by Rundgren on an album that, besides Iggy and the Stooges' Raw Power (1973), serves as the only one "so far to fully define just exactly where 1970s rock should be coming from". Trouser Press founder and editor Ira Robbins viewed New York Dolls as an innovative record, brilliantly chaotic, and well produced by Rundgren. 
Ellen Willis, writing for The New Yorker, said it is by far 1973's most compelling hard rock album and that at least half of its songs are immediate classics, particularly "Personality Crisis" and "Trash", which she called "transcendent". In Newsday, Christgau hailed the New York Dolls as "the best hard rock band in the country and maybe the world right now", writing that their "special genius" is combining the shrewd songwriting savvy of early-1960s pop with the anarchic sound of late-1960s heavy metal. He claimed that the record's frenzied approach, various emotions, and wild noise convey Manhattan's harsh, deviant thrill better than the Velvet Underground. In an overall positive review, Rolling Stone critic Tony Glover found the band's impressive live sound to be mostly preserved on the album. However, he was slightly critical of production flourishes and overdubs, feeling that they make some lyrics sound incomprehensible and some choruses too sonorous. Although he was surprised at how well Rundgren's production works with the group's raunchy sound on most of the songs, Glover ultimately asked whether or not "the record alone will impress as much as seeing them live (they're a highly watchable group)." Years later, Christgau would also voice that the album is "in fact a little botched aurally", but still regarded it as a classic. ### Impact and reappraisal New York Dolls is often cited as one of the greatest debut albums in rock music, one of the genre's most popular cult records, and a foundational work for the late 1970s punk rock movement. Chuck Eddy considers it one of the records crucial to rock's evolution. The album was a pivotal influence on many of the rock and roll, punk, and glam rock groups that followed, including the Ramones, Kiss, the Sex Pistols, the Damned, and Guns N' Roses. Chris Smith, in 101 Albums That Changed Popular Music (2009), says that the New York Dolls's pioneering punk aesthetic of amateurish musicianship on the album undermined the musical sophistication that developed in the preceding years of popular music and was perfected months earlier with Pink Floyd's The Dark Side of the Moon (1973). Similarly, The Guardian publication "1000 albums to hear before you die" credits New York Dolls for serving as "an efficacious antidote to the excesses of prog rock". According to Sylvain, the album's influence on punk can be attributed to how Rundgren recorded Sylvain's guitar through the left speaker and Thunders' guitar on the right side, an orientation younger bands such as the Ramones and the Sex Pistols subsequently adopted. Rundgren, on the other hand, was amused by how the record became considered a precursor to the punk movement: "The irony is that I wound up producing the seminal punk album, but I was never really thought of as a punk producer, and I never got called by punk acts. They probably thought I was too expensive for what they were going for. But the Dolls didn't really consider themselves punk." It was English singer Morrissey's favorite album, and according to Paul Myers, the record "struck such a chord with [him] that he was not only moved to form his own influential group, The Smiths ... but would eventually convince the surviving Dolls to reunite [in 2004]". 
According to The Mojo Collection (2007), New York Dolls ignited punk rock and could still inspire more movements because of the music's abundant attitude and passion, while Encyclopedia of Popular Music writer Colin Larkin deems it "a major landmark in rock history, oozing attitude, vitality and controversy from every note". Writing for AllMusic, Erlewine – the website's senior editor – claims that New York Dolls is a more quintessential proto-punk album than any of the Stooges' releases because of how it "plunders history while celebrating it, creating a sleazy urban mythology along the way". David Fricke considers it to be a more definitive glam rock album than David Bowie's Ziggy Stardust (1972) or anything by Marc Bolan because of how the band "captured both the glory and sorrow of glam, the high jinx and wasted youth, with electric photorealism". In The New Rolling Stone Album Guide (2004), Joe Gross calls it an "absolutely essential" record and "epic sleaze, the sound of five young men shaping the big city in their own scuzzy image".

### Professional rankings

New York Dolls appears frequently on professional listings of the greatest albums. In 1978, it was voted 199th in Paul Gambaccini's book Rock Critics' Choice: The Top 200 Albums, which polled a number of leading music journalists and record collectors. Christgau, one of the critics polled, ranked it as the 15th best album of the 1970s in The Village Voice the following year – 11 spots behind the Dolls' second album Too Much Too Soon (1974), although years later he would say the first album should be ranked ahead and was his favorite rock album. New York Dolls was included in Neil Strauss's 1996 list of the 100 most influential alternative records, and the Spin Alternative Record Guide (1995) named it the 70th best alternative album. In 2002, it was included on a list published by Q of the 100 best punk records, while Mojo named it both the 13th greatest punk album and the 49th greatest album of all time. Rolling Stone placed the record at number 213 on its 500 greatest albums list in 2003 and "Personality Crisis" at number 271 on its 500 greatest songs list the following year. In 2007, Mojo polled a panel of prominent recording artists and songwriters for the magazine's "100 Records That Changed the World" publication, in which New York Dolls was voted the 39th most influential and inspirational record ever. In 2013, it placed at number 355 on NME's list of the 500 greatest albums of all time.

## Track listing

## Personnel

Credits are adapted from the album's liner notes.

New York Dolls

- David Johansen – gong, harmonica, vocals
- Arthur "Killer" Kane – bass guitar
- Jerry Nolan – drums
- Sylvain Sylvain – piano, rhythm guitar, vocals
- Johnny Thunders – lead guitar, vocals

Additional personnel

- Buddy Bowser – saxophone
- Jack Douglas – engineering
- David Krebs – executive production
- Steve Leber – executive production
- Paul Nelson – executive production
- Dave O'Grady – makeup
- Todd Rundgren – additional piano, Moog synthesizer, production
- Ed Sprigg – engineer
- Alex Spyropoulos – piano
- Marty Thau – executive production
- Toshi – photography

## Release history

Information is adapted from Nina Antonia's Too Much Too Soon: The New York Dolls (2006).

## See also

- Lipstick Killers – The Mercer Street Sessions 1972
- List of rock albums
- Timeline of punk rock
- Todd Rundgren discography
24,918,912
Amanita bisporigera
1,113,461,521
Poisonous species of fungus in the family Amanitaceae endemic to North America
[ "Amanita", "Deadly fungi", "Fungi described in 1906", "Fungi of North America", "Hepatotoxins", "Poisonous fungi" ]
Amanita bisporigera is a deadly poisonous species of fungus in the family Amanitaceae. It is commonly known as the eastern destroying angel amanita, the eastern North American destroying angel or just as the destroying angel, although the fungus shares this latter name with three other lethal white Amanita species, A. ocreata, A. verna and A. virosa. The fruit bodies are found on the ground in mixed coniferous and deciduous forests of eastern North America south to Mexico, but are rare in western North America; the fungus has also been found in pine plantations in Colombia. The mushroom has a smooth white cap that can reach up to 10 cm (4 in) across, and a stipe, up to 14 cm (5.5 in) long by 1.8 cm (0.7 in) thick, that has a delicate white skirt-like ring near the top. The bulbous stipe base is covered with a membranous sac-like volva. The white gills are free from attachment to the stalk and crowded closely together. As the species name suggests, A. bisporigera typically bears two spores on the basidia, although this characteristic is not as immutable as was once thought. Amanita bisporigera was described as a new species in 1906. It is classified in the section Phalloideae of the genus Amanita together with other amatoxin-containing species. Amatoxins are cyclic peptides which inhibit the enzyme RNA polymerase II and interfere with various cellular functions. The first symptoms of poisoning appear 6 to 24 hours after consumption, followed by a period of apparent improvement, then by symptoms of liver and kidney failure, and death after four days or more. Amanita bisporigera closely resembles a few other white amanitas, including the equally deadly A. virosa and A. verna. These species, difficult to distinguish from A. bisporigera based on visible field characteristics, do not have two-spored basidia, and do not stain yellow when a dilute solution of potassium hydroxide is applied. The DNA of A. bisporigera has been partially sequenced, and the genes responsible for the production of amatoxins have been determined. ## Taxonomy, classification, and phylogeny Amanita bisporigera was first described scientifically in 1906 by American botanist George Francis Atkinson in a publication by Cornell University colleague Charles E. Lewis. The type locality was Ithaca, New York, where several collections were made. In his 1941 monograph of world Amanita species, Édouard-Jean Gilbert transferred the species to his new genus Amanitina, but this genus is now considered synonymous with Amanita. In 1944, William Murrill described the species Amanita vernella, collected from Gainesville, Florida; that species is now thought to be synonymous with A. bisporigera after a 1979 examination of its type material revealed basidia that were mostly 2-spored. Amanita phalloides var. striatula, a poorly known taxon originally described from the United States in 1902 by Charles Horton Peck, is considered by Amanita authority Rodham Tulloss to be synonymous with A. bisporigera. Vernacular names for the mushroom include "destroying angel", "deadly amanita", "white death cap", "angel of death" and "eastern North American destroying angel". Amanita bisporigera belongs to section Phalloideae of the genus Amanita, which contains some of the deadliest Amanita species, including A. phalloides and A. virosa. This classification has been upheld with phylogenetic analyses, which demonstrate that the toxin-producing members of section Phalloideae form a clade—that is, they derive from a common ancestor. 
In 2005, Zhang and colleagues performed a phylogenetic analysis based on the internal transcribed spacer (ITS) sequences of several white-bodied toxic Amanita species, most of which are found in Asia. Their results support a clade containing A. bisporigera, A. subjunquillea var. alba, A. exitialis, and A. virosa. The Guangzhou destroying angel (Amanita exitialis) has two-spored basidia, like A. bisporigera. ## Description The cap is 3–10 cm (1.2–3.9 in) in diameter and, depending on its age, ranges in shape from egg-shaped to convex to somewhat flattened. The cap surface is smooth and white, sometimes with a pale tan- or cream-colored tint in the center. The surface is either dry or, when the environment is moist, slightly sticky. The flesh is thin and white, and does not change color when bruised. The margin of the cap, which is rolled inwards in young specimens, does not have striations (grooves), and lacks volval remnants. The gills, also white, are crowded closely together. They are either free from attachment to the stipe or just barely reach it. The lamellulae (short gills that do not extend all the way to the stipe) are numerous, and gradually narrow. The white stipe is 6–14 cm (2.4–5.5 in) by 0.7–1.8 cm (0.3–0.7 in) thick, solid (i.e., not hollow), and tapers slightly upward. The surface, in young specimens especially, is frequently floccose (covered with tufts of soft hair), fibrillose (covered with small slender fibers), or squamulose (covered with small scales); there may be fine grooves along its length. The bulb at the base of the stipe is spherical or nearly so. The delicate ring on the upper part of the stipe is a remnant of the partial veil that extends from the cap margin to the stalk and covers the gills during development. It is white, thin, membranous, and hangs like a skirt. When young, the mushrooms are enveloped in a membrane called the universal veil, which stretches from the top of the cap to the bottom of the stipe, imparting an oval, egg-like appearance. In mature fruit bodies, the veil's remnants form a membrane around the base, the volva, like an eggshell-shaped cup. On occasion, however, the volva remains underground or gets torn up during development. It is white, sometimes lobed, and may become pressed closely to the stipe. The volva is up to 3.8 cm (1.5 in) in height (measured from the base of the bulb), and is about 2 mm thick midway between the top and the base attachment. The mushroom's odor has been described as "pleasant to somewhat nauseous", becoming more cloying as the fruit body ages. The cap flesh turns yellow when a solution of potassium hydroxide (KOH, 5–10%) is applied (a common chemical test used in mushroom identification). This characteristic chemical reaction is shared with A. ocreata and A. virosa, although some authors have expressed doubt about the identity of North American A. virosa, suggesting those collections may represent four-spored A. bisporigera. Tulloss suggests that reports of A. bisporigera that do not turn yellow with KOH were actually based on white forms of A. phalloides. Findings from the Chiricahua Mountains of Arizona and in central Mexico, although "nearly identical" to A. bisporigera, do not stain yellow with KOH; their taxonomic status has not been investigated in detail. ### Microscopic features The spore print of A. bisporigera, like most Amanita, is white. The spores are roughly spherical, thin-walled, hyaline (translucent), amyloid, and measure 7.8–9.6 by 7.0–9.0 μm. 
The cap cuticle is made of partially gelatinized, filamentous interwoven hyphae, 2–6 μm in diameter. The tissue of the gill is bilateral, meaning it diverges from the center of the gill to its outer edge. The subhymenium is ramose—composed of relatively thin branching, unclamped hyphae. The spore-bearing cells, the basidia, are club-shaped, thin-walled, without clamps, with dimensions of 34–45 by 4–11 μm. They are typically two-spored, although rarely three- or four-spored forms have been found. Although the two-spored basidia are a defining characteristic of the species, there is evidence of a tendency to shift towards producing four-spored basidia as the fruiting season progresses. The volva is composed almost exclusively of densely interwoven filamentous hyphae, 2–10 μm in diameter, that are sparsely to moderately branched. There are few small inflated cells, which are mostly spherical to broadly elliptic. The tissue of the stipe is made of abundant, sparsely branched, filamentous hyphae, without clamps, measuring 2–5 μm in diameter. The inflated cells are club-shaped, longitudinally oriented, and up to 2–3 by 15.7 μm. The annulus is made of abundant moderately branched filamentous hyphae, measuring 2–6 μm in diameter. The inflated cells are sparse, broadly elliptic to pear-shaped, and are rarely larger than 31 by 22 μm. Pleurocystidia and cheilocystidia (cystidia found on the gill faces and edges, respectively) are absent, but there may be cylindrical to sac-like cells of the partial veil on the gill edges; these cells are hyaline and measure 24–34 by 7–16 μm. In 1906 Charles E. Lewis studied and illustrated the development of the basidia in order to compare the nuclear behavior of the two-spored with that of the four-spored forms. Initially (1), the young basidium, appearing as a club-shaped branch from the subhymenium, is filled with cytoplasm and contains two primary nuclei, which have distinct nucleoli. As the basidium grows larger, the membranes of the two nuclei contact (2), and then the membrane disappears at the point of contact (3). The two primary nuclei remain distinct for a short time, but eventually the two nuclei fuse completely to form a larger secondary nucleus with a single secondary nucleolus (4, 5). The basidium increases in size after the primary nuclei fuse, and the nucleus migrates towards the end of the basidia (6, 7). During this time, the nucleus develops vacuoles "filled by the nuclear sap in the living cell". Chromosomes are produced from the nucleolar threads, and align transversely near the apex of the basidium, connected by spindles (8–10). The chromosomes then move to the poles, forming the daughter nuclei that occupy different positions in the basidium; the daughters now have a structure similar to that of the parent nuclei (11). The two nuclei then divide to form four nuclei, similar to fungi with four-spored basidia (12, 13). The four nuclei crowd together at some distance from the end of the basidium to form an irregular mass (14). Shortly thereafter, the sterigmata (slender projections of the basidia that attach the spores) begin to form (15), and cytoplasm begins to pass through the sterigmata to form the spores (16). Although Lewis was not able to clearly determine from observation alone whether the contents of two or four nuclei passed through the sterigmata, he deduced, by examining older basidia with mature spores, that only two nuclei enter the spores (16, 17). 
## Toxicity Amanita bisporigera is considered the most toxic North American Amanita mushroom, with little variation in toxin content between different fruit bodies. Three subtypes of amatoxin have been described: α-, β-, and γ-amanitin. The principal amatoxin, α-amanitin, is readily absorbed across the intestine, and 60% of the absorbed toxin is excreted into bile and undergoes enterohepatic circulation; the kidneys clear the remaining 40%. The toxin inhibits the enzyme RNA polymerase II, thereby interfering with DNA transcription, which suppresses RNA production and protein synthesis. This causes cellular necrosis, especially in cells which are initially exposed and have rapid rates of protein synthesis. This process results in severe acute liver dysfunction and, ultimately, liver failure. Amatoxins are not broken down by boiling, freezing, or drying. Roughly 0.2 to 0.4 milligrams of α-amanitin is present in 1 gram of A. bisporigera; the lethal dose in humans is less than 0.1 mg/kg body weight. One mature fruit body can contain 10–12 mg of α-amanitin, enough for a lethal dose. The α-amanitin concentration in the spores is about 17% that of the fruit body tissues. A. bisporigera also contains the phallotoxin phallacidin, structurally related to the amatoxins but considered less poisonous because of poor absorption. Poisonings (from similar white amanitas) have also been reported in domestic animals, including dogs, cats, and cows. The first reported poisonings resulting in death from the consumption of A. bisporigera were from near San Antonio, Mexico, in 1957, where a rancher, his wife, and three children consumed the fungus; only the man survived. Amanita poisoning is characterized by the following distinct stages: the incubation stage is an asymptomatic period which ranges from 6 to 12 hours after ingestion. In the gastrointestinal stage, about 6 to 16 hours after ingestion, there is onset of abdominal pain, explosive vomiting, and diarrhea for up to 24 hours, which may lead to dehydration, severe electrolyte imbalances, and shock. These early symptoms may be related to other toxins such as phalloidin. In the cytotoxic stage, 24 to 48 hours after ingestion, clinical and biochemical signs of liver damage are observed, but the patient is typically free of gastrointestinal symptoms. The signs of liver dysfunction such as jaundice, hypoglycemia, acidosis, and hemorrhage appear. Later, there is an increase in prothrombin time and in blood levels of ammonia, and the signs of hepatic encephalopathy and/or kidney failure appear. The risk factors for mortality that have been reported are age younger than 10 years, short latency period between ingestion and onset of symptoms, severe coagulopathy (blood clotting disorder), severe hyperbilirubinemia (jaundice), and rising serum creatinine levels. ## Similar species The color and general appearance of A. bisporigera are similar to those of A. verna and A. virosa. A. bisporigera is at times smaller and more slender than either A. verna or A. virosa, but it varies considerably in size; therefore size is not a reliable diagnostic characteristic. A. virosa fruits in autumn—later than A. bisporigera. A. elliptosperma is less common but widely distributed in the southeastern United States, while A. ocreata is found on the West Coast and in the Southwest. Other similar toxic North American species include Amanita magnivelaris, which has a cream-colored, rather thick, felted-submembranous, skirt-like ring, and A.
virosiformis, which has elongated spores that are 3.9–4.7 by 11.7–13.4 μm. Neither A. elliptosperma nor A. magnivelaris typically turn yellow with the application of KOH; the KOH reaction of A. virosiformis has not been reported. Leucoagaricus leucothites is another all-white mushroom with an annulus, free gills, and white spore print, but it lacks a volva and has thick-walled dextrinoid (staining red-brown in Melzer's reagent) egg-shaped spores with a pore. A. bisporigera may also be confused with the larger edible species Agaricus silvicola, the "horse-mushroom". Like many white amanitas, young fruit bodies of A. bisporigera, still enveloped in the universal veil, can be confused with puffball species, but a longitudinal cut of the fruit body reveals internal structures in the Amanita that are absent in puffballs. In 2006, seven members of the Hmong community living in Minnesota were poisoned with A. bisporigera because they had confused it with edible paddy straw mushrooms (Volvariella volvacea) that grow in Southeast Asia. ## Habitat and distribution Like most other Amanita species, A. bisporigera is thought to form mycorrhizal relationships with trees. This is a mutually beneficial relationship where the hyphae of the fungus grow around the roots of trees, enabling the fungus to receive moisture, protection and nutritive byproducts of the tree, and giving the tree greater access to soil nutrients. Fruit bodies of Amanita bisporigera are found on the ground growing either solitarily, scattered, or in groups in mixed coniferous and deciduous forests; they tend to appear during summer and early fall. The fruit bodies are commonly found near oak, but have been reported in birch-aspen areas in the west. It is most commonly found in eastern North America, and rare in western North America. It is widely distributed in Canada, and its range extends south to Mexico. The species has also been found in Colombia, where it may have been introduced from trees exported for use in pine plantations. ## Genome sequencing The Amanita Genome Project was begun in Jonathan Walton's lab at Michigan State University in 2004 as part of their ongoing studies of Amanita bisporigera. The purpose of the project is to determine the genes and genetic controls associated with the formation of mycorrhizae, and to elucidate the biochemical mechanisms of toxin production. The genome of A. bisporigera has been sequenced using a combination of automated Sanger sequencing and pyrosequencing, and the genome sequence information is publicly searchable. The sequence data enabled the researchers to identify the genes responsible for amatoxin and phallotoxin biosynthesis, AMA1 and PHA1. The cyclic peptides are synthesized on ribosomes, and require proline-specific peptidases from the prolyl oligopeptidase family for processing. The genetic sequence information from A. bisporigera has been used to identify molecular polymorphisms in the related A. phalloides. These single-nucleotide polymorphisms may be used as population genetic markers to study phylogeography and population genetics. Sequence information has also been employed to show that A. bisporigera lacks many of the major classes of secreted enzymes that break down the complex polysaccharides of plant cell walls, like cellulose. In contrast, saprobic fungi like Coprinopsis cinerea and Galerina marginata, which break down organic matter to obtain nutrients, have a more complete complement of cell wall-degrading enzymes. 
Although few ectomycorrhizal fungi have yet been tested in this way, the authors suggest that the absence of plant cell wall-degrading ability may correlate with the ectomycorrhizal ecological niche. ## See also - List of Amanita species - List of deadly fungi - Silibinin – a liver-protecting compound used in cases of Amanita mushroom poisoning
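The dose figures given in the Toxicity section above imply that only a small amount of fresh tissue is dangerous. The worked example below is illustrative only: the 70 kg body weight is an assumed example value, and the other numbers are those quoted in that section.

```latex
% Illustrative only: the 70 kg body weight is an assumed example value;
% the dose and concentration figures are those quoted in the Toxicity section.
% Requires the amsmath package.
\begin{align*}
\text{lethal dose for a 70 kg adult:}\quad & <\, 0.1~\text{mg/kg} \times 70~\text{kg} = 7~\text{mg of } \alpha\text{-amanitin}\\
\text{fresh tissue containing 7 mg:}\quad & 7~\text{mg} \div (0.2\text{--}0.4~\text{mg/g}) \approx 18\text{--}35~\text{g}\\
\text{one mature fruit body:}\quad & 10\text{--}12~\text{mg of } \alpha\text{-amanitin} > 7~\text{mg}
\end{align*}
```

On these assumed figures, a few tens of grams of fresh mushroom, and certainly a single mature fruit body, would exceed the quoted lethal dose.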
930,820
Hurricane Carmen
1,165,454,243
Category 4 Atlantic hurricane in 1974
[ "1974 Atlantic hurricane season", "1974 in Alabama", "1974 in Louisiana", "1974 in Mexico", "1974 in Mississippi", "1974 in Puerto Rico", "1974 natural disasters in the United States", "Atlantic hurricanes in Mexico", "Category 4 Atlantic hurricanes", "Hurricanes in Alabama", "Hurricanes in Louisiana", "Hurricanes in Mississippi", "Hurricanes in Oklahoma", "Hurricanes in Puerto Rico", "Retired Atlantic hurricanes" ]
Hurricane Carmen was the most intense tropical cyclone of the 1974 Atlantic hurricane season. A destructive storm with widespread impacts, Carmen developed from a tropical wave that emerged from Africa toward the end of August. The disturbance traveled westward, and organized as a tropical depression east of the Lesser Antilles on August 29. The storm moved through the Caribbean Sea, and in an environment conducive to intensification, it quickly strengthened to its initial peak intensity as a Category 4 hurricane on the Saffir–Simpson Hurricane Scale. Carmen moved ashore on the Yucatán Peninsula, where, despite striking a sparsely populated region, it caused significant crop damage and killed several people. Before the storm's arrival, officials had set up several evacuation centers, and many residents had moved to higher ground. Upon entering the Gulf of Mexico, Carmen turned northward and re-intensified as it approached the United States. Initially threatening the major city of New Orleans, it veered westward and made landfall on the marshland of southern Louisiana, eventually dissipating over eastern Texas on September 10. Tropical cyclone watches and warnings had been issued for the storm, and approximately 100,000 residents left their homes and sought shelter. Damage was lighter than first feared, but the sugar industry suffered substantial losses. Throughout its course, the hurricane killed 8 people and caused \$162 million in damage. Due to the severity of the storm, the name Carmen was retired from the list of Atlantic tropical cyclone names. ## Meteorological history The origins of Hurricane Carmen can be traced to a weather disturbance over Africa during the middle of August 1974. The disturbance moved slowly westward with little convective activity initially, although upon entering the Atlantic Ocean, it spawned a tropical wave within the Intertropical Convergence Zone. The wave had intensified and broadened by August 25, and it eventually split into two components, the northernmost of which consolidated into an organized storm system. Moving westward, the system developed into a tropical depression on August 29, more than 200 mi (320 km) east of Guadeloupe. Due to favorable outflow from an anticyclone nearby, the depression gradually strengthened as it moved through the Lesser Antilles. It attained tropical storm status on August 30, south of Puerto Rico, and was named Carmen by the National Hurricane Center. At first, the storm's proximity to Hispaniola prevented further strengthening, but by August 31, it had managed to intensify into a Category 1 hurricane on the Saffir–Simpson Hurricane Scale. As Carmen passed south of Jamaica, an eye feature briefly appeared. On September 1, the hurricane began to rapidly deepen over warm waters of the Caribbean Sea; by 18:00 Coordinated Universal Time (UTC), it had strengthened to Category 4 intensity. Continuing westward, the storm passed north of Swan Island later that day. Early on September 2, a double eyewall appeared on satellite imagery. Carmen's forward movement gradually slowed as the storm took a west by north direction, and it reached its initial peak intensity with maximum sustained winds of 150 mph (240 km/h), accompanied by a central barometric pressure of 928 mbar (hPa; 27.4 inHg). Atmospheric steering currents became increasingly weaker, and Carmen slowed to a drift. Later on September 2, the hurricane made landfall on the Yucatán Peninsula; its northern jog spared Belize City from a direct hit. 
The storm's center passed a few miles north of Chetumal, Quintana Roo. The cyclone drifted inland, deteriorating to a tropical storm on September 3. About a day later, Carmen emerged into the Gulf of Mexico, where it nearly stalled. Turning northward, the storm regained hurricane strength on September 5. Carmen continued to strengthen and accelerated northward towards the United States Gulf Coast, reaching a forward speed of 12 mph (19 km/h); at 00:00 UTC on September 7, it once again became a Category 3 major hurricane. The storm then became a Category 4 hurricane again and reached its second peak intensity while located south of Louisiana; although the wind speeds were identical to that of its initial peak, the barometric pressure was slightly higher. Carmen weakened and veered westward before landfall, ultimately striking south-central Louisiana. After moving ashore, the hurricane quickly lost strength and late on September 9 degenerated into a tropical depression. The depression moved westward and soon dissipated over eastern Texas. ## Preparations Initial reactions to the approaching hurricane in the Yucatán Peninsula were regarded as calm by the United States media. Mexican officials declared an emergency alert by September 2, although they did not advise any evacuations. Meteorologists in the United States urged those living near the coast to move inland immediately. Fearing significant loss of life and property, the Red Cross began preparations for the approaching hurricane in Belize. The following day, the Mexican Army rushed to set up emergency operation centers and shelters in five cities. Mobile communication units and relief teams were prepared for deployment following the storm's passage. Many of the nearly 35,000 residents in and around the city of Chetumal evacuated to higher ground. Although it initially threatened the United States city of New Orleans, the hurricane turned west prior to making landfall and spared the area from severe damage. Contrary to its actual path, forecasters predicted the hurricane to execute an eastward swerve toward Florida. Had the cyclone instead continued northward and traveled over Lake Pontchartrain, low-lying areas could have suffered "catastrophic" flooding. Over 100,000 residents of the Gulf Coast, mostly in Louisiana and Mississippi, evacuated in advance of the hurricane, causing heavy congestion on highways. About 60,000 people sought shelter in facilities across the New Orleans region, according to Red Cross officials. Hurricane warnings were issued along the coast, while Coast Guard personnel went door-to-door on Grand Isle urging residents to leave the area. From there through the coast of southwestern Florida, small craft were advised to remain near shore due to rough seas. Offshore, workers were removed from oil rigs. Many Mississippi citizens, having experienced the destruction of Hurricane Camille just five years earlier, quickly left their coastal homes. ## Impact As a tropical depression and storm, Carmen produced moderate rainfall across Puerto Rico and the northern Lesser Antilles, peaking at 5.91 in (150 mm) in Jajome Alto, Puerto Rico. The storm spawned a tornado on Puerto Rico and triggered flash flooding, which collectively left over \$2 million in damage. Winds approaching gale force affected several islands. Heavy rain fell on Hispaniola as the storm progressed westward, and on Jamaica, the storm caused three drownings. High winds and heavy rainfall were reported there and in Cuba. 
The hurricane damaged local reefs on the north shore of Jamaica during its passage. In the state of Louisiana, the storm killed one person and caused close to an additional \$175 million in damage. Following the storm, the name Carmen was retired from the annually rotating list of hurricane names. However, due to a change in the naming scheme in 1979, it was not replaced by any particular name. ### Yucatán Peninsula Although Carmen made landfall as a powerful Category 4 hurricane, it caused significantly less damage than anticipated because it struck a sparsely populated region. However, torrential rainfall from the storm inundated farmland across the region, ruining rice crops. The fishing industry also sustained major losses. Communication with the hardest hit regions was lost following Carmen's passage; however, early reports stated that at least five people were injured. Several days later, officials in Mexico confirmed that three people had been killed by the storm. The city of Chetumal was described as a "disaster", and hundreds of people were left homeless. More than 5,000 people in the city lost their belongings as a result of the storm. Officials in the area estimated that damage in Chetumal alone reached \$8 million (1974 USD). Throughout the Yucatán Peninsula, Hurricane Carmen claimed four lives and wrought \$10 million (1974 USD) in damage. Following Carmen's passage, officials feared the worst for an area of 1,000 mi<sup>2</sup> (2,590 km<sup>2</sup>) in Belize where communication was lost. A reconnaissance task force was sent out from Belize City the day after Carmen made landfall to assist any residents stranded by the storm. One person was killed off the coast of Belize after being washed off his boat by large swells produced by Carmen. Three other fishermen were listed as missing following similar incidents. Thousands of people moved from coastal areas inland to escape the storm. Crop damage in the country was reportedly severe. ### United States Carmen dropped moderate rainfall along its path, though the heaviest rainfall occurred well to the east of the storm's center, in southern Alabama and the northern Florida Panhandle. Precipitation peaked at over 13 in (330 mm) in Atmore, Alabama. Winds gusted up to 86 mph (138 km/h), and along the coast, tides ran as high as 6 ft (1.8 m) above normal. Over northwestern Louisiana, winds ranged from 40 to 45 mph (64 to 72 km/h) and brought down several trees. In New Orleans, despite wind gusts to 72 mph (116 km/h), minimal damage was reported. The hurricane's effects in Baton Rouge were confined to strewn debris and a few downed trees. Because Carmen moved ashore over uninhabited marshland, it caused far less damage than initially feared. Nonetheless, tidal flooding from the Gulf of Mexico and coastal bodies of water was severe. Freshwater flooding was less extreme. In total, the storm inundated 2,380,500 acres (963,400 ha) of land in Louisiana, including 742,300 acres (300,400 ha) in Terrebonne Parish and 590,000 acres (240,000 ha) in Plaquemines Parish. A large oak tree was overturned by high winds in the town of Jeanerette in Iberia Parish. The storm's greatest impact was the loss of sugar cane crops in Louisiana. An estimated 308,000 acres (125,000 ha) of sugar cane in 16 parishes was damaged, and about 20 percent was completely ruined.
After a tour of the affected area, then-Governor Edwin Edwards estimated crop damage alone at \$400 million, although a more recent estimate placed total agricultural damage from the hurricane at \$74 million. The sugar cane crop was crucial to the country's sugar supplies, rendering the losses "doubly bad", and sugar futures rose drastically after the storm. Other crops damaged by Carmen included soybeans, rice, and cotton. Tidal action along the coast affected the balance of salinity in coastal marshes and water bodies. The sudden intrusion of saltwater stressed delicate plants. Fish, shrimp and oysters also suffered the ecological effects of Hurricane Carmen. Flooding on land caused some wildlife to drown. Several parks in Louisiana sustained damage, either from flooding or high winds; losses to Grand Island State Park in particular totaled \$114,600. The oil and gas industry was also affected, and its estimated \$24.7 million in losses resulted mainly from damage to equipment and offshore facilities. The storm diminished oil production by 1.4 million barrels when it shut down operations for 24 to 48 hours at various locations. Over 60,000 electric cooperative customers lost power. The hurricane caused two fatalities in Louisiana: a utility repairman who was electrocuted while working on power lines damaged by strong winds, and a motorist who was involved in a storm-related traffic accident. Total monetary losses in the state were estimated at \$150 million. Overall, the hurricane spawned four confirmed tornadoes. One touched down near Brandon, Mississippi, destroying a barn and causing other damage. Another struck Kaplan, Louisiana, injuring one person. The storm's effects in Mississippi were described as minimal and were mainly confined to minor traffic accidents during bouts of heavy precipitation. Light to moderate rainfall from the storm extended as far east as Florida and Georgia and as far west as Oklahoma and Texas. ## In popular culture Hurricane Carmen was depicted in the 1994 movie Forrest Gump, in which the hurricane plays a major part in the plot. ## See also - Other storms of the same name - List of Category 4 Atlantic hurricanes - List of retired Atlantic hurricane names - List of wettest tropical cyclones in Alabama - Hurricane Audrey (1957) – caused severe impacts in Louisiana and Mississippi - Hurricane Laura (2020) – devastated Southwestern Louisiana - Hurricane Delta (2020) and Hurricane Zeta (2020) – both also made landfall on the Yucatán Peninsula and in Louisiana
267,366
Mauna Kea
1,173,890,085
Hawaiian volcano
[ "Cenozoic Hawaii", "Four-thousanders of Hawaii", "Hawaiian religion", "Hawaiian words and phrases", "Highest points of U.S. states", "Holocene Oceania", "Holocene shield volcanoes", "Inactive volcanoes", "Mountains of Hawaii", "National Natural Landmarks in Hawaii", "Pleistocene Oceania", "Pleistocene shield volcanoes", "Polygenetic shield volcanoes", "Sacred mountains", "Shield volcanoes of the United States", "Volcanoes of the Island of Hawaii" ]
Mauna Kea (/ˌmɔːnə ˈkeɪə/ or /ˌmaʊnə ˈkeɪə/; abbreviation for Mauna a Wākea) is an inactive volcano on the island of Hawaiʻi. Its peak is 4,207.3 m (13,803 ft) above sea level, making it the highest point in the state of Hawaiʻi and second-highest peak of an island on Earth. The peak is about 38 m (125 ft) higher than Mauna Loa, its more massive neighbor. Mauna Kea is unusually topographically prominent for its height: its wet prominence is fifteenth in the world among mountains, at 4,205 m (13,796 ft); its dry prominence is 9,330 m (30,610 ft). This dry prominence is greater than Mount Everest's height above sea level of 8,848.86 m (29,032 ft), and some authorities have labelled Mauna Kea the tallest mountain in the world, from its underwater base. It is about one million years old and thus passed the most active shield stage of life hundreds of thousands of years ago. In its current post-shield state, its lava is more viscous, resulting in a steeper profile. Late volcanism has also given it a much rougher appearance than its neighboring volcanoes due to construction of cinder cones, decentralization of its rift zones, glaciation on its peak, and weathering by the prevailing trade winds. Mauna Kea last erupted 6,000 to 4,000 years ago and is now thought to be dormant. In Hawaiian religion, the peaks of the island of Hawaiʻi are sacred. An ancient law allowed only high-ranking aliʻi to visit its peak. Ancient Hawaiians living on the slopes of Mauna Kea relied on its extensive forests for food, and quarried the dense volcano-glacial basalts on its flanks for tool production. When Europeans arrived in the late 18th century, settlers introduced cattle, sheep and game animals, many of which became feral and began to damage the volcano's ecological balance. Mauna Kea can be ecologically divided into three sections: an alpine climate at its summit, a Sophora chrysophylla–Myoporum sandwicense (or māmane–naio) forest on its flanks, and an Acacia koa–Metrosideros polymorpha (or koa–ʻōhiʻa) forest, now mostly cleared by the former sugar industry, at its base. In recent years, concern over the vulnerability of the native species has led to court cases that have forced the Hawaiʻi Department of Land and Natural Resources to work towards eradicating all feral species on the volcano. With its high elevation, dry environment, and stable airflow, Mauna Kea's summit is one of the best sites in the world for astronomical observation. Since the creation of an access road in 1964, thirteen telescopes funded by eleven countries have been constructed at the summit. The Mauna Kea Observatories are used for scientific research across the electromagnetic spectrum and comprise the largest such facility in the world. Their construction on a landscape considered sacred by Native Hawaiians continues to be a topic of debate to this day. ## Topographic prominence Mauna Kea is unusually topographically prominent for its height, with a wet prominence fifteenth in the world among mountains, and a dry prominence second in the world, after only Mount Everest. It is the highest peak on its island, so its wet prominence matches its height above sea level, at 4,207.3 m (13,803 ft). Because the Hawaiian Islands slope deep into the ocean, Mauna Kea has a dry prominence of 9,330 m (30,610 ft). This dry prominence is taller than Mount Everest's height above sea level of 8,848.86 m (29,032 ft), so Everest would have to include whole continents in its foothills to exceed Mauna Kea's dry prominence.
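These comparisons reduce to simple arithmetic on the quoted figures. The calculation below is illustrative only, using the values given here and in the following paragraph (the 9,966 m base-to-peak figure and the Richard's Deep–Llullaillaco example); it is not an independent measurement.

```latex
% Illustrative arithmetic only; all figures are those quoted in this section.
% Requires the amsmath package.
\begin{align*}
\text{dry prominence vs. Everest's elevation:}\quad & 9{,}330~\text{m} - 8{,}848.86~\text{m} \approx 481~\text{m}\\
\text{base depth implied by the smaller base-to-peak figure:}\quad & 9{,}966~\text{m} - 4{,}207.3~\text{m} \approx 5{,}759~\text{m below sea level}\\
\text{Richard's Deep to Llullaillaco rise:}\quad & 8{,}065~\text{m} + 6{,}739~\text{m} = 14{,}804~\text{m}
\end{align*}
```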
Given how much Mauna Kea protrudes from the Hawaiian Trough, some authorities have called it the tallest (as opposed to highest) mountain in the world, as measured from base to peak. Unlike prominence, base is loosely defined, which has resulted in numbers ranging from 9,966 m (32,696 ft) (roughly to the deepest point in the Hawaiian Trough) to 17,205 m (56,447 ft) (to the root of the mountain deep underground). By those measures, other mountains stake rival claims: Mount Lamlam claims a higher climb from base (11,528 m (37,820 ft), starting from Challenger Deep), and all of the Himalayan Mountains claim tremendously deep roots. Greater rises could be measured from the Atacama Trench to the Andes Mountains; for example, the rise from the bottom of Richard's Deep (8,065 m (26,460 ft) deep) to the peak of the nearby Llullaillaco (6,739 m (22,110 ft) high) is 14,804 m (48,570 ft). Neither Mount Lamlam nor Llullaillaco has the dry prominence of Mauna Kea, because they do not extend into trenches in every direction. ## Geology Mauna Kea is one of five volcanoes that form the island of Hawaiʻi, the largest and youngest island of the Hawaiian–Emperor seamount chain. Of these five hotspot volcanoes, Mauna Kea is the fourth oldest and fourth most active. It began as a preshield volcano driven by the Hawaiʻi hotspot around one million years ago, and became exceptionally active during its shield stage until 500,000 years ago. Mauna Kea entered its quieter post-shield stage 250,000 to 200,000 years ago, and is currently dormant, having last erupted between 4,500 and 6,000 years ago. Mauna Kea does not have a visible summit caldera, but contains a number of small cinder and pumice cones near its summit. A former summit caldera may have been filled and buried by later summit eruption deposits. Mauna Kea is over 32,000 km<sup>3</sup> (7,680 cu mi) in volume, so massive that it and its neighbor, Mauna Loa, depress the ocean crust beneath it by 6 km (4 mi). The volcano continues to slip and flatten under its own weight at a rate of less than 0.2 mm (0.01 in) per year. Much of its mass lies east of its present summit. It stands 4,207.3 m (13,803 ft) above sea level, about 38 m (125 ft) higher than its neighbor Mauna Loa, and is the highest point in the state of Hawaii. Like all Hawaiian volcanoes, Mauna Kea has been created as the Pacific tectonic plate has moved over the Hawaiian hotspot in the Earth's underlying mantle. The Hawaii island volcanoes are the most recent evidence of this process that, over 70 million years, has created the 6,000 km (3,700 mi)-long Hawaiian Ridge–Emperor seamount chain. The prevailing, though not completely settled, view is that the hotspot has been largely stationary within the planet's mantle for much, if not all of the Cenozoic Era. However, while Hawaiian volcanism is well understood and extensively studied, there remains no definite explanation of the mechanism that causes the hotspot effect. Lava flows from Mauna Kea overlapped in complex layers with those of its neighbors during its growth. Most prominently, Mauna Kea is built upon older flows from Kohala to the northwest, and intersects the base of Mauna Loa to the south. The original eruptive fissures (rift zones) in the flanks of Mauna Kea were buried by its post-shield volcanism. Hilo Ridge, a prominent underwater rift zone structure east of Mauna Kea, was once believed to be a part of the volcano; however, it is now understood to be a rift zone of Kohala that has been affected by younger Mauna Kea flows.
The shield-stage lavas that built the enormous main mass of the volcano are tholeiitic basalts, like those of Mauna Loa, created through the mixing of primary magma and subducted oceanic crust. They are covered by the oldest exposed rock strata on Mauna Kea, the post-shield alkali basalts of the Hāmākua Volcanics, which erupted between 250,000 and 70–65,000 years ago. The most recent volcanic flows are hawaiites and mugearites: they are the post-shield Laupāhoehoe Volcanics, erupted between 65,000 and 4,000 years ago. These changes in lava composition accompanied the slow reduction of the supply of magma to the summit, which led to weaker eruptions that then gave way to isolated episodes associated with volcanic dormancy. The Laupāhoehoe lavas are more viscous and contain more volatiles than the earlier tholeiitic basalts; their thicker flows significantly steepened Mauna Kea's flanks. In addition, explosive eruptions have built cinder cones near the summit. These cones are the most recent eruptive centers of Mauna Kea. Its present summit is dominated by lava domes and cinder cones up to 1.5 km (0.9 mi) in diameter and hundreds of meters tall. Mauna Kea is the only Hawaiian volcano with distinct evidence of glaciation. Similar deposits probably existed on Mauna Loa, but have been covered by later lava flows. Despite Hawaii's tropical location, during several past ice ages a drop of a degree in temperature allowed snow to remain at the volcano's summit through summer, triggering the formation of an ice cap. There are three episodes of glaciation that have been recorded from the last 180,000 years: the Pōhakuloa (180–130 ka), Wāihu (80–60 ka) and Mākanaka (40–13 ka) series. These have extensively sculpted the summit, depositing moraines and a circular ring of till and gravel along the volcano's upper flanks. Subglacial eruptions built cinder cones during the Mākanaka glaciation, most of which were heavily gouged by glacial action. The most recent cones were built between 9000 and 4500 years ago, atop the glacial deposits, although one study indicates that the last eruption may have been around 3600 years ago. At their maximum extent, the glaciers extended from the summit down to between 3,200 and 3,800 m (10,500 and 12,500 ft) of elevation. A small body of permafrost, less than 25 m (80 ft) across, was found at the summit of Mauna Kea before 1974, and may still be present. Small gullies etch the summit, formed by rain- and snow-fed streams that flow only during winter melt and rain showers. On the windward side of the volcano, stream erosion driven by trade winds has accelerated erosion in a manner similar to that on older Kohala. Mauna Kea is home to Lake Waiau, the highest lake in the Pacific Basin. At an altitude of 3,969 m (13,022 ft), it lies within the Puʻu Waiau cinder cone and is the only alpine lake in Hawaii. The lake is very small and shallow, with a surface area of 0.73 ha (1.80 acres) and a depth of 3 m (10 ft) when fullest. Radiocarbon dating of samples at the base of the lake indicates that it was clear of ice 12,600 years ago. Hawaiian lava types are typically permeable, preventing lake formation due to infiltration. Either sulfur-bearing steam altered the volcanic ash to low-permeability clays, or explosive interactions between rising magma and groundwater or surface water during phreatic eruptions formed exceptionally fine ash that reduced the permeability of the lake bed. 
No artesian water was known on the island of Hawaiʻi until 1993, when drilling by the University of Hawaiʻi tapped an artesian aquifer more than 300 m (980 ft) below sea level that extended through more than 100 m (330 ft) of the borehole's total depth. The borehole had drilled through a compacted layer of soil and lava where the flows of Mauna Loa had encroached upon the exposed Mauna Kea surface and had subsequently subsided below sea level. Isotopic composition shows the water present to have been derived from rain coming off Mauna Kea at higher than 2,000 m (6,600 ft) above mean sea level. The aquifer's presence is attributed to a freshwater head within Mauna Kea's basal lens. Scientists believe there may be more water in Mauna Kea's freshwater lens than current models indicate. Two more boreholes were drilled on Mauna Kea in 2012, with water being found at much higher elevations and shallower depths than expected. Donald Thomas, director of the University of Hawaiʻi's Center for the Study of Active Volcanoes, believes one reason to continue studying the aquifers is the use and occupancy of the higher elevation areas, stating: "Nearly all of these activities depend on the availability of potable water that, in most cases, must be trucked to the Saddle from Waimea or Hilo — an inefficient and expensive process that consumes a substantial quantity of our scarce liquid fuels." ### Future activity The last eruption of Mauna Kea was about 4,600 years ago (about 2600 BC); because of this inactivity, Mauna Kea is assigned a United States Geological Survey hazard listing of 7 for its summit and 8 for its lower flanks, out of the lowest possible hazard rating of 9 (which is given to the extinct volcano Kohala). Since 8000 BC lava flows have covered 20% of the volcano's summit and virtually none of its flanks. Despite its dormancy, Mauna Kea is expected to erupt again. Based on earlier eruptions, such an event could occur anywhere on the volcano's upper flanks and would likely produce long lava flows, mostly of ʻaʻā, 15–25 km (9–16 mi) long. Long periods of activity could build a cinder cone at the source. Although not likely in the next few centuries, such an eruption would probably result in little loss of life but significant damage to infrastructure. ## Human history ### Native history The first Ancient Hawaiians to arrive on Hawaiʻi island lived along the shores, where food and water were plentiful. Settlement expanded inland to the Mauna Loa – Mauna Kea region in the 12th and early 13th centuries. Archaeological evidence suggests that these regions were used for hunting, collecting stone material, and possibly for spiritual reasons or for astronomical or navigational observations. The mountain's plentiful forest provided plants and animals for food and raw materials for shelter. Flightless birds that had previously known no predators became a staple food source. Early settlement of the Hawaiian islands led to major changes to local ecosystems and many extinctions, particularly amongst bird species. Ancient Hawaiians brought foreign plants and animals, and their arrival was associated with increased rates of erosion. The prevailing lowland forest ecosystem was transformed from forest to grassland; some of this change was caused by the use of fire, but the prevailing cause of forest ecosystem collapse and avian extinction on Hawaiʻi appears to have been the introduction of the Polynesian (or Pacific) rat.
The five volcanoes of Hawaiʻi are revered as sacred mountains; and Mauna Kea's summit, the highest, is the most sacred. For this reason, a kapu (ancient Hawaiian law) restricted visitor rights to high-ranking aliʻi. Hawaiians associated elements of their natural environment with particular deities. In Hawaiian mythology, the summit of Mauna Kea was seen as the "region of the gods", a place where benevolent spirits reside. Poliʻahu, deity of snow, also resides there. "Mauna Kea" is an abbreviation for Mauna a Wākea and means "white mountain," in reference to its seasonally snow-capped summit. Around AD 1100, natives established adze quarries high up on Mauna Kea to extract the uniquely dense basalt (generated by the quick cooling of lava flows meeting glacial ice during subglacial eruptions) to make tools. Volcanic glass and gabbro were collected for blades and fishing gear, and māmane wood was preferred for the handles. At peak quarry activity after AD 1400, there were separate facilities for rough and fine cutting; shelters with food, water, and wood to sustain the workers; and workshops creating the finished product. Lake Waiau provided drinking water for the workers. Native chiefs would also dip the umbilical cords of newborn babies in its water, to give them the strength of the mountain. Use of the quarry declined between this period and contact with Americans and Europeans. As part of the ritual associated with quarrying, the workers erected shrines to their gods; these and other quarry artifacts remain at the sites, most of which lie within what is now the Mauna Kea Ice Age Reserve. This early era was followed by cultural expansion between the 12th and late 18th century. Land was divided into regions designed for the immediate needs of the populace. These ahupuaʻa generally took the form of long strips of land oriented from the mountain summits to the coast. Mauna Kea's summit was encompassed in the ahupuaʻa of Kaʻohe, with part of its eastern slope reaching into the nearby Humuʻula. Principal sources of nutrition for Hawaiians living on the slopes of the volcano came from the māmane–naio forest of its upper slopes, which provided them with vegetation and bird life. Bird species hunted included the ʻuaʻu (Pterodroma sandwichensis), nēnē (Branta sandvicensis), and palila (Loxioides bailleui). The lower koa–ʻōhiʻa forest gave the natives wood for canoes and ornate bird feathers for decoration. ### Modern era There are three accounts of foreigners visiting Hawaiʻi before the arrival of James Cook, in 1778. However, the earliest Western depictions of the isle, including Mauna Kea, were created by explorers in the late 18th and early 19th centuries. Contact with Europe and America had major consequences for island residents. Native Hawaiians were devastated by introduced diseases; port cities including Hilo, Kealakekua, and Kailua grew with the establishment of trade; and the adze quarries on Mauna Kea were abandoned after the introduction of metal tools. In 1793, cattle were brought by George Vancouver as a tribute to King Kamehameha I. By the early 19th century, they had escaped confinement and roamed the island freely, greatly damaging its ecosystem. In 1809 John Palmer Parker arrived and befriended Kamehameha I, who put him in charge of cattle management on the island. With an additional land grant in 1845, Parker established Parker Ranch on the northern slope of Mauna Kea, a large cattle ranch that is still in operation today. 
Settlers to the island burned and cut down much of the lower native forest for sugarcane plantations and houses. The Saddle Road, named for its crossing of the saddle-shaped plateau between Mauna Kea and Mauna Loa, was completed in 1943, and eased travel to Mauna Kea considerably. The Pohakuloa Training Area on the plateau is the largest military training ground in Hawaiʻi. The 108,863-acre (44,055 ha) base extends from the volcano's lower flanks to 2,070 m (6,790 ft) elevation, on state land leased to the US Army since 1956. There are 15 threatened and endangered plants, three endangered birds, and one endangered bat species in the area. Mauna Kea has been the site of extensive archaeological research since the 1980s. Approximately 27 percent of the Science Reserve had been surveyed by 2000, identifying 76 shrines, 4 adze manufacturing workshops, 3 other markers, 1 positively identified burial site, and 4 possible burial sites. By 2009, the total number of identified sites had risen to 223, and archaeological research on the volcano's upper flanks is ongoing. It has been suggested that the shrines, which are arranged around the volcano's summit along what may be an ancient snow line, are markers for the transition to the sacred part of Mauna Kea. Despite many references to burial around Mauna Kea in Hawaiian oral history, few sites have been confirmed. The lack of shrines or other artifacts on the many cinder cones dotting the volcano may be because they were reserved for burial. ### Ascents In pre-contact times, natives traveling up Mauna Kea were probably guided more by landscape than by existing trails, as no evidence of trails has been found. It is possible that natural ridges and water sources were followed instead. Individuals likely took trips up Mauna Kea's slopes to visit family-maintained shrines near its summit, and traditions related to ascending the mountain exist to this day. However, very few natives reached the summit, because of the strict kapu placed on it. In the early 19th century, the earliest notable recorded ascents of Mauna Kea included the following: - On August 26, 1823, Joseph F. Goodrich, an American missionary, made the first recorded ascent in a single day; however, a small arrangement of stones he observed suggested he was not the first human on the summit. He recorded four ecosystems as he travelled from base to summit, and also visited Lake Waiau. - On June 17, 1825, an expedition from HMS Blonde, led by botanist James Macrae, reached the summit of Mauna Kea. Macrae was the first person to record the Mauna Kea silversword (Argyroxiphium sandwicense), saying: "The last mile was destitute of vegetation except one plant of the Sygenisia tribe, in growth much like a Yucca, with sharp pointed silver coloured leaves and green upright spike of three or four feet producing pendulous branches with brown flowers, truly superb, and almost worth the journey of coming here to see it on purpose." - In January 1834, David Douglas climbed the mountain and described extensively the division of plant species by altitude. On an unrelated traverse, from Kamuela to Hilo in July, he was found dead in a pit intended to catch wild cattle. Although murder was suspected, it was probably an accidental fall. The site, Ka lua kauka , is marked by the Douglas fir trees named for him. - In 1881, Queen Emma traveled to the peak to bathe in the waters of Lake Waiau during competition for the role of ruling chief of the Kingdom of Hawaiʻi. - On August 6, 1889, E.D. 
Baldwin left Hilo and followed cattle trails to the summit. - In February 2021, Victor Vescovo and Clifford Kapono made the first ascent of Mauna Kea from its subaerial base 16,785 ft below sea level using the submersible Limiting Factor, then ocean kayaks from above the mountain base 27 miles to the shoreline, then bicycles to a camp at about 9000 ft altitude from which they then walked to the 13,802 ft summit (a total gain of 30,587 ft). In the late 19th and early 20th centuries trails were formed, often by the movement of game herds, that could be traveled on horseback. However, vehicular access to the summit was practically impossible until the construction of a road in 1964, and it continues to be restricted. Today, multiple trails to the summit exist, in various states of use. ## Ecology ### Background Hawaiʻi's geographical isolation strongly influences its ecology. Remote islands like Hawaiʻi have a large number of species that are found nowhere else (see Endemism in the Hawaiian Islands). The remoteness resulted in evolutionary lines distinct from those elsewhere and isolated these endemic species from external biotic influence, and also makes them especially vulnerable to extinction and the effects of invasive species. In addition the ecosystems of Hawaiʻi are under threat from human development including the clearing of land for agriculture; an estimated third of the island's endemic species have already been wiped out. Because of its elevation, Mauna Kea has the greatest diversity of biotic ecosystems anywhere in the Hawaiian archipelago. Ecosystems on the mountain form concentric rings along its slopes due to changes in temperature and precipitation with elevation. These ecosystems can be roughly divided into three sections by elevation: alpine–subalpine, montane, and basal forest. Contact with Americans and Europeans in the early 19th century brought more settlers to the island, and had a lasting negative ecological effect. On lower slopes, vast tracts of koa–ʻōhiʻa forest were converted to farmland. Higher up, feral animals that escaped from ranches found refuge in, and damaged extensively, Mauna Kea's native māmane–naio forest. Non-native plants are the other serious threat; there are over 4,600 introduced species on the island, whereas the number of native species is estimated at just 1,000. ### Alpine environment The summit of Mauna Kea lies above the tree line, and consists of mostly lava rock and alpine tundra. An area of heavy snowfall, it is inhospitable to vegetation, and is known as the Hawaiian tropical high shrublands. Growth is restricted here by extremely cold temperatures, a short growing season, low rainfall, and snow during winter months. A lack of soil also retards root growth, makes it difficult to absorb nutrients from the ground, and gives the area a very low water retention capacity. Plant species found at this elevation include Styphelia tameiameiae, Taraxacum officinale, Tetramolopium humile, Agrostis sandwicensis, Anthoxanthum odoratum, Trisetum glomeratum, Poa annua, Sonchus oleraceus, and Coprosma ernodiodes. One notable species is Mauna Kea silversword (Argyroxiphium sandwicense var. sandwicense), a highly endangered endemic plant species that thrives in Mauna Kea's high elevation cinder deserts. At one stage reduced to a population of just 50 plants, Mauna Kea silversword was thought to be restricted to the alpine zone, but in fact has been driven there by pressure from livestock, and can grow at lower elevations as well. 
The Mauna Kea Ice Age Reserve on the southern summit flank of Mauna Kea was established in 1981. The reserve is a region of sparsely vegetated cinder deposits and lava rock, including areas of aeolian desert and Lake Waiau. This ecosystem is a likely haven for the threatened ʻuaʻu (Pterodroma sandwichensis) and also the center of a study on wēkiu bugs (Nysius wekiuicola). Wēkiu bugs feed on dead insect carcasses that drift up Mauna Kea on the wind and settle on snow banks. This is a highly unusual food source for a species in the genus Nysius, which consists of predominantly seed-eating insects. They can survive at extreme elevations of up to 4,200 m (13,780 ft) because of natural antifreeze in their blood. They also stay under heated surfaces most of the time. Their conservation status is unclear, but the species is no longer a candidate for the Endangered Species List; studies on the welfare of the species began in 1980. The closely related Nysius aa lives on Mauna Loa. Wolf spiders (Lycosidae) and forest tent caterpillar moths have also been observed in the same Mauna Kea ecosystem; the former survive by hiding under heat-absorbing rocks, and the latter through cold-resistant chemicals in their bodies. ### Māmane–naio forest The forested zone on the volcano, at an elevation of 2,000–3,000 m (6,600–9,800 ft), is dominated by māmane (Sophora chrysophylla) and naio (Myoporum sandwicense), both endemic tree species, and is thus known as māmane–naio forest. Māmane seeds and naio fruit are the chief foods of the birds in this zone, especially the palila (Loxioides bailleui). The palila was formerly found on the slopes of Mauna Kea, Mauna Loa, and Hualālai, but is now confined to the slopes of Mauna Kea—only 10% of its former range—and has been declared critically endangered. The largest threat to the ecosystem is grazing by feral sheep, cattle (Bos primigenius), and goats (Capra hircus) introduced to the island in the late 18th century. Feral animal competition with commercial grazing was severe enough that a program to eradicate them existed as far back as the late 1920s, and continued through to 1949. One of the results of this grazing was the increased prevalence of herbaceous and woody plants, both endemic and introduced, that were resistant to browsing. The feral animals were almost eradicated, and numbered a few hundred in the 1950s. However, an influx of local hunters led to the feral species being valued as game animals, and in 1959 the Hawaiʻi Department of Land and Natural Resources, the governing body in charge of conservation and land use management, changed its policy to a sustained-control program designed to facilitate the sport. Mouflon (Ovis aries orientalis) was introduced from 1962 to 1964, and a plan to release axis deer (Axis axis) in 1964 was prevented only by protests from the ranching industry, who said that they would damage crops and spread disease. The hunting industry fought back, and the back-and-forth between the ranchers and hunters eventually gave way to a rise in public environmental concern. With the development of astronomical facilities on Mauna Kea commencing, conservationists demanded protection of Mauna Kea's ecosystem. A plan was proposed to fence 25% of the forests for protection, and manage the remaining 75% for game hunting. Despite opposition from conservationists the plan was put into action. While the land was partitioned no money was allocated for the building of the fence. 
In the midst of this wrangling the Endangered Species Act was passed; the National Audubon Society and Sierra Club Legal Defense Fund filed a lawsuit against the Hawaiʻi Department of Land and Natural Resources, claiming that they were violating federal law, in the landmark case Palila v. Hawaii Department of Land and Natural Resources (1978). The court ruled in favor of conservationists and upheld the precedence of federal laws before state control of wildlife. Having violated the Endangered Species Act, Hawaiʻi state was required to remove all feral animals from the mountainside. This decision was followed by a second court order in 1981. A public hunting program removed many of the feral animals, at least temporarily. An active control program is in place, though it is not conducted with sufficient rigor to allow significant recovery of the māmane-naio ecosystem. There are many other species and ecosystems on the island, and on Mauna Kea, that remain threatened by human development and invasive species. The Mauna Kea Forest Reserve protects 52,500 acres (212 km<sup>2</sup>) of māmane-naio forest under the jurisdiction of the Hawaii Department of Land and Natural Resources. Ungulate hunting is allowed year-round. A small part of the māmane–naio forest is encompassed by the Mauna Kea State Recreation Area. ### Lower environment A band of ranch land on Mauna Kea's lower slopes was formerly Acacia koa – Metrosideros polymorpha (koa-ʻōhiʻa) forest. Its destruction was driven by an influx of European and American settlers in the early 19th century, as extensive logging during the 1830s provided lumber for new homes. Vast swathes of the forest were burned and cleared for sugarcane plantations. Most of the houses on the island were built of koa, and those parts of the forest that survived became a source for firewood to power boilers on the sugarcane plantations and to heat homes. The once vast forest had almost disappeared by 1880, and by 1900, logging interests had shifted to Kona and the island of Maui. With the collapse of the sugar industry in the 1990s, much of this land lies fallow but portions are used for cattle grazing, small-scale farming and the cultivation of eucalyptus for wood pulp. The Hakalau Forest National Wildlife Refuge is a major koa forest reserve on Mauna Kea's windward slope. It was established in 1985, covering 32,733 acres (13,247 ha) of ecosystem remnant. Eight endangered bird species, twelve endangered plants, and the endangered Hawaiian hoary bat (Lasiurus cinereus semotus) have been observed in the area, in addition to many other rare biota. The reserve has been the site of an extensive replanting campaign since 1989. Parts of the reserve show the effect of agriculture on the native ecosystem, as much of the land in the upper part of the reserve is abandoned farmland. Bird species native to the acacia koa–ʻōhiʻa forest include the Hawaiian crow (Corvus hawaiiensis), the ʻakepa (Loxops coccineus), Hawaii creeper (Oreomystis mana), ʻakiapōlāʻau (Hemignathus munroi), and Hawaiian hawk (Buteo solitarius), all of which are endangered, threatened, or near threatened; the Hawaiian crow in particular is extinct in the wild, but there are plans to reintroduce the species into the Hakalau reserve. ## Summit observatories Mauna Kea's summit is one of the best sites in the world for astronomical observation due to favorable observing conditions. The arid conditions are important for submillimeter and infrared astronomy for this region of the electromagnetic spectrum. 
The summit is above the inversion layer, keeping most cloud cover below the summit and ensuring the air on the summit is dry, and free of atmospheric pollution. The summit atmosphere is exceptionally stable, lacking turbulence for some of the world's best astronomical seeing. The very dark skies resulting from Mauna Kea's distance from city lights are preserved by legislation that minimizes light pollution from the surrounding area; the darkness level allows the observation of faint astronomical objects. These factors historically made Mauna Kea an excellent spot for stargazing. In the early 1960s, the Hawaiʻi Island Chamber of Commerce encouraged astronomical development of Mauna Kea, as economic stimulus; this coincided with University of Arizona astronomer Gerard Kuiper's search for sites to use newly improved detectors of infrared light. Site testing by Kuiper's assistant Alika Herring in 1964 confirmed the summit's outstanding suitability. An intense three-way competition for NASA funds to construct a large telescope began between Kuiper, Harvard University, and the University of Hawaiʻi (UH), which only had experience in solar astronomy. This culminated in funds being awarded to the "upstart" UH proposal. UH rebuilt its small astronomy department into a new Institute for Astronomy, and in 1968 the Hawaiʻi Department of Land and Natural Resources gave it a 65-year lease for all land within a 4 km (2.5 mi) radius of its telescope, essentially that above 11,500 ft (3,505 m). On its completion in 1970, the UH 88 in (2.2 m) was the seventh largest optical/infrared telescope in the world. By 1970, two 24 in (0.6 m) telescopes had been constructed by the US Air Force and Lowell Observatory. In 1973, Canada and France agreed to build the 3.6 m CFHT on Mauna Kea. However, local organisations started to raise concerns about the environmental impact of the observatory. This led the Department of Land and Natural Resources to prepare an initial management plan, drafted in 1977 and supplemented in 1980. In January 1982, the UH Board of Regents approved a plan to support the continued development of scientific facilities at the site. In 1998, 2,033 acres (823 ha) were transferred from the observatory lease to supplement the Mauna Kea Ice Age Reserve. The 1982 plan was replaced in 2000 by an extension designed to serve until 2020: it instituted an Office of Mauna Kea Management, designated 525 acres (212 ha) for astronomy, and shifted the remaining 10,763 acres (4,356 ha) to "natural and cultural preservation". This plan was further revised to address concern expressed in the Hawaiian community that a lack of respect was being shown toward the cultural values of the mountain. Today the Mauna Kea Science Reserve has 13 observation facilities, each funded by as many as 11 countries. There are nine telescopes working in the visible and infrared spectrum, three in the submillimeter spectrum, and one in the radio spectrum, with mirrors or dishes ranging from 0.9 to 25 m (3 to 82 ft). In comparison, the Hubble Space Telescope has a 2.4 m (7.9 ft) mirror, similar in size to the UH88, now the second smallest telescope on the mountain. A "Save Mauna Kea" movement believes development of the mountain to be sacrilegious. Native Hawaiian non-profit groups such as Kahea, concerned with cultural heritage and the environment, also oppose development for cultural and religious reasons. The multi-telescope "outrigger" proposed in 2006 was eventually canceled. 
A planned new telescope, the Thirty Meter Telescope (TMT), has attracted controversy and protests. The TMT was approved in April 2013. In October 2014, the groundbreaking ceremony for the telescope was interrupted by protesters, causing the project to halt temporarily. In late March 2015, demonstrators again blocked the road to the summit. On April 2, 2015, about 300 protesters had gathered near the visitor center when 12 people were arrested, with 11 more arrested at the summit. Among the concerns of the protest groups are the land appraisals and consultation with Native Hawaiians. Construction was halted on April 7, 2015, after protests expanded across the state. After several halts, the project was voluntarily postponed. Governor Ige announced substantial changes to the future management of Mauna Kea but stated that the project could move forward. The Supreme Court of Hawaiʻi approved the resumption of construction on October 31, 2018. Protesters have posted online petitions in reaction to the Thirty Meter Telescope. An online petition titled "The Immediate Halt to the Construction of the TMT Telescope" was posted to Change.org on July 14, 2019, and has gathered over 278,057 signatures worldwide. Some protesters have also called for the impeachment of Hawaiʻi Governor David Ige because of his support for the Thirty Meter Telescope. Another online petition, titled "Impeach Governor David Ige" and posted to Change.org on July 18, 2019, has gathered over 62,562 signatures. As of late 2021, construction plans are on hold due to the ongoing effects of the COVID-19 pandemic and a shift in funding for the project that may see federal funds made available through the National Science Foundation. The controversy surrounding construction of the Thirty Meter Telescope continues. Independent polls commissioned by local media organizations show consistent support for the project in the islands, with over two thirds of local residents supporting it. The same polls indicate that Native Hawaiian community support remains split, with about half of Hawaiian respondents supporting construction of the new telescope.

A July 2022 state law responds to the protests by shifting control over the master land lease from the University of Hawaii to the new Mauna Kea Stewardship and Oversight Authority (MKSOA). The MKSOA is a 12-member board (11 voting members and one non-voting member) that includes representatives from the university, astronomers, and Native Hawaiians; it was created with the aim of finding a balance between the conflicting interests of astronomers and Native Hawaiians. It is placed within the Hawaiʻi Department of Land and Natural Resources and, after a five-year transition period from the university beginning on July 1, 2023, will be the principal authority for the management of state-managed lands within the Mauna Kea lands. During the transition period, the MKSOA and the university will jointly manage the land while the authority develops a management plan to govern land uses, human activities, and overall operations. Once the transition period ends in 2028, the authority's responsibilities will include issuing new land use permits, which will be important as the current master lease ends in 2033; any observatories that do not secure new leases by the time of its expiration will be decommissioned.

## Climate

Mauna Kea has an alpine climate (ET). Because of its tropical latitude, temperature swings are very low.
Frosts are common year-round, but the average monthly temperature remains above freezing throughout the year. Snow may fall at an altitude of 11,000 ft (3,353 m) and above in any month, but occurs most often from October to April. A weather station was operated from 1972 to 1982; however, only 33 months within this period have temperature records, and many years have data for only two months. The temperatures presented below are smoothed averages, not the raw numbers recorded by NOAA.

## Recreation

Mauna Kea's coastline is dominated by the Hamakua Coast, an area of rugged terrain created by frequent slumps and landslides on the volcano's flank. The area includes several recreation parks, including Kalopa State Recreation Area, Wailuku River State Park, and Akaka Falls State Park. There are over 3,000 registered hunters on Hawaii island, and hunting, for both recreation and sustenance, is a common activity on Mauna Kea. A public hunting program is used to control the numbers of introduced animals including pigs, sheep, goats, turkeys, pheasants, and quail. The Mauna Kea State Recreation Area functions as a base camp for the sport. Birdwatching is also common at lower levels on the mountain. A popular site is Kīpuka Puʻu Huluhulu, a kīpuka on Mauna Kea's flank that formed when lava flows isolated the forest on a hill.

Mauna Kea's great height and the steepness of its flanks provide a better view and a shorter hike than the adjacent Mauna Loa. The risk of altitude sickness at that height, along with weather concerns, the steep road grade, and overall inaccessibility, makes the volcano dangerous and summit trips difficult. Until the construction of roads in the mid-20th century, only the hardy visited Mauna Kea's upper slopes; hunters tracked game animals, and hikers traveled up the mountain. These travelers used stone cabins constructed by the Civilian Conservation Corps in the 1930s as base camps, and it is from these facilities that the modern mid-level Onizuka Center for International Astronomy telescope support complex is derived. The first Mauna Kea summit road was built in 1964, making the peak accessible to more people. Today, multiple hiking trails exist, including the Mauna Kea Trail, and by 2007 over 100,000 tourists and 32,000 vehicles were going each year to the Visitor Information Station (VIS) adjacent to the Onizuka Center for International Astronomy. The Mauna Kea Access Road is paved up to the Center at 2,804 m (9,199 ft). One study reported that around a third of visitors and two thirds of professional astronomers working on the mountain have experienced symptoms of acute altitude sickness; visitors traveling up the volcano's flanks are advised to stop for at least half an hour, and preferably longer, at the visitor center to acclimate to the higher elevation. Use of a four-wheel-drive vehicle is strongly recommended for driving all the way to the top; brakes often overheat on the way down, and there is no fuel available on Mauna Kea. A free Star Gazing Program was previously held at the VIS every night from 6 to 10 pm; it was canceled due to a change in operating hours and the closure caused by the ongoing COVID-19 pandemic. Between 5,000 and 6,000 people visit the summit of Mauna Kea each year, and to help ensure safety and protect the integrity of the mountain, a ranger program was implemented in 2001.
## See also

- List of mountain peaks of the United States
- List of volcanoes of the United States
- List of mountain peaks of Hawaiʻi
- List of the highest major summits of the United States
- List of the most prominent summits of the United States
- List of the most isolated major summits of the United States
- List of Ultras of Oceania
- Evolution of Hawaiian volcanoes
- List of tallest mountains in the Solar System
- Mount Everest
357,581
Lou Henry Hoover
1,173,200,139
First Lady of the United States from 1929 to 1933
[ "1874 births", "1944 deaths", "19th-century American women", "19th-century Quakers", "20th-century American women", "20th-century Quakers", "20th-century linguists", "American Quakers", "American women in World War I", "Burials in Iowa", "California Republicans", "First ladies of the United States", "Girl Scouts of the USA national leaders", "Hoover family", "Iowa Republicans", "Linguists from the United States", "New York (state) Republicans", "People from Waterloo, Iowa", "People from Whittier, California", "San Jose State University alumni", "Stanford University alumni", "University of California, Los Angeles alumni", "Women linguists", "Writers from Iowa", "Wyckoff family" ]
Lou Hoover (née Henry; March 29, 1874 – January 7, 1944) was an American philanthropist, geologist, and the first lady of the United States from 1929 to 1933 as the wife of President Herbert Hoover. She was active in community organizations and volunteer groups throughout her life, including the Girl Scouts of the USA, which she led from 1922 to 1925 and from 1935 to 1937. Throughout her life, Hoover supported women's rights and women's independence. She was a proficient linguist, fluent in Latin and Mandarin, and she was the primary translator from Latin to English of the complex 16th-century metallurgy text De re metallica.

Hoover was raised in California while it was part of the American frontier. She attended Stanford University, and became the first woman to receive a degree in geology from the institution. She met fellow geology student Herbert Hoover at Stanford, and they married in 1899. The Hoovers first resided in China; the Boxer Rebellion broke out later that year, and they were at the Battle of Tientsin. In 1901 they moved to London, where Hoover raised their two sons and became a popular hostess between their international travels. During World War I, the Hoovers led humanitarian efforts to assist war refugees. The family moved to Washington, D.C. in 1917, when Herbert was appointed head of the U.S. Food Administration, and Lou became a food conservation activist in support of his work.

Hoover became the First Lady of the United States when her husband was inaugurated as president in 1929. She minimized her public role as White House hostess, dedicating her time as first lady to her volunteer work. She refused to give interviews to reporters, but she became the first first lady to give regular radio broadcasts. Her invitation of Jessie De Priest to the White House for tea was controversial for its implied support of racial integration and civil rights. Hoover was responsible for refurbishing the White House during her tenure, and she also saw to the construction of a presidential retreat at Rapidan Camp. Hoover's reputation declined alongside her husband's during the Great Depression, as she was seen as uncaring of the struggles faced by Americans. Both the public and those close to her were unaware of her extensive charitable work to support the poor while serving as first lady, as she believed that publicizing generosity was improper. After Herbert lost his reelection campaign in 1932, the Hoovers returned to California, and they moved to New York City in 1940. Hoover was bitter about her husband's loss, blaming dishonest reporting and underhanded campaigning tactics, and she strongly opposed the Roosevelt administration. She worked with her husband to provide humanitarian support during World War II until her sudden death from a heart attack in 1944.

## Early life and education

Lou Henry was born in Waterloo, Iowa, on March 29, 1874. Her mother was Florence Ida (née Weed), a former schoolteacher, and her father was Charles Delano Henry, a banker. She was the elder of two daughters, raised in Waterloo before moving to Texas, Kansas, and California. Most of her childhood was spent in the California towns of Whittier and Monterey. While she was a child, her father educated her in outdoorsmanship, and she learned to camp and ride. She took up sports, including baseball, basketball, and archery. Her parents taught her other practical skills, such as bookkeeping and sewing. Her family was nominally Episcopalian, but Lou sometimes attended Quaker services.
As a child, Henry attended Bailey Street School in Whittier until 1890. She was well-liked in school, known for the science and literature clubs she organized and for her tendency to ignore gender norms by engaging in athletics and outdoor activities. When she was ten, she was the editor of her school newspaper. She began her postsecondary schooling at the Los Angeles Normal School (now the University of California, Los Angeles). While in Los Angeles, she was a member of the school's Dickens Club, which studied and collected specimens of plants and animals. She later transferred to San José Normal School (now San José State University), obtaining a teaching credential in 1893. She took a serious interest in politics during her college years; she joined the Republican Party based on its progressive platform, and she strongly supported women's suffrage.

After her graduation in 1893, Henry took a job at her father's bank as well as working as a substitute teacher. The following year, she attended a lecture by geologist John Casper Branner. Fascinated by the subject, she enrolled in Branner's program at Stanford University to pursue a degree in geology. It was there that Branner introduced her to her future husband, Herbert Hoover, who was then a senior. They bonded over their shared Iowa heritage and their mutual interests in science and outdoorsmanship, and their friendship developed into a courtship. She studied geology with the intention of doing field work, but she and Branner were unable to find any employers willing to accept a female geologist. She maintained her interest in sports while at Stanford, serving as president of the Stanford Women's Athletic Club in her final year. In 1898, Henry became the first woman to receive a bachelor's degree in geology from Stanford, and she was one of the first women in the United States to hold such a degree. She continued to work with Branner, conducting research on his behalf and requesting geological samples for Stanford's collection. Branner credited her with making it one of the largest collections in the world. After graduating, Henry volunteered with the Red Cross to support American soldiers during the Spanish–American War.

## Marriage and travels

### Marriage and travel to China

In 1897, Herbert was offered an engineering job in Australia. Before leaving, he had dinner with the Henrys, and their engagement was informally agreed upon. Lou and Herbert maintained a long-distance relationship while he was in Australia. Herbert was hired as chief engineer of the Chinese Engineering and Mining Company the following year, and he sent her a marriage proposal by cable, reading "Going to China via San Francisco. Will you go with me?". They were married in the Henrys' home on February 10, 1899. Lou also announced her intention to change her religious faith from Episcopalian to her husband's Quaker religion, but there was no Quaker Meeting in Monterey. Instead, they were married in a civil ceremony performed by a Spanish Roman Catholic priest.

The day after their marriage, Lou Hoover and her husband boarded a ship from San Francisco, and they briefly honeymooned at the Royal Hawaiian Hotel in Honolulu. While en route, they read extensively about China and its history. They arrived in Shanghai on March 8, spending four days in the Astor House Hotel. Hoover stayed with a missionary couple in the foreign colony in Tientsin (now Tianjin) while her husband was working, and they moved into a home of their own the following September.
It was their first home as a married couple, a Western-style brick house at the edge of the colony. It was here that Hoover began homemaking and interior decoration; she managed a staff and entertained guests. She also took up typing while in China, purchasing a typewriter and writing scientific articles on Chinese mining with her husband. Hoover worked closely with her husband, through both writing and field work. She also started a collection of Chinese porcelain that she would maintain throughout her life.

The Boxer Rebellion began while the Hoovers were in China; despite her husband's pleas, Lou refused to leave the country. As foreigners, they were both potential targets of the Boxer movement. During the Battle of Tientsin in 1900, Lou worked as a nurse and managed food supplies while Herbert organized barricades. For a month, Hoover carried a revolver while she ran supplies to soldiers on her bicycle. In one incident, a bullet struck her tire while she was riding. In another, shells struck around her home, but once it was clear the shelling was over, she calmly returned to her game of solitaire. At least one obituary was mistakenly published for her. The Hoovers left China after the end of the Boxer Rebellion that summer, traveling to London to make arrangements regarding control of Chinese mines. They returned to China once more with Lou's sister Jean for several months in 1901.

### London and World War I

The Hoovers made their home in London in November 1901 after Herbert was offered a partnership with a British mining company. Their work took them throughout Europe and to many other countries, including Australia, Burma, Ceylon, Egypt, India, Japan, New Zealand, and Russia. Because of their travels, Hoover spent much of her time on steamships. The trips were relatively comfortable, as they traveled in first class. She passed time on these months-long voyages by reading or by hosting social visits with other travelers using portable tea sets and tables. The Hoovers had two sons who accompanied them as they traveled: Herbert Hoover Jr. was born in 1903, and Allan Hoover was born in 1907.

The Hoovers became extremely wealthy after Herbert's decision to become an independent consultant in 1908. Lou's expertise in geology allowed her to participate in business discussions with Herbert and his colleagues, which she thoroughly enjoyed. The Hoovers played a role in standardizing the modern mining industry, particularly in regard to human management and business ethics. When they were in London, Lou often entertained large crowds. Their home became a social hub for their fellow expatriates and for Herbert's colleagues in the mining industry. The Hoovers engaged in philanthropy during their time in London, and Lou saw to it that her servants had their needs addressed. She joined the Friends of the Poor to work directly with people in poverty, and she joined social clubs such as the Society of American Women, the British affiliate of the General Federation of Women's Clubs; she participated in and eventually led the society's philanthropic committee.

When World War I began, the Hoovers had already spent time back in the United States and were preparing to move back permanently. Upon hearing that war had broken out, the Hoovers instead became involved with London relief efforts. When Herbert was chosen to direct relief efforts for Belgian refugees, Lou became heavily involved as well.
She also reorganized the Society of American Women as a humanitarian group to facilitate the transport of Americans stranded in Britain. She traveled regularly to the United States and back to give speeches and collect donations for relief efforts, despite the danger of crossing the North Atlantic during the war. Her involvement with refugee assistance earned her a position on the American relief committee as the only female member, and she was the chairwoman of the women's American relief committee. Other projects of hers included the creation of a Red Cross hospital for British soldiers, a knitting factory in London to provide jobs for displaced women, and a maternity hospital in Belgium. As her humanitarian efforts increased, she found herself responsible for so many projects that she had to delegate several of them to other women. For her work, she was decorated in 1919 by King Albert I of Belgium.

The Hoovers returned to the United States in January 1917. When the U.S. entered World War I three months later, Herbert was appointed head of the U.S. Food Administration, and the Hoovers made their home in Washington, D.C. As with Herbert's previous endeavors, Lou worked closely alongside him. She joined her husband in promoting food conservation, traveling to give speeches promoting the cause. The Hoovers effectively became the public faces of the conservation movement. She also organized the construction of a home for her and her husband by Stanford University in Palo Alto, California, but this was seen as selfish by the public amid her humanitarian work, and she delayed the project until the end of the war.

The war brought thousands of women to Washington to work as civil servants. The poor economic security of these women led Hoover to found women's groups and provide housing for the women who worked in her husband's department. She expanded her support for these women's groups to include medical treatment during the Spanish flu pandemic. Hoover paid for these programs with her own funds, describing them as loans but asking that they be repaid to someone who needed the money more. After the war, Hoover continued her fundraising work in the U.S. while her husband was in Europe administering relief efforts.

### Cabinet member's wife

The Hoovers returned to Washington when Herbert was appointed Secretary of Commerce in 1921. Drawing from her experience as a hostess, Hoover made their new Washington home into a social hub, allowing her husband to build relationships in the city. She found the practice of calling on her fellow cabinet wives to be a waste of time, and her refusal to do so contributed to the end of the practice. As the wife of a cabinet member, Hoover sought involvement in many women's organizations, including the Girl Scouts of the USA, the Camp Fire Girls, the General Federation of Women's Clubs, and the League of Women Voters. By the time Hoover was a cabinet wife, she emphasized a distinction between her work and her husband's, refusing to answer reporters' questions about her husband's work in Washington.

When Calvin Coolidge ascended to the presidency, Hoover became close friends with the new first lady, Grace Coolidge. The two of them began a tradition of exchanging flowers on Easter, and Hoover invited Coolidge to participate in Girl Scouts events. Hoover had begun her involvement with the Girl Scouts in 1917, wishing to continue the work with children that she had begun in her war relief efforts.
She was chosen as the group's president in 1922, and she held the position until 1925. She emphasized a "lead from behind" structure for Girl Scout troops in which she recommended that troop leaders "don't forget joy". Hoover's reforms, as well as her own personal popularity, led to a significant increase in membership and funds for the organization. She convinced first lady Edith Wilson to accept a position as honorary head of the Girl Scouts, establishing a tradition that Hoover herself would eventually take on as first lady. For girls living in rural areas, she founded the Lone Scout program so they could participate without a troop in their area. She also founded racially integrated Girl Scout troops in Washington and Palo Alto.

Hoover was also heavily involved with the National Amateur Athletic Foundation, and she was the only woman to serve as vice president within the organization. She first began working with the group in 1922, and she played an active role until her husband's election as president in 1928. With this position, she created a Women's Division that outlasted the original organization. The Women's Division was created with the goal of moving women's sports away from the practices of men's sports, which they argued were too competitive and failed to prioritize the well-being of the athletes. Hoover believed that sports were essential for one's health, and she wished to see all young girls participate in a sport. As with her participation in the Girl Scouts, she used her skill for fundraising to greatly expand the organization's resources.

When Herbert was considered as a candidate for the 1928 presidential election, Lou did not approve of active campaigning, and Herbert often refrained from political talk when she was present. Though she accompanied her husband on his campaign, she refused to comment on the election or say anything that might be considered political. The 1928 election brought more attention to the candidates' wives than previous elections had. When her husband was chosen as the Republican Party's nominee, she found herself frequently compared to Catherine Smith, the wife of Democratic candidate Al Smith. Hoover was relatively popular compared to Mrs. Smith, who was an urbanite, a Catholic, and an alleged alcoholic, all things that made her unpopular with voters. Hoover was seen as a better fit for the role, being athletic and well-traveled. After Herbert was elected president, Lou accompanied him on a goodwill tour in Latin America.

## First Lady of the United States

### White House hostess

Hoover was not as successful in her role as White House hostess as she was in other projects; she was not eager to participate in Washington society except on her own terms, and her social position became increasingly precarious as the Hoovers' reputation diminished during the Great Depression. She did not prioritize public presentations as first lady, and when she took up the role, she declined to purchase new clothes or learn any new skills as incoming first ladies often did. She was often reclusive, ending the practice of greeting thousands of people during the New Year's Day reception because she deemed it unpleasant. Her husband later said that it was only her "rigid sense of duty" that prevented her from abolishing other receptions as well. She did make sure to accommodate pregnant women, rejecting the social expectation of the time that pregnancy not be visible in public.
Hoover was more willing to invite individual guests to the White House, and such guests were present at every meal. Some days included additional teas to accommodate the constant flow of guests to the White House. On several occasions, White House staff found that due to last-minute invitations, they had to prepare and serve meals for several times as many people as originally expected. Unlike previous first ladies, Hoover emphasized political advantage when selecting guests, setting a precedent for future first ladies. At the beginning of her tenure, Hoover spent large sums of money to ensure that the White House had "the best of everything", using all of the funds allocated by Congress and then supplementing it with the family's personal funds. She expressed her love of music by inviting several renowned musicians to the White House, and she introduced the tradition of inviting a guest musician to play for visiting foreign leaders after she had her friend Mildred Dilling play for the King of Siam. The Great Depression brought an end to the White House's more extravagant social events as Hoover reduced her spending to serve as an example for the American people. When African American candidate Oscar Stanton De Priest was elected to Congress, Hoover initiated a meeting for tea at the White House with his wife Jessie De Priest, as was tradition for the wives of all incoming Congressmen. Hoover was responsible for planning the event to ensure its success. She arranged the scheduling so that only women she trusted would attend, and she alerted White House security that Mrs. De Priest was to be expected and not barred entry. Hoover chose not to publicize the details of De Priest's attendance until after it occurred so as to avoid interruptions. The event became part of a larger debate on racial issues as southern voters protested the invitation of a Black woman. It further complicated Hoover's relationship with the press, as she deemed Southern newspapers to be responsible for the criticism. The Hoovers reinforced the precedent by inviting other non-white musicians to play at the White House, including the Tuskegee Institute Choir. ### Management of the White House During her time as first lady, Hoover oversaw refurbishing of the White House, importing art and furniture to decorate the building. She worked in conjunction with a committee that had been formed in the previous administration to decorate the White House, though she sometimes declined to consult them and made her own changes. She hired her own assistant at personal expense to catalogue what already existed in the White House, creating the first full compilation for the history of the White House's furnishings. Her refurbishments included the reconstruction of the studies of Abraham Lincoln and James Monroe, which would later be converted into the Lincoln Bedroom and the Treaty Room, respectively. She also had a movie projector installed in the White House. Hoover's many projects meant that she frequently held meetings of her own in the White House, and she had bedrooms converted into sitting rooms so she and the president could both see several people each day. Hoover played a critical role in designing and overseeing the construction of a rustic presidential retreat at Rapidan Camp in Madison County, Virginia. After the location was chosen, the Hoovers discovered the poverty in the area and added the construction of a school building to their project. 
Once Rapidan Camp was established as a second presidential home, the Hoovers stayed there each weekend. Hoover often practiced horseback riding while at the camp, where she often outpaced the military horsemen that accompanied her. The Hoovers undertook another philanthropic construction project in 1930 to build a Quaker Meeting House in Washington D.C. The Hoovers' relationship with their staff is the subject of debate. Memoirs of staff members have portrayed them in a negative light, but it is unclear how much of this depiction originates from the books' ghostwriters. Hoover required the staff to remain out of sight, and a bell would be rung before she or her husband entered a room, signaling for the staff to leave the area. While managing White House events, she would use hand signals to communicate with the staff. Many innocuous gestures, such as raising a finger or dropping a handkerchief, indicated a command for them to follow. Even slight deviations from expected behavior, such as scraping plates or breaking composure when standing during mealtimes, risked a rebuke. Though she was strict, she also treated the staff generously, frequently paying for their food and other personal expenses. Besides the White House staff, Hoover had her own personal first lady staff. She had four women working directly for her, more than any previous first lady. ### Politics and activism Hoover was her husband's frequent adviser while he was president. Throughout her tenure, she refused to give interviews to the press, seeing them as intrusive and error-prone. Instead, she spoke to the public by giving speeches over the radio, and she was the first woman to make radio broadcasts as first lady. She took pride in her broadcasts, rehearsing them in a dedicated room and practicing her speaking technique. These broadcasts often used plain language and advocated feminist ideals. Hoover continued her involvement with volunteer and activist work, though much of it was reduced or ignored in favor of her responsibilities as first lady. She remained directly active with the Girl Scouts, continuing her oversight of its organizational and financial operations, and she touted it as an example of the volunteerism she felt was necessary to combat the Great Depression. Hoover also became a patron of the arts as first lady, particularly in her support of aspiring musicians. Using her influence as first lady, Hoover encouraged her husband to hire more women in his administration, and she expressed support for an executive order to ban sex discrimination in civil service appointments. She generally avoided any strong political statements or affiliations that might have interfered with her husband's administration. During the Great Depression, Hoover regularly received requests for assistance from citizens who were struggling. She referred each one to a local charity organization or a person who could help so that each would get the needed assistance. Whenever she was unable to find a charity or a donor that could help, she sent her own money. She refused to publicize or draw attention to her charitable work, consistent with her lifelong belief that private generosity should not be promotional. Often she sent the money anonymously through a proxy so her name would not be associated with it. She also became responsible for the financial situation of her and her husband's relatives and family friends. 
Serving as a point of contact between her husband and those suffering poverty, she presented an image of empathy to contrast with the president's perceived aloofness. Hoover also helped organize fundraiser concerts for the American Red Cross with pianist Ignacy Jan Paderewski. She was deeply affected by the criticisms leveled against her husband during the Great Depression, furious that a man whom she saw as caring and charitable was being criticized as the opposite. In support of her husband's stance on the economy, her radio broadcasts during the Great Depression focused on volunteerism, emphasizing women's role in volunteer work. She accompanied her husband on a presidential campaign again in 1932, but he was defeated in the 1932 presidential election. ## Later life and death After leaving the White House, the Hoovers took their first true vacation in many years, driving through the Western United States. Hoover continued to receive letters requesting assistance, though far fewer than she had addressed while serving as first lady. She did not share her husband's desire to return to politics, but she was active in Republican Party women's groups. In 1935, she took up a project to purchase and restore her husband's birthplace cottage in Iowa. She also returned to the Girl Scouts the same year to serve as its president for another year. Hoover became involved with the Salvation Army to support its fundraising operations in 1937. The same year, she returned to Stanford University to develop the Friends of Music program, with which she was active for the rest of her life. She also supported a physical therapy program that she hoped would prove beneficial should the United States go to war. She maintained an active lifestyle throughout her later years, including a weeks-long horseback tour of the Cascade Range while she was in her sixties. Hoover disapproved of the actions of the Roosevelt administration, and she became affiliated with the Pro-America movement that opposed the New Deal. At the onset of World War II, she once again worked to provide relief for war refugees with her husband, reminiscent of their work in World War I. She was enraged by the Japanese invasion of China, a place with which she always felt a personal connection. Despite this, she took an isolationist stance, hoping that the U.S. would not enter the second World War as it entered the first. During the 1940 presidential election, the Hoovers campaigned on behalf of Republican candidate Wendell Willkie. They moved to New York in December 1940, as Herbert had been spending an increasing amount of time there on business. Hoover died of a heart attack on January 7, 1944, while staying at the Waldorf Astoria New York. She was found by her husband when he returned to their room. Two services were held for her. The first, a joint Episcopalian-Quaker service in New York, was attended by about one thousand people, including two hundred girl scouts. The second was held in Palo Alto, where she was buried. After her death, her family found many checks she had received to repay her for her charity but which she had declined to cash. She was later reinterred in her husband's grave in West Branch, Iowa. ## Political beliefs During her early life and career, Hoover was not politically vocal. She preferred to speak to nonpartisan issues, and she wished to avoid saying anything that might have political ramifications for her husband. 
To present a unified stance with her husband, she rarely expressed political ideas of her own except on women's issues. Hoover supported civil rights and deplored racism, though she was susceptible to the racial stereotyping that was common at the time, and she was unaware of problems faced by the African American community. One of the few issues on which she disagreed with her husband was her support for the prohibition of alcohol. She disposed of her husband's wine collection, and she refused to attend any event that served alcohol illegally.

Throughout her life, Hoover worked to support women's causes. She was an advocate of women's employment, encouraging housewives to start careers as well as keeping house. Her support for women's causes came about early in life, and she wrote school essays on the subject. She was a member of several women's groups, many of which engaged in philanthropic efforts to support women. When the Nineteenth Amendment guaranteed women's suffrage in the United States in 1920, Hoover said that women's responsibilities extended to civic duty. She chastised women who lived purely domestic lives as "lazy", arguing that household chores did not preclude a career. She was also critical of politically active women who focused exclusively on women's and children's rights issues, believing that women should participate in governance more broadly.

Hoover was a strong believer in philanthropy and business ethics, supporting her husband's decision to reimburse his employees at personal expense after a fellow partner defrauded them. She also ensured that the culprit's family was cared for financially after he fled the country. She believed that private charity was preferable to public assistance programs. Hoover was not vocal about her beliefs on philanthropy, believing that it was something that should be practiced privately. She opposed publicized philanthropy, and she gave funds to the needy throughout her life without telling others. The full extent of her philanthropy was not known until records were discovered after her death. She held a similar philosophy regarding religion, believing that practice was more important than sectarian identification.

While her husband was the head of the U.S. Food Administration, Hoover took up the cause of food conservation. She began a tradition of leaving one chair empty as a reminder of child starvation whenever she entertained company. In 1918, she invited reporters into her home for a special "Dining with the Hoovers" interview in which she detailed their household's dining habits and conservation strategies. The practice of self-imposed dietary restrictions to conserve, such as going one day a week without meat, became known as "Hooverizing". She provided lessons and recipes for Americans who wished to grow or prepare their own food.

Amid the corruption of the Teapot Dome scandal, Hoover took an active stance in favor of government accountability. The scandal led her to call for more women in law enforcement, and she headed the Women's Conference on Law Enforcement in 1924. As first lady, Hoover provided indirect support to disabled veterans of the Bonus Army, though she believed that the able-bodied veterans had no claim to the additional support they were requesting. She was highly sensitive to political criticism as first lady, and she was strongly affected by remarks against her husband's presidency. Hoover became more conservative after her tenure as first lady, and she was critical of the Roosevelt administration.
Hoover had a low opinion of the Roosevelts, believing that they caused her husband to be politically smeared and cost him a second term in the White House. She also felt that many of President Roosevelt's actions were unconstitutional. Later in life, she made political statements deploring the spread of communism and fascism.

## Languages

Hoover spoke five languages by the time she became first lady. She began her study of Mandarin Chinese while on the ship to China after her marriage. She took up instruction under a Chinese Christian scholar, eventually surpassing him in her Chinese vocabulary. She sometimes served as her husband's translator while they lived in China, and she would continue to practice Chinese with him afterward so that he would retain the little that he knew. When she wished to speak privately with her husband in the White House, Hoover would engage with him in Mandarin. Her Chinese name was 'Hoo Loo' (古鹿; pinyin: Gǔ Lù; also rendered 胡潞, Hú Lù), derived from the sound of her name in English.

Hoover was also well versed in Latin, which she studied while at Stanford. She collaborated with her husband in translating Georgius Agricola's De re metallica, a 16th-century encyclopedia of mining and metallurgy. Lou was responsible for the linguistic translation, while Herbert applied his knowledge of the subject matter and carried out physical experiments based on what they discerned from the text. The book had previously been considered unusable due to the difficulty of translating its technical language, some of which had been invented by its author. After its translation, the Hoovers published it at their own expense and donated copies to students and experts of mining. In recognition of their work, they received the gold medal of the Mining and Metallurgical Society of America in 1914. They dedicated the book to Dr. Branner, the instructor who had introduced Lou both to geology and to Herbert.

## Legacy

During her tenure as first lady, Hoover was variously seen as a homemaker, as was common for first ladies, and as an activist. Her reputation, along with that of her husband, languished as the Hoover administration was criticized for its response to the Great Depression. Hoover is often seen as a counterbalance to her husband, as she took up the social responsibilities of their work in and out of the White House, her charisma and tact balancing his reputation for being shy and sometimes arrogant. She has since been consistently ranked in the upper half of first ladies in periodic polling of historians. Hoover set an early precedent for the political role of first ladies in the 20th century by expressing an interest in women's issues and supporting her husband's platform with her own projects. Despite their political differences, Hoover has been compared to her successor Eleanor Roosevelt in their common approaches to political engagement and women's issues. Hoover's use of radio broadcasts proved similar to her successor's own use of media over the following years.

The first biography about Hoover was Lou Henry Hoover: Gallant First Lady, written by her friend Helen B. Pryor in 1969. Her husband requested that her papers remain sealed for twenty years after his own death, preventing any significant scholarly analysis of her life or her role as first lady until then. They were opened in 1985, allowing for increased scholarship on her life and her work.
Her papers are relatively comprehensive for a historical figure of the period, including over 220,000 items and encompassing every period of her life. Historical study of Hoover has been complicated by her private nature, as she would often refuse media attention and burn personal letters.

The Stanford home that Hoover designed was donated to the university by her husband, who requested that it be named the Lou Henry Hoover House. Two elementary schools were named in her honor: Lou Henry Hoover Elementary School of Whittier, California, in 1938, and Lou Henry Elementary School of Waterloo, Iowa, in 2005. Lou Henry Hoover Memorial Hall was built in 1948 at Whittier College. One of the brick dormitories at San José State University was named "Hoover Hall" in her honor until its demolition in 2016. Camp Lou Henry Hoover in Middleville, New Jersey, is named for her.

## See also

- Margaret Hoover – Hoover's great-granddaughter
4,779,630
Disappearance of Natalee Holloway
1,173,856,849
Case of an American woman who disappeared in Aruba
[ "2000s missing person cases", "2005 crimes in the Netherlands", "2005 in Aruba", "2005 in women's history", "Aruba–United States relations", "History of women in the Netherlands", "History of women in the United States", "May 2005 crimes in the United States", "May 2005 events in North America", "Missing person cases in Europe", "Netherlands–United States relations" ]
Natalee Ann Holloway (October 21, 1986 – disappeared May 30, 2005; declared dead January 12, 2012) was an 18-year-old American teenager whose mysterious disappearance made international news after she vanished on May 30, 2005, in Aruba. Holloway lived in Mountain Brook, Alabama, and graduated from Mountain Brook High School on May 24, 2005, days before the trip. Her disappearance resulted in a media sensation in the United States. Her remains have not been found. Holloway was scheduled to fly home from the Caribbean island on May 30, 2005, but she failed to appear for her flight. Her classmates last saw her outside of Carlos'n Charlie's, a restaurant and nightclub in Oranjestad. She was in a car with local residents Joran van der Sloot and brothers Deepak and Satish Kalpoe. When the three men were questioned, they said that they dropped off Holloway at her hotel and denied knowing what had become of her. Upon further investigation by authorities, Van der Sloot was arrested twice on suspicion of involvement in her disappearance, and the Kalpoes were each arrested three times. Due to lack of evidence, the three suspects were released each time without being charged with a crime. Holloway's parents criticized the Aruban police for the lack of progress in the investigation and interrogation of the three men who were last seen with their daughter. The family also called for a boycott of Aruba, which gained governor of Alabama Bob Riley's support but failed to gain widespread backing. With the assistance of hundreds of volunteers, Aruban investigators conducted an extensive search operation. American special agents from the FBI, 50 Dutch soldiers, and three specially equipped Dutch Air Force F-16 aircraft participated in the search. In addition to the ground search, divers searched the ocean for Holloway's body. Holloway's remains were never found. On December 18, 2007, Aruban prosecutors announced that the case would be closed without charging anyone with a crime. The Aruban prosecutor's office reopened the case on February 1, 2008, after receiving video footage of Van der Sloot, under the influence of marijuana, saying that Holloway died on the morning of her disappearance, and that a friend had disposed of her body. Van der Sloot later denied that what he had said was true, and in an interview said that he had sold Holloway into sexual slavery. He later retracted his comments. In January 2012, Van der Sloot was convicted of the May 30, 2010, murder of 21-year-old Stephany Flores Ramírez in Lima, Peru. At the request of Holloway's father, Alabama judge Alan King declared Holloway legally dead on January 12, 2012. On June 8, 2023, Van der Sloot, who was still the main suspect in Holloway's disappearance, was extradited to the United States to face trial for extortion and wire fraud, with both charges being linked to Holloway's disappearance. ## Background Holloway was the first of two children born to Dave and Elizabeth "Beth" Holloway (1960–) in Memphis, Tennessee. Her parents divorced in 1993, and she and her younger brother Matthew were raised by their mother. In 2000, Beth married George "Jug" Twitty, a prominent Alabama businessman, and the family moved to Mountain Brook, Alabama. Holloway graduated with honors in May 2005 from Mountain Brook High School, located in a wealthy suburb of Birmingham. She was a member of the National Honor Society and the school dance squad and participated in other extracurricular activities. 
Holloway was scheduled to attend the University of Alabama on a full scholarship, where she planned to pursue a pre-med track. At the time of his daughter's disappearance, Dave Holloway was an insurance agent for State Farm in Meridian, Mississippi, while Beth Twitty was employed by the Mountain Brook School System. ## Disappearance in Aruba On Thursday, May 26, 2005, Holloway and 124 fellow graduates of Mountain Brook High School arrived in Aruba for a five-day, unofficial graduation trip. The teenagers were accompanied by seven chaperones. According to teacher and chaperone Bob Plummer, the chaperones met with the students each day to make sure everything was fine. Jodi Bearman, who organized the trip, stated, "the chaperones were not supposed to keep up with their every move." Police Commissioner Gerold Dompig, who headed the investigation from mid-2005 until 2006, stated that the Mountain Brook students engaged in "wild partying, a lot of drinking, lots of room switching every night. We know the Holiday Inn told them they weren't welcome next year. Natalee, we know, she drank all day every day. We have statements she started every morning with cocktails—so much drinking that Natalee didn't show up for breakfast two mornings." Two of Holloway's classmates, Liz Cain and Claire Fierman, agreed that the drinking on the trip was "kind of excessive". Holloway was last seen by her classmates around 1:30 a.m. on Monday, May 30, as she was leaving the Oranjestad bar and nightclub Carlos'n Charlie's. She left in a car with 17-year-old Joran van der Sloot — a Dutch honors student who was living in Aruba and attending the International School of Aruba — and his two Surinamese friends, brothers 21-year-old Deepak Kalpoe (the owner of the car) and 18-year-old Satish Kalpoe. Holloway was scheduled to fly home later that day, but she did not appear for her return flight. Her packed luggage and her passport were found in her Holiday Inn room. Aruban authorities initiated searches for Holloway throughout the island and surrounding waters but did not find her. ## Investigation ### Early investigation Immediately following Holloway's missed flight, her mother and stepfather flew with friends to Aruba by private jet. Within four hours of landing on the island, the Twittys presented the Aruban police with the name and address of Van der Sloot, who was the person with whom Holloway left the nightclub. Beth stated that Van der Sloot's full name was given to her by the night manager at the Holiday Inn, who supposedly recognized him on a videotape. The Twittys and their friends went to the Van der Sloot home with two Aruban policemen to look for Holloway. Van der Sloot initially denied knowing Holloway's name, but he then told a story corroborated by Deepak Kalpoe, who was present in the house: Van der Sloot related that they drove Holloway to the California Lighthouse area of Arashi Beach because she wanted to see sharks; they later dropped Holloway off at her hotel at around 2:00 a.m. According to Van der Sloot, Holloway fell down as she exited the car but refused his help. He stated that as he and Kalpoe were driving away, Holloway was approached by a dark man in a black shirt similar to those worn by security guards. The search and rescue efforts for Holloway began immediately. Hundreds of volunteers from Aruba and the United States joined in the effort. During the first days of the search, the Aruban government gave thousands of civil servants the day off to participate in the rescue effort. 
Fifty Dutch marines conducted an extensive search of the shoreline. Aruban banks raised \$20,000 and provided other support to aid volunteer search teams. Beth Twitty was provided with housing, initially at the Holiday Inn where she coincidentally stayed in the same room her daughter had occupied. She subsequently stayed at the presidential suite of the nearby Wyndham Hotel. Reports indicated that Holloway did not appear on any nighttime surveillance camera footage of the hotel lobby; however, Twitty has made varying statements as to whether the cameras were operational that night. According to an April 19, 2006, statement made by Twitty, the video cameras at the Holiday Inn were not functioning the night Holloway vanished. Twitty has made other statements indicating that they were working and has stated so in her book. Police Commissioner Jan van der Straaten—the initial head of the investigation until his 2005 retirement—said that Holloway did not have to go through the lobby to return to her room. The search for physical evidence was extensive and subject to occasional false leads; for example, a possible blood sample taken from Deepak Kalpoe's car was tested but determined not to be blood. American law enforcement cooperated substantially with Aruban authorities from the early days of the investigation. U.S. Secretary of State Condoleezza Rice stated to reporters that the United States was in constant contact with Aruban authorities. Another State Department official indicated, "Substantial resources are being applied to this as they [Aruba officials] continue to ask for more." ### 2005 arrests of multiple suspects On June 5, Aruban police detained Nick John and Abraham Jones, former security guards from the nearby Allegro Hotel (which was then closed for renovation) on suspicion of murder and kidnapping. Authorities have never officially disclosed the reason for their arrests, but, according to news accounts, statements made by Van der Sloot and the Kalpoe brothers may have been a factor in their arrests. Reports also indicated that the two former guards were known for cruising hotels to pick up women, and at least one of them had a prior incident with law enforcement. John and Jones were released on June 13 without being charged. On June 9, Van der Sloot and the Kalpoe brothers were arrested on suspicion of the kidnapping and murder of Holloway. Aruban law allows for investigators to make an arrest based on serious suspicion. In order to continue holding the suspect in custody, an increasing evidential burden must be met at periodic reviews. According to Dompig, the focus of the investigation centered on these three suspects from the "get-go". Dompig stated that close observation of the three men began three days after Holloway was reported missing, and the investigation included surveillance, telephone wiretaps, and even monitoring of their e-mail. Dompig indicated that pressure from Holloway's family caused the police to prematurely stop their surveillance and detain the three suspects. As the investigation continued, David Cruz—spokesman for the Aruban Minister of Justice—falsely indicated on June 11 that Holloway was dead and that authorities knew the location of her body. Cruz later retracted the statement, saying he was a victim of a "misinformation campaign". 
That evening, Dompig alleged to the Associated Press that one of the detained young men admitted "something bad happened" to Holloway after the suspects took her to the beach and that the suspect was leading police to the scene. The next morning, prosecution spokeswoman Vivian van der Biezen refused to confirm or deny the allegation, simply stating that the investigation was at a "very crucial, very important moment". On June 17, a sixth person, later identified as disc jockey Steve Gregory Croes, was also arrested. Van der Straaten told the media that "Croes was detained based on information from one of the other three detainees." On June 22, Aruban police detained Van der Sloot's father, Paulus van der Sloot, for questioning; he was arrested that same day. Both Paulus and Croes were ordered to be released on June 26.

During this period, the suspects who had been detained changed their stories. All three indicated that Van der Sloot and Holloway were dropped off at the Marriott Hotel beach near the fishermen's huts. Van der Sloot stated that he did not harm Holloway but left her on the beach. According to Satish Kalpoe's attorney, David Kock, Van der Sloot called Deepak Kalpoe to tell the latter that he was walking home and sent him a text message forty minutes later. At some time during the interrogation, Van der Sloot detailed a third account that he was dropped off at home and Holloway was driven off by the Kalpoe brothers. Dompig discounted the story, stating:

> This latest story [came about] when [Van der Sloot] saw [that] the other guys, the Kalpoes, were kind of finger-pointing in his direction, and he wanted to screw them also, by saying he was dropped off. But that story doesn't check out at all. He just wanted to screw Deepak. They had great arguments about this in front of the judge. Because their stories didn't match. This girl, she was from Alabama, she's not going to stay in the car with two black kids. We believe the second story, that they were dropped off by the Marriott.

Following hearings before a judge, the Kalpoe brothers were released on Monday, July 4, but Van der Sloot was detained for an additional 60 days.

### Continued search, rearrests and releases

On July 4, the Royal Netherlands Air Force deployed three F-16 aircraft equipped with infrared sensors to aid in the search, but the search came up empty. In March 2006 it was reported that satellite photos were being compared with photographs taken more recently (presumably from the F-16s) in an attempt to find unexpected shifts of ground that might be Holloway's grave. After a local gardener came forward with information, a small pond near the Aruba Racquet Club close to the Marriott Hotel beach was partly drained between July 27 and 30, 2005. According to Jug Twitty, the gardener claimed to have seen Van der Sloot attempting to hide his face as he drove into the Racquet Club with the Kalpoe brothers in the very early morning of May 30, between 2:30 a.m. and 3:00 a.m. Nancy Grace described the gardener as "the man whose testimony cracks the case wide open". Another person, "the jogger", claimed to have seen men burying a blonde-haired woman in a landfill during the afternoon of May 30. The police had searched the landfill in the days following Holloway's disappearance. After the jogger's statements, the landfill was searched three more times; the FBI used cadaver dogs to assist in the recovery operation. The searches were fruitless.
Holloway's family initially offered \$175,000 and donors offered \$50,000 for her safe return. Two months after her disappearance, the reward was increased from \$200,000 to \$1,000,000, with a \$100,000 reward for information leading to the location of her remains. In August 2005, the reward for information leading to Natalee's corpse was increased from \$100,000 to \$250,000. The FBI announced that Aruban authorities had provided its agency with documents, suspect interviews, and other evidence. Investigators found a piece of duct tape with strands of blond hair attached to it; the samples were tested at a Dutch lab. A group from the Aruban police and prosecutor's office then traveled to the FBI central laboratory at Quantico, Virginia, to consult with American investigators. The hair samples were then tested a second time. The FBI announced that the hair samples did not belong to Holloway. The Kalpoe brothers were rearrested on August 26 along with another new suspect, 21-year-old Freddy Arambatzis. Arambatzis' lawyer said that his client was suspected of taking photographs of an underage girl and having inappropriate physical contact with the same girl. This incident allegedly occurred before the Holloway disappearance. Arambatzis' friends Van der Sloot and the Kalpoe brothers were supposedly involved in the incident. Van der Sloot's mother, Anita van der Sloot, stated, "It's a desperate attempt to get the boys to talk. But there is nothing to talk about." While no public explanation was then made for the Kalpoe rearrests, Dompig later said that it was an unsuccessful attempt to pressure the brothers into confessing. On September 3, the four detained suspects were released by a judge despite the attempts of the prosecution to keep them in custody. The suspects were released on the condition that they remain available to police. On September 14, all restrictions on them were removed by the Combined Appeals Court of the Netherlands Antilles and Aruba. In the months following his release, Van der Sloot gave several interviews that explained his version of events. The most notable interview was broadcast on Fox News over three nights in March 2006. During the interview, Van der Sloot indicated that Holloway wanted to have sex with him, but he did not because he didn't have a condom. He stated that Holloway wanted them to stay on the beach, but that he had to go to school in the morning. According to Van der Sloot, he was picked up by Satish Kalpoe at about 3:00 a.m. and left Holloway sitting on the beach. In August 2005, David Kock, Kalpoe's attorney, stated that his client had gone to sleep, and had not returned to drive Van der Sloot home. Van der Sloot stated that he was somewhat ashamed to have left a young woman alone on the beach, albeit by her own request, and related that he was not truthful at first because he was convinced that Holloway would soon turn up. In January 2006, the FBI and Aruban authorities interviewed—or in some cases, re-interviewed—several of Holloway's classmates in Alabama. On January 17, Aruban police searched for Holloway's body in sand dunes on the northwest coast of Aruba, as well as areas close by the Marriott beach. Additional searches took place in March and April 2006, without result. Shortly before leaving the case, Dompig gave an interview to CBS in which he stated that he believed Holloway was not murdered but probably died from alcohol and/or drug poisoning, and that someone later hid her body. 
Dompig also stated that Aruba had spent about \$3 million on the investigation, which was about 40% of the police operational budget. Dompig indicated that there was evidence that pointed to possession (though not necessarily use) of illicit drugs by Holloway. Members of Holloway's family have denied that she used drugs. On April 11, 2006, Dave Holloway published a book—co-authored with R. Stephanie Good and Larry Garrison—called Aruba: The Tragic Untold Story of Natalee Holloway and Corruption in Paradise, which recounted the search for his daughter. ### 2006 arrests of new suspects; Dutch investigation takeover On April 15, 2006, Geoffrey van Cromvoirt was arrested by Aruban authorities on suspicion of criminal offenses related to dealing in narcotics which, according to the prosecutor, might have been related to the disappearance of Holloway. At his first court appearance, his detention was extended by eight days. Van Cromvoirt was released, however, on April 25. In addition, another individual with the initials "A.B." was arrested on April 22, but was released the same day. On May 17, another suspect, Guido Wever (the son of a former Aruban politician), was detained in the Netherlands on suspicion of assisting in the abduction, battery, and killing of Holloway. Wever was questioned for six days in Utrecht. Aruban prosecutors initially sought his transfer to the island, but he was instead released by agreement between the prosecutor and Wever's attorney. At Aruba's request, the Netherlands took over the investigation. Following receipt of extensive case documentation in Rotterdam, a team of the Dutch National Police started work on the case in September. On April 16, 2007, a combined Aruban–Dutch team began pursuing the investigation in Aruba. ### Book, search, and inspection A book by Van der Sloot and reporter Zvezdana Vukojevic, De zaak Natalee Holloway (The Case of Natalee Holloway), was published in Dutch in April 2007. In the book, Van der Sloot gives his perspective on the night Holloway disappeared and the media frenzy that followed. He admits to and apologizes for his initial untruths, but maintains his innocence. On April 27, a new search involving approximately 20 investigators was launched at the Van der Sloot family residence in Aruba. Dutch authorities searched the yard and surrounding area, using shovels and thin metal rods to penetrate the dirt. Prosecution spokeswoman Van der Biezen stated, "The investigation has never stopped and the Dutch authorities are completely reviewing the case for new indications." A statement from the prosecutor's office related, "The team has indications that justify a more thorough search." Investigators did not comment on what prompted the new search, except to say that it was not related to Van der Sloot's book. According to Paulus van der Sloot, "nothing suspicious" was found, and all that was seized were diary entries belonging to him and his wife, along with his personal computer—which was subsequently returned. According to Jossy Mansur, managing editor of Aruba's Diario newspaper, investigators were following up on statements made during early suspect interrogations regarding communications between the Kalpoe brothers and Van der Sloot. He also said investigators could be seen examining a laptop at the house. On May 12, the Kalpoe family residence was searched by the authorities. The two brothers were detained for about an hour after objecting to the entry of police and Dutch investigators, but were released when the authorities left. 
According to Kock, the brothers objected to the search because officials did not show them an order justifying the intrusion. A statement from Van der Biezen did not mention what, if anything, officials were searching for, but indicated nothing was removed from the home. A subsequent statement from Het Openbaar Ministerie van Aruba (the Aruban prosecutor's office) indicated that the purpose of the visit was to "get a better image of the place or circumstances where an offense may have been committed and to understand the chain of events leading to the offense." ### 2007 rearrests and re-releases Citing what was described as newly discovered evidence, Aruban investigators rearrested Van der Sloot and the Kalpoe brothers on November 21, 2007, on suspicion of involvement in "manslaughter and causing serious bodily harm that resulted in the death of Holloway." Van der Sloot was detained by Dutch authorities in the Netherlands, while the Kalpoe brothers were detained in Aruba. Van der Sloot was returned to Aruba, where he was incarcerated. Soon after, Dave Holloway announced a new search for his daughter that probed the sea beyond the original 330-foot (100 m) depths in which earlier searches had taken place. That search, which involved a vessel called the Persistence, was abandoned at the end of February 2008 due to lack of funds, after nothing of significance had been found. On November 30, a judge ordered the release of the Kalpoe brothers. Despite attempts by the prosecution to extend their detention, the brothers were released the following day. The prosecution appealed their release; the appeal was denied on December 5, with the court writing, "Notwithstanding expensive and lengthy investigations on her disappearance and on people who could be involved, the file against the suspect does not contain direct indications that Natalee passed away due to a violent crime." Van der Sloot was released without charge on December 7 due to a lack of evidence implicating him as well as a lack of evidence that Holloway died as the result of a violent crime. The prosecution indicated it would not appeal. On December 18, prosecutor Hans Mos officially declared the case closed and announced that no charges would be filed due to lack of evidence. The prosecution indicated a continuing interest in Van der Sloot and the Kalpoe brothers (though they legally ceased to be suspects), and alleged that one of the three, in a chat room message, had stated that Holloway was dead. This was hotly contested by Deepak Kalpoe's attorney, who stated that the prosecution, in translating from Papiamento to Dutch, had misconstrued a reference to a teacher who had drowned as one to Holloway. Attorney Ronald Wix also stated, "Unless [Mos] finds a body in the bathroom of one of these kids, there's no way in hell they can arrest them anymore." ### Dutch television programme On January 31, 2008, Dutch crime reporter Peter R. de Vries claimed that he had solved the Holloway case. De Vries stated that he would reveal his findings in a special program on Dutch television on February 3. On February 1, the Dutch media reported that Van der Sloot had made a confession regarding Holloway's disappearance. Later that day, Van der Sloot stated that he had merely been telling the individual what he wanted to hear, and denied any involvement in her disappearance. That same day, the Aruba prosecutor's office announced the reopening of the case. 
The broadcast, aired on February 3, 2008, included excerpts from footage recorded from hidden cameras and microphones in the vehicle of Patrick van der Eem, a Dutch businessman and ex-convict who gained Van der Sloot's confidence. Van der Sloot was seen smoking marijuana and stating that he was with Holloway when she began convulsively shaking, then became unresponsive. Van der Sloot stated that he attempted to revive her, without success. He said that he called a friend, who told Van der Sloot to go home and who disposed of the body. An individual reputed to be this friend, identified in the broadcast as Daury, has denied Van der Sloot's account, indicating that he was then in Rotterdam at school. The Aruban prosecutor's office attempted to obtain an arrest warrant for Van der Sloot based on the tapes; however, a judge denied the request. The prosecutor appealed the denial, but the appeal failed on February 14. The appeals court held that the statements on the tape were inconsistent with evidence in the case and were insufficient to hold Van der Sloot. On February 8, Van der Sloot met with Aruban investigators in the Netherlands and denied that what he said on the tape was true, stating that he was under the influence of marijuana at the time. Van der Sloot indicated that he still maintains that he left Holloway behind on the beach. In March 2008, news reports indicated that Van der Eem was secretly taped after giving an interview for Aruban television. Van der Eem, under the impression that cameras had been turned off, disclosed that he had been a friend of Van der Sloot for years (contradicting his statement on De Vries' show that he had met Van der Sloot in 2007), that he expected to become a millionaire through his involvement in the Holloway case, and that he knew the person who supposedly disposed of Holloway's body—and that Van der Sloot had asked him for two thousand euros to buy the man's silence. According to Dutch news service ANP, Van der Eem, who had already signed a book deal, "was furious" after learning of the taping and "threatened" the interviewer, who sought legal advice. Van der Eem's book Overboord (Overboard), co-written with E.E. Byars, was released (in Dutch) on June 25. Van der Eem was arrested on December 13 in the Netherlands for allegedly hitting his girlfriend with a crowbar and engaging in risky driving behavior while fleeing police. The De Vries broadcast was discussed in a seminar by Dutch legal psychologist Willem Albert Wagenaar, who indicated that the statements did not constitute a confession. Wagenaar criticized De Vries for broadcasting the material, stating that the broadcast made it harder to obtain a conviction, and had De Vries turned over the material to the authorities without broadcasting it, they would have held "all the trumps" in questioning Van der Sloot. Wagenaar opined that not only was the case not solved, it was not even clear that a crime had been committed. Professor Crisje Brants, in the same seminar, also criticized De Vries' methods. On November 24, Fox News aired an interview with Van der Sloot in which he alleged that he sold Holloway into sexual slavery, receiving money both when Holloway was taken, and later on to keep quiet. Van der Sloot also alleged that his father paid off two police officers who had learned that Holloway was taken to Venezuela. Van der Sloot later retracted the statements made in the interview. 
Fox News also aired part of an audio recording provided by Van der Sloot, which he alleged was a phone conversation between him and his father in which the father displayed knowledge of his son's purported involvement in human trafficking. According to Mos, the voice heard on the recording was not that of Paulus van der Sloot—the Dutch newspaper De Telegraaf reported that the "father's" voice was almost certainly that of Joran van der Sloot himself, trying to speak in a lower tone. Paulus died of a heart attack on February 10, 2010. On March 20, 2009, Dave Holloway transported a search dog to Aruba to search a small reservoir in the northern part of the island. The reservoir had previously been identified by a supposed witness as a possible location of Natalee's remains. Aruban authorities indicated that they had no new information in the case, but that Holloway had been given permission to conduct the search. On February 23, 2010, it was reported that Van der Sloot had stated in an interview (first offered to RTL Group in 2009) that he had disposed of Holloway's body in a marsh on Aruba. New chief prosecutor Peter Blanken indicated that authorities had investigated the latest story and had dismissed it. Blanken stated that the "locations, names, and times he gave just did not make sense." In March 2010, underwater searches were conducted by Aruban authorities after an American couple reported that, while snorkeling, they had photographed what they thought might be human skeletal remains, possibly those of Holloway. Aruban authorities sent divers to investigate, but no remains were ever recovered. ### Van der Sloot's extortion of Holloway family On March 29, 2010, Van der Sloot contacted John Q. Kelly, Beth Twitty's legal representative, with an offer to reveal the location of Holloway's body and the circumstances surrounding her death if he were given an advance of US\$25,000 against a total of \$250,000. After Kelly notified the FBI, the agency arranged for the transaction to proceed. On May 10, Van der Sloot received a \$15,000 wire transfer to his account in the Netherlands, following his receipt of \$10,000 in cash, which was videotaped by undercover investigators in Aruba. Authorities stated that the information he provided in return was false, because the house in which he said Holloway's body was located had not yet been built at the time of her disappearance. On June 3, Van der Sloot was charged in the U.S. District Court for the Northern District of Alabama with extortion and wire fraud. U.S. Attorney Joyce White Vance obtained an arrest warrant and transmitted it to Interpol. On June 30, Van der Sloot was indicted on the charges. At the request of the U.S. Justice Department, authorities conducted a June 4 raid and confiscated items from two homes in the Netherlands. One of the homes belonged to reporter Jaap Amesz, who had previously interviewed Van der Sloot and claimed knowledge of his criminal activities. Aruban investigators used information gathered from the extortion case to launch a new search at a beach, but no new evidence was found. Dave Holloway returned to Aruba on June 14 to pursue possible new clues. ### Van der Sloot's murder of Stephany Flores Ramirez in Peru On May 30, 2010—five years to the day after Holloway's disappearance—Stephany Flores Ramírez, a 21-year-old business student, was reported missing in Lima, Peru. She was found dead three days later in a hotel room registered in Van der Sloot's name. 
On June 3, Van der Sloot was arrested in Chile on a murder charge and extradited to Peru the next day. On June 7, Peruvian authorities said that Van der Sloot had confessed to killing Flores after losing his temper because she had accessed his laptop without permission and found information linking him to Holloway. Police chief César Guardia related that Van der Sloot told Peruvian police that he knew where Holloway's body was and offered to help Aruban authorities find it. However, Guardia stated that the interrogation was limited to the case in Peru and that questions about Holloway's disappearance were avoided. On June 11, Van der Sloot was charged in Lima Superior Court with first-degree murder and robbery. On June 15, Aruban and Peruvian authorities announced an agreement to cooperate and to allow investigators from Aruba to interview Van der Sloot at Miguel Castro Castro prison in Peru. In a September 2010 interview from the prison, Van der Sloot reportedly admitted to the extortion plot, stating: "I wanted to get back at Natalee's family—her parents have been making my life tough for five years." On January 11, 2012, Van der Sloot pleaded guilty to murdering Flores and was sentenced to 28 years in prison. ### Declaration of death In June 2011 (six years after Natalee's disappearance), Dave Holloway filed a petition with the Alabama courts to have his daughter declared legally dead. The papers were served on his ex-wife Beth Twitty, who announced her intention to oppose the petition. A hearing was held on September 23, 2011, at which time Probate Judge Alan King ruled that Dave Holloway had met the requirements for a legal presumption of death. On January 12, 2012, a second hearing was held, after which Judge King signed the order declaring Natalee Holloway to be dead. ### Unrelated bone discovery, contested Oxygen documentary On November 12, 2010, tourists found a jawbone on an Aruban beach near the Phoenix Hotel and Bubali Swamp. Preliminary examination by a forensic expert determined that the bone was from a young woman. Part of the bone was sent to The Hague for testing by the Netherlands Forensic Institute. On November 23, 2010, Aruba Solicitor-General Taco Stein announced that, based on dental records, the jawbone was not Holloway's, and that it was not even possible to determine whether it had come from a man or a woman. In 2016, Dave Holloway hired a private investigator, T.J. Ward, to once more go through all evidence and information related to the disappearance of his daughter. This led to an informant, Gabriel, who claimed to have been a roommate of one of Van der Sloot's closest friends, American John Ludwick, in 2005. Gabriel claimed that Ludwick had been told what became of Natalee. In an interview with the Oxygen television channel, Gabriel gave a detailed description of what happened on the night of Natalee's disappearance. Oxygen created a new documentary series on Natalee's disappearance that aired on August 19, 2017. Using Gabriel's information, Ward found what appeared to be human bones. On October 3, 2017, DNA testing concluded that one piece of bone was human but did not belong to Natalee. On the show, Ludwick claimed to have helped Van der Sloot dig up, smash, and cremate Holloway's bones in 2010. In February 2018, Elizabeth Holloway sued the producers, alleging that this and other claims were fictional and harmfully lurid, and that she had been misled into providing a DNA sample for comparison without being made aware of plans for a show. 
In March 2018, Ludwick was stabbed to death by a woman he tried to kidnap. ### 2023 extradition of Joran van der Sloot to the United States On June 8, 2023, Joran van der Sloot was officially extradited from Peru, where he was serving his 28-year sentence for the 2010 murder of Stephany Flores Ramírez, to the United States, landing at Birmingham–Shuttlesworth International Airport in Birmingham, Alabama, just before 2:30 p.m. After arriving in Birmingham, he was taken into U.S. custody and transported to the Hoover City Jail. On June 9, he was arraigned in the federal court in Birmingham on one count of extortion and one count of wire fraud against Beth Holloway, Natalee Holloway's mother. He pleaded not guilty to each charge. ## Investigation criticism The Twittys and their supporters criticized a perceived lack of progress by Aruban police. The Twittys' own actions in Aruba were also criticized, and they were accused of actively stifling any evidence that might impugn Holloway's character, both by asking her fellow students to remain silent about the case and by using their access to the media to push their own version of events. The Twittys denied this. In televised interviews and in a book, Beth Twitty alleged that Van der Sloot and the Kalpoe brothers knew more about Holloway's disappearance than they had told authorities and that at least one of them had sexually assaulted or raped her daughter. On July 5, 2005, following the initial release of the Kalpoes, Twitty alleged, "Two suspects were released yesterday who were involved in a violent crime against my daughter," and referred to the Kalpoes as "criminals". A demonstration involving about two hundred Arubans took place that evening outside the Oranjestad courthouse. The protesters were angry over Twitty's remarks, with signs reading, "Innocent until proven guilty" and "Respect our Dutch laws or go home." Satish Kalpoe's attorney threatened legal action and described Twitty's allegations as "prejudicial, inflammatory, libelous, and totally outrageous." On July 8, 2005, Twitty read a statement saying that her remarks had been fueled by "despair and frustration" and apologizing "to the Aruban people and to the Aruban authorities if I or my family offended you in any way." In 2007, Twitty released her own book, Loving Natalee: A Mother's Testament of Hope and Faith. That same year, Twitty appeared on Nancy Grace and said: > What we want is, we want justice. And you know—and we have to recognize the fact that, you know, this crime has been committed on the island of Aruba, and we know the perpetrators. We know it's these suspects, Deepak and Satish Kalpoe and Joran Van Der Sloot. And you know, we just have to, though, keep going, Nancy, because the only way we will get justice for Natalee is if we do keep going. I mean, if we give up, absolutely nothing will happen. Nothing. Following the airing of the De Vries programme on Dutch television, Twitty adhered to the position that the tapes represented the way events transpired and told the New York Post that she believed her daughter might still be alive if Van der Sloot had called for help. She contended that Van der Sloot had dumped Holloway, possibly still alive, into the Caribbean Sea. Twitty also alleged that the person Van der Sloot supposedly called that evening was his father, Paulus, who, according to Twitty, "orchestrated what to do next". Holloway's parents alleged that Van der Sloot was receiving "special legal favors". 
After the court decision not to rearrest Van der Sloot was affirmed, Twitty stated, "I think that what I do take comfort in, his life is a living hell," later adding, "I'd be good with a Midnight Express prison anywhere for Joran." In response to her daughter's disappearance, Twitty founded the International Safe Travels Foundation, a non-profit organization designed "to inform and educate the public to help them travel more safely as they travel internationally." In May 2010, she announced that the Natalee Holloway Resource Center would open at the National Museum of Crime & Punishment. Located in Washington, D.C., the center opened on June 8 to aid families of missing people. Holloway's family initially discouraged a travel boycott of Aruba, but this changed by September 2005. Twitty urged that people not travel to Aruba and other Dutch territories because of what she stated were tourist safety issues. In a November 8, 2005, news conference, Governor Bob Riley and the Holloways urged Alabamians and others to boycott Aruba. Riley also wrote to other United States governors seeking their support—the governors of Georgia and Arkansas eventually joined in the call for a boycott. Philadelphia's city council voted to ask Pennsylvania Governor Ed Rendell to call for a boycott. Rendell did not do so, and no federal support was given. The boycott was supported by some of Alabama's Congressional delegation, including both senators and Representative Spencer Bachus (R-AL), who represented Mountain Brook. Senator Richard Shelby (R-AL) voiced his support for the boycott in a letter to the American Society of Travel Agents. Shelby stated, "For the safety, security and well-being of our citizens, I do not believe that we can trust that we will be protected while in Aruba." Prime Minister Oduber stated that Aruban investigators had done their best to solve the case, and responded to the call for a boycott: "This is a preposterous and irresponsible act. We are not guerillas. We are not terrorists. We don't pose a threat to the United States, nor to Alabama." Members of the Aruba Hotel and Tourism Association, the Aruba Tourism Authority, the Aruba Hospitality and Security Foundation, the Aruban Chamber of Commerce, and government figures, including Public Relations Representative Ruben Trapenberg, formed an "Aruba Strategic Communications Task Force" to respond collectively to what they perceived to be unfounded and/or negative portrayals of the island. The group issued press releases and sent representatives to appear in news media. They joined the Aruban government in opposing the calls for a boycott of the island. ### Skeeters tape and Dr. Phil; lawsuits On September 15, 2005, the Dr. Phil television show aired parts of a hidden-camera interview with Deepak Kalpoe in which he seemingly affirmed a suggestion that Holloway had sex with all three men. The taping had been instigated by Jamie Skeeters, a private investigator. When the tape was broadcast, news reports indicated an expectation of a rearrest, which Dompig termed a "strong possibility" if the tapes were legitimate. Aruban police subsequently provided a fuller version of the relevant part of the tape in which Kalpoe's response differed from the Dr. Phil version, apparently due to editing that may have altered the meaning of what was said. 
An unofficial Aruban-affiliated spokesperson and commentator on the case said that the uncut videotape showed that Kalpoe had shaken his head and said, "No, she didn't", thereby denying that Holloway had sex with him and the other two men. According to an MSNBC report, the crucial words were inaudible, and presenter Rita Cosby questioned whether it could be substantiated that Kalpoe had ever made the statements attributed to him in the Dr. Phil version of the recording. In December 2006, the Kalpoes filed a slander and libel suit against Skeeters (who died in January 2007) and Dr. Phil in Los Angeles, California. Holloway's parents responded by filing a wrongful death lawsuit against the Kalpoes in the same venue. The wrongful death suit was dismissed for lack of personal jurisdiction on June 1, 2007; the libel and slander case was initially set for trial on October 12, 2011, but was later rescheduled for April 2015. An earlier suit had been filed in New York City by Holloway's parents against Joran and Paulus van der Sloot and served on them during a visit to New York. That case had been dismissed in August 2006 on the grounds that it had been filed in an inconvenient forum. On November 10, 2005, Paulus van der Sloot won an unjust detention action against the Aruban government, clearing him as a suspect and allowing him to retain his government contract. The elder Van der Sloot then brought a second action, seeking monetary damages for himself and his family because of his false arrest. The action was initially successful, but the award of damages was reversed on appeal. ### Amigoe article The Amigoe newspaper reported on interviews with Julia Renfro and Dompig in which they said that Aruban authorities had been systematically obstructed in their investigation by U.S. officials. They also said that within a day of Holloway's being reported missing, a medjet, unauthorized by Aruban authorities, had arrived on Aruba and had remained for several days for the purpose of covertly taking Holloway off the island without notifying local authorities. Renfro, the American-born editor of the English-language daily Aruba Today, who at the time of Holloway's disappearance had become close friends with Twitty, also said that she and Twitty received a phone call from an unknown woman on June 2, 2005, asking for money in return for information about Holloway's location and asserting that Holloway was unwilling to return to her mother. According to Renfro, she and another American went to a drug house where Holloway supposedly was, bringing money, but found that Jug Twitty had already been to the area, spreading "a lot of uproar and panic in the direct vicinity", and nothing could be accomplished. The Twittys disputed Renfro's accounts, with Beth Twitty describing Renfro as "a witch". ### Film adaptations On April 19, 2009, LMN aired Natalee Holloway, a television film based on Twitty's book Loving Natalee. Starring Tracy Pollan as Beth Twitty, Grant Show as Jug Twitty, Amy Gumenick as Holloway, and Jacques Strydom as Van der Sloot, the film retells the events leading up to the night of Holloway's disappearance in 2005 and the ensuing investigation. It was shot in South Africa. The movie stages re-creations of various scenarios based on the testimony of key players and suspects, including Van der Sloot. The broadcast of the film attracted 3.2 million viewers, garnering the highest television ratings in the network's eleven-year history. Although it set ratings records for Lifetime, the movie received mixed reviews from critics. 
Alec Harvey of The Birmingham News called the movie "sloppy and uneven, a forgettable look at the tragedy that consumed the nation's attention for months." However, Jake Meaney of PopMatters found the film surprisingly "calm and levelheaded", and praised Pollan's portrayal of Holloway's mother. A follow-up film, Justice for Natalee Holloway, aired in mid-2011 on LMN. This film picks up in 2010, on the fifth anniversary of Holloway's disappearance. ### TV adaptations In an episode of Vanished with Beth Holloway, Beth recounted the story of Natalee's disappearance, which was reenacted for the show. ## Media coverage U.S. television networks devoted substantial air time to the search for Holloway, the investigation of her disappearance, and rumors surrounding the case. Greta Van Susteren, host of Fox News' On the Record, and Nancy Grace, on her eponymous Headline News program, were among the most prominent television personalities to devote time to the incident. Van Susteren's almost continuous coverage of the story garnered On the Record its best ratings to date, while Grace's show became the cornerstone of the new "Headline Prime" block on Headline News, which ran two episodes (a live show and a repeat) every night during primetime. As the case wore on, much of the attention was given to Beth Twitty and her statements. Aruban government spokesman Ruben Trapenberg stated, "The case is under a microscope, and the world is watching." The saturation coverage triggered a backlash among some critics, who argued that such extensive media attention validated the "missing white woman syndrome" theory, which holds that missing person cases involving white women and girls receive disproportionately more media attention than cases involving white males or people of color. CNN ran a segment criticizing the amount of coverage its competitors gave to the story despite what it characterized as a lack of new items to report, with CNN news anchor Anderson Cooper calling the coverage "downright ridiculous". Early in the case, political commentator and columnist Arianna Huffington wrote, "If you were to get your news only from television, you'd think the top issue facing our country right now is an 18-year-old girl named Natalee who went missing in Aruba. Every time one of these stories comes up, like, say, Michael Jackson, when it's finally over I think, what a relief, now we can get back to real news. But we never do." In March 2008, El Diario commented, "But if doubts persist about cases involving missing Latinos, there are reasons why. These cases rarely receive the attention and resources we see given to other missing persons. The English language media, for example, appear to be focused on the stories of missing white women, such as with the disappearance of Natalee Holloway in Aruba. Cases of missing Latino and African-American women often remain faceless, if and when they are even covered." CBS senior journalist Danna Walker stated, "There is criticism that it is only a story because she is a pretty blonde—and white—and it is criticism that journalists are taking to heart and looking elsewhere for other stories. But it is a big story because it is an American girl who went off on an adventure and didn't come back. It is a huge mystery; it is something people can identify with." Good Morning America anchor Chris Cuomo was unapologetic about his program's extensive coverage of the Holloway case, stating: "I don't believe it's my role to judge what people want to watch ... 
If they say, 'I want to know what happened to this girl' ... I want to help them find out." Holloway's family, however, took the opposite view and criticized the lessening of coverage of her disappearance as news priorities shifted when Hurricane Katrina struck in late August 2005. The saturation coverage of Holloway's disappearance would ultimately be eclipsed by the hurricane. Beth Twitty and Dave Holloway alleged that Aruba took advantage of the extensive coverage of Hurricane Katrina to release the suspects; however, the deadline for judicial review of Van der Sloot's detention had been set long before Katrina. Dave Holloway lamented in his book: > Hurricane Katrina had left the door open for the boys to be sent on their way with little publicity and few restrictions because it took the world's focus off of [sic] Natalee, but only for a brief time. The huge amount of publicity had waned and, during that time of quiet for us, Joran and the Kalpoe brothers were sent home ... All of the news shows that had followed our every move only a day before had now become fixated on the next big ratings grabber: the victims of Hurricane Katrina. ## See also - List of people who disappeared
2,141,388
1974 White House helicopter incident
1,173,331,849
1974 incident
[ "1970s trials", "1974 crimes in the United States", "1974 in Maryland", "1974 in Washington, D.C.", "Accidents and incidents involving helicopters", "Aviation accidents and incidents in the United States in 1974", "Events that led to courts-martial", "February 1974 events in the United States" ]
On February 17, 1974, United States Army Private Robert Kenneth Preston (1953–2009) took off in a stolen Bell UH-1B Iroquois "Huey" helicopter from Tipton Field, Maryland, and landed it on the South Lawn of the White House in a significant breach of security. Preston had enlisted in the Army to become a helicopter pilot. However, he did not graduate from the helicopter training course and lost his opportunity to attain the rank of warrant officer pilot. His enlistment bound him to serve four years in the Army, and he was sent to Fort Meade as a helicopter mechanic. Preston believed this situation was unfair and later said he stole the helicopter to show his skill as a pilot. Shortly after midnight, Preston, on leave, was returning to Tipton Field, south of Fort Meade. Thirty helicopters at the base were fueled and ready to fly; he took off in one without anti-collision lights on or making the standard radio calls. The Maryland State Police were alerted. Preston flew southwest toward Washington, D.C., where he hovered close to the Lincoln Memorial and the Washington Monument and over the South Lawn of the White House. He then flew back toward Fort Meade with two Bell 206 JetRanger police helicopters and police cars in pursuit. After a chase over Maryland, he reversed course toward Washington again and entered the White House grounds. The Secret Service opened fire this time. Preston was lightly wounded, landed the helicopter, and was arrested and held in custody. Preston pleaded guilty to "wrongful appropriation and breach of the peace" in the plea bargain at his court-martial. He was sentenced to one year in prison, six months of which was time served, and a fine of US\$2,400 (equivalent to \$14,241 in 2022). After his release, Preston received a general discharge from the army, then lived a quiet life, married, and died of cancer in 2009. ## Background Robert Kenneth Preston was born in Panama City, Florida, on November 5, 1953. Having had longtime aspirations toward a military career, he enrolled in the Junior Reserve Officers' Training Corps program at Rutherford High School. He earned a private pilot's license for single-engine, fixed-wing aircraft and studied aviation management at Gulf Coast Community College, hoping to become a helicopter pilot in Vietnam. After enlisting in the United States Army in 1972, he trained to become a helicopter pilot, flying the Hughes TH-55 Osage at Fort Wolters, Texas. Preston failed the technical training due to "deficiency in the instrument phase", losing his opportunity to become a warrant officer pilot. The ongoing withdrawal of U.S. forces from Vietnam and consequent surplus of qualified helicopter pilots may have also been a factor in Preston not being accepted as a pilot. Still bound by his four-year obligation to serve with the army, Preston was sent to Fort Meade, Maryland, as a helicopter mechanic in January 1974. At the time of the incident, he was 20 years old, with the rank of private first class; he was described by his commanding officer as a "regular, quiet individual" with above-average intelligence. ## Incident On February 17, 1974, shortly after midnight, Preston left a dance hall and restaurant, downhearted due to a failed relationship and his unclear future in his military career. He returned to the Army Airfield, Tipton Field, south of Fort Meade, where thirty Bell UH-1 Iroquois "Huey" helicopters were fueled and ready. Preston later recalled, "I wanted to get up and fly and get behind the controls. 
It would make me feel better because I love flying". He parked his car at the unguarded airfield, climbed into one of the helicopters, serial number 62–1920, and started preflight checks. Soon after, he lifted off without activating his anti-collision lights or making standard radio calls; a controller in the control tower spotted the stolen helicopter and alerted the Maryland State Police. Preston flew low over the restaurant he had visited earlier, then briefly touched down in a nearby field where his hat was later recovered. He then decided to visit Washington, D.C., 20 miles (32 km) southwest, by following the lights of the Baltimore–Washington Parkway. Preston's helicopter was first noticed by the District of Columbia police when he was spotted hovering between the United States Capitol and the Lincoln Memorial. Flight over this area was strictly prohibited, but the prohibition was not enforced in any significant way at the time; surface-to-air missiles were not installed around Washington until after the September 11 attacks. Preston spent five to six minutes hovering a couple of feet above the Washington Monument's grounds, then flew over the Capitol and went on to follow Pennsylvania Avenue to the White House. Secret Service policy at the time was to fire at aerial intruders, but when to do so was left vague, especially in situations where bystanders could be harmed. While Preston hovered over and briefly touched down on the South Lawn, the White House Executive Office control center watch officer, Henry S. Kulbaski, attempted to contact his superiors by phone but received no answer. After the helicopter departed, Kulbaski ordered his agents to shoot it down if it returned. At 12:56 a.m., an air traffic controller at Washington National Airport noticed a blip on his radar scope; after realizing it was the stolen helicopter, the controller alerted the police. Preston then turned back toward Fort Meade in Maryland and left the restricted airspace; an old Bell 47 helicopter of the Maryland police followed but was too slow to keep up with him. The stolen helicopter soon appeared on the Baltimore–Washington International Airport's radar. Two Maryland State Police Bell 206 JetRangers were dispatched to intercept. Preston turned northeast, pursued by the two helicopters and police cars. He caused one police car to crash by executing a head-on pass just a few inches above its roof, briefly hovered above a doughnut shop, then followed the Baltimore–Washington Parkway once again toward Washington, planning to surrender personally to U.S. President Richard Nixon. Preston evaded one of the JetRangers with what its pilots described as "modern dogfighting tactics". With only one helicopter left chasing him, Preston flew along the Parkway at constantly changing speeds of 60 to 120 knots (110–220 km/h; 69–138 mph), sometimes just inches above car-top level. Preston's Huey came in over the White House grounds at 2 a.m., barely clearing the steel fence surrounding the area. According to the pilot of the JetRanger, Preston was so close that he "could have driven right in the front door". Floodlights suddenly illuminated the helicopter, and Secret Service agents opened fire with automatic weapons and shotguns. Shots hit Preston's foot, and the helicopter veered to the side, bouncing on one skid, but he regained control and settled his helicopter on the South Lawn, 300 feet (91 m) from the mansion. Some 300 rounds were fired, of which five hit Preston, causing superficial wounds. 
He exited the helicopter and started running toward the White House but was tackled to the ground by Secret Service agents. Handcuffed, Preston was taken to the Walter Reed Army Medical Center for treatment, where he arrived smiling and "laughing like hell". At the time of the incident, President Nixon was traveling in Florida, and First Lady Pat Nixon was in Indianapolis, visiting their sick daughter, Julie. ## Aftermath The helicopter became a major tourist attraction that day. It was evaluated by army personnel, found to be flightworthy despite its many bullet holes, and flown off in front of a multitude of news cameras shortly before noon. The helicopter was extensively photographed as part of the investigation, repaired, and returned to service. It was later put on display at Naval Air Station Joint Reserve Base Willow Grove. It is believed that Preston's actions influenced Samuel Byck, who attempted to hijack a plane five days later while carrying a .22-caliber revolver and a gasoline bomb. According to self-recorded audio made before the hijacking, Byck intended to assassinate President Nixon. After being shot and wounded by police, Byck died by suicide. Preston was initially charged with unlawful entry into the White House grounds, a misdemeanor carrying a fine of \$100 (equivalent to \$593.39 in 2022) and a maximum six-month jail term. His lawyers arranged a plea bargain in which all charges under civilian jurisdiction would be dropped if the case were transferred to the military. At his court-martial, Preston was charged with several counts of attempted murder and several minor offenses. The pilot of one of the JetRangers stated that he had thought that Preston intended to commit suicide by crashing into the White House, but Preston maintained that he only wanted to draw attention to the perceived unfairness of his situation and show his skill as a pilot. He pleaded guilty to "wrongful appropriation and breach of the peace" and was sentenced to one year in prison and fined \$2,400 (equivalent to \$14,241 in 2022). The time he had spent in custody during the court-martial proceedings was credited as time served; this meant he had to serve a further six months in prison. He instead served two months at Fort Riley, Kansas, before being granted a general discharge from the army for unsuitability. The Secret Service increased the size of the restricted airspace around the White House. Nixon congratulated Kulbaski and the pilot and copilot of the JetRanger; the three and other agents were presented with pairs of presidential cufflinks in a White House ceremony. Preston moved to the state of Washington after his release. He married in 1982 and raised his wife's two daughters. He died of cancer on July 21, 2009, aged 55, while living in Ephrata, Washington. ## See also - List of White House security breaches - Hawker Hunter Tower Bridge incident
71,871,554
1905–06 New Brompton F.C. season
1,121,840,539
null
[ "English football clubs 1905–06 season", "Gillingham F.C. seasons" ]
During the 1905–06 English football season, New Brompton F.C. competed in the Southern League Division One. It was the 12th season in which the club competed in the Southern League and the 11th in Division One. The team began the season in poor form; they failed to score any goals in six of their first eight Southern League games. By the midpoint of the season, the team had won only three times and were close to the bottom of the league table. The team's form improved in the new year, with three wins in the first seven Southern League games of 1906, but they ended the season in similar fashion to how they had started it, failing to score in eight of the final nine league games. New Brompton finished the season in 17th place out of 18 teams in the division. New Brompton also competed in the FA Cup, reaching the second round. The team played a total of 37 league and cup matches, winning 8, drawing 9 and losing 20. Bill Marriott was the club's top goalscorer, with four goals in the Southern League and one in the FA Cup; this figure was the lowest to date with which a player had finished a season as New Brompton's top scorer. Joe Walton made the most appearances, playing in 36 of the team's 37 competitive games. The highest attendance recorded at the club's home ground, Priestfield Road, was 5,500 for a game against Portsmouth on 27 January 1906. ## Background and pre-season New Brompton, founded in 1893, had played in the Southern League since the competition's formation in 1894. The 1905–06 season was the club's 11th season in Division One, the league's top division, following promotion from Division Two at the first attempt in 1895. In the 1904–05 season New Brompton had finished 9th out of 18 teams in the division, only the third time in 11 seasons that they had finished in the top half of the league table. At the time, only a handful of teams from the south of England played in the ostensibly national Football League, with most of the south's leading teams playing in the Southern League. The club did not employ a manager at the time and secretary William Ironside Groombridge had overall responsibility for the team. He was assisted by a trainer called F. Craddock. James Barnes was chairman of the club's board of directors. The team wore the club's usual black and white striped shirts. Several new players joined the club prior to the season, including Bill Floyd, a full-back from Gainsborough Trinity, and three forwards, Bill Marriott from Northampton Town, Jim Sheridan from Stoke, and Harry Phillips from Grimsby Town. Walter Leigh, the team's top goalscorer of the previous season, moved on, joining Clapton Orient, newly elected to the Football League Second Division. ## Southern League Division One ### September–December The club's first match of the season, on 2 September, was away to Queens Park Rangers; Floyd, Phillips, Sheridan and Marriott all made their Southern League debuts for New Brompton. Queens Park Rangers scored two goals in each half to win 4–0; the correspondent for The Daily News stated that the home team were "the better side at every point" and well deserved their large victory. The first Southern League game of the season at New Brompton's ground, Priestfield Road, a week later against Bristol Rovers, resulted in a 3–0 win for the away team. 
Phillips scored New Brompton's first Southern League goal of the season on 16 September and the team achieved their first victory, beating Northampton Town 2–0, before losing again in their next game, a 4–0 defeat away to Portsmouth. After a goalless draw with Swindon Town, New Brompton were 15th out of 18 teams in the Division One table at the end of September. On 7 October, New Brompton lost 5–0 away to Millwall, the first time the team had conceded as many goals in a Southern League match since December 1903. Two weeks later, they failed to score a goal for the sixth time in eight Southern League games and were defeated 6–0 by Tottenham Hotspur, the team's biggest defeat of the season. The correspondent for the Daily Telegraph said that New Brompton had been "completely outplay[ed]". The team ended the month with their second Southern League victory of the season, a 2–1 win over Brentford. The first match of November, however, resulted in another defeat as New Brompton lost 4–1 to Norwich City; the result left New Brompton 16th in the table. Their record of having conceded 28 goals in 10 games was by far the worst in the division; no other team had conceded more than 20 and only one other had conceded more than 16. The team's winless run continued for the rest of the month as they drew with Plymouth Argyle and Southampton and lost to Reading. John Campbell, a former Scottish international forward newly signed from Hibernian, made his debut against Reading. New Brompton began December with a game away to Watford; they twice took the lead only for Watford to equalise and the game ended in a draw. The team lost their next two games, away to West Ham United and at home to Fulham. On Christmas Day, New Brompton beat Brighton & Hove Albion 1–0; Campbell scored the winner from a penalty kick, his first goal for the club. New Brompton ended 1905 with a 2–0 home defeat to Queens Park Rangers; John Martin, normally a full-back, played as goalkeeper in place of Fred Griffiths. The Athletic News referred to this as a trial arrangement, but Martin retained the position for most of the remainder of the season. The reporter for the Athletic News wrote that on the whole New Brompton played as well as their opponents but that their forwards "proved quite incapable of turning to account the many openings which fell their way". ### January–April New Brompton's first Southern League game of 1906 was away to Bristol Rovers, who were reduced to ten men before half-time when their goalkeeper went off injured. For the second time in three games, Campbell scored the winning goal with a penalty kick. The team next beat Northampton Town, the only time during the season that they won two consecutive Southern League games. New Brompton's next game drew an attendance reported at 5,500, the largest of the season at Priestfield Road; the Athletic News attributed the large turnout to "the recent smart performances of the local eleven". The match, however, ended in the first of two consecutive 1–0 defeats for New Brompton. The team's winless Southern League run extended to four games with defeats away to Luton Town on 17 February and Brentford on 3 March. New Brompton then defeated Tottenham Hotspur, who were second in the league table ahead of the game, at Priestfield Road. Paddy Travers scored the only goal to give his team what the Daily Telegraph's reporter described as their best result of the season; the writer praised the New Brompton defence as "very sound". 
After this victory, the team played five more games in March and failed to score in any of them. After a goalless draw at home to Norwich City, they lost 5–0 away to Plymouth Argyle. According to the Athletic News, New Brompton were "quite outclassed" by Plymouth and "had [Plymouth] doubled the number of their goals they would have occasioned little surprise". Three further defeats left New Brompton bottom of the table at the end of March. Goalkeeper H. Metherell, who had played one Southern League game two years earlier but not been included in the team since, was included in the line-up against Reading and retained his place for the rest of the season. After five games without scoring, New Brompton defeated Watford 2–0 in the first match of April. Their next two games resulted in goalless draws against Brighton & Hove Albion and West Ham United; the Athletic News was again critical of New Brompton's forwards against West Ham, saying that the team's "lack of success was mainly due to the inability of their front line to accept the gifts which the fortune of the game offered". New Brompton's final game of the season was away to Fulham, who had already clinched the championship of Division One, and resulted in a 1–0 defeat. New Brompton finished the season 17th out of 18 teams in the league table, above only Northampton Town. The team had failed to score in eight of the last nine games of the Southern League season; the final total of 20 league goals was half the figure recorded in the previous season and the lowest figure New Brompton had registered in 12 seasons of competitive football. ### Match details Key - In result column, New Brompton's score shown first - H = Home match - A = Away match - pen. = Penalty kick - o.g. = Own goal ### Partial league table ## FA Cup New Brompton entered the 1905–06 FA Cup at the first-round stage, where they played at home to fellow Southern League Division One club Northampton Town. New Brompton took the lead in the first half and, although their opponents equalised after the interval, Marriott scored a winning goal for the home team. In the second round, New Brompton played at home to another Southern League Division One team, Southampton; the match finished 0–0, necessitating a replay at the Dell. Southampton scored the only goal of the second game in the final minute to eliminate New Brompton from the competition. ### Match details Key - In result column, New Brompton's score shown first - H = Home match - A = Away match - pen. = Penalty kick - o.g. = Own goal Results ## Players During the season, 23 players made at least one appearance for New Brompton. Joe Walton made the most; he played in 36 of the team's 37 competitive games, missing only one league game. Travers and Joe Elliott also made 30 or more appearances and four others played more than 25 times. Two players made only one appearance each: Albert Webb and a player recorded only as Page. Marriott was the top goalscorer; he scored four times in the Southern League and once in the FA Cup. His total of five goals was the lowest with which a player had finished a season as top scorer in the club's history. Arthur Beadsworth scored four goals and two players scored three each; no other player scored more than once. ## Aftermath Despite finishing in the bottom two places, New Brompton were not relegated to Division Two as the Southern League opted to increase the number of clubs in Division One. 
The team's performance improved slightly the following season, as New Brompton finished 16th in an expanded 20-team division. The club, which changed its name to Gillingham in 1912, remained in the Southern League Division One until 1920 when the entire division was absorbed into the national Football League to form its new Third Division.
33,702
Wolf
1,173,048,106
Type of canine
[ "Apex predators", "Articles containing video clips", "Extant Middle Pleistocene first appearances", "Holarctic fauna", "Mammals described in 1758", "Mammals of Asia", "Mammals of Europe", "Mammals of North America", "Pleistocene carnivorans", "Scavengers", "Taxa named by Carl Linnaeus", "Wolves" ]
The wolf (Canis lupus; pl.: wolves), also known as the gray wolf or grey wolf, is a large canine native to Eurasia and North America. More than thirty subspecies of Canis lupus have been recognized, including the dog and dingo, though gray wolves, as popularly understood, only comprise naturally-occurring wild subspecies. The wolf is the largest extant member of the family Canidae, and is further distinguished from other Canis species by its less pointed ears and muzzle, as well as a shorter torso and a longer tail. The wolf is nonetheless related closely enough to smaller Canis species, such as the coyote and the golden jackal, to produce fertile hybrids with them. The wolf's fur is usually mottled white, brown, gray, and black, although subspecies in the Arctic region may be nearly all white. Of all members of the genus Canis, the wolf is most specialized for cooperative game hunting as demonstrated by its physical adaptations to tackling large prey, its more social nature, and its highly advanced expressive behaviour, including individual or group howling. It travels in nuclear families consisting of a mated pair accompanied by their offspring. Offspring may leave to form their own packs at the onset of sexual maturity and in response to competition for food within the pack. Wolves are also territorial, and fights over territory are among the principal causes of mortality. The wolf is mainly a carnivore and feeds on large wild hooved mammals as well as smaller animals, livestock, carrion, and garbage. Single wolves or mated pairs typically have higher success rates in hunting than do large packs. Pathogens and parasites, notably the rabies virus, may infect wolves. The global wild wolf population was estimated to be 300,000 in 2003 and is considered to be of Least Concern by the International Union for Conservation of Nature (IUCN). Wolves have a long history of interactions with humans, having been despised and hunted in most pastoral communities because of their attacks on livestock, while conversely being respected in some agrarian and hunter-gatherer societies. Although the fear of wolves exists in many human societies, the majority of recorded attacks on people have been attributed to animals suffering from rabies. Wolf attacks on humans are rare because wolves are relatively few, live away from people, and have developed a fear of humans because of their experiences with hunters, farmers, ranchers, and shepherds. ## Etymology The English "wolf" stems from the Old English wulf, which is itself thought to be derived from the Proto-Germanic \*wulfaz. The Proto-Indo-European root \*' may also be the source of the Latin word for the animal lupus (\*'). The name "gray wolf" refers to the grayish colour of the species. Since pre-Christian times, Germanic peoples such as the Anglo-Saxons took on wulf as a prefix or suffix in their names. Examples include Wulfhere ("Wolf Army"), Cynewulf ("Royal Wolf"), Cēnwulf ("Bold Wolf"), Wulfheard ("Wolf-hard"), Earnwulf ("Eagle Wolf"), Wulfstān ("Wolf Stone"), Æðelwulf ("Noble Wolf"), Wolfhroc ("Wolf-Frock"), Wolfhetan ("Wolf Hide"), Scrutolf ("Garb Wolf"), Wolfgang ("Wolf Gait") and Wolfdregil ("Wolf Runner"). ## Taxonomy In 1758, the Swedish botanist and zoologist Carl Linnaeus published the system of binomial nomenclature in his Systema Naturae. Canis is the Latin word meaning "dog", and under this genus he listed the doglike carnivores, including domestic dogs, wolves, and jackals. He classified the domestic dog as Canis familiaris, and the wolf as Canis lupus.
Linnaeus considered the dog to be a separate species from the wolf because of its "cauda recurvata" (upturning tail) which is not found in any other canid. ### Subspecies In the third edition of Mammal Species of the World published in 2005, the mammalogist W. Christopher Wozencraft listed under C. lupus 36 wild subspecies, and proposed two additional subspecies: familiaris (Linnaeus, 1758) and dingo (Meyer, 1793). Wozencraft included hallstromi—the New Guinea singing dog—as a taxonomic synonym for the dingo. Wozencraft referred to a 1999 mitochondrial DNA (mtDNA) study as one of the guides in forming his decision, and listed the 38 subspecies of C. lupus under the biological common name of "wolf", the nominate subspecies being the Eurasian wolf (C. l. lupus) based on the type specimen that Linnaeus studied in Sweden. Studies using paleogenomic techniques reveal that the modern wolf and the dog are sister taxa, as modern wolves are not closely related to the population of wolves that was first domesticated. In 2019, a workshop hosted by the IUCN/Species Survival Commission's Canid Specialist Group considered the New Guinea singing dog and the dingo to be feral Canis familiaris, and therefore should not be assessed for the IUCN Red List. ### Evolution The phylogenetic descent of the extant wolf C. lupus from C. etruscus through C. mosbachensis is widely accepted. The earliest fossils of C. lupus were found in what was once eastern Beringia at Old Crow, Yukon, Canada, and at Cripple Creek Sump, Fairbanks, Alaska. The age is not agreed upon but could date to one million years ago. Considerable morphological diversity existed among wolves by the Late Pleistocene. They had more robust skulls and teeth than modern wolves, often with a shortened snout, a pronounced development of the temporalis muscle, and robust premolars. It is proposed that these features were specialized adaptations for the processing of carcass and bone associated with the hunting and scavenging of Pleistocene megafauna. Compared with modern wolves, some Pleistocene wolves showed an increase in tooth breakage similar to that seen in the extinct dire wolf. This suggests they either often processed carcasses, or that they competed with other carnivores and needed to consume their prey quickly. Compared with those found in the modern spotted hyena, the frequency and location of tooth fractures in these wolves indicates they were habitual bone crackers. Genomic studies suggest modern wolves and dogs descend from a common ancestral wolf population that existed 20,000 years ago. A 2021 study found that the Himalayan wolf and the Indian plains wolf are part of a lineage that is basal to other wolves and split from them 200,000 years ago. Other wolves appear to have originated in Beringia in an expansion that was driven by the huge ecological changes during the close of the Late Pleistocene. A study in 2016 indicates that a population bottleneck was followed by a rapid radiation from an ancestral population at a time during, or just after, the Last Glacial Maximum. This implies the original morphologically diverse wolf populations were out-competed and replaced by more modern wolves. A 2016 genomic study suggests that Old World and New World wolves split around 12,500 years ago followed by the divergence of the lineage that led to dogs from other Old World wolves around 11,100–12,300 years ago. 
An extinct Late Pleistocene wolf may have been the ancestor of the dog, with the dog's similarity to the extant wolf being the result of genetic admixture between the two. The dingo, Basenji, Tibetan Mastiff and Chinese indigenous breeds are basal members of the domestic dog clade. The divergence time for wolves in Europe, the Middle East, and Asia is estimated to be fairly recent at around 1,600 years ago. Among New World wolves, the Mexican wolf diverged around 5,400 years ago. ### Admixture with other canids In the distant past, there was gene flow between African wolves, golden jackals, and gray wolves. The African wolf is a descendant of a genetically admixed canid of 72% wolf and 28% Ethiopian wolf ancestry. One African wolf from the Egyptian Sinai Peninsula showed admixture with Middle Eastern gray wolves and dogs. There is evidence of gene flow between golden jackals and Middle Eastern wolves, less so with European and Asian wolves, and least with North American wolves. This indicates the golden jackal ancestry found in North American wolves may have occurred before the divergence of the Eurasian and North American wolves. The common ancestor of the coyote and the wolf admixed with a ghost population of an extinct unidentified canid. This canid was genetically close to the dhole and evolved after the divergence of the African hunting dog from the other canid species. The basal position of the coyote compared to the wolf is proposed to be due to the coyote retaining more of the mitochondrial genome of this unidentified canid. Similarly, a museum specimen of a wolf from southern China collected in 1963 showed a genome that was 12–14% admixed from this unknown canid. In North America, some coyotes and wolves show varying degrees of past genetic admixture. In more recent times, some male Italian wolves originated from dog ancestry, which indicates female wolves will breed with male dogs in the wild. In the Caucasus Mountains, ten percent of dogs including livestock guardian dogs, are first generation hybrids. Although mating between golden jackals and wolves has never been observed, evidence of jackal-wolf hybridization was discovered through mitochondrial DNA analysis of jackals living in the Caucasus Mountains and in Bulgaria. In 2021, a genetic study found that the dog's similarity to the extant gray wolf was the result of substantial dog-into-wolf gene flow, with little evidence of the reverse. ## Description The wolf is the largest extant member of the Canidae family, and is further distinguished from coyotes and jackals by a broader snout, shorter ears, a shorter torso and a longer tail. It is slender and powerfully built, with a large, deeply descending rib cage, a sloping back, and a heavily muscled neck. The wolf's legs are moderately longer than those of other canids, which enables the animal to move swiftly, and to overcome the deep snow that covers most of its geographical range in winter. The ears are relatively small and triangular. The wolf's head is large and heavy, with a wide forehead, strong jaws and a long, blunt muzzle. The skull is 230–280 mm (9–11 in) in length and 130–150 mm (5–6 in) in width. The teeth are heavy and large, making them better suited to crushing bone than those of other canids, though they are not as specialized as those found in hyenas. Its molars have a flat chewing surface, but not to the same extent as the coyote, whose diet contains more vegetable matter. 
Females tend to have narrower muzzles and foreheads, thinner necks, slightly shorter legs, and less massive shoulders than males. Adult wolves measure 105–160 cm (41–63 in) in length and 80–85 cm (31–33 in) at shoulder height. The tail measures 29–50 cm (11–20 in) in length, the ears 90–110 mm (3 1⁄2–4 3⁄8 in) in height, and the hind feet 220–250 mm (8 5⁄8–9 7⁄8 in). The size and weight of the modern wolf increase proportionally with latitude in accord with Bergmann's rule. The mean body mass of the wolf is 40 kg (88 lb), with the smallest specimen recorded at 12 kg (26 lb) and the largest at 79.4 kg (175 lb). On average, European wolves weigh 38.5 kg (85 lb), North American wolves 36 kg (79 lb), and Indian and Arabian wolves 25 kg (55 lb). Females in any given wolf population typically weigh 2.3–4.5 kg (5–10 lb) less than males. Wolves weighing over 54 kg (119 lb) are uncommon, though exceptionally large individuals have been recorded in Alaska and Canada. In central Russia, exceptionally large males can reach a weight of 69–79 kg (152–174 lb). ### Pelage The wolf has very dense and fluffy winter fur, with a short undercoat and long, coarse guard hairs. Most of the undercoat and some guard hairs are shed in spring and grow back in autumn. The longest hairs occur on the back, particularly on the front quarters and neck. Especially long hairs grow on the shoulders and almost form a crest on the upper part of the neck. The hairs on the cheeks are elongated and form tufts. The ears are covered in short hairs and project from the fur. Short, elastic and closely adjacent hairs are present on the limbs from the elbows down to the calcaneal tendons. The winter fur is highly resistant to the cold. Wolves in northern climates can rest comfortably in open areas at −40 °C (−40 °F) by placing their muzzles between the rear legs and covering their faces with their tail. Wolf fur provides better insulation than dog fur and does not collect ice when warm breath is condensed against it. In cold climates, the wolf can reduce the flow of blood near its skin to conserve body heat. The warmth of the foot pads is regulated independently from the rest of the body and is maintained at just above tissue-freezing point where the pads come in contact with ice and snow. In warm climates, the fur is coarser and scarcer than in northern wolves. Female wolves tend to have smoother-furred limbs than males and generally develop the smoothest overall coats as they age. Older wolves generally have more white hairs on the tip of the tail, along the nose, and on the forehead. Winter fur is retained longest by lactating females, although with some hair loss around their teats. Hair length on the middle of the back is 60–70 mm (2 3⁄8–2 3⁄4 in), and the guard hairs on the shoulders generally do not exceed 90 mm (3 1⁄2 in), but can reach 110–130 mm (4 3⁄8–5 1⁄8 in). A wolf's coat colour is determined by its guard hairs. Wolves usually have some hairs that are white, brown, gray and black. The coat of the Eurasian wolf is a mixture of ochreous (yellow to orange) and rusty ochreous (orange/red/brown) colours with light gray. The muzzle is pale ochreous gray, and the area of the lips, cheeks, chin, and throat is white. The top of the head, forehead, under and between the eyes, and between the eyes and ears is gray with a reddish film. The neck is ochreous. Long, black tips on the hairs along the back form a broad stripe, with black hair tips on the shoulders, upper chest and rear of the body.
The sides of the body, tail, and outer limbs are a pale dirty ochreous colour, while the inner sides of the limbs, belly, and groin are white. Apart from those wolves which are pure white or black, these tones vary little across geographical areas, although the patterns of these colours vary between individuals. In North America, the coat colours of wolves follow Gloger's rule, wolves in the Canadian arctic being white and those in southern Canada, the U.S., and Mexico being predominantly gray. In some areas of the Rocky Mountains of Alberta and British Columbia, the coat colour is predominantly black, some being blue-gray and some with silver and black. Differences in coat colour between sexes is absent in Eurasia; females tend to have redder tones in North America. Black-coloured wolves in North America acquired their colour from wolf-dog admixture after the first arrival of dogs across the Bering Strait 12,000 to 14,000 years ago. Research into the inheritance of white colour from dogs into wolves has yet to be undertaken. ## Ecology ### Distribution and habitat Wolves occur across Eurasia and North America. However, deliberate human persecution because of livestock predation and fear of attacks on humans has reduced the wolf's range to about one-third of its historic range; the wolf is now extirpated (locally extinct) from much of its range in Western Europe, the United States and Mexico, and completely in the British Isles and Japan. In modern times, the wolf occurs mostly in wilderness and remote areas. The wolf can be found between sea level and 3,000 m (9,800 ft). Wolves live in forests, inland wetlands, shrublands, grasslands (including Arctic tundra), pastures, deserts, and rocky peaks on mountains. Habitat use by wolves depends on the abundance of prey, snow conditions, livestock densities, road densities, human presence and topography. ### Diet Like all land mammals that are pack hunters, the wolf feeds predominantly on wild herbivorous hoofed mammals that can be divided into large size 240–650 kg (530–1,430 lb) and medium size 23–130 kg (51–287 lb), and have a body mass similar to that of the combined mass of the pack members. The wolf specializes in preying on the vulnerable individuals of large prey, with a pack of 15 able to bring down an adult moose. The variation in diet between wolves living on different continents is based on the variety of hoofed mammals and of available smaller and domesticated prey. In North America, the wolf's diet is dominated by wild large hoofed mammals (ungulates) and medium-sized mammals. In Asia and Europe, their diet is dominated by wild medium-sized hoofed mammals and domestic species. The wolf depends on wild species, and if these are not readily available, as in Asia, the wolf is more reliant on domestic species. Across Eurasia, wolves prey mostly on moose, red deer, roe deer and wild boar. In North America, important range-wide prey are elk, moose, caribou, white-tailed deer and mule deer. Wolves can digest their meal in a few hours and can feed several times in one day, making quick use of large quantities of meat. A well-fed wolf stores fat under the skin, around the heart, intestines, kidneys, and bone marrow, particularly during the autumn and winter. Nonetheless, wolves are not fussy eaters. Smaller-sized animals that may supplement their diet include rodents, hares, insectivores and smaller carnivores. They frequently eat waterfowl and their eggs. 
When such foods are insufficient, they prey on lizards, snakes, frogs, and large insects when available. Wolves in some areas may consume fish and even marine life. Wolves also consume some plant material. In Europe, they eat apples, pears, figs, melons, berries and cherries. In North America, wolves eat blueberries and raspberries. They also eat grass, which may provide some vitamins, but is most likely used mainly to induce vomiting to rid themselves of intestinal parasites or long guard hairs. They are known to eat the berries of mountain-ash, lily of the valley, bilberries, cowberries, European black nightshade, grain crops, and the shoots of reeds. In times of scarcity, wolves will readily eat carrion. In Eurasian areas with dense human activity, many wolf populations are forced to subsist largely on livestock and garbage. As prey in North America continue to occupy suitable habitats with low human density, North American wolves eat livestock and garbage only in dire circumstances. Cannibalism is not uncommon in wolves during harsh winters, when packs often attack weak or injured wolves and may eat the bodies of dead pack members. ### Interactions with other predators Wolves typically dominate other canid species in areas where they both occur. In North America, incidents of wolves killing coyotes are common, particularly in winter, when coyotes feed on wolf kills. Wolves may attack coyote den sites, digging out and killing their pups, though rarely eating them. There are no records of coyotes killing wolves, though coyotes may chase wolves if they outnumber them. According to a press release by the U.S. Department of Agriculture in 1921, the infamous Custer Wolf relied on coyotes to accompany him and warn him of danger. Though they fed from his kills, he never allowed them to approach him. Interactions have been observed in Eurasia between wolves and golden jackals, the latter's numbers being comparatively small in areas with high wolf densities. Wolves also kill red, Arctic and corsac foxes, usually in disputes over carcasses, sometimes eating them. Brown bears typically dominate wolf packs in disputes over carcasses, while wolf packs mostly prevail against bears when defending their den sites. Both species kill each other's young. Wolves eat the brown bears they kill, while brown bears seem to eat only young wolves. Wolf interactions with American black bears are much rarer because of differences in habitat preferences. Wolves have been recorded on numerous occasions actively seeking out American black bears in their dens and killing them without eating them. Unlike brown bears, American black bears frequently lose against wolves in disputes over kills. Wolves also dominate and sometimes kill wolverines, and will chase off those that attempt to scavenge from their kills. Wolverines escape from wolves in caves or up trees. Wolves may interact and compete with felids, such as the Eurasian lynx, which may feed on smaller prey where wolves are present and may be suppressed by large wolf populations. Wolves encounter cougars along portions of the Rocky Mountains and adjacent mountain ranges. Wolves and cougars typically avoid encountering each other by hunting at different elevations for different prey (niche partitioning). This is more difficult during winter. Wolves in packs usually dominate cougars and can steal their kills or even kill them, while one-to-one encounters tend to be dominated by the cat, who likewise will kill wolves. 
Wolves more broadly affect cougar population dynamics and distribution by dominating territory and prey opportunities and disrupting the feline's behaviour. Wolf and Siberian tiger interactions are well-documented in the Russian Far East, where tigers significantly depress wolf numbers, sometimes to the point of localized extinction. In Israel, Palestine, Central Asia and India, wolves may encounter striped hyenas, usually in disputes over carcasses. Striped hyenas feed extensively on wolf-killed carcasses in areas where the two species interact. One-to-one, hyenas dominate wolves, and may prey on them, but wolf packs can drive off single or outnumbered hyenas. There is at least one case in Israel of a hyena associating and cooperating with a wolf pack. ### Infections Viral diseases carried by wolves include: rabies, canine distemper, canine parvovirus, infectious canine hepatitis, papillomatosis, and canine coronavirus. In wolves, the incubation period for rabies is eight to 21 days, and results in the host becoming agitated, deserting its pack, and travelling up to 80 km (50 mi) a day, thus increasing the risk of infecting other wolves. Although canine distemper is lethal in dogs, it has not been recorded to kill wolves, except in Canada and Alaska. The canine parvovirus, which causes death by dehydration, electrolyte imbalance, and endotoxic shock or sepsis, is largely survivable in wolves, but can be lethal to pups. Bacterial diseases carried by wolves include: brucellosis, Lyme disease, leptospirosis, tularemia, bovine tuberculosis, listeriosis and anthrax. Although Lyme disease can debilitate individual wolves, it does not appear to significantly affect wolf populations. Leptospirosis can be contracted through contact with infected prey or urine, and can cause fever, anorexia, vomiting, anemia, hematuria, icterus, and death. Wolves are often infested with a variety of arthropod exoparasites, including fleas, ticks, lice, and mites. The most harmful to wolves, particularly pups, is the mange mite (Sarcoptes scabiei), though they rarely develop full-blown mange, unlike foxes. Endoparasites known to infect wolves include: protozoans and helminths (flukes, tapeworms, roundworms and thorny-headed worms). Most fluke species reside in the wolf's intestines. Tapeworms are commonly found in wolves, which they get through their prey, and generally cause little harm in wolves, though this depends on the number and size of the parasites, and the sensitivity of the host. Symptoms often include constipation, toxic and allergic reactions, irritation of the intestinal mucosa, and malnutrition. Wolves can carry over 30 roundworm species, though most roundworm infections appear benign, depending on the number of worms and the age of the host. ## Behaviour ### Social structure The wolf is a social animal. Its populations consist of packs and lone wolves, most lone wolves being temporarily alone while they disperse from packs to form their own or join another one. The wolf's basic social unit is the nuclear family consisting of a mated pair accompanied by their offspring. The average pack size in North America is eight wolves and in Europe 5.5 wolves. The average pack across Eurasia consists of a family of eight wolves (two adults, juveniles, and yearlings), or sometimes two or three such families, with examples of exceptionally large packs consisting of up to 42 wolves being known. Cortisol levels in wolves rise significantly when a pack member dies, indicating the presence of stress.
During times of prey abundance caused by calving or migration, different wolf packs may join together temporarily. Offspring typically stay in the pack for 10–54 months before dispersing. Triggers for dispersal include the onset of sexual maturity and competition within the pack for food. The distance travelled by dispersing wolves varies widely; some stay in the vicinity of the parental group, while other individuals may travel great distances of upwards of 206 km (128 mi), 390 km (240 mi), and 670 km (420 mi) from their natal (birth) packs. A new pack is usually founded by an unrelated dispersing male and female, travelling together in search of an area devoid of other hostile packs. Wolf packs rarely adopt other wolves into their fold and typically kill them. In the rare cases where other wolves are adopted, the adoptee is almost invariably an immature animal of one to three years old, and unlikely to compete for breeding rights with the mated pair. This usually occurs between the months of February and May. Adoptee males may mate with an available pack female and then form their own pack. In some cases, a lone wolf is adopted into a pack to replace a deceased breeder. Wolves are territorial and generally establish territories far larger than they require to survive assuring a steady supply of prey. Territory size depends largely on the amount of prey available and the age of the pack's pups. They tend to increase in size in areas with low prey populations, or when the pups reach the age of six months when they have the same nutritional needs as adults. Wolf packs travel constantly in search of prey, covering roughly 9% of their territory per day, on average 25 km/d (16 mi/d). The core of their territory is on average 35 km<sup>2</sup> (14 sq mi) where they spend 50% of their time. Prey density tends to be much higher on the territory's periphery. Except out of desperation, wolves tend to avoid hunting on the fringes of their range to avoid fatal confrontations with neighbouring packs. The smallest territory on record was held by a pack of six wolves in northeastern Minnesota, which occupied an estimated 33 km<sup>2</sup> (13 sq mi), while the largest was held by an Alaskan pack of ten wolves encompassing 6,272 km<sup>2</sup> (2,422 sq mi). Wolf packs are typically settled, and usually leave their accustomed ranges only during severe food shortages. Territorial fights are among the principal causes of wolf mortality, one study concluding that 14–65% of wolf deaths in Minnesota and the Denali National Park and Preserve were due to other wolves. ### Communication Wolves communicate using vocalizations, body postures, scent, touch, and taste. The phases of the moon have no effect on wolf vocalization, and despite popular belief, wolves do not howl at the moon. Wolves howl to assemble the pack usually before and after hunts, to pass on an alarm particularly at a den site, to locate each other during a storm, while crossing unfamiliar territory, and to communicate across great distances. Wolf howls can under certain conditions be heard over areas of up to 130 km<sup>2</sup> (50 sq mi). Other vocalizations include growls, barks and whines. Wolves do not bark as loudly or continuously as dogs do in confrontations, rather barking a few times and then retreating from a perceived danger. 
Aggressive or self-assertive wolves are characterized by their slow and deliberate movements, high body posture and raised hackles, while submissive ones carry their bodies low, flatten their fur, and lower their ears and tail. Scent marking involves urine, feces, and anal gland scents. This is more effective at advertising territory than howling and is often used in combination with scratch marks. Wolves increase their rate of scent marking when they encounter the marks of wolves from other packs. Lone wolves will rarely mark, but newly bonded pairs will scent mark the most. These marks are generally left every 240 m (260 yd) throughout the territory on regular travelways and junctions. Such markers can last for two to three weeks, and are typically placed near rocks, boulders, trees, or the skeletons of large animals. Raised leg urination is considered to be one of the most important forms of scent communication in the wolf, making up 60–80% of all scent marks observed. ### Reproduction Wolves are monogamous, mated pairs usually remaining together for life. Should one of the pair die, another mate is found quickly. With wolves in the wild, inbreeding does not occur where outbreeding is possible. Wolves become mature at the age of two years and sexually mature from the age of three years. The age of first breeding in wolves depends largely on environmental factors: when food is plentiful, or when wolf populations are heavily managed, wolves can rear pups at younger ages to better exploit abundant resources. Females are capable of producing pups every year, one litter annually being the average. Oestrus and rut begin in the second half of winter and last for two weeks. Dens are usually constructed for pups during the summer period. When building dens, females make use of natural shelters like fissures in rocks, cliffs overhanging riverbanks and holes thickly covered by vegetation. Sometimes, the den is the appropriated burrow of smaller animals such as foxes, badgers or marmots. An appropriated den is often widened and partly remade. On rare occasions, female wolves dig burrows themselves, which are usually small and short with one to three openings. The den is usually constructed not more than 500 m (550 yd) away from a water source. It typically faces southwards where it can be better warmed by sunlight exposure, and the snow can thaw more quickly. Resting places, play areas for the pups, and food remains are commonly found around wolf dens. The odor of urine and rotting food emanating from the denning area often attracts scavenging birds like magpies and ravens. Though they mostly avoid areas within human sight, wolves have been known to nest near domiciles, paved roads and railways. During pregnancy, female wolves remain in a den located away from the peripheral zone of their territories, where violent encounters with other packs are less likely to occur. The gestation period lasts 62–75 days, with pups usually being born in the spring months, or in early summer in very cold places such as on the tundra. Young females give birth to four to five young, and older females from six to eight young and up to 14. Their mortality rate is 60–80%. Newborn wolf pups look similar to German Shepherd Dog pups. They are born blind and deaf and are covered in short, soft grayish-brown fur. They weigh 300–500 g (10 1⁄2–17 3⁄4 oz) at birth and begin to see after nine to 12 days. The milk canines erupt after one month. Pups first leave the den after three weeks.
At one-and-a-half months of age, they are agile enough to flee from danger. Mother wolves do not leave the den for the first few weeks, relying on the fathers to provide food for them and their young. Pups begin to eat solid food at the age of three to four weeks. They have a fast growth rate during their first four months of life: during this period, a pup's weight can increase nearly 30 times. Wolf pups begin play-fighting at the age of three weeks, though unlike young coyotes and foxes, their bites are gentle and controlled. Actual fights to establish hierarchy usually occur at five to eight weeks of age. This is in contrast to young coyotes and foxes, which may begin fighting even before the onset of play behaviour. By autumn, the pups are mature enough to accompany the adults on hunts for large prey. ### Hunting and feeding Single wolves or mated pairs typically have higher success rates in hunting than do large packs; single wolves have occasionally been observed to kill large prey such as moose, bison and muskoxen unaided. The size of a wolf hunting pack is related to the number of pups that survived the previous winter, adult survival, and the rate of dispersing wolves leaving the pack. The optimal pack size for hunting elk is four wolves, and for bison a large pack size is more successful. Wolves move around their territory when hunting, using the same trails for extended periods. Wolves are nocturnal predators. During the winter, a pack will commence hunting in the twilight of early evening and will hunt all night, traveling tens of kilometres. Sometimes hunting large prey occurs during the day. During the summer, wolves generally tend to hunt individually, ambushing their prey and rarely giving pursuit. When hunting large gregarious prey, wolves will try to isolate an individual from its group. If successful, a wolf pack can bring down game that will feed it for days, but one error in judgement can lead to serious injury or death. Most large prey have developed defensive adaptations and behaviours. Wolves have been killed while attempting to bring down bison, elk, moose, muskoxen, and even by one of their smallest hoofed prey, the white-tailed deer. With smaller prey like beaver, geese, and hares, there is no risk to the wolf. Although people often believe wolves can easily overcome any of their prey, their success rate in hunting hoofed prey is usually low. The wolf must give chase and gain on its fleeing prey, slow it down by biting through thick hair and hide, and then disable it enough to begin feeding. Wolves may wound large prey and then lie around resting for hours before killing it when it is weaker due to blood loss, thereby lessening the risk of injury to themselves. With medium-sized prey, such as roe deer or sheep, wolves kill by biting the throat, severing nerve tracks and the carotid artery, thus causing the animal to die within a few seconds to a minute. With small, mouselike prey, wolves leap in a high arc and immobilize it with their forepaws. Once prey is brought down, wolves begin to feed excitedly, ripping and tugging at the carcass in all directions, and bolting down large chunks of it. The breeding pair typically monopolizes food to continue producing pups. When food is scarce, this is done at the expense of other family members, especially non-pups. Wolves typically commence feeding by gorging on the larger internal organs, like the heart, liver, lungs, and stomach lining. The kidneys and spleen are eaten once they are exposed, followed by the muscles. 
A wolf can eat 15–19% of its body weight in one sitting. ## Status and conservation The global wild wolf population in 2003 was estimated at 300,000. Wolf population declines have been arrested since the 1970s. This has fostered recolonization and reintroduction in parts of its former range as a result of legal protection, changes in land use, and rural human population shifts to cities. Competition with humans for livestock and game species, concerns over the danger posed by wolves to people, and habitat fragmentation pose a continued threat to the wolf. Despite these threats, the IUCN classifies the wolf as Least Concern on its Red List due to its relatively widespread range and stable population. The species is listed under Appendix II of the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES), meaning international trade in the species (including parts and derivatives) is regulated. However, populations in Bhutan, India, Nepal and Pakistan are listed in Appendix I, which prohibits commercial international trade in wild-sourced specimens. ### North America In Canada, 50,000–60,000 wolves live in 80% of their historical range, making Canada an important stronghold for the species. Under Canadian law, First Nations people can hunt wolves without restrictions, but others must acquire licenses for the hunting and trapping seasons. As many as 4,000 wolves may be harvested in Canada each year. The wolf is a protected species in national parks under the Canada National Parks Act. In Alaska, 7,000–11,000 wolves are found on 85% of the state's 1,517,733 km<sup>2</sup> (586,000 sq mi) area. Wolves may be hunted or trapped with a license; around 1,200 wolves are harvested annually. In the contiguous United States, wolf declines were caused by the expansion of agriculture, the decimation of the wolf's main prey species like the American bison, and extermination campaigns. Wolves were given protection under the Endangered Species Act (ESA) of 1973, and have since returned to parts of their former range thanks to both natural recolonizations and reintroductions in Yellowstone and Idaho. The repopulation of wolves in the Midwestern United States has been concentrated in the Great Lakes states of Minnesota, Wisconsin and Michigan, where wolves number over 4,000 as of 2018. Wolves also occupy much of the northern Rocky Mountains region and the northwest, with a total population over 3,000 as of the 2020s. In Mexico and parts of the southwestern United States, the Mexican and U.S. governments collaborated from 1977 to 1980 in capturing all Mexican wolves remaining in the wild to prevent their extinction and established captive breeding programs for reintroduction. As of 2023, the reintroduced Mexican wolf population numbers over 200 individuals. ### Eurasia Europe, excluding Russia, Belarus and Ukraine, has 17,000 wolves in more than 28 countries. In many countries of the European Union, the wolf is strictly protected under the 1979 Berne Convention on the Conservation of European Wildlife and Natural Habitats (Appendix II) and the 1992 Council Directive 92/43/EEC on the Conservation of Natural Habitats and of Wild Fauna and Flora (Annexes II and IV). There is extensive legal protection in many European countries, although there are national exceptions. Wolves have been persecuted in Europe for centuries, having been exterminated in Great Britain by 1684, in Ireland by 1770, in Central Europe by 1899, in France by the 1930s, and in much of Scandinavia by the early 1970s.
They continued to survive in parts of Finland, Eastern Europe and Southern Europe. Since 1980, European wolves have rebounded and expanded into parts of their former range. The decline of the traditional pastoral and rural economies seems to have ended the need to exterminate the wolf in parts of Europe. As of 2016, estimates of wolf numbers include: 4,000 in the Balkans, 3,460–3,849 in the Carpathian Mountains, 1,700–2,240 in the Baltic states, 1,100–2,400 in the Italian peninsula, and around 2,500 in the northwest Iberian peninsula as of 2007. In the former Soviet Union, wolf populations have retained much of their historical range despite Soviet-era large scale extermination campaigns. Their numbers range from 1,500 in Georgia, to 20,000 in Kazakhstan and up to 45,000 in Russia. In Russia, the wolf is regarded as a pest because of its attacks on livestock, and wolf management means controlling their numbers by destroying them throughout the year. Russian history over the past century shows that reduced hunting leads to an abundance of wolves. The Russian government has continued to pay bounties for wolves and annual harvests of 20–30% do not appear to significantly affect their numbers. In the Middle East, only Israel and Oman give wolves explicit legal protection. Israel has protected its wolves since 1954 and has maintained a moderately sized population of 150 through effective enforcement of conservation policies. These wolves have moved into neighboring countries. Approximately 300–600 wolves inhabit the Arabian Peninsula. The wolf also appears to be widespread in Iran. Turkey has an estimated population of about 7,000 wolves. Outside of Turkey, wolf populations in the Middle East may total 1,000–2,000. In southern Asia, the northern regions of Afghanistan and Pakistan are important strongholds for wolves. The wolf has been protected in India since 1972. The Indian wolf is distributed across the states of Gujarat, Rajasthan, Haryana, Uttar Pradesh, Madhya Pradesh, Maharashtra, Karnataka and Andhra Pradesh. As of 2019, it is estimated that there are around 2,000–3,000 Indian wolves in the country. In East Asia, Mongolia's population numbers 10,000–20,000. In China, Heilongjiang has roughly 650 wolves, Xinjiang has 10,000 and Tibet has 2,000. 2017 evidence suggests that wolves range across all of mainland China. Wolves have been historically persecuted in China but have been legally protected since 1998. The last Japanese wolf was captured and killed in 1905. ## Relationships with humans ### In culture #### In folklore, religion and mythology The wolf is a common motif in the mythologies and cosmologies of peoples throughout its historical range. The Ancient Greeks associated wolves with Apollo, the god of light and order. The Ancient Romans connected the wolf with their god of war and agriculture Mars, and believed their city's founders, Romulus and Remus, were suckled by a she-wolf. Norse mythology includes the feared giant wolf Fenrir, and Geri and Freki, Odin's faithful pets. In Chinese astronomy, the wolf represents Sirius and guards the heavenly gate. In China, the wolf was traditionally associated with greed and cruelty and wolf epithets were used to describe negative behaviours such as cruelty ("wolf's heart"), mistrust ("wolf's look") and lechery ("wolf-sex"). In both Hinduism and Buddhism, the wolf is ridden by gods of protection. In Vedic Hinduism, the wolf is a symbol of the night and the daytime quail must escape from its jaws. 
In Tantric Buddhism, wolves are depicted as inhabitants of graveyards and destroyers of corpses. In the Pawnee creation myth, the wolf was the first animal brought to Earth. When humans killed it, they were punished with death, destruction and the loss of immortality. For the Pawnee, Sirius is the "wolf star" and its disappearance and reappearance signified the wolf moving to and from the spirit world. Both Pawnee and Blackfoot call the Milky Way the "wolf trail". The wolf is also an important crest symbol for clans of the Pacific Northwest like the Kwakwakaʼwakw. The concept of people turning into wolves, and the inverse, has been present in many cultures. One Greek myth tells of Lycaon being transformed into a wolf by Zeus as punishment for his evil deeds. The legend of the werewolf has been widespread in European folklore and involves people willingly turning into wolves to attack and kill others. The Navajo have traditionally believed that witches would turn into wolves by donning wolf skins and would kill people and raid graveyards. The Dena'ina believed wolves were once men and viewed them as brothers. #### In fable and literature Aesop featured wolves in several of his fables, playing on the concerns of Ancient Greece's settled, sheep-herding world. His most famous is the fable of "The Boy Who Cried Wolf", which is directed at those who knowingly raise false alarms, and from which the idiomatic phrase "to cry wolf" is derived. Some of his other fables concentrate on maintaining the trust between shepherds and guard dogs in their vigilance against wolves, as well as anxieties over the close relationship between wolves and dogs. Although Aesop used wolves to warn, criticize and moralize about human behaviour, his portrayals added to the wolf's image as a deceitful and dangerous animal. The Bible uses an image of a wolf lying with a lamb in a utopian vision of the future. In the New Testament, Jesus is said to have used wolves as illustrations of the dangers his followers, whom he represents as sheep, would face should they follow him. Isengrim the wolf, a character first appearing in the 12th-century Latin poem Ysengrimus, is a major character in the Reynard Cycle, where he stands for the low nobility, whilst his adversary, Reynard the fox, represents the peasant hero. Isengrim is forever the victim of Reynard's wit and cruelty, often dying at the end of each story. The tale of "Little Red Riding Hood", first written in 1697 by Charles Perrault, is considered to have further contributed to the wolf's negative reputation in the Western world. The Big Bad Wolf is portrayed as a villain capable of imitating human speech and disguising itself with human clothing. The character has been interpreted as an allegorical sexual predator. Villainous wolf characters also appear in The Three Little Pigs and "The Wolf and the Seven Young Goats". The hunting of wolves, and their attacks on humans and livestock, feature prominently in Russian literature, and are included in the works of Leo Tolstoy, Anton Chekhov, Nikolay Nekrasov, Ivan Bunin, Leonid Pavlovich Sabaneyev, and others. Tolstoy's War and Peace and Chekhov's Peasants both feature scenes in which wolves are hunted with hounds and Borzois. The musical Peter and the Wolf involves a wolf being captured for eating a duck, but is spared and sent to a zoo. Wolves are among the central characters of Rudyard Kipling's The Jungle Book. 
His portrayal of wolves has been praised posthumously by wolf biologists for his depiction of them: rather than being villainous or gluttonous, as was common in wolf portrayals at the time of the book's publication, they are shown as living in amiable family groups and drawing on the experience of infirm but experienced elder pack members. Farley Mowat's largely fictional 1963 memoir Never Cry Wolf is widely considered to be the most popular book on wolves, having been adapted into a Hollywood film and taught in several schools decades after its publication. Although credited with having changed popular perceptions on wolves by portraying them as loving, cooperative and noble, it has been criticized for its idealization of wolves and its factual inaccuracies. ### Conflicts Human presence appears to stress wolves, as seen by increased cortisol levels in instances such as snowmobiling near their territory. #### Predation on livestock Livestock depredation has been one of the primary reasons for hunting wolves and can pose a severe problem for wolf conservation. As well as causing economic losses, the threat of wolf predation causes great stress on livestock producers, and no foolproof solution of preventing such attacks short of exterminating wolves has been found. Some nations help offset economic losses to wolves through compensation programs or state insurance. Domesticated animals are easy prey for wolves, as they have been bred under constant human protection, and are thus unable to defend themselves very well. Wolves typically resort to attacking livestock when wild prey is depleted. In Eurasia, a large part of the diet of some wolf populations consists of livestock, while such incidents are rare in North America, where healthy populations of wild prey have been largely restored. The majority of losses occur during the summer grazing period, untended livestock in remote pastures being the most vulnerable to wolf predation. The most frequently targeted livestock species are sheep (Europe), domestic reindeer (northern Scandinavia), goats (India), horses (Mongolia), cattle and turkeys (North America). The number of animals killed in single attacks varies according to species: most attacks on cattle and horses result in one death, while turkeys, sheep and domestic reindeer may be killed in surplus. Wolves mainly attack livestock when the animals are grazing, though they occasionally break into fenced enclosures. #### Competition with dogs A review of the studies on the competitive effects of dogs on sympatric carnivores did not mention any research on competition between dogs and wolves. Competition would favour the wolf, which is known to kill dogs; however, wolves usually live in pairs or in small packs in areas with high human persecution, giving them a disadvantage when facing large groups of dogs. Wolves kill dogs on occasion, and some wolf populations rely on dogs as an important food source. In Croatia, wolves kill more dogs than sheep, and wolves in Russia appear to limit stray dog populations. Wolves may display unusually bold behaviour when attacking dogs accompanied by people, sometimes ignoring nearby humans. Wolf attacks on dogs may occur both in house yards and in forests. Wolf attacks on hunting dogs are considered a major problem in Scandinavia and Wisconsin. Although the number of dogs killed each year by wolves is relatively low, it induces a fear of wolves entering villages and farmyards to prey on them. 
In many cultures, dogs are seen as family members, or at least working team members, and losing one can lead to strong emotional responses such as demanding more liberal hunting regulations. Dogs that are employed to guard sheep help to mitigate human–wolf conflicts, and are often proposed as one of the non-lethal tools in the conservation of wolves. Shepherd dogs are not particularly aggressive, but they can disrupt potential wolf predation by displaying what is to the wolf ambiguous behaviours, such as barking, social greeting, invitation to play or aggression. The historical use of shepherd dogs across Eurasia has been effective against wolf predation, especially when confining sheep in the presence of several livestock guardian dogs. Shepherd dogs are sometimes killed by wolves. #### Attacks on humans The fear of wolves has been pervasive in many societies, though humans are not part of the wolf's natural prey. How wolves react to humans depends largely on their prior experience with people: wolves lacking any negative experience of humans, or which are food-conditioned, may show little fear of people. Although wolves may react aggressively when provoked, such attacks are mostly limited to quick bites on extremities, and the attacks are not pressed. Predatory attacks may be preceded by a long period of habituation, in which wolves gradually lose their fear of humans. The victims are repeatedly bitten on the head and face, and are then dragged off and consumed unless the wolves are driven off. Such attacks typically occur only locally and do not stop until the wolves involved are eliminated. Predatory attacks can occur at any time of the year, with a peak in the June–August period, when the chances of people entering forested areas (for livestock grazing or berry and mushroom picking) increase. Cases of non-rabid wolf attacks in winter have been recorded in Belarus, Kirov and Irkutsk oblasts, Karelia and Ukraine. Also, wolves with pups experience greater food stresses during this period. The majority of victims of predatory wolf attacks are children under the age of 18 and, in the rare cases where adults are killed, the victims are almost always women. Indian wolves have a history of preying on children, a phenomenon called "child-lifting". They may be taken primarily in the spring and summer periods during the evening hours, and often within human settlements. Cases of rabid wolves are low when compared to other species, as wolves do not serve as primary reservoirs of the disease, but can be infected by animals such as dogs, jackals and foxes. Incidents of rabies in wolves are very rare in North America, though numerous in the eastern Mediterranean, the Middle East and Central Asia. Wolves apparently develop the "furious" phase of rabies to a very high degree. This, coupled with their size and strength, makes rabid wolves perhaps the most dangerous of rabid animals. Bites from rabid wolves are 15 times more dangerous than those of rabid dogs. Rabid wolves usually act alone, travelling large distances and often biting large numbers of people and domestic animals. Most rabid wolf attacks occur in the spring and autumn periods. Unlike with predatory attacks, the victims of rabid wolves are not eaten, and the attacks generally occur only on a single day. The victims are chosen at random, though most cases involve adult men. During the fifty years up to 2002, there were eight fatal attacks in Europe and Russia, and more than two hundred in southern Asia. 
#### Human hunting of wolves Theodore Roosevelt said wolves are difficult to hunt because of their elusiveness, sharp senses, high endurance, and ability to quickly incapacitate and kill hunting dogs. Historic methods included killing of spring-born litters in their dens, coursing with dogs (usually combinations of sighthounds, Bloodhounds and Fox Terriers), poisoning with strychnine, and trapping. A popular method of wolf hunting in Russia involves trapping a pack within a small area by encircling it with fladry poles carrying a human scent. This method relies heavily on the wolf's fear of human scents, though it can lose its effectiveness when wolves become accustomed to the odor. Some hunters can lure wolves by imitating their calls. In Kazakhstan and Mongolia, wolves are traditionally hunted with eagles and falcons, though this practice is declining, as experienced falconers are becoming few in number. Shooting wolves from aircraft is highly effective, due to increased visibility and direct lines of fire. Several types of dog, including the Borzoi and Kyrgyz Tajgan, have been specifically bred for wolf hunting. ### As pets and working animals Wolves and wolf-dog hybrids are sometimes kept as exotic pets. Although closely related to domestic dogs, wolves do not show the same tractability as dogs in living alongside humans, being generally less responsive to human commands and more likely to act aggressively. A person is more likely to be fatally mauled by a pet wolf or wolf-dog hybrid than by a dog.
52,624,147
Avenue Range Station massacre
1,101,020,088
Murder of a group of Aboriginal Australians by white settlers
[ "1848 in Australia", "1848 murders in Australia", "Deaths by firearm in South Australia", "Indigenous Australians in South Australia", "Limestone Coast", "Massacres in 1848", "Massacres of Indigenous Australians", "September 1848 events", "Settlers of South Australia" ]
The Avenue Range Station massacre was a murder of a group of Aboriginal Australians by white settlers during the Australian frontier wars. It occurred in about September 1848 at Avenue Range, a sheep station in the southeast of the Colony of South Australia. Information is scarce about the basic facts of the massacre, including the exact date and number of victims. A contemporary account of the massacre listed nine victims – three women, two teenage girls, three infants, and an "old man blind and infirm". Another account published by Christina Smith in 1880 gave the number of victims as eleven, and specified that they belonged to the Tanganekald people. Pastoralist James Brown and his overseer, a man named Eastwood, were suspected of committing the murders in retaliation for attacks on Brown's sheep. In January 1849, reports of the massacre reached Matthew Moorhouse, the Protector of Aborigines. He visited the district to investigate the claims, and based on his enquiries Brown was charged with the murders in March 1849. Proceedings against Brown began in June 1849 and continued in the Supreme Court of South Australia for several months, but were eventually abandoned. Some key witnesses, including Eastwood, either fled the colony or refused to cooperate with the investigation. There were also significant restrictions on the use of evidence given by Aboriginal witnesses, especially where a verdict could involve capital punishment. These legal hurdles and settler solidarity ensured the case did not go to trial, although the magistrate who committed him for trial told a friend that there was "little question of the butchery or the butcher". Although the details of the case were known for decades after the murders, distortions of the massacre eventually appeared in print and were embellished by local white and Aboriginal historians. Two key aspects of these later accounts were that Brown poisoned rather than shot the victims, and that he had undertaken an epic horse ride to Adelaide to establish an alibi. Historians Robert Foster, Rick Hosking and Amanda Nettelbeck contend that these "pioneer legend" alterations downplayed the seriousness of the crime. ## Background The colony of South Australia was established by white settlers in December 1836. In early 1839, settlers began spreading out from the capital, Adelaide. That May, James Brown and his brother Archibald arrived in the colony from Scotland. The following year, they took up a property in the Encounter Bay district about 100 kilometres (62 mi) south of the capital. James then branched off on his own and established Avenue Range, a 178-square-kilometre (69 sq mi) pastoral run in the Guichen Bay district, about 270 kilometres (170 mi) southeast of Adelaide. In common with other areas of Australia, settlers on the frontier in southeast South Australia employed various tactics to deal with Aboriginal resistance to being forced off their land. Initially, settlers attempted to keep them at a distance using threats of violence, but they soon escalated to using actual violence, hoping that by terrorising them they could prevent them from interfering with stock and other property. Violence by settlers towards Aboriginal people often went unreported to the authorities, and became more secretive after a settler was hanged in March 1847 for the murder of an Aboriginal man in the colony's southeast – the only such sentence carried out in the history of South Australia's early white settlement. 
This undeclared and covert fighting between settlers and Aboriginal people in South Australia is considered part of the Australian frontier wars. ## Massacre, investigation and legal proceedings Reports were received by the colonial authorities in January 1849 that some Aboriginal people had been killed near Guichen Bay. On 19 February, the Protector of Aborigines, Matthew Moorhouse arrived in the district to investigate. Moorhouse's role was to safeguard the rights and interests of Aboriginal people within the colony. He began his inquiries assisted by a mounted policeman, an interpreter, and an Aboriginal guide. An Aboriginal witness, Leandermin, took Moorhouse to the site of the alleged crime. He told Moorhouse that, on the day the killings occurred, he and a white man named Parker were walking along a road when they heard shots. He went to see what was happening and, from behind some trees, saw four or five Aboriginal women lying on the ground with fresh wounds. He also saw others on the ground, whom he presumed were dead because they were not moving. Two white men were present. Leandermin identified Brown as one of them, and stated that Brown had a gun in his hand. Brown's overseer, Eastwood, was suspected of being his accomplice. Moorhouse and his party then examined the scene, locating five holes containing human remains. Scattered nearby they found human bones and cartridge paper discharged from a firearm. Eighty paces from the graves they located the remains of a fire which contained more bones. Moorhouse concluded that the bodies had first been buried, but later exhumed and burnt in an attempt to destroy the evidence. The date of the massacre is unclear. Moorhouse's original report in March 1849 stated that it had occurred some months before, and in his published report of October 1849, he placed it "about September" of 1848. On 1 March 1849, Brown was charged with the murder of "unknown aboriginal natives". In late March or early April he appeared before a local magistrate in the district, Captain G. V. Butler, who committed him for trial. In May, Butler wrote a letter to Charles Hervey Bagot, a member of the South Australian Legislative Council, in which he listed the victims as one "old man blind and infirm", three female adults, two teenage girls (aged 15 and 12 years), and three female children (aged two years, 18 months, and a baby). Butler added that there was "little question of the butchery or the butcher". Brown's trial came before the Supreme Court in Adelaide on 11 June 1849. The presiding judge considered that the evidence presented was insufficient, and gave the prosecution another week to investigate. The weakness of the case was directly related to the provisions of the Aboriginal Witnesses Act of 1848 regarding testimony given by Aboriginal witnesses. It was generally believed that Aboriginal people could not understand the oath, but the Act allowed unsworn testimony to be offered by Aboriginal witnesses, with two significant limitations. The court could determine the weight and credibility to be given to Aboriginal testimony, but even more telling was the restriction that when the punishment for a crime was death or transportation, the evidence of an "uncivilised person or persons" was considered insufficient unless corroborated by other evidence. 
A week later, the judge remained unconvinced about the strength of the prosecution, but given "great suspicion rested on the case", he gave the prosecution a further extension of time, and released Brown on bail of £500. In July 1849, the South Australian Advocate General produced a summary of the investigation to date. Several difficulties were detailed, including the fact that Parker denied any knowledge of the crime, as did others who were believed to have heard the incident, discussed in Brown's presence. Brown's co-accused, Eastwood, alias "Yorkie", had fled when the investigation began and had apparently left the colony aboard a whaling ship off Kangaroo Island. An important witness named Joice had gone to the neighbouring Port Phillip District of the colony of New South Wales, and Leandermin himself, who it appears was being detained at Guichen Bay, absconded and had allegedly been "made away with". The remaining witnesses were those that knew Brown, and apparently would not give evidence against him. Despite the extremely difficult task faced by the prosecution under these circumstances, the Advocate General ordered that investigations continue and issued warrants for the arrest of those that had fled South Australia. Brown appeared at the Supreme Court yet again on 10 and 28 September, but the judge again refused to hear the case without further evidence. By the November sittings of the court, Brown's case had been removed from the listings, and this was the end of the matter as far as the formal investigation was concerned. Effectively, settler solidarity and the law of evidence ensured that Brown was never tried for the murders, despite the fact that those involved in the investigation had no doubt of his guilt. Possibly in response to Brown's case, the Aboriginal Witnesses Act of 1848 was amended in July 1849 to allow a person to be convicted on the sole testimony of an Aboriginal person. ## Later account In 1880, while Brown was still alive, the lay missionary Christina Smith wrote a book, The Booandik Tribe of South Australian Aborigines, which was published in Adelaide. It included memoirs from her time in the Rivoli Bay district, some 50 kilometres (31 mi) southeast of Guichen Bay. One of these was about an Aboriginal boy called Wergon, whom she had adopted in the 1840s. Wergon had converted to Christianity and went on several journeys to try to convert the local Aboriginal people. One of these evangelical visits was to the country of the Wattatonga tribe, a group whose traditional lands included the newly established Avenue Range Station. He returned from this trip to tell Smith that eleven of the tribe had been massacred by two white men. Wergon had persuaded a witness whose parents had apparently been killed in the massacre to return with him. According to Wergon, "the white men had shown no mercy to the grey-headed old man or to the helpless infant on its mother's breast", and the apparent motive for the massacre was the killing of sheep belonging to a settler in the Guichen Bay district. Smith's account did not include how the massacre was carried out, but did include the details that it was investigated, the bodies of the victims were burnt, and the murderer was discharged for lack of evidence. Smith did not name the settlers. In their 2001 book, the historians Robert Foster, Rick Hosking and Amanda Nettelbeck considered this unsurprising, given that Brown was alive and living nearby at the time Smith's book was published. 
While circumspect about naming Brown, Smith essentially recounted important details of the massacre in her book published over thirty years after it occurred. ## Legend of James Brown Brown became wealthy through his pastoral interests, and by the 1860s had expanded his holdings to 470 square kilometres (180 sq mi). He died in February 1890, and three years later Simpson Newland published his novel, Paving the Way, a Romance of the Australian Bush. It contained a fictionalisation of actual events on the frontier. One of the stories in the book is that of Roland Grantley, a pastoral property owner, and his overseer "Darkie", also a white man. In this account, Grantley and Darkie shoot a group of Aboriginal people in response to attacks on shepherds and the killing of stock. The story says that 10–12 Aboriginal people were killed in the massacre, and many more wounded. The bodies are burnt. The police investigate, and Darkie, not wanting to implicate his employer, flees the district on Grantley's prize horse, using his bush skills to elude his pursuers. He swims the horse across the mouth of the Murray River, then finds passage on a whaling ship off Kangaroo Island. This account, though using fictional names, essentially retold basic known facts of the case against Brown. In Rodney Cockburn's 1927 book, The Pastoral Pioneers of South Australia, Brown was described as a great benefactor, whose deceased estate had been used by his widow to establish "two great charitable institutions", the Kalyra Consumptive Home at Belair, and Estcourt House Convalescent Home at Grange. Cockburn remarked on the lack of publicity enjoyed by Brown, and explained that he had "received a severe set back" early in his career after being accused of "poisoning a blackfellow". He went on to claim that Brown was found not guilty of the murder by a jury, and defended Brown's reputation, stating that the poisoning incident was part and parcel of "the circumstances and conditions of the day". Foster et al. note that there is no evidence of Brown being involved in a poisoning, despite poisonings of Aboriginal people by white settlers having occurred in the southeast of the colony prior to Brown's arrival. They further note Brown also only ever appeared in court on one matter, that of the shooting of the Aboriginal group in 1848, which never went to trial. They surmise that the shooting may have become mixed up in some people's minds with a poisoning that occurred on the west coast of the colony in the same year which also received significant press coverage. In 1939, Clement Smith, the Member of Parliament for the electoral district of Victoria, which covered much of the southeast of the now state of South Australia, mentioned Brown during a speech in State Parliament. He stated that the story was in Paving the Way, referred to "many natives" being slaughtered, and related that he had personally seen "large quantities of bones of those natives" when crossing the swamps where they were shot down. He then spoke of Brown making an epic pony ride from his station to Adelaide, swimming across the Murray, to report to the police to meet his bail conditions. Foster et al. note that although Smith was telling the story to illustrate violence against Aboriginal people on the frontier in the context of a parliamentary debate about giving Aboriginal people greater rights, he turned it instead into "an account of a pioneer's heroic horse ride". 
They also note that while Smith may well have seen bones, they could not have been those of Brown's victims, as they had been collected as evidence by Moorhouse and it is highly unlikely they were returned to the scene. In 1944, a local historian, J. G. Hastings, wrote a manuscript, entitled The History of the Coorong, which proved influential in the development of the legend of James Brown. In it, Hastings elaborated on the poisoning incident in which Brown had supposedly been involved. By this account, Brown, responding to repeated attacks on his livestock, poisoned some flour and left it accessible to local Aboriginal people, who subsequently stole the flour and ate it, many dying as a result. Having set the trap, Brown then rode for Adelaide to establish an alibi, swimming across the Murray Mouth on his way. According to Hastings, when the case was investigated by the police, Brown's ride to Adelaide "set him in good stead". Hastings wrote that at Brown's trial, it was claimed that it was impossible that Brown had poisoned the flour and appeared in Adelaide immediately thereafter. Hastings claimed that there was no mention of the case in police records held by archives in Adelaide, but that instead, the incident had been related to him by residents of the district who had known Brown. He also placed it as having occurred between 1860 and 1870. Foster et al. note that this date is highly unlikely, as frontier violence in the district began when squatters arrived in 1843, and was tapering off by 1848 when the murders occurred. By the 1860s, Aboriginal labour was highly sought after by pastoralists in the district due to the exodus of white labourers to the Victorian gold rush. From the 1950s onwards, stories about Brown were often included in local histories of the southeast of South Australia. Examples include Elma Smith's History of Kingston published in 1950, and an account by local historian Verne McLaren in Kingston Flashbacks published in 1970. In Smith's version, the original crime was not discussed, and the focus was on the horse ride to Adelaide, and McLaren's account split the legend into two, one about a poisoning by Brown, and another in which Brown and an accomplice cornered Aboriginal people in the Papineau caves, smoked them out and shot them. The second of McLaren's stories ended with Brown's famous ride to Adelaide to report to the police. Another account was included in Barry Durman's A History of the Baker's Range Settlement, published in 1978. Durman reproduced a newspaper article about the investigation into Brown's involvement in the murders, passed over the manner by which the murders were carried out and stated that the case was not proven, and then told of a "wonderful feat of horsemanship" by Brown. Aboriginal versions of the story also exist. In 1987, an elderly Aboriginal man from nearby Kingston related a version of the story, in which Brown shot some Aboriginal people and poisoned others because they were stealing his sheep. Brown then established his alibi by riding to Adelaide. In the same year Hastings' account was reproduced verbatim in Tom McCourt and Hans Mincham's The Coorong and Lakes of the Lower Murray. Three years later, the South Australian Education Department published an Aboriginal studies book for secondary students. It repeated Hastings' version, and followed it with an Aboriginal version told by a Ngarrindjeri man, George Trevorrow, whose family were from the Guichen Bay area. 
Trevorrow described the exploitation of Aboriginal workers by white settlers, and stated that when times were lean, Aboriginal people would steal flour from landowners. He did not name Brown, but stated that a settler poisoned some flour and rode for Adelaide to establish an alibi. Foster et al. note that although these two accounts were similar, the difference was that Hastings portrayed Brown as a victim of Aboriginal attacks, while Trevorrow emphasised white exploitation of Aboriginal people. As of 2022, the James Brown Memorial Trust, which was formed from Brown's estate and incorporated by Act of Parliament in December 1894, continues to operate under the name Kalyra Communities as an aged care service provider in South Australia. Manning's Place Names of South Australia records that "Kalyra" is the "native name of Mr James Brown's station in the South-East better known as Avenue Range station. It is the Aboriginal name for a certain kind of wood from which the natives made spears and other weapons." ## Evolution of the story Foster et al. argue that it is unlikely there were two incidents, a shooting that was investigated, and a poisoning that was not, because Brown's appearance before the court is common to both stories, but is known to have only occurred in the case of the shooting murders. They advance the view that even if both incidents did occur, the original story of the shooting murders has been transformed through the filter of the "pioneer legend" into one where Brown is remembered not for committing an atrocity, but for an epic horse ride. This transformation included the morphing of a cold-blooded shooting into a sly and passive "set and forget" poisoning, where the Aboriginal victims are "complicit in their own demise" by stealing the flour. They arrive at the judgement that, by focussing on a poisoning rather than a shooting, the story plays down the seriousness of the crime. Further, they observe that the murder of Aboriginal people effectively became a plot device, while the horse ride became central to the legend. They conclude that the evolution of the story is "testament to the influence of the 'pioneer legend' in shaping White Australia's view of the past".
1,137,956
Sweet Track
1,171,463,000
Ancient causeway in the Somerset Levels, England
[ "1970 archaeological discoveries", "4th-millennium BC architecture", "Ancient trackways in England", "Archaeological discoveries in the United Kingdom", "Archaeological sites in Somerset", "Causeways in Europe", "Civil engineering", "Footpaths in Somerset", "History of Somerset", "Neolithic Europe", "Prehistoric objects in the British Museum", "Prehistoric wooden trackways in Europe", "Scheduled monuments in Sedgemoor", "Stone Age sites in England", "Structures on the Heritage at Risk register in Somerset" ]
The Sweet Track is an ancient trackway, or causeway, in the Somerset Levels, England, named after its finder, Ray Sweet. It was built in 3807 BC (determined using dendrochronology) and is the second-oldest timber trackway discovered in the British Isles, dating to the Neolithic. The Sweet Track was predominantly built along the course of an earlier structure, the Post Track. The track extended across the now largely drained marsh between what was then an island at Westhay and a ridge of high ground at Shapwick, a distance close to 1,800 metres (5,900 ft) or around 1.1 mi. The track is one of a network that once crossed the Somerset Levels. Various artifacts and prehistoric finds, including a jadeitite ceremonial axe head, have been found in the peat bogs along its length. Construction was of crossed wooden poles, driven into the waterlogged soil to support a walkway that consisted mainly of planks of oak, laid end-to-end. The track was used for a period of only around ten years and was then abandoned, probably due to rising water levels. Following its discovery in 1970, most of the track has been left in its original location, with active conservation measures taken, including a water pumping and distribution system to maintain the wood in its damp condition. Some of the track is stored at the British Museum and at the Museum of Somerset in Taunton. A reconstruction has been made on which visitors can walk, on the same line as the original, in Shapwick Heath National Nature Reserve. ## Location In the early fourth millennium BC the track was built between an island at Westhay and a ridge of high ground at Shapwick close to the River Brue. A group of mounds at Westhay mark the site of prehistoric lake dwellings, which were likely to have been similar to those found in the Iron Age Glastonbury Lake Village near Godney, itself built on a morass on an artificial foundation of timber filled with brushwood, bracken, rubble, and clay. The remains of similar tracks have been uncovered nearby, connecting settlements on the peat bog; they include the Honeygore, Abbotts Way, Bells, Bakers, Westhay, and Nidons trackways. Sites such as the nearby Meare Pool provide evidence that the purpose of these structures was to enable easier travel between the settlements. Investigation of the Meare Pool indicates that it was formed by the encroachment of raised peat bogs around it, particularly during the Subatlantic climatic period (1st millennium BC), and core sampling demonstrates that it is filled with at least 2 metres (6.6 ft) of detritus mud. The two Meare Lake Villages within Meare Pool appear to originate from a collection of structures erected on the surface of the dried peat, such as tents, windbreaks and animal folds. Clay was later spread over the peat, providing raised stands for occupation, industry and movement, and in some areas thicker clay spreads accommodated hearths built of clay or stone. ## Discovery and study The track was discovered in 1970 during peat excavations and is named after its finder, Ray Sweet. The company for which he worked, E. J. Godwin, sent part of a plank from the track to John Coles, an assistant lecturer in archaeology at Cambridge University, who had carried out some excavations on nearby trackways. Coles' interest in the trackways led to the Somerset Levels Project, which ran from 1973 to 1989, funded by various donors including English Heritage. 
The project undertook a range of local archaeological activities, and established the economic and geographic significance of various trackways from the third and first millennia BC. The work of John Coles, Bryony Coles, and the Somerset Levels Project was recognised in 1996 when they won the Imperial Chemical Industries (ICI) Award for the best archaeological project offering a major contribution to knowledge, and in 2006 with the European Archaeological Heritage Prize. Dendrochronology (tree-ring dating) of the timbers has enabled precise dating of the track, showing it was built in 3807 BC. This dating led to claims that the Sweet Track was the oldest roadway in the world, until the discovery in 2009 of a 6,000-year-old trackway built in 4100 BC, in Plumstead, near Belmarsh prison. Analysis of the Sweet Track's timbers has aided research into Neolithic Era dendrochronology; comparisons with wood from the River Trent and a submerged forest at Stolford enabled a fuller mapping of the rings, and their relationship with the climate of the period. The wood used to build the track is now classed as bog-wood, the name given to wood (of any source) that for long periods (sometimes hundreds of thousands of years) has been buried in peat bogs, and kept from decaying by the acidic and anaerobic bog conditions. Bog-wood usually is stained brown by tannins dissolved in the acidic water, and represents an early stage of fossilisation. The age of the track prompted large-scale excavations in 1973, funded by the Department of the Environment. In 1973 a jadeitite axehead was found alongside the track; it is thought to have been placed there as an offering. One of over 100 similar axe heads found in Britain and Ireland, its good condition and its precious material suggest that it was a symbolic axe, rather than one used to cut wood. Because of the difficulty of working this material, which was derived from the Alpine area of Europe, all the axe heads of this type found in Great Britain are thought to have been non-utilitarian and to have represented some form of currency or be the products of gift exchange. Radiocarbon dating of the peat in which the axe head was discovered suggests that it was deposited in about 3200 BC. Wooden artefacts found at the site include paddles, a dish, arrow shafts, parts of four hazel bows, a throwing axe, yew pins, digging sticks, a mattock, a comb, toggles, and a spoon fragment. Finds made from other materials, such as flint flakes, arrowheads, and a chipped flint axe (in mint condition) have also been made. A geophysical survey of the area in 2008 showed unclear magnetometer data; the wood may be influencing the peat's hydrology, causing the loss or collection of minerals within the pore water and peat matrix. ## Builders The community that constructed the trackway were Neolithic farmers who had colonised the area around 3900 BC, and the evidence suggests that they were, by the time of construction, well organised and settled. Before this human incursion, the uplands surrounding the levels were heavily wooded, but local inhabitants began to clear these forests about this time to make way for an economy that was predominately pastoral with small amounts of cultivation. 
During the winter, the flooded areas of the levels would have provided this fishing, hunting, foraging and farming community with abundant fish and wildfowl; in the summer, the drier areas provided rich, open grassland for grazing cattle and sheep, reeds, wood, and timber for construction, and abundant wild animals, birds, fruit, and seeds. The need to reach the islands in the bog was sufficiently pressing for them to mount the enormous communal activity required for the task of stockpiling the timber and building the trackway, presumably when the waters were at their lowest after a dry period. The work required for the construction of the track demonstrates that they had advanced woodworking skills and suggests some differentiation of occupation among the workers. They also appear to have been managing the surrounding woodland for at least 120 years. ## Construction Built in 3807 or 3806 BC, the track was a walkway consisting mainly of planks of oak laid end-to-end, supported by crossed pegs of ash, oak, and lime, driven into the underlying peat. The planks, which were up to 40 centimetres (16 in) wide, 3 metres (9.8 ft) long and less than 5 centimetres (2.0 in) thick, were cut from trees up to 400 years old and 1 metre (39 in) in diameter, felled and split using only stone axes, wooden wedges, and mallets. The length, straightness, and lack of forks or branches in the pegs suggest that they were taken from coppiced woodland. Longitudinal log rails up to 6.1 metres (20 ft) long and 7.6 centimetres (3.0 in) in diameter, made mostly of hazel and alder, were laid down and held in place with the pegs, which were driven at an angle across the rails and into the peat base of the bog. Notches were then cut into the planks to fit the pegs, and the planks were laid along the X shapes to form the walkway. In some places a second rail was placed on top of the first one to bring the plank above it level with the rest of the walkway. Some of the planks were then stabilised with slender, vertical wooden pegs driven through holes cut near the end of the planks and into the peat, and sometimes the clay, beneath. At the southern end of the construction smaller trees were used, and the planks were split across the grain to utilise the full diameter of the trunk. Fragments of other tree species including holly, willow, poplar, dogwood, ivy, birch, and apple have also been found. The wetland setting indicates that the track components must have arrived prefabricated before being assembled on site, although the presence of wood chips and chopped branches indicates that some trimming was performed locally. The track was constructed from about 200,000 kilograms (440,000 lb) of timber, but Coles estimates that once the materials were transported to the site, ten men could have assembled it in one day. The Sweet Track was used only for about ten years; rising water levels may have engulfed it, and therefore curtailed its use. The variety of objects found alongside the track suggests that it was in daily use as part of the farming life of the community. Since its discovery, it has been determined that parts of the Sweet Track were built along the route of an even earlier track, the Post Track, which was constructed thirty years earlier in 3838 BC. ## Conservation Most of the track remains in its original location, which is now within the Shapwick Heath biological Site of Special Scientific Interest and National Nature Reserve.
Following purchase of land by the National Heritage Memorial Fund, and installation of a water pumping and distribution system along a 500-metre (1,600 ft) section, several hundred metres of the track's length are now being actively conserved. This method of preserving wetland archaeological remains (maintaining a high water table and saturating the site) is rare. A 500-metre (1,600 ft) section, which lies within the land owned by the Nature Conservancy Council, has been surrounded by a clay bank to prevent drainage into surrounding lower peat fields, and water levels are regularly monitored. The viability of this method is demonstrated by comparing it with the nearby Abbot's Way, which has not had similar treatment, and which in 1996 was found to have become dewatered and desiccated. Evaluation and maintenance of water levels in the Shapwick Heath Nature Reserve involves the Nature Conservancy Council, the Department for Environment, Food and Rural Affairs, and the Somerset Levels Project. Although the wood recovered from the Levels was visually intact, it was extremely degraded and very soft. Where possible, pieces of wood in good condition, or the worked ends of pegs, were taken away and conserved for later analysis. The conservation process involved keeping the wood in heated tanks in a solution of polyethylene glycol and, by a process of evaporation, gradually replacing the water in the wood with the wax over a period of about nine months. After this treatment the wood was removed from the tank and wiped clean. As the wax cooled and hardened, the artefact became firm and could be handled freely. A section of the track on land owned by Fisons (who extracted peat from the area) was donated to the British Museum in London. Although this short section can be assembled for display purposes, it is currently kept in store, off site, and under controlled conditions. A reconstructed section was displayed at the Peat Moors Centre near Glastonbury. The centre was run by the Somerset Historic Environment Service, but was closed in October 2009 as a result of budget cuts imposed by Somerset County Council. The main exhibits are extant, but future public access is uncertain. Other samples of the track are held in the Museum of Somerset. Sections of the track have been designated as a scheduled monument, meaning that it is a "nationally important" historic structure and archaeological site protected against unauthorised change. These sections are also included in Historic England's Heritage at Risk Register.
52,308,092
Laundromat (song)
1,154,380,037
2003 single by Nivea
[ "2002 songs", "2003 singles", "Jive Records singles", "Nivea (singer) songs", "Song recordings produced by R. Kelly", "Songs about heartache", "Songs written by R. Kelly" ]
"Laundromat" is a song by American singer Nivea from her 2002 self-titled debut album. Jive released it in the UK as a double A-side single along with "Don't Mess With My Man" on April 28, 2003. R. Kelly wrote and produced "Laundromat", and performed some uncredited vocals on the recording, which is an R&B and pop track. It was recorded and mixed in Chicago, and was one of the last songs to be produced for the album. The track is structured as a telephone call in which Nivea breaks up with her boyfriend, who is played by Kelly. The lyrics use the laundromat as a metaphor for the washing away of an old relationship. Critics praised Kelly for his contributions to the song. In the US, "Laundromat" peaked at number 58 on the Billboard Hot 100 chart and at number 92 on the Billboard Year-End chart for Hot R&B/Hip-Hop Songs. The single received heavy airplay and appeared on the Hot R&B/Hip-Hop Songs Billboard chart; journalists cited this success as evidence that Kelly's 2002 arrest for possessing child pornography did not hurt his career. "Laundromat" further reached number 33 on the UK Singles Chart, number 89 on the Scottish Singles Chart, and number 98 on the European Hot 100 Singles chart. "Laundromat" was supported through live performances and a music video, although Nivea did not perform it with Kelly. The song's accompanying music video was filmed in a laundromat and includes Nick Cannon in Kelly's role. In 2013, Solange Knowles covered "Laundromat" in a laundromat; her performance was praised by critics, who enjoyed the song choice and the venue. ## Background and recording After Nivea's debut solo single "Don't Mess With the Radio" underperformed in 2001, her record label Jive delayed the US release of her self-titled debut album to the following year. During this time, she worked with R. Kelly on the songs "Ya Ya Ya", "The One for Me", and "Laundromat" for the album; these were the last tracks to be added to Nivea prior to its release. Although Nivea was delayed in the US, Jive released the album internationally with a different track list, which did not include Kelly's songs. Kelly wrote, arranged, and produced "Laundromat", and performed uncredited vocals on the track. In a 2003 Sister 2 Sister interview, Nivea said she did not know Kelly had not been credited and denied claims this was done to hide his involvement with the song. "Laundromat" was recorded and mixed at Rock Land recording studio, Chicago. Kelly mixed the song with Ian Mereness, who programmed it with Abel Garibaldi. Jason Mlodzinski assisted with mixing and programming. Andy Gallas recorded the track with Mereness and Garibaldi. Tom Coyne mastered all of the songs for Nivea, including "Laundromat". ## Music and lyrics "Laundromat" is a four-minute, 25-second R&B and pop song performed in the style of a slow jam. Billboard's Chuck Taylor reviewed the single as a pop track, describing it as straddling "the line between straight-up R&B and modern pop". Taylor wrote that, after the opening, "Laundromat" moves into "dreamy vocal layers" and a "slow-grooving, sing-songy chorus". AllMusic's Alex Henderson said "Laundromat" has "some '70s sweet soul influence" but noted that Nivea did not follow the same neo soul trend as Mary J. Blige, Jaguar Wright, Alicia Keys, and Jill Scott. Writing for Vibe, Laura Checkoway referred to "Laundromat" as "an R&B jam-meets-detergent jingle". In her review of the album, Checkoway characterized Nivea's vocals as a "sugary soprano". 
The song uses the sounds of bursting bubbles and dripping water as a part of its instrumental; Rolling Stone writers Hank Shteamer, Elias Leight, and Brittany Spanos identified a contrast between the single's sound and its lyrics, writing: "The bubbly funk arrangement can't conceal the song's tragic core." "Laundromat" is about the break-up of a relationship that occurs during a telephone call. The song's lyrics use the laundromat as a metaphor for the washing away of an old relationship, as exemplified in the chorus: "Soap, powder, bleach, towels / Fabric softener, dollars, change / Pants, socks, dirty drawers / I'm headed to the laundromat." The song opens with Nivea arguing with Kelly, who is referred to as Keith; she calls him "a lying, cheating, son of a ..." and later hangs up on him. The New York Times's Neil Strauss described the track as having "a half-spoken and half-sung arrangement", and likened the chorus to a Burt Bacharach song. ## Release and promotion Jive released "Laundromat" in the UK as a double A-side single with "Don't Mess With My Man" on the week of April 28, 2003. "Don't Mess With My Man" was previously distributed as a single on June 3, 2002. In the US, "Laundromat" was promoted as a single in 2003, first distributed to rhythmic contemporary and urban contemporary radio stations for the week beginning January 21, 2003; according to Billboard's Carla Hay, this occurred even while "Don't Mess With My Man" was "still in heavy rotation at many radio stations". "Laundromat" was made available as a 12-inch single, cassette, and a CD single. Nivea did not perform the single live with Kelly, and his part was done by a singer named Katrelle. The song appeared on the 2003 compilation album Totally R&B; AllMusic's Andy Kellman praised "Laundromat" as one of its highlights. Nzingha Stewart filmed the music video for "Laundromat" in a laundromat. Nick Cannon, who had previously collaborated with Kelly, was his stand-in for the video. Discussing Kelly's absence, author Mark Anthony Neal wrote: "A man accused of inappropriate sexual behavior with minors obviously cannot show up in a music video cooing in the ear of a teenager." ## Critical reception Tracey Cooper of the Waikato Times praised "Laundromat", along with "Don't Mess With My Man" and "What You Waitin' For", for having a "cruisy groove and attitude" that would appeal to "the soul sistas". The Herald Sun's Karen Tye described the song as a "blend of silken phrasings and smooth R&B beats". In a People article, Chuck Arnold highlighted its humor, and Steve Jones, writing for the Gannett News Service, considered it one of the "clever tracks" in which Nivea "shows her moxie". In 2007, Vibe's Sean Fennessey wrote that Nivea "never sounded fresher or sharper" than on "Laundromat". Although he enjoyed the chorus and verses separately, Neil Strauss said they were incompatible in a single song, and criticized the lyrics as seeming "quickly written and unconvincingly delivered". The Courier-Mail's Emma Chalmers enjoyed the chorus, but wrote that its "musical-style singing conversation doesn't come off overall". A reviewer for Music Week disliked the laundry metaphor and said the song was overshadowed by "Don't Mess With My Man" on the double A-side release. In her negative review of Nivea, the Edmonton Journal's Sandra Sperounes was critical of "Laundromat", writing: "You'll find more soul in a bottle of bath gel." Kelly was praised by reviewers, several of whom cited his songs as highlights of Nivea. 
Chistie Leo of the New Straits Times said that "Laundromat", as well as "Ya Ya Ya", had "catapulted [Nivea] to a promising start". Some retrospective articles remained positive toward Kelly. Fennessey included "Laundromat" in his 2007 list of Kelly's essential collaborations; in his entry for the song, he noted Nivea was "another in a long line of young females [he] has err, raised up ... with his vocal talent". In a 2013 Fuse article, Nicole James commended Kelly's performance of the chorus, although they were uncertain about using a laundromat as a metaphor. Other critics had more negative responses. While panning the choice of collaborators, Laura Checkoway wrote "the banter between barely legal Nivea and Kelly is unsettling". Sperounes viewed Kelly's songs as the worst on Nivea and was critical of the album as a whole. ## Commercial performance For the week of March 8, 2003, "Laundromat" peaked at number 58 on the Billboard Hot 100 chart and number 20 on the Hot R&B/Hip-Hop Songs Billboard chart. On the Billboard Year-End chart, it reached number 92 for the Hot R&B/Hip-Hop Songs category. Music journalists pointed to the single's success when discussing how Kelly's 2002 arrest for possessing child pornography did not damage his career. In a USA Today article, Steve Jones reported that the song still received "heavy airplay". The Atlanta Journal-Constitution's Sonia Murray said that 14 months after Kelly was alleged to have made child pornography, ten of his songs, including "Laundromat", were on the Hot R&B/Hip-Hop Songs Billboard chart. In a 2018 Vibe article, Khaaliq Crowder said Nivea found "modest success" with "Laundromat" and "Don't Mess With the Radio" but identified "Don't Mess With My Man" as her most successful single. "Laundromat" peaked at number 33 on the UK Singles Chart. It was Nivea's second top-40 entry following her 2000 collaboration with Mystikal, "Danger (Been So Long)". "Laundromat" reached number two on the UK Independent Singles Chart and number eight on the UK R&B Singles Chart. In Scotland, "Laundromat" peaked at number 89 on its singles chart. It reached number 98 on the European Hot 100 Singles chart. For these European charts, "Laundromat" charted with "Don't Mess With My Man" as part of its double A-side release. ## Solange Knowles cover On July 29, 2013, Solange Knowles performed a part of "Laundromat" at a laundromat in Boerum Hill as part of a series of live concerts called "Uncapped" that was sponsored by Vitaminwater and The Fader. Vitaminwater's brand manager Ben Garnero said the laundromat concert was intended to "take on these iconic, mundane moments and tackle Mondays in New York and these places you never expect to see a show". The event was a part of the company's "Make Boring Brilliant" campaign and was attended by approximately 150 people. Knowles sang both Nivea's and Kelly's parts while dancing on top of washing machines. During the performance, she said: "This laundromat has me feeling the drama queen, so excuse my theatrics." For the concert, Knowles was accompanied by a six-person band. She had rehearsed the cover five minutes before the show began. Knowles performed the first 90 seconds of "Laundromat" before moving into Dirty Projectors' 2009 song "Stillness Is the Move". When asked about her song choice, Knowles described "Laundromat" as one of her most-played songs and one she always wanted to cover. Critics praised Knowles's performance; The Fader's Deidre Dyer and Nicole James said she picked the perfect venue for the song. 
James said they enjoyed the decision to set the event in a laundromat more than the performance itself. Fuse's Joe Lynch commended the arrangement, saying the band had modified the song for "Solange's casually funky vibe perfectly". ## Track listings ## Credits and personnel Credits are adapted from the liner notes of Nivea:
Recording locations
- Recorded and mixed at Rock Land Studios in Chicago
Personnel
- R. Kelly – arrangement, composition, mixing, production
- Ian Mereness – mixing, programming, recording
- Jason Mlodzinski – mixing assistance, programming assistance
- Abel Garibaldi – programming, recording
- Andy Gallas – recording
- Tom Coyne – mastering
## Charts ### Weekly charts ### Year-end charts
4,266,546
MLS Cup 1999
1,169,228,295
1999 edition of the MLS Cup
[ "1999 Major League Soccer season", "1999 in sports in Massachusetts", "D.C. United matches", "July 1999 sports events in the United States", "LA Galaxy matches", "MLS Cup", "Soccer in Massachusetts", "Sports competitions in Foxborough, Massachusetts" ]
MLS Cup 1999 was the fourth edition of the MLS Cup, the championship soccer match of Major League Soccer (MLS), the top-level soccer league of the United States. It took place on November 21, 1999, at Foxboro Stadium in Foxborough, Massachusetts, and was contested by D.C. United and the Los Angeles Galaxy in a rematch of the inaugural 1996 final that had been played at the same venue. Both teams finished atop their respective conferences during the regular season under new head coaches and advanced through the first two rounds of the playoffs. United won 2–0 with first-half goals from Jaime Moreno and Ben Olsen for their third MLS Cup victory in four years. Galaxy defender Robin Fraser left the match with a broken collarbone during the opening minutes, and goalkeeper Kevin Hartman collided with John Maessner at the end of the half. Olsen was named the most valuable player of the match for his winning goal, which was scored off a misplayed backpass. The final was played in front of 44,910 spectators—a record for the MLS Cup. It was also the first MLS match to be played with a standard game clock and without a tiebreaker shootout following a rule change approved by the league days earlier. The Galaxy blamed their performance on decisions by referee Tim Weyland and the quality of the pitch at Foxboro Stadium, which had a narrowed width and was damaged by an earlier National Football League game. Both finalists qualified for the 2000 CONCACAF Champions' Cup, which was hosted in Southern California. The tournament's semifinals featured a rematch of the MLS Cup final, which was decided in a penalty shootout that the Galaxy won. The Galaxy went on to win the tournament, becoming the second MLS team to do so. ## Venue The 1999 final was played at Foxboro Stadium in Foxborough, Massachusetts, where the inaugural final had been contested in 1996. MLS announced the stadium as the host venue on October 23, 1998, and the match was scheduled three weeks later than previous editions to avoid conflicting with baseball's World Series. The scheduled date of November 14 was later moved back to November 21. The match was originally planned to be hosted at Raymond James Stadium in Tampa, Florida, but issues with the Tampa Bay Mutiny's lease at the stadium led to MLS revoking their hosting rights. Foxboro was selected ahead of bids from Washington, D.C., and San Jose, California, as well as an unsubmitted speculative bid from Chicago. The match was played six days after a home game for the New England Patriots of the National Football League, necessitating the retention of the stadium's bleacher sections. As a result, the field was narrowed from 72 yards (66 m) to 68 yards (62 m), and had visible dirt patches and yard lines. Approximately 30,000 tickets were sold before the finalists were confirmed. ## Road to the final The MLS Cup is the post-season championship of Major League Soccer (MLS), a professional club soccer league based in the United States that began playing in 1996. Twelve teams contested the league's fourth season; teams were organized into two conferences, each playing 32 matches during the regular season from March to September. Teams faced opponents from the same conference four times during the regular season, and from outside their conference twice. Before the season began, MLS reduced the number of permitted international players from five to four as a cost-saving measure.
The top four teams from each conference qualified for the playoffs, which were organized into three rounds and played primarily in October. The first two rounds, named the Conference Semifinals and Conference Finals, were home-and-away series organized into a best-of-three format with a hosting advantage for the higher-seeded team. The winners of the Conference Finals advanced to the single-match MLS Cup final, which would be held at a predetermined neutral venue. MLS Cup 1999 was contested by two-time champions D.C. United and the Los Angeles Galaxy, both of which had played in the inaugural 1996 final, which ended in a 3–2 overtime victory for United. The 1996 final had also been played at Foxboro Stadium, and the 1999 match was the fourth consecutive MLS Cup appearance for United. The 1999 final was the first to be contested by the regular season winners of both conferences. During the regular season, the Galaxy and United met twice, each winning on the road. ### Los Angeles Galaxy Since their MLS Cup 1996 appearance, the Los Angeles Galaxy had qualified for the playoffs twice but were eliminated in earlier rounds. During the 1998 regular season, the team finished atop the league standings with a 24–8 record, which included a run of nine consecutive wins and a record 85 goals. The Galaxy earned two shootout wins at the start of the 1999 season but then lost three consecutive matches in which they scored only three goals in total. The club dismissed head coach Octavio Zambrano on April 21 and replaced him with Sigi Schmid, who had managed the UCLA Bruins for 19 years and the United States men's national under-20 team for two years. Under Schmid, the Galaxy won a playoff berth by early September and rose to first in the West alongside the Colorado Rapids. The team finished the season with a 20–12 record and 54 points, and became the first MLS team to allow an average of less than one goal per match during the regular season, conceding 29 goals in 32 matches. Schmid was named Coach of the Year, Hartman earned Goalkeeper of the Year, and Robin Fraser won Defender of the Year for their regular season performances. In the Western Conference Semifinals, the Galaxy faced the Rapids, who had finished fourth in the conference and failed to score in their last five matches. The Galaxy hosted the first leg and led with an eighth-minute strike from defender Ezra Hendrickson, but had midfielder Simon Elliott sent off with a red card ten minutes later. The team extended their lead with a penalty scored in the 52nd minute by Greg Vanney and a strike five minutes later by Clint Mathis that Colorado goalkeeper Ian Feuer deflected into the net for a 3–0 victory. The Galaxy defeated the Rapids 2–0 at Mile High Stadium in Denver, scoring twice in the final 15 minutes through midfielders Danny Pena and Joe Franchino, to complete a two-match sweep in the series. The Galaxy advanced to play the Western Conference Final against the Dallas Burn, who had finished second in the conference and eliminated defending champions Chicago. The Galaxy won the first leg, which was played at the Rose Bowl, 2–1 with a goal from Ezra Hendrickson that was scored with 40 seconds remaining in the match. The Galaxy twice took the lead during the second leg at the Cotton Bowl through a brace from Carlos Hermosillo but Dallas equalized to force a tie-breaking shootout. Dallas won 4–3 in the shootout, forcing a deciding third leg at the Rose Bowl.
The Galaxy clinched their place in their second MLS Cup final with a 3–1 win, having taken advantage of a Burn defense weakened by an injury and a suspension in its starting lineup. Greg Vanney scored from a penalty in the second minute, which was followed by goals from Hermosillo and Mauricio Cienfuegos to extend the lead; Jason Kreis scored a late consolation goal for Dallas. ### D.C. United D.C. United had played in the first three MLS Cup finals, winning in 1996 and 1997 against the Galaxy and Colorado Rapids, respectively. Following their loss in the 1998 final to the Chicago Fire, manager Bruce Arena left the team to join the U.S. men's national team and was replaced by New England head coach Thomas Rongen. During the early part of their season, United played without several injured starting players and reserves, forcing the starting lineup to change several times. The team also lost several players to national team call-ups during the Copa América, but was able to take first place in the Eastern Conference. The team lost six starting players to national teams at the FIFA Confederations Cup in July. Rongen turned to a lineup of reserves, including an inexperienced four-man defense, minor-league players, and new acquisitions to secure a playoff berth in late August. The team also clinched first in the Eastern Conference in mid-September, having amassed a 15-point lead over the second-place Columbus Crew. During the regular season, United won 17 of their 20 matches against opponents in the Eastern Conference and finished atop the league with 57 points. United played Miami Fusion, who had a 13–19 record in the regular season, in the Eastern Conference Semifinals. United won 2–0 in the first leg, which they hosted at RFK Stadium; forward Jaime Moreno scored in the 34th and 88th minutes. The second leg in Florida ended 0–0 after regulation time and was decided in a shootout that United won 3–2. Goalkeeper Tom Presthus, having stopped four shots in regulation time, made four saves during the six-round shootout. In a repeat of the previous two Eastern Conference Finals, United played the Columbus Crew, who had defeated the Tampa Bay Mutiny. United took a lead in the series at RFK Stadium in the first leg, winning 2–1 with a strike from Moreno in the 15th minute and a volley from Ben Olsen in the 72nd minute. The second leg in Columbus ended in a 5–1 victory for the hosts, giving United their worst playoff defeat and forcing a third match in the series. Roy Lassiter scored early for United in the sixth minute but the Crew responded with first-half goals from Ansil Elcock and Jeff Cunningham, and a hat-trick from Stern John in the second half. United recovered in the third leg to win 4–0 and extended their unbeaten streak at home in the playoffs to 12 matches. Moreno scored in the 17th minute and was joined by a brace from Roy Lassiter on both sides of half-time, the latter coming from a bicycle kick in the penalty area. Marco Etcheverry, who had provided three assists on the earlier goals, scored a free kick from 23 yards (21 m) with four minutes remaining to clinch an MLS Cup final berth for United. ### Summary of results #### Regular season
| Pos | Club | GP | W | SW | L | Pts |
|-----|------------------------|-----|-----|-----|-----|-----|
| 1 | D.C. United (') | 32 | 23 | 6 | 9 | 57 |
| 2 | Columbus Crew | 32 | 19 | 6 | 13 | 45 |
| 3 | Tampa Bay Mutiny | 32 | 14 | 5 | 18 | 32 |
| 4 | Miami Fusion | 32 | 13 | 5 | 19 | 29 |
| 5 | New England Revolution | 32 | 12 | 5 | 20 | 26 |
Key: GP = games played; W = wins (including shootout wins); SW = shootout wins; L = losses; Pts = points.
1999 Eastern Conference table #### Playoffs Note: In all results below, the score of the finalist is given first (H: home; A: away). Playoffs were in best-of-three format with a shootout (SO) if scores were tied. ## Broadcasting and entertainment The MLS Cup final was broadcast in the United States by ABC with English commentary, and Spanish commentary was available via secondary audio programming. The ABC broadcast was led by play-by-play announcer Phil Schoen and color commentator Ty Keough, who were joined by studio host Rob Stone. MLS players John Harkes and Alexi Lalas joined the pre-game and half-time broadcasts as co-hosts. ABC deployed 18 cameras for the match and added field microphones to capture crowd noise. The television broadcast on ABC drew a 1.0 national rating, a 17 percent decline from 1998, partially due to competition from National Football League games. Pop singer Christina Aguilera sang the U.S. national anthem before the match and performed in the half-time show. ## Match ### Match rules The MLS Board of Governors, composed of team owners and their representatives, met in Boston before the MLS Cup to revise the league's match rules. Several of the league's experimental rules were eliminated in an effort to match international standards set by the International Football Association Board in the Laws of the Game and to appeal to hardcore fans. The countdown clock that was tracked via the stadium scoreboard was replaced with a normal match clock that was kept by the referee on the field; injury time was added at the end of each half, as displayed by the fourth official. Tiebreaker shootouts were replaced with two periods of sudden-death golden goal overtime that would be followed by a standard penalty shootout if the score remained tied. Although the shootout change was planned to take effect at the start of the 2000 season, after consulting with coaches Schmid and Rongen, league commissioner Don Garber announced the revised clock and tiebreaker would be used at MLS Cup 1999. ### Summary The MLS Cup final was played on November 21 in front of 44,910 spectators at Foxboro Stadium, setting a new attendance record for the MLS Cup and for any soccer match played in Massachusetts. Approximately 5,000 D.C. United fans, including the club's two largest supporters' groups, Barra Brava and Screaming Eagles, traveled to the match. The match began at 1:30 p.m. Eastern Time under sunny skies with a temperature of 63 °F (17 °C), unlike the cold and rainy conditions of the 1996 final. The field was described as "badly scarred" due to a National Football League game at the stadium earlier in the week, which also caused the pitch to be narrowed to 68 yards (62 m). United took early control of the match and challenged the Galaxy defense on several plays. In the seventh minute, Galaxy defender Robin Fraser fell after being pushed from behind by Roy Lassiter while challenging for the ball. Fraser left the match with a broken left collarbone and was replaced by Steve Jolley. Schmid adjusted the Galaxy's defense into a three-man formation with Paul Caligiuri positioned as sweeper.
Fraser later said he had been wearing a shoulder brace that restricted movement of his arm for most of the season, which prevented him from breaking the fall. Referee Tim Weyland did not award a foul for the play, for which Schmid and Galaxy players later criticized him. United then attempted to take advantage of the weakened Galaxy defense as both teams pushed aggressively for an opening goal, trading several chances. United took the lead in the 19th minute on a long throw-in from Marco Etcheverry that was misplayed by Jolley and fell to Lassiter, whose shot was saved by Kevin Hartman. Caligiuri failed to clear the ball, and Jaime Moreno converted from point-blank range. The Galaxy responded with a promising scoring opportunity off a corner kick taken by Greg Vanney in the 32nd minute. Danny Pena's header hit the goalpost and John Maessner deflected it toward the goal but the ball was cleared away by Richie Williams. The Galaxy protested to Weyland that the ball had crossed the line and struck Williams' hand but no foul was given. The Galaxy and United traded more scoring chances as the first half ended; play stopped in the 43rd minute after Maessner, who was clearing the ball, kneed Hartman in the head. Hartman returned to the match and stopped a 22-yard (20 m) volley from United defender Jeff Agoos at the beginning of stoppage time, which Weyland set at four minutes. The Galaxy immediately responded with a counterattack led by Cobi Jones, who was clipped in the penalty area by Maessner, though Weyland did not award a penalty. In the third minute of stoppage time, Hartman misplayed a backpass from Jolley while under pressure from Lassiter and Moreno. Ben Olsen intercepted Hartman's pass to Caligiuri and scored from just outside the six-yard box to give United a 2–0 lead at half-time. United looked to extend their lead in the second half but were unable to convert an early chance in the 47th minute as Lassiter headed a cross from Agoos wide of the goal. A breakaway chance in the 58th minute for Jones was thwarted by Carlos Llamosa, who tackled away a loose ball in the United penalty area. Galaxy attackers Mauricio Cienfuegos and Carlos Hermosillo were kept in check by United, particularly by defensive midfielder Richie Williams. Jones was left to attack on his own. Pena gave the Galaxy two chances to score but Agoos blocked his first shot and the second went wide of the goal. With 20 minutes left to play, the teams traded back-to-back chances that went unconverted. In the 71st minute, Olsen received a chipped pass from Etcheverry and shot towards the goal but hit the side netting. A minute later, a volley by Clint Mathis in the penalty area was struck wide of the goal. Williams then attempted a 23-yard (21 m) volley in the 76th minute that struck the post after beating Hartman's outstretched arm. In the match's last major action, Caligiuri attempted a drive from inside the box but his shot went wide of the goal. With six minutes remaining, Olsen was named the MLS Cup most valuable player (MVP). United goalkeeper Tom Presthus made one save during the match, on one of the Galaxy's two shots on goal. ### Details ## Post-match After winning three titles in four seasons, D.C. United were hailed as the first MLS dynasty despite the league's attempts to encourage parity among teams. Commissioner Don Garber stated he thought it was "terrific to have a dominant team" when asked whether United's performance would hurt the league but added he would "love some balance".
United's players celebrated with cigars and champagne in the locker room following the near-collapse of the stage that had been set up for the trophy ceremony. Olsen became the first MLS Cup MVP to have been developed as part of the Project-40 program. On November 23, United were honored with a ten-block parade along Pennsylvania Avenue in Downtown Washington, D.C., which was attended by thousands of fans. United went on to miss the playoffs for three consecutive seasons but would win another MLS Cup in 2004 by defeating the Kansas City Wizards. After the match, Hartman attributed his miscue on the second goal to the poor condition of the pitch, which United defender Jeff Agoos also criticized. Galaxy coach Sigi Schmid, Jones, and Hermosillo were fined for criticizing referee Tim Weyland's calls; Schmid was also suspended for the first match of the 2000 season. Schmid highlighted the lack of calls after Fraser's injury and two potential penalties in the first half, along with fouls throughout the match. The Galaxy reached the MLS Cup final in 2001, losing to the San Jose Earthquakes, and won their first title in 2002 against New England at Gillette Stadium, which had replaced Foxboro Stadium. As of 2022, the Los Angeles Galaxy holds the record for the most MLS Cup titles, having won their fifth in 2014 to overtake United's record. As MLS Cup finalists, D.C. United and the Los Angeles Galaxy qualified as the U.S. representatives for the 2000 CONCACAF Champions' Cup, which was hosted in Southern California in January 2001. The two teams met in the semifinals, where the Galaxy defeated United in a penalty shootout following a 1–1 draw. The Galaxy won the tournament, becoming the second US club to win a CONCACAF competition and the last until Seattle Sounders FC in 2022. They earned a place in the 2001 FIFA Club World Championship, which was set to be played in Spain but was later cancelled amid a financing scandal.
1,792,493
Achelousaurus
1,169,082,238
Genus of ceratopsid dinosaur from North America
[ "Campanian genera", "Centrosaurines", "Ceratopsians of North America", "Cretaceous Montana", "Fossil taxa described in 1994", "Fossils of Montana", "Late Cretaceous ceratopsians", "Late Cretaceous dinosaurs of North America", "Ornithischian genera", "Taxa named by Scott D. Sampson" ]
Achelousaurus ( /əˌkiːloʊˈsɔːrəs, ˌækɪˌloʊəˈsɔːrəs/) is a genus of centrosaurine ceratopsid dinosaur that lived during the Late Cretaceous Period of what is now North America, about 74.2 million years ago. The first fossils of Achelousaurus were collected in Montana in 1987, by a team led by Jack Horner, with more finds made in 1989. In 1994, Achelousaurus horneri was described and named by Scott D. Sampson; the generic name means "Achelous lizard", in reference to the Greek deity Achelous, and the specific name refers to Horner. The genus is known from a few specimens consisting mainly of skull material from individuals ranging from juveniles to adults. A large centrosaurine, Achelousaurus is estimated to have been about 6 m (20 ft) long, with a weight of about 3 t (3.3 short tons). As a ceratopsian, it walked on all fours, had a short tail and a large head with a hooked beak. It had a bony neck-frill at the rear of the skull, which sported a pair of long spikes that curved towards the outside. Adult Achelousaurus had rough bosses (roundish protuberances) above the eyes and on the snout, in the positions where other centrosaurines often had horns. These bosses were covered by a thick layer of keratin, but their exact shape in life is uncertain. Some researchers hypothesize that the bosses were used in fights, with the animals butting each other's heads, as well as for display. Within the Ceratopsia, Achelousaurus belongs to the clade Pachyrostra (or "thick-snouts"). It has been suggested that it was the direct descendant of the similar genus Einiosaurus (which had spikes but no bosses) and the direct ancestor of Pachyrhinosaurus (which had larger bosses). Einiosaurus and Achelousaurus would then be transitional forms that evolved through anagenesis from Styracosaurus. There has been debate about this theory, with later discoveries showing that Achelousaurus is closely related to Pachyrhinosaurus in the group Pachyrhinosaurini. Achelousaurus is known from the Two Medicine Formation and lived on the island continent of Laramidia. As a ceratopsian, Achelousaurus would have been a herbivore, and it appears to have had a high metabolic rate, though lower than that of modern mammals and birds. ## History of discovery ### Horner's expeditions to Landslide Butte All known Achelousaurus specimens were recovered from the Two Medicine Formation in Glacier County, Montana during excavations conducted by the Museum of the Rockies, which still houses the specimens. The discoveries came about through an accidental chain of events. In the spring of 1985, paleontologist John "Jack" R. Horner was informed that he would no longer be allowed to exploit the Willow Creek site, where he had studied the Maiasaura Egg Mountain nesting colony for six years. Having already made extensive arrangements for a new field season, he was suddenly forced to seek an alternative site. Horner had always been intrigued by the field diaries of Charles Whitney Gilmore, who had reported the discovery of dinosaur eggs at Landslide Butte in 1928 but never published on them. In this locality, Gilmore had employed George Fryer Sternberg to excavate skeletons of the horned dinosaurs Brachyceratops and Styracosaurus ovatus. That summer, Horner obtained the permission of the Blackfeet Indian Tribal Council to prospect for fossils on Landslide Butte, which is part of the Blackfeet Indian Reservation; it was the first paleontological investigation there since the 1920s. 
In August 1985, Horner's associate Bob Makela discovered a rich fossil site on the land of the farmer Ricky Reagan, which was called the Dinosaur Ridge Quarry and contained fossils of horned dinosaurs. On 20 June 1986, Horner and Makela returned to the Blackfeet Indian Reservation and resumed work on the Dinosaur Ridge Quarry, which proved to contain, apart from eggs, more than a dozen skeletons of a horned dinosaur later named Einiosaurus. In August 1986, at a nearby site – the Canyon Bone Bed on the land of Gloria Sundquist, east of the Milk River – Horner's team discovered another Einiosaurus bone bed. Part of the discoveries made on this occasion was an additional horned dinosaur skull, specimen MOR 492, that later would be referred to (i.e., formally assigned to) Rubeosaurus, the genus name in 2010 given to Styracosaurus ovatus. During the field season of 1987 (early July), volunteer Sidney M. Hostetter located another horned dinosaur skull near the Canyon Bone Bed, specimen MOR 485. By the end of August, it had been secured and was driven on a grain truck to the Museum of the Rockies in Bozeman. On 23 June 1988, another site was discovered in the vicinity – the Blacktail Creek North. In the summer of 1989, graduate student Scott D. Sampson joined the team, wanting to study the function of the frill display structures in horned dinosaurs. At the end of June 1989, Horner, his son Jason and his head preparator Carrie Ancell discovered horned dinosaur specimen MOR 591, a subadult skull and partial postcranial skeleton, near the Blacktail Creek. ### Interpretation of the collected fossils It was initially assumed that all the horned dinosaur material recovered by the expeditions could be assigned to a single "styracosaur" species distinct from Styracosaurus albertensis, as the fossils represented a limited geological time period, then estimated at half a million years. Raymond Robert Rogers, who was studying the stratigraphy of the bone beds, referred to it as a Styracosaurus sp. (of undetermined species) in 1989. Styracosaurus ovatus – though sometimes considered an invalid nomen dubium – had already been found in the area by G. F. Sternberg and was an obvious candidate. But also the possibility was taken into account that the finds were of a species new to science. This species was informally named "Styracosaurus makeli" in honor of Bob Makela, who had died in a traffic accident just days before the discovery of specimen MOR 485. In 1990, this name, as an invalid nomen nudum, appeared in a photo caption in a book by Stephen Czerkas. Horner, an expert on the Hadrosauridae family, had less affinity for other kinds of dinosaurs. In 1987 and 1989, horned dinosaur specialist Peter Dodson was invited to investigate the new ceratopsian finds. In 1990, the fossil material was seen by Dodson as strengthening the case for the validity of a separate Styracosaurus ovatus, to be distinguished from Styracosaurus albertensis. Meanwhile, Horner had come to a more complex view of the situation. He still thought that the fossil material had been part of a single population but concluded that this had developed over time as a chronospecies evolving into a series of subsequent taxa. In 1992, Horner, David Varricchio, and Mark Goodwin published an article in Nature based on the six-year field study of sediments and dinosaurs from Montana. They proposed that the expeditions had uncovered three "transitional taxa" spanning the gap between the already known Styracosaurus and Pachyrhinosaurus. 
For the moment, they declined to name these taxa. The oldest form was indicated as "Transitional Taxon A," mainly represented by skull MOR 492. Then came "Taxon B" – the many skeletons of the Dinosaur Ridge Quarry and the Canyon Bone Bed. The youngest was "Taxon C," represented by skull MOR 485 and the horned dinosaur fossils of the Blacktail Creek. ### Sampson names Achelousaurus Sampson had continued his studies of the material since 1989. In 1994, in a talk during the annual meeting of the Society of Vertebrate Paleontology, he named "Taxon C" as a new genus and species, Achelousaurus horneri. Although an abstract was published containing a sufficient description, it did not identify a holotype, a name-bearing specimen. In 1995, in a subsequent article, Sampson indicated specimen MOR 485 as the holotype specimen of Achelousaurus horneri. The generic name consists of the words Achelous, the name of a Greek mythological figure, and saurus, which is Latinized Greek for lizard. Achelous (Ἀχελῷος) is a Greek river deity and a shapeshifter who was able to transform himself into anything. During a fight with Hercules, the mythical hero, Achelous took the form of a bull, but lost the battle when one of his horns was removed. This allusion is a reference to the supposedly transitional traits of the dinosaur and the characteristic loss of horns through ontogenetic and phylogenetic development, and thus through individual change and evolution. Dodson, in 1996, praised the generic name for being original and intelligent. The specific name honors Jack Horner, for his research on the dinosaurs of the Two Medicine Formation in Montana. Sampson also named "Taxon B" as the genus Einiosaurus in the same article wherein Achelousaurus was described. He said paleontologists needed to be cautious when naming new ceratopsian genera because their intraspecific variation (i.e., variation within a species) might be mistaken for interspecific differences (between species). Until 1995, only one new genus of centrosaurine dinosaur had been named since Pachyrhinosaurus in 1950, namely Avaceratops in 1986. Achelousaurus thus holds particular importance for being one of the few ceratopsid genera named in the late twentieth century. The holotype specimen MOR 485 was collected by Hostetter and Ray Rogers from the Landslide Butte Field Area about 40 km (25 mi) northwest of Cut Bank. In 1995 Sampson described it as the partial skull of an adult animal including the nasal and supraorbital (region above the eye socket) bosses (roundish protuberances instead of horns), and the parietal bones. Additionally, MOR 485 preserves some bones of the skull rear and sides, which in 2009 were listed by Tracy L. Ford as a right squamosal bone, the left squamosal, both maxillae, both lacrimal bones, both quadrate bones, both palatine bones, the braincase and the basioccipital bone. In 2015, Leonardo Maiorino reported that as part of the same specimen a fragmentary lower jaw has been catalogued as MOR 485-7-12-87-4. A right squamosal bone from another adult individual was recovered from the same Canyon Bone Bed site as MOR 485 (and catalogued under the same number), but only reported in 2010. Two other specimens were collected on the Blacktail Creek, 35 km (22 mi) to the south of Cut Bank and referred to Achelousaurus by Sampson in 1995. Specimen MOR 591 is a partial skull and an about 60% complete skeleton of a sub-adult specimen that includes the vertebral column, pelvis, sacrum and a femur. 
It also includes lower jaws, catalogued as MOR 591-7-15-89-1. Both skull and lower jaws are nearly complete, lacking only the braincase and occipital region. MOR 591 is smaller than the holotype with a skull base length of about 60 cm (24 in). Specimen MOR 571 includes a partial skull and lower jaws with associated ribs and vertebrae of an adult. The skull consists of only the parietals, and the lower jaws are limited to their upper rear bones, the surangulars and articulars. A fifth specimen is MOR 456.1, a subadult. None of the specimens were of an advanced individual age. According to Andrew McDonald and colleagues, the Achelousaurus finds represented single individuals, not bone beds. ### Possible Achelousaurus finds In addition to fossils that have been unequivocally assigned to Achelousaurus, some other material has been found of which the identity is uncertain. A centrosaurine ceratopsid specimen with bosses from the Dinosaur Park Formation (specimen TMP 2002.76.1) found in 1996 was suggested to belong to a new taxon in 2006, but may instead belong to Achelousaurus or Pachyrhinosaurus. Since it is missing the parietal bones, which are used to diagnose centrosaurines, it is not possible to assign it to any genus with confidence. In 2006, it was also proposed that Monoclonius lowei, a dubious species based on a skull (specimen CMN 8790) from the Dinosaur Park Formation, could be a sub-adult specimen of Styracosaurus, Achelousaurus or Einiosaurus, with which it is roughly contemporaneous. In addition, some indeterminate specimens from the Two Medicine Formation – such as fragmentary skull MOR 464 or snout MOR 449 – may belong to Achelousaurus or the two other roughly contemporary ceratopsids Einiosaurus and Rubeosaurus. ## Description ### General build Achelousaurus is estimated to have been 6 m (20 ft) long with a weight of 3 t (3.3 short tons). The skull of an adult individual (holotype specimen MOR 485) was estimated to have been 1.62 m (5.3 ft) long. This puts it in the same size-range as other members of the Centrosaurinae subgroup of ceratopsians that lived during the Campanian age. It was about as large as its close relative Einiosaurus, but with a much heavier build. Achelousaurus approached the robustness of one of the largest and most heavily built horned dinosaurs known: Triceratops. As a ceratopsid, Achelousaurus would have been a quadrupedal animal with hoofed digits and a shortened, downwards swept tail. Its very large head, which would have rested on a straight neck, had a hooked upper beak, very large nasal openings, and long tooth rows developed into dental batteries that contained hundreds of appressed and stacked individual teeth. In the tooth sockets, new teeth grew under the old ones, each position housing a column of teeth posed on top of each other. Achelousaurus had 25 to 28 such tooth positions in each maxilla (upper jaw bone). ### Distinguishing traits In 1995, when describing the species, Sampson gave a formal list of four traits that distinguish Achelousaurus from its centrosaurine relatives. 
Firstly, adult individuals have nasal bones with a boss on top that is relatively small and thin, and heavily covered with pits; secondly, adult individuals do not have true horns above the eye sockets but relatively large bosses with high ridges; thirdly, not yet fully grown individuals, or subadults, have true horncores (the bony part of the horns) above the eye sockets with the inward facing surface being concave; and fourthly, the parietal bones of the neck shield have a single pair of curved spikes sticking out from the rear margin to behind and to the outside. Besides these unique characteristics, Sampson pointed out additional differences with two very closely related forms. The frill spikes of Achelousaurus are more outwards oriented than the spikes of Einiosaurus, which are medially curved; the spikes of Achelousaurus are nevertheless less directed to the outside than the comparable spikes of Pachyrhinosaurus. Achelousaurus also differs from Pachyrhinosaurus in its smaller nasal boss that does not reach the frontal bones at its rear. Apart from the skull, no features of the skeleton are known that distinguish Achelousaurus from other members of the Centrosaurinae. ### Skull Horned dinosaurs mainly differ from each other in their horns, which are located on the snout and above the eyes, and in the large skull frill, which covers the neck like a shield. Achelousaurus exhibited the build of derived ("advanced") centrosaurines, which are typified by short brow horns or bosses, combined with elaborate frill spikes. The general frill proportions are typically centrosaurine, with a wide rounded squamosal bone at the side, which expanded towards the rear. It also shares the typical frill curvature with a top surface that is convex from side to side and concave from front to rear. Adult Achelousaurus skulls had a rugose, heavily pitted boss on the snout or nasal region, where many other ceratopsids had a horn. Such a boss is often called "pachyostotic", i.e. consisting of thickened bone. But describing it as a thick "boss" can be misleading: in fact, it forms a wide depression with a thin bone floor and irregular excavations, though it is less depressed than with Pachyrhinosaurus. The nasal boss covered about two-thirds of the top surface of the nasal bones. The boss was similar to that seen in the related genus Pachyrhinosaurus, though narrower, shorter, and less high. It covered 27% of the total skull length, was 30% longer than the nostril-eye socket distance and was about twice as long as the eye socket. Its rear edge did not reach the level of the eye socket. The nasal boss extended forward, where it fused onto the nasal and premaxilla bones (of the upper jaw) at the front of the snout, though the nasal bone itself did not fuse with the premaxilla. The boss of specimen MOR 485 furthermore had an excavation (or cavity) at its front end. The horn core that formed the boss may have developed by either becoming procurved (i.e. bent forward) during growth, like the horn of the related Einiosaurus, until it fused onto the nasal bone; or from a simple, erect horn, which later extended its base forward over the snout region, as in Pachyrhinosaurus. The nasal bone formed the top of a large bony nostril. From the rear edge of that nostril a sharp process stuck out to the front. The snout was – compared to that of Einiosaurus – relatively wide at the level of the rear nostrils. 
The lacrimal bone, in front of the eye socket, was thickened, mainly on the inner surface while the outer surface was featureless apart from a crater-like excavation. Adult skulls also possess large, rugose, and oval bosses on the supraorbital region above the eyes, instead of the horns of other ceratopsids. The supraorbital bosses extended from the postorbital bone forward to incorporate the triangular palpebral and prefrontal bones, and had high transverse ridges around the middle, which were thick at their base and thin towards their top. The palpebral bones strongly stood out, forming an "antorbital buttress". The fused prefrontals did not reach the nasal boss, forming a distinctive transverse saddle-shaped groove separating the nasal boss from the supraorbital bosses. This groove extended backwards, separating the supraorbital bosses from each other and forming a T-shape in top view. These bosses were similar to those of Pachyrhinosaurus, but with taller ridges and more pronounced rugosities. The long and low supraorbital horncores of the sub-adult specimen MOR 591 were similar to those of sub-adult Einiosaurus and Pachyrhinosaurus. They had a concave surface on the inner side as with Pachyrhinosaurus; ridges on the postorbital bones were present that may indicate a beginning transition to bosses. The skull roof of Achelousaurus had a midline cavity, with an opening at the top called the frontal fontanelle, a feature found in all ceratopsids, which have a "double" skull roof formed by the frontal bones folding towards each other between the brow horn bases. This cavity formed sinuses that extended below the supraorbital bosses, which were therefore relatively thin internally, being 25 mm (1 in) thick from the outside to the cavity roof. This cavity appears to have partially closed over as an animal aged, with only the rear part of the fontanelle being open in the adult specimen MOR 485. Like that of all other ceratopsids, the skull of Achelousaurus had a parietosquamosal frill or "neck shield", which was formed by the parietal bones at the rear and the squamosal bones at the sides. The parietal is one of the main bones used to distinguish centrosaurine taxa from each other and resolve relationships between them, whereas the squamosal is very similar across taxa. In Achelousaurus, the squamosal bone was much shorter than the parietal. Of its inner margin the rear portion formed a step in relation to the front part, with the suture between the squamosal and the parietal showing a kink to behind at the level of the rear supratemporal fenestra, a typical centrosaurine trait. The squamosal and the jugal bone, by touching each other, excluded the quadratojugal from the edge of the lateral temporal fenestra, i.e. the opening at the rear of the skull side. The frill of Achelousaurus had two conspicuous large spikes that were directed backwards and curved to the sides away from each other. During the 1990s, it was increasingly understood that such spikes on the parietals were not random growths but specific traits that could be used to determine the evolution of horned dinosaurs, if only it could be analyzed how they corresponded among species. Sampson, in the paper describing Achelousaurus in 1995, therefore introduced a generalized numbering system for such parietal processes, counting them from the midline to the side of the frill. This was applied to the Centrosaurinae as a whole in 1997. 
The large spikes of Achelousaurus correspond to "Process 3" spikes of other centrosaurines and were similar to those of Einiosaurus, though curved more to the sides, similar to Pachyrhinosaurus. They were shorter and thinner than the corresponding spikes of Styracosaurus. Between these spikes, on both sides of the central frill notch, were two small tab-like processes ("Process 2") that were directed towards the midline. Innermost "Process 1" spikes, as present in Centrosaurus, are lacking with Achelousaurus. The frill had two large paired openings, the parietal fenestrae, with a midline parietal bar between them. A linear row of rounded swellings ran along the top of the parietal bar, which may be homologous to the spikes and horns in the same area of some Pachyrhinosaurus specimens. A row of relatively small processes ran along the parietal shield margin from the "Process 3" spikes outwards, for a total per side of seven. They were largely equal in size, causing the P4 process to be reduced in comparison to the P3. These lower processes appear to have been capped by epoccipitals, bones that lined the frills of ceratopsids. In Achelousaurus these epoccipitals, which start as separate skin ossifications or osteoderms, fuse with the underlying frill bone to form spikes, at least in the third position. In 2020, it was denied that these processes were separate ossifications. In the most mature individuals, the front-most P6 and P7 processes would be less imbricated relative to each other, rotated around their longitudinal axes. #### Keratin sheaths The bosses on the skull of Achelousaurus may have been covered in a keratinous sheath in life, but their shape in a living animal is uncertain. In 2009, the paleontologist Tobin L. Hieronymus and colleagues examined correlations between skull morphology, horn, and skin features of modern horned animals, and examined the skull of centrosaurine dinosaurs for the same correlates. They proposed that the rugose bosses of Achelousaurus and Pachyrhinosaurus were covered by thick pads of cornified (or keratinized) skin, similar to the boss of modern muskoxen (Ovibos moschatus). The nasal horncore of adult Achelousaurus had an upward slant and its upper surface had correlates for a thick epidermal (outer layer of skin) pad that graded into correlates for a cornified sheath on the sides. A thick pad of epidermis may have grown from the V-shaped pitted notch at the tip of the nasal horncore. The growth direction of the nasal pad would have been towards the front. The supraorbital bosses may have had a thick pad of epidermis, which grew at a sideways angle similar to the curved horncores of Coronosaurus, as indicated by the orientation of the "fins" or ridges on the bosses. That the supraorbital bosses lacked a sulcus (or furrow) at their bases indicates that their horn pads stopped at the wrinkled edges of the bosses. The pitting might indicate a softer growing layer connecting the hard inner bone with the hard horn sheath. In addition, correlates for a rostral scale in front of the nasal boss and scale rows along the parietal midline and supraorbital-squamosal region were identified. ## Evolution ### Horner's hypothesis of anagenesis In 1992, the study by Horner et al. tried to account for the fact that within a limited geological period of time (about half a million years) there had been a quick succession of animal communities in the upper Two Medicine Formation. 
Normally, this would be interpreted as a series of invasions, with the new animal types replacing the old ones. But Horner noted that the newer forms often had a strong similarity to the previous types. This suggested to him that he had discovered a rare proof of evolution in action: the later fauna was basically the old one but at a more evolved stage. The various types found were not distinct species but transitional forms developed within a process of anagenesis. This conformed to the assumption, prevalent at the time, that a species should last about two to three million years. A further indication, according to Horner, was the failure to identify true autapomorphies – unique traits that prove a taxon is a separate species. The fossils instead showed a gradual change from basal (or ancestral) into more derived characters. The horned dinosaurs discovered by Horner exemplified this phenomenon. In the lowest layers of the Two Medicine Formation, 60 m (200 ft) below the overlaying Bearpaw Formation, "Transitional Taxon A" was present. It seemed to be identical to Styracosaurus albertensis, differing from it only in the possession of just a single pair of parietal spikes. The middle layers, 45 m (150 ft) below the Bearpaw, contained "Transitional Taxon B" that also had a single spike pair but differed in the form of its nasal horn that curved to the front over the anterior branches of the nasal bones. In the upper strata, 20 m (65 ft) below the Bearpaw, "Transitional Taxon C" had been excavated. It too had a spike pair but now the nasal horn was fused with the front branches. The upper surface of the horn was elevated and very rough. The orbital horns showed coarse ridges. Subsequently, "Taxon A" was named Stellasaurus, "Taxon B" became Einiosaurus, while "Taxon C" became Achelousaurus. In 1992, Horner et al. did not name these as species for the explicit reason that the entire evolutionary sequence was seen as representing a grade of transitional ceratopsians between Styracosaurus albertensis, known from the Judith River Formation, and the derived, hornless Pachyrhinosaurus from the Horseshoe Canyon Formation, which had the spike pair and bosses on the nose and above the eyes, as well as additional frill ornamentation. In 1997, Horner referred to the three taxa as "centrosaurine 1.", "centrosaurine 2." and "centrosaurine 3.". Horner thought he had found the mechanism driving this evolution, elaborating on ideas he had developed even before he had investigated Landslide Butte. The animals were living on a narrow strip on the east-coast of Laramidia, bordering the Western Interior Seaway and constrained in the west by the 3 to 4 kilometres (2 to 2.5 mi) high proto-Rocky Mountains. During the Bearpaw Transgression sea levels were rising, steadily reducing the width of their coastal habitat from about 300 km (200 mi) to 30 km (20 mi). This led to stronger selection pressures, the severest for Achelousaurus which lived during the phase that the coastal strip was at its narrowest. The lower number of individuals that the smaller habitat could have sustained constituted a population bottleneck, making rapid evolution possible. Increased sexual selection would have induced changes in the sexual ornamentation such as spikes, horns and bosses. A reduced environmental stress by lower sea levels on the other hand, would be typified by adaptive radiation. 
That sexual selection had indeed been the main mechanism would be proven by the fact that young individuals of all three populations were very similar: they all had two frill spikes, a small nasal horn pointing to the front, and orbital horns in the form of slightly elevated knobs. Only in the adult phase did they begin to differ. According to Horner, this also showed that the populations were very closely related. Horner did not perform an exact cladistic analysis determining the relationship between the three populations. Such an analysis calculates which evolutionary tree implies the lowest number of evolutionary changes and therefore is the most likely. He assumed that this would result in a tree in which the types were successive branches. Such a tree would, as a consequence of the method used, never show a direct ancestor-descendant relationship. Many scientists believed such a relation could never be proven anyway. Horner disagreed: he saw the gradual morphological changes as clear proof that, in this case, the evolution of one taxon into another, without a splitting of the populations, could be directly observed. Evolutionists in general would be too hesitant to recognize this. Such a transition is called anagenesis; he posited that, if the opposite, cladogenesis, could not be proven, a scientist was free to assume an anagenetic process. Basing himself on revised data, Sampson in 1995 estimated that the layers investigated represented a longer period of time than the initially assumed 500,000 years: after the deposition of Gilmore's Brachyceratops quarry, 860,000 years would have passed, and after the Einiosaurus beds 640,000 years, until the maximal extent of the Bearpaw transgression. He did not adopt Horner's hypothesis of anagenesis but assumed speciation took place, with the populations splitting. These time intervals were still short enough to indicate that the rate of speciation must have been high, which might have been true of all centrosaurines of the late Campanian. In 1996, Dodson raised two objections to Horner's hypothesis. Firstly, the possession of just one pair of main spikes seemed more basal than the presence of three pairs, as with Styracosaurus albertensis. This suggested to him that the Einiosaurus–Achelousaurus lineage was a separate branch within the Centrosaurinae. Secondly, he was concerned that Einiosaurus and Achelousaurus were a case of sexual dimorphism, one type being the males, the other the females. This would be suggested by the short geological time interval between the layers their fossils had been found in, which was estimated by him at about 250,000 years. But if the hypothesis were true, it would be perhaps the best example of fast evolution in the Dinosauria. In 2010, Horner admitted that specimen TMP 2002.76.1 seemed to indicate that Achelousaurus was not descended from Einiosaurus, as it preceded both in age, and yet had a nasal boss. But he stressed that even if the lineages split off, its ancestor might have resembled Einiosaurus. Furthermore, it might still be possible that Einiosaurus was a direct descendant of Rubeosaurus. Also, the process of rapid displacements and extinctions of species could in his opinion still be elegantly explained by a westward expansion of the Bearpaw Sea. The process of anagenesis was affirmed by John Wilson and Jack Scannella in 2016, who studied the ontogenetic changes in horned dinosaurs. They compared a small Einiosaurus specimen, MOR 456 8-8-87-1, with Achelousaurus specimen MOR 591. 
Both proved to be quite similar, with the main differences being a longer face in MOR 456 8-8-87-1, and a sharper supraorbital horncore in MOR 591. They concluded that Achelousaurus was likely the direct descendant of Einiosaurus. The more adult Einiosaurus individuals approached the Achelousaurus morphology. The differences between the two taxa would have been caused by heterochrony – differential changes in the speed the various traits developed during the lifetime of an individual. Since Wilson and colleagues found in 2020 that Stellasaurus (Horner's "Taxon A") was intermediate between Styracosaurus and Einiosaurus in morphology and stratigraphy, they could not discount that it was a transitional taxon within an anagenetic lineage. ### Classification In 1995, Sampson formally placed Achelousaurus in the Ceratopsidae, more precisely the Centrosaurinae. In all analyses, Einiosaurus and Achelousaurus are part of the clade Pachyrhinosaurini. By definition, Achelousaurus is a member of the clade Pachyrostra (or "thick-snouts"), in which it is united with Pachyrhinosaurus. In 2010, Gregory S. Paul assigned A. horneri to the genus Centrosaurus, as C. horneri. This has found no acceptance among other researchers, with subsequent taxonomic assessments invariably keeping the generic name Achelousaurus. ### Phylogeny Sampson felt, in 1995, that there was not enough evidence to conclude that Achelousaurus was a direct descendant of Einiosaurus. Unlike Horner, he decided to perform a cladistic analysis to establish a phylogeny. This showed an evolutionary tree wherein Achelousaurus split off between Einiosaurus and Pachyrhinosaurus, as Horner had predicted. Contrary to Horner's claim, Styracosaurus albertensis could not have been a direct ancestor, as it was a sister species of Centrosaurus in Sampson's analysis. Subsequent studies have sought to determine the precise relationships within this part of the evolutionary tree, with conflicting results regarding the question whether Styracosaurus albertensis or Einiosaurus might have been in the direct line of ascent to Achelousaurus. In 2005, an analysis by Michael Ryan and Anthony Russell found Styracosaurus more closely related to Achelousaurus than to Centrosaurus. This was confirmed by analyses by Ryan in 2007, Nicholas Longrich in 2010, and Xu et al. in 2010. The same year Horner and Andrew T. McDonald moved Styracosaurus ovatus to its own genus, Rubeosaurus, finding it a sister species of Einiosaurus, while Styracosaurus albertensis was again located on the Centrosaurus branch. They also assigned specimen MOR 492, the basis of "Taxon A", to Rubeosaurus. In 2011, a subsequent study by Andrew T. McDonald in this respect replicated the outcome of his previous one, as did a publication by Andre Farke et al. In 2017, J.P. Wilson and Ryan further complicated the issue, concluding that MOR 492 ("Taxon A") was not referable to Rubeosaurus and announcing that yet another genus would be named for it. Wilson and colleagues moved MOR 492 to the new genus Stellasaurus in 2020, which therefore corresponds to "Taxon A". Their study found Rubeosaurus ovatus to be the sister species of Styracosaurus albertensis, and concluded Rubeosaurus to be synonymous with Styracosaurus. Before Achelousaurus was described, Pachyrhinosaurus canadensis had been considered a solitary aberrant form among centrosaurines, set apart from them by its unusual bosses. 
Achelousaurus gave evolutionary context to the Canadian species, while expanding the temporal and geographical range for what came to be seen as "pachyrhinosaurs." In all analyses, Achelousaurus and Pachyrhinosaurus were sister groups. In 2008, another closely related species was named, Pachyrhinosaurus lakustai. In that study, the term "Pachyrhinosaurs" was used for the clade consisting of Achelousaurus and Pachyrhinosaurus. When Pachyrhinosaurus perotorum was described in 2012, the clade name Pachyrostra was coined, uniting the two genera; Achelousaurus is the basalmost pachyrostran. Shared derived traits (or synapomorphies) of the group are an enlarged nasal ornamentation and a change of the nasal and brow horns into bosses. At the end of the Campanian, there seems to have been a trend of pachyrostrans replacing other centrosaurines. Also in 2012, the clade Pachyrhinosaurini was named, consisting of species more closely related to Pachyrhinosaurus or Achelousaurus than to Centrosaurus. Apart from Einiosaurus and Rubeosaurus, this included Sinoceratops and Xenoceratops, according to a 2013 study. Cladistic analyses develop gradually, reflecting new discoveries and insights. Their results can be shown in a cladogram, with the relationships found ordered in an evolutionary tree. The cladogram below shows the phylogenetic position of Achelousaurus in a cladogram from Wilson and colleagues, 2020. ## Paleobiology ### Function of skull ornamentation In 1995, Sampson noted that earlier studies had found that the horns and frills of ceratopsians most likely had a function in intraspecific display and combat, and that these features would therefore have resulted from sexual selection for successful mating. Likewise, in 1997 Horner concluded that such ornamentation was used by males to establish dominance and that females would have preferred well-equipped males as their offspring would then inherit these traits, conferring a reproduction benefit. Dodson thought that in the Centrosaurinae in general the display value of the frill had been reduced compared to the nasal and supraorbital ornamentation. Sampson in 1995 rejected the possibility that the difference in skull ornamentation between Einiosaurus and Achelousaurus represented sexual dimorphism, for three reasons. Firstly, the extensive Einiosaurus bone beds did not contain any specimens with bosses, as would have been expected if one of the sexes had them. Secondly, Einiosaurus and Achelousaurus are found in strata of a different age. Thirdly, in a situation of sexual dimorphism usually only one of the sexes shows exaggerated secondary sexual characters. Einiosaurus and Achelousaurus however, each have developed a distinct set of such traits. Hieronymus, in 2009, concluded that the nasal and supraorbital bosses were used for butting or ramming the head or the flank of a rival. The bone structure indicates that the bosses were covered by cornified pads as in modern muskoxen, suggesting dominance fights similar to those of members of the Caprinae subfamily. In the latter group, an evolutionary transition can be observed, where the originally straight horns become more robust, padded, and increasingly curved downwards. The evolution from horncores into bosses in Centrosaurinae would likewise have reflected a change in fighting technique, from clashing to high-energy head-butting. Head-butting would have been an expensive and risky behavior. Opponents would have engaged this way only after assessing each other's strengths visually. 
For this reason, Hieronymus considered it unlikely that the bosses served for species recognition, as this was already guaranteed by the innate species-specific display rituals preceding a real – instead of a ritual – fight. The bosses would have evolved for actual combat, part of a social selection in which individuals competed for scarce resources such as mates, food and breeding grounds. Previously it had been suggested that the fusion of the first three neck vertebrae, such as seen in the mature specimen MOR 571, might have been a paleopathology, an instance of the disease spondyloarthropathy, but in 1997 it was concluded that it was more likely a normal ontogenetic trait, the vertebrae growing together to form a so-called "syncervical" to support the heavy head. All three main known specimens have syncervicals consisting of three fused neck vertebrae; the trait could have been inherited from a smaller ancestor using a stiffer neck for burrowing or food acquisition. ### Social behavior It has been claimed that ceratopsian dinosaurs were herding animals, due to the large number of known bone beds containing multiple members of the same ceratopsian species. In 2010, Hunt and Farke pointed out that this was mainly true for centrosaurine ceratopsians. Horner assumed that the horned dinosaurs at Landslide Butte lived in herds which had been killed by drought or disease. Dodson concluded that the fact that the Achelousaurus bone beds were monospecific (containing only one species) confirmed the existence of herds. ### Metabolism There has long been debate about the thermoregulation of dinosaurs, centered around whether they were ectotherms ("cold-blooded") or endotherms ("warm-blooded"). Mammals and birds are homeothermic endotherms, which generate their own body heat and have a high metabolism, whereas reptiles are heterothermic ectotherms, which receive most of their body heat from their surroundings. A 1996 study examined the oxygen isotopes from bone phosphates of animals from the Two Medicine Formation, including the juvenile Achelousaurus specimen MOR 591. δ<sup>18</sup>O values of phosphate in vertebrate bones depend on the δ<sup>18</sup>O values in their body water and the temperature at which the bones were deposited, making it possible to measure fluctuations in temperature for each bone of an individual. The study analyzed seasonal variations in body temperature and differences in temperature between skeletal regions, to determine whether the dinosaurs maintained their temperature seasonally. A varanid lizard fossil sampled for the study showed isotopic variation consistent with it being a heterothermic ectotherm. The variation of the dinosaurs, including Achelousaurus, was consistent with them being homeothermic endotherms. The metabolic rate of these dinosaurs was likely not as high as that of modern mammals and birds, and they may have been intermediate endotherms. ## Paleoenvironment Achelousaurus is known from the Two Medicine Formation, which preserves coastal sediments dating from the Campanian stage of the Late Cretaceous Period, between 83 and 74 million years ago. Achelousaurus specimens are found in the highest levels of the formation, probably closer to the end of that timeframe, 74 million years ago. The Two Medicine Formation is typified by a warm semiarid climate. Its layers were deposited on the east coast of the Laramidia island continent (which consisted of western North America). 
The high cordillera in the west, combined with predominantly western winds, would have caused a rain shadow, limiting annual rainfall. Rain would mainly have fallen during the summer, when convection storms flooded the landscape. The climate would thus also have been very seasonal, with a long dry season and a short wet season. Vegetation would have been sparse and not very varied. In such conditions, horned dinosaurs would have been dependent on oxbow lakes for a continuous supply of water and food – the main river channels tending to run dry earlier – and perished in them during severe droughts when the animals concentrated around the last watering holes, causing bone beds to form. The brown paleosol in which the horned dinosaurs were found – a mixture of clay and coalified wood fragments – resembles that of modern seasonally dry swamps. The surrounding vegetation might have consisted of about 25 m (80 ft) high conifer trees. Achelousaurus ate much smaller plants, though: a 2013 study determined that ceratopsid herbivores on Laramidia were restricted to feeding on vegetation with a height of 1 m (3.3 ft) or lower. More or less contemporary dinosaur genera of the area included Prosaurolophus, Scolosaurus, Hypacrosaurus, Einiosaurus and tyrannosaurids of uncertain classification. As proven by tooth marks, horned dinosaur fossils in the Landslide Butte Field Area had been scavenged by a large theropod predator, which Rogers suggested was Albertosaurus. The exact composition of the fauna Achelousaurus was part of is uncertain, as its fossils have not been discovered in direct association with other taxa. Its intermediate anagenetic position suggests that Achelousaurus shared its habitat with forms found roughly in the middle or at the end of the time range of its formation. As with horned dinosaurs, Horner assumed he had found transitional taxa in other dinosaur groups of the Two Medicine Formation. One of these was a form in between Lambeosaurus and Hypacrosaurus; in 1994 he would name it Hypacrosaurus stebingeri. Today, Hypacrosaurus stebingeri is no longer seen as having evolved through anagenesis because autapomorphies of the species have been identified. Horner saw some pachycephalosaur skulls as indicative of a taxon in between Stegoceras and Pachycephalosaurus; these have not been consistently referred to a new genus. Finally, Horner thought there was a taxon present that was transitional between Daspletosaurus and Tyrannosaurus. In 2017, tyrannosaurid remains from the Two Medicine Formation were named as a new species of Daspletosaurus: Daspletosaurus horneri. The 2017 study considered it plausible that D. horneri was a direct descendant of D. torosus in a process of anagenesis, but rejected the possibility that D. horneri was the ancestor of Tyrannosaurus. Other ceratopsians from the Two Medicine Formation include Einiosaurus and Stellasaurus. In addition, remains of other indeterminate and dubious centrosaurines, including Brachyceratops, are known from the formation and, though they may represent younger stages of the three valid genera, this is not possible to demonstrate. Whereas Horner assumed that Einiosaurus and Achelousaurus were separate in time, in 2010 Donald M. Henderson considered it possible that at least their descendants or ancestors were overlapping or sympatric and thus would have competed for food sources unless there had been niche partitioning. 
The skull of Achelousaurus was more than twice as strong as that of Einiosaurus in bending strength and torsion resistance. This might have indicated a difference in diet to avoid competition. The bite strength of Achelousaurus, measured as an ultimate tensile strength, was 30.5 newtons per square millimeter (N/mm<sup>2</sup>) at the maxillary tooth row and 18 N/mm<sup>2</sup> at the beak. Wilson and colleagues found that the Two Medicine centrosaurines were separated stratigraphically and therefore possibly not contemporaneous. However, in 2021 a study by Wilson and Scannella pointed out that specimen MOR 591 was of a younger individual age than the Einiosaurus skull MOR 456 8-8-87-1, but of the same size. If MOR 591 could indeed be referred to Achelousaurus, this might indicate that the genus reached its adult size more quickly. The indeterminate specimen TMP 2002.76.1 is from the Dinosaur Park Formation and, if it belongs to Achelousaurus, the genus would be the stratigraphically oldest known pachyrhinosaurine taxon. Achelousaurus would then also be the only Campanian ceratopsid known from more than one formation. Both animals occur right below the marine shales of the Bearpaw Formation, but due to longitudinal differences, TMP 2002.76.1 is about 500,000 years older than the Achelousaurus fossils from the Two Medicine Formation. ## See also - Timeline of ceratopsian research
43,460
Han dynasty
1,173,180,433
Imperial dynasty in China from 202 BC to 220 AD
[ "1st century BC in China", "1st century in China", "200s BC establishments", "206 BC", "220 disestablishments", "2nd century BC in China", "2nd century in China", "3rd century BC in China", "3rd-century BC establishments in China", "3rd-century disestablishments in China", "Dynasties of China", "Former countries in Chinese history", "Han dynasty", "States and territories disestablished in the 3rd century", "States and territories established in the 3rd century BC" ]
The Han dynasty (UK: /ˈhæn/, US: /ˈhɑːn/; Chinese: 汉朝; pinyin: Hàncháo) was an imperial dynasty of China (202 BC – 9 AD, 25–220 AD), established by Liu Bang and ruled by the House of Liu. The dynasty was preceded by the short-lived Qin dynasty (221–207 BC) and a warring interregnum known as the Chu–Han contention (206–202 BC), and it was succeeded by the Three Kingdoms period (220–280 AD). The dynasty was briefly interrupted by the Xin dynasty (9–23 AD) established by usurping regent Wang Mang, and is thus separated into two periods—the Western Han (202 BC – 9 AD) and the Eastern Han (25–220 AD). Spanning over four centuries, the Han dynasty is considered a golden age in Chinese history, and it has influenced the identity of the Chinese civilization ever since. Modern China's majority ethnic group refers to themselves as the "Han people" or "Han Chinese". The Sinitic language and written Chinese are referred to respectively as the "Han language" and "Han characters". The emperor was at the pinnacle of Han society. He presided over the Han government but shared power with both the nobility and appointed ministers who came largely from the scholarly gentry class. The Han Empire was divided into areas directly controlled by the central government called commanderies, as well as a number of semi-autonomous kingdoms. These kingdoms gradually lost all vestiges of their independence, particularly following the Rebellion of the Seven States. From the reign of Emperor Wu (r. 141–87 BC) onward, the Chinese court officially sponsored Confucianism in education and court politics, synthesized with the cosmology of later scholars such as Dong Zhongshu. This policy endured until the fall of the Qing dynasty in 1912 AD. The Han dynasty saw an age of economic prosperity and witnessed a significant growth of the money economy first established during the Zhou dynasty (c. 1050–256 BC). The coinage issued by the central government mint in 119 BC remained the standard coinage of China until the Tang dynasty (618–907 AD). The period saw a number of limited institutional innovations. To finance its military campaigns and the settlement of newly conquered frontier territories, the Han government nationalized the private salt and iron industries in 117 BC, though these government monopolies were later repealed during the Eastern Han dynasty. Science and technology during the Han period saw significant advances, including the process of papermaking, the nautical steering ship rudder, the use of negative numbers in mathematics, the raised-relief map, the hydraulic-powered armillary sphere for astronomy, and a seismometer employing an inverted pendulum that could be used to discern the cardinal direction of distant earthquakes. The Han dynasty is known for the many conflicts it had with the Xiongnu, a nomadic steppe confederation to the dynasty's north. The Xiongnu initially had the upper hand in these conflicts. They defeated the Han in 200 BC and forced the Han to submit as a de facto inferior and vassal partner for several decades, while continuing their military raids on the dynasty's borders. This changed in 133 BC, during the reign of Emperor Wu, when Han forces began a series of intensive military campaigns and operations against the Xiongnu. The Han ultimately defeated the Xiongnu in these campaigns, and the Xiongnu were forced to accept vassal status as Han tributaries. 
Additionally, the campaigns brought the Hexi Corridor and the Tarim Basin of Central Asia under Han control, split the Xiongnu into two separate confederations, and helped establish the vast trade network known as the Silk Road, which reached as far as the Mediterranean world. The territories north of Han's borders were later overrun by the nomadic Xianbei confederation. Emperor Wu also launched successful military expeditions in the south, annexing Nanyue in 111 BC and Dian in 109 BC. He further expanded Han territory into the northern Korean Peninsula, where Han forces conquered Gojoseon and established the Xuantu and Lelang Commanderies in 108 BC. After 92 AD, the palace eunuchs increasingly involved themselves in the dynasty's court politics, engaging in violent power struggles between the various consort clans of the empresses and empresses dowager, causing the Han's ultimate downfall. Imperial authority was also seriously challenged by large Daoist religious societies which instigated the Yellow Turban Rebellion and the Five Pecks of Rice Rebellion. Following the death of Emperor Ling (r. 168–189 AD), the palace eunuchs suffered wholesale massacre by military officers, allowing members of the aristocracy and military governors to become warlords and divide the empire. When Cao Pi, king of Wei, usurped the throne from Emperor Xian, the Han dynasty ceased to exist. ## Etymology According to the Records of the Grand Historian, after the collapse of the Qin dynasty the hegemon Xiang Yu appointed Liu Bang as prince of the small fief of Hanzhong, named after its location on the Han River (in modern southwest Shaanxi). Following Liu Bang's victory in the Chu–Han Contention, the resulting Han dynasty was named after the Hanzhong fief. ## History ### Western Han China's first imperial dynasty was the Qin dynasty (221–207 BC). The Qin united the Chinese Warring States by conquest, but their regime became unstable after the death of the first emperor Qin Shi Huang. Within four years, the dynasty's authority had collapsed in a rebellion. Two former rebel leaders, Xiang Yu (d. 202 BC) of Chu and Liu Bang (d. 195 BC) of Han, engaged in a war to decide who would become hegemon of China, which had fissured into 18 kingdoms, each claiming allegiance to either Xiang Yu or Liu Bang. Although Xiang Yu proved to be an effective commander, Liu Bang defeated him at the Battle of Gaixia (202 BC), in modern-day Anhui. Liu Bang assumed the title "emperor" (huangdi) at the urging of his followers and is known posthumously as Emperor Gaozu (r. 202–195 BC). Chang'an (known today as Xi'an) was chosen as the new capital of the reunified empire under Han. At the beginning of the Western Han (pinyin: Xīhàn), also known as the Former Han (pinyin: Qiánhàn) dynasty, thirteen centrally-controlled commanderies—including the capital region—existed in the western third of the empire, while the eastern two-thirds were divided into ten semi-autonomous kingdoms. To placate his prominent commanders from the war with Chu, Emperor Gaozu enfeoffed some of them as kings. By 196 BC, the Han court had replaced all but one of these kings (the exception being in Changsha) with royal Liu family members, since the loyalty of non-relatives to the throne was questioned. 
After several insurrections by Han kings—the largest being the Rebellion of the Seven States in 154 BC—the imperial court enacted a series of reforms beginning in 145 BC limiting the size and power of these kingdoms and dividing their former territories into new centrally-controlled commanderies. Kings were no longer able to appoint their own staff; this duty was assumed by the imperial court. Kings became nominal heads of their fiefs and collected a portion of tax revenues as their personal incomes. The kingdoms were never entirely abolished and existed throughout the remainder of Western and Eastern Han. To the north of China proper, the nomadic Xiongnu chieftain Modu Chanyu (r. 209–174 BC) conquered various tribes inhabiting the eastern portion of the Eurasian Steppe. By the end of his reign, he controlled Manchuria, Mongolia, and the Tarim Basin, subjugating over twenty states east of Samarkand. Emperor Gaozu was troubled about the abundant Han-manufactured iron weapons traded to the Xiongnu along the northern borders, and he established a trade embargo against the group. In retaliation, the Xiongnu invaded what is now Shanxi province, where they defeated the Han forces at Baideng in 200 BC. After negotiations, the heqin agreement in 198 BC nominally held the leaders of the Xiongnu and the Han as equal partners in a royal marriage alliance, but the Han were forced to send large amounts of tribute items such as silk clothes, food, and wine to the Xiongnu. Despite the tribute and negotiation between Laoshang Chanyu (r. 174–160 BC) and Emperor Wen (r. 180–157 BC) to reopen border markets, many of the Chanyu's Xiongnu subordinates chose not to obey the treaty and periodically raided Han territories south of the Great Wall for additional goods. In a court conference assembled by Emperor Wu (r. 141–87 BC) in 135 BC, the majority consensus of the ministers was to retain the heqin agreement. Emperor Wu accepted this, despite continuing Xiongnu raids. However, a court conference the following year convinced the majority that a limited engagement at Mayi involving the assassination of the Chanyu would throw the Xiongnu realm into chaos and benefit the Han. When this plot failed in 133 BC, Emperor Wu launched a series of massive military invasions into Xiongnu territory. The assault culminated in 119 BC at the Battle of Mobei, when Han commanders Huo Qubing (d. 117 BC) and Wei Qing (d. 106 BC) forced the Xiongnu court to flee north of the Gobi Desert, and Han forces reached as far north as Lake Baikal. After Wu's reign, Han forces continued to fight the Xiongnu. The Xiongnu leader Huhanye Chanyu (r. 58–31 BC) finally submitted to the Han as a tributary vassal in 51 BC. Huhanye's rival claimant to the throne, Zhizhi Chanyu (r. 56–36 BC), was killed by Han forces under Chen Tang and Gan Yanshou (甘延壽) at the Battle of Zhizhi, in modern Taraz, Kazakhstan. In 121 BC, Han forces expelled the Xiongnu from a vast territory spanning the Hexi Corridor to Lop Nur. They repelled a joint Xiongnu-Qiang invasion of this northwestern territory in 111 BC. In that same year, the Han court established four new frontier commanderies in this region to consolidate their control: Jiuquan, Zhangyi, Dunhuang, and Wuwei. The majority of people on the frontier were soldiers. On occasion, the court forcibly moved peasant farmers to new frontier settlements, along with government-owned slaves and convicts who performed hard labor. 
The court also encouraged commoners, such as farmers, merchants, landowners, and hired laborers, to voluntarily migrate to the frontier. Even before the Han's expansion into Central Asia, diplomat Zhang Qian's travels from 139 to 125 BC had established Chinese contacts with many surrounding civilizations. Zhang encountered Dayuan (Fergana), Kangju (Sogdiana), and Daxia (Bactria, formerly the Greco-Bactrian Kingdom); he also gathered information on Shendu (Indus River valley of North India) and Anxi (the Parthian Empire). All of these countries eventually received Han embassies. These connections marked the beginning of the Silk Road trade network that extended to the Roman Empire, bringing Han items like silk to Rome and Roman goods such as glasswares to China. From roughly 115 to 60 BC, Han forces fought the Xiongnu over control of the oasis city-states in the Tarim Basin. The Han was eventually victorious and established the Protectorate of the Western Regions in 60 BC, which dealt with the region's defense and foreign affairs. The Han also expanded southward. The naval conquest of Nanyue in 111 BC expanded the Han realm into what are now modern Guangdong, Guangxi, and northern Vietnam. Yunnan was brought into the Han realm with the conquest of the Dian Kingdom in 109 BC, followed by parts of the Korean Peninsula with the Han conquest of Gojoseon and colonial establishments of Xuantu Commandery and Lelang Commandery in 108 BC. In China's first known nationwide census taken in 2 AD, the population was registered as having 57,671,400 individuals in 12,366,470 households. To pay for his military campaigns and colonial expansion, Emperor Wu nationalized several private industries. He created central government monopolies administered largely by former merchants. These monopolies included salt, iron, and liquor production, as well as bronze-coin currency. The liquor monopoly lasted only from 98 to 81 BC, and the salt and iron monopolies were eventually abolished in the early Eastern Han. The issuing of coinage remained a central government monopoly throughout the rest of the Han dynasty. The government monopolies were eventually repealed when a political faction known as the Reformists gained greater influence in the court. The Reformists opposed the Modernist faction that had dominated court politics in Emperor Wu's reign and during the subsequent regency of Huo Guang (d. 68 BC). The Modernists argued for an aggressive and expansionary foreign policy supported by revenues from heavy government intervention in the private economy. The Reformists, however, overturned these policies, favoring a cautious, non-expansionary approach to foreign policy, frugal budget reform, and lower tax-rates imposed on private entrepreneurs. ### Wang Mang's reign and civil war Wang Zhengjun (71 BC – 13 AD) was first empress, then empress dowager, and finally grand empress dowager during the reigns of the Emperors Yuan (r. 49–33 BC), Cheng (r. 33–7 BC), and Ai (r. 7–1 BC), respectively. During this time, a succession of her male relatives held the title of regent. Following the death of Ai, Wang Zhengjun's nephew Wang Mang (45 BC – 23 AD) was appointed regent as Marshall of State on 16 August under Emperor Ping (r. 1 BC – 6 AD). When Ping died on 3 February 6 AD, Ruzi Ying (d. 25 AD) was chosen as the heir and Wang Mang was appointed to serve as acting emperor for the child. Wang promised to relinquish his control to Liu Ying once he came of age. 
Despite this promise, and against protest and revolts from the nobility, Wang Mang claimed on 10 January that the divine Mandate of Heaven called for the end of the Han dynasty and the beginning of his own: the Xin dynasty (9–23 AD). Wang Mang initiated a series of major reforms that were ultimately unsuccessful. These reforms included outlawing slavery, nationalizing land to equally distribute between households, and introducing new currencies, a change which debased the value of coinage. Although these reforms provoked considerable opposition, Wang's regime met its ultimate downfall with the massive floods of c. 3 AD and 11 AD. Gradual silt buildup in the Yellow River had raised its water level and overwhelmed the flood control works. The Yellow River split into two new branches: one emptying to the north and the other to the south of the Shandong Peninsula, though Han engineers managed to dam the southern branch by 70 AD. The flood dislodged thousands of peasant farmers, many of whom joined roving bandit and rebel groups such as the Red Eyebrows to survive. Wang Mang's armies were incapable of quelling these enlarged rebel groups. Eventually, an insurgent mob forced their way into the Weiyang Palace and killed Wang Mang. The Gengshi Emperor (r. 23–25 AD), a descendant of Emperor Jing (r. 157–141 BC), attempted to restore the Han dynasty and occupied Chang'an as his capital. However, he was overwhelmed by the Red Eyebrow rebels who deposed, assassinated, and replaced him with the puppet monarch Liu Penzi. Gengshi's distant cousin Liu Xiu, known posthumously as Emperor Guangwu (r. 25–57 AD), after distinguishing himself at the Battle of Kunyang in 23 AD, was urged to succeed Gengshi as emperor. Under Guangwu's rule the Han Empire was restored. Guangwu made Luoyang his capital in 25 AD, and by 27 AD his officers Deng Yu and Feng Yi had forced the Red Eyebrows to surrender and executed their leaders for treason. From 26 until 36 AD, Emperor Guangwu had to wage war against other regional warlords who claimed the title of emperor; when these warlords were defeated, China reunified under the Han. The period between the foundation of the Han dynasty and Wang Mang's reign is known as the Western Han (pinyin: Xīhàn) or Former Han (pinyin: Qiánhàn) (206 BC – 9 AD). During this period the capital was at Chang'an (modern Xi'an). From the reign of Guangwu the capital was moved eastward to Luoyang. The era from his reign until the fall of Han is known as the Eastern Han or Later Han (25–220 AD). ### Eastern Han The Eastern Han (pinyin: Dōnghàn), also known as the Later Han (pinyin: Hòuhàn), formally began on 5 August AD 25, when Liu Xiu became Emperor Guangwu of Han. During the widespread rebellion against Wang Mang, the state of Goguryeo was free to raid Han's Korean commanderies; Han did not reaffirm its control over the region until AD 30. The Trưng Sisters of Vietnam rebelled against Han in AD 40. Their rebellion was crushed by Han general Ma Yuan (d. AD 49) in a campaign from AD 42–43. Wang Mang renewed hostilities against the Xiongnu, who were estranged from Han until their leader Bi (比), a rival claimant to the throne against his cousin Punu (蒲奴), submitted to Han as a tributary vassal in AD 50. This created two rival Xiongnu states: the Southern Xiongnu led by Bi, an ally of Han, and the Northern Xiongnu led by Punu, an enemy of Han. 
During the turbulent reign of Wang Mang, China lost control over the Tarim Basin, which was conquered by the Northern Xiongnu in AD 63 and used as a base to invade the Hexi Corridor in Gansu. Dou Gu (d. 88 AD) defeated the Northern Xiongnu at the Battle of Yiwulu in AD 73, evicting them from Turpan and chasing them as far as Lake Barkol before establishing a garrison at Hami. After the new Protector General of the Western Regions Chen Mu (d. AD 75) was killed by allies of the Xiongnu in Karasahr and Kucha, the garrison at Hami was withdrawn. At the Battle of Ikh Bayan in AD 89, Dou Xian (d. AD 92) defeated the Northern Xiongnu chanyu who then retreated into the Altai Mountains. After the Northern Xiongnu fled into the Ili River valley in AD 91, the nomadic Xianbei occupied the area from the borders of the Buyeo Kingdom in Manchuria to the Ili River of the Wusun people. The Xianbei reached their apogee under Tanshihuai (檀石槐) (d. AD 180), who consistently defeated Chinese armies. However, Tanshihuai's confederation disintegrated after his death. Ban Chao (d. AD 102) enlisted the aid of the Kushan Empire, occupying the area of modern India, Pakistan, Afghanistan, and Tajikistan, to subdue Kashgar and its ally Sogdiana. When a request by Kushan ruler Vima Kadphises (r. c. 90 – c. 100 AD) for a marriage alliance with the Han was rejected in AD 90, he sent his forces to Wakhan (Afghanistan) to attack Ban Chao. The conflict ended with the Kushans withdrawing because of lack of supplies. In AD 91, the office of Protector General of the Western Regions was reinstated when it was bestowed on Ban Chao. Foreign travelers to Eastern-Han China included Buddhist monks who translated works into Chinese, such as An Shigao from Parthia, and Lokaksema from Kushan-era Gandhara, India. In addition to tributary relations with the Kushans, the Han Empire received gifts from the Parthian Empire, from a king in modern Burma, from a ruler in Japan, and initiated an unsuccessful mission to Daqin (Rome) in AD 97 with Gan Ying as emissary. A Roman embassy of Emperor Marcus Aurelius (r. 161–180 AD) is recorded in the Weilüe and Hou Hanshu to have reached the court of Emperor Huan of Han (r. 146–168 AD) in AD 166, yet Rafe de Crespigny asserts that this was most likely a group of Roman merchants. In addition to Roman glasswares and coins found in China, Roman medallions from the reign of Antoninus Pius and his adopted son Marcus Aurelius have been found at Óc Eo in Vietnam. This was near the commandery of Rinan (also Jiaozhi) where Chinese sources claim the Romans first landed, as well as embassies from Tianzhu (in northern India) in the years 159 and 161. Óc Eo is also thought to be the port city "Cattigara" described by Ptolemy in his Geography (c. 150 AD) as lying east of the Golden Chersonese (Malay Peninsula) along the Magnus Sinus (i.e. Gulf of Thailand and South China Sea), where a Greek sailor had visited. Emperor Zhang's (r. 75–88 AD) reign came to be viewed by later Eastern Han scholars as the high point of the dynastic house. Subsequent reigns were increasingly marked by eunuch intervention in court politics and their involvement in the violent power struggles of the imperial consort clans. In 92 AD, with the aid of the eunuch Zheng Zhong (d. 107 AD), Emperor He (r. 88–105 AD) had Empress Dowager Dou (d. 97 AD) put under house arrest and her clan stripped of power. This was in revenge for Dou's purging of the clan of his natural mother—Consort Liang—and then concealing her identity from him. 
After Emperor He's death, his wife Empress Deng Sui (d. 121 AD) managed state affairs as the regent empress dowager during a turbulent financial crisis and widespread Qiang rebellion that lasted from 107 to 118 AD. When Empress Dowager Deng died, Emperor An (r. 106–125 AD) was convinced by the accusations of the eunuchs Li Run (李閏) and Jiang Jing (江京) that Deng and her family had planned to depose him. An dismissed Deng's clan members from office, exiled them, and forced many to commit suicide. After An's death, his wife, Empress Dowager Yan (d. 126 AD) placed the child Marquess of Beixiang on the throne in an attempt to retain power within her family. However, palace eunuch Sun Cheng (d. 132 AD) masterminded a successful overthrow of her regime to enthrone Emperor Shun of Han (r. 125–144 AD). Yan was placed under house arrest, her relatives were either killed or exiled, and her eunuch allies were slaughtered. The regent Liang Ji (d. 159 AD), brother of Empress Liang Na (d. 150 AD), had the brother-in-law of Consort Deng Mengnü (later empress) (d. 165 AD) killed after Deng Mengnü resisted Liang Ji's attempts to control her. Afterward, Emperor Huan employed eunuchs to depose Liang Ji, who was then forced to commit suicide. Students from the Imperial University organized a widespread student protest against the eunuchs of Emperor Huan's court. Huan further alienated the bureaucracy when he initiated grandiose construction projects and hosted thousands of concubines in his harem at a time of economic crisis. Palace eunuchs imprisoned the official Li Ying (李膺) and his associates from the Imperial University on a dubious charge of treason. In 167 AD, the Grand Commandant Dou Wu (d. 168 AD) convinced his son-in-law, Emperor Huan, to release them. However, the emperor permanently barred Li Ying and his associates from serving in office, marking the beginning of the Partisan Prohibitions. Following Huan's death, Dou Wu and the Grand Tutor Chen Fan (d. 168 AD) attempted a coup d'état against the eunuchs Hou Lan (d. 172 AD), Cao Jie (d. 181 AD), and Wang Fu (王甫). When the plot was uncovered, the eunuchs arrested Empress Dowager Dou (d. 172 AD) and Chen Fan. General Zhang Huan (張奐) favored the eunuchs. He and his troops confronted Dou Wu and his retainers at the palace gate where each side shouted accusations of treason against the other. When the retainers gradually deserted Dou Wu, he was forced to commit suicide. Under Emperor Ling (r. 168–189 AD) the eunuchs had the partisan prohibitions renewed and expanded, while also auctioning off top government offices. Many affairs of state were entrusted to the eunuchs Zhao Zhong (d. 189 AD) and Zhang Rang (d. 189 AD) while Emperor Ling spent much of his time roleplaying with concubines and participating in military parades. ### End of the Han dynasty The Partisan Prohibitions were repealed during the Yellow Turban Rebellion and Five Pecks of Rice Rebellion in 184 AD, largely because the court did not want to continue to alienate a significant portion of the gentry class who might otherwise join the rebellions. The Yellow Turbans and Five-Pecks-of-Rice adherents belonged to two different hierarchical Daoist religious societies led by faith healers Zhang Jue (d. 184 AD) and Zhang Lu (d. 216 AD), respectively. Zhang Lu's rebellion, in modern northern Sichuan and southern Shaanxi, was not quelled until 215 AD. 
Zhang Jue's massive rebellion across eight provinces was annihilated by Han forces within a year; however, the following decades saw much smaller recurrent uprisings. Although the Yellow Turbans were defeated, many generals appointed during the crisis never disbanded their assembled militia forces and used these troops to amass power outside of the collapsing imperial authority. General-in-Chief He Jin (d. 189 AD), half-brother to Empress He (d. 189 AD), plotted with Yuan Shao (d. 202 AD) to overthrow the eunuchs by having several generals march to the outskirts of the capital. There, in a written petition to Empress He, they demanded the eunuchs' execution. After a period of hesitation, Empress He consented. When the eunuchs discovered this, however, they had her brother He Miao (何苗) rescind the order. The eunuchs assassinated He Jin on September 22, 189 AD. Yuan Shao then besieged Luoyang's Northern Palace while his brother Yuan Shu (d. 199 AD) besieged the Southern Palace. On September 25 both palaces were breached and approximately two thousand eunuchs were killed. Zhang Rang had previously fled with Emperor Shao (r. 189 AD) and his brother Liu Xie—the future Emperor Xian of Han (r. 189–220 AD). While being pursued by the Yuan brothers, Zhang committed suicide by jumping into the Yellow River. General Dong Zhuo (d. 192 AD) found the young emperor and his brother wandering in the countryside. He escorted them safely back to the capital and was made Minister of Works, taking control of Luoyang and forcing Yuan Shao to flee. After Dong Zhuo demoted Emperor Shao and promoted his brother Liu Xie as Emperor Xian, Yuan Shao led a coalition of former officials and officers against Dong, who burned Luoyang to the ground and resettled the court at Chang'an in May 191 AD. Dong Zhuo later poisoned Emperor Shao. Dong was killed by his adopted son Lü Bu (d. 198 AD) in a plot hatched by Wang Yun (d. 192 AD). Emperor Xian fled from Chang'an in 195 AD to the ruins of Luoyang. Xian was persuaded by Cao Cao (155–220 AD), then Governor of Yan Province in modern western Shandong and eastern Henan, to move the capital to Xuchang in 196 AD. Yuan Shao challenged Cao Cao for control over the emperor. Yuan's power was greatly diminished after Cao defeated him at the Battle of Guandu in 200 AD. After Yuan died, Cao killed Yuan Shao's son Yuan Tan (173–205 AD), who had fought with his brothers over the family inheritance. His brothers Yuan Shang and Yuan Xi were killed in 207 AD by Gongsun Kang (d. 221 AD), who sent their heads to Cao Cao. After Cao's defeat at the naval Battle of Red Cliffs in 208 AD, China was divided into three spheres of influence, with Cao Cao dominating the north, Sun Quan (182–252 AD) dominating the south, and Liu Bei (161–223 AD) dominating the west. Cao Cao died in March 220 AD. By December his son Cao Pi (187–226 AD) had Emperor Xian relinquish the throne to him and is known posthumously as Emperor Wen of Wei. This formally ended the Han dynasty and initiated an age of conflict between three states: Cao Wei, Eastern Wu, and Shu Han. ## Culture and society ### Social class In the hierarchical social order, the emperor was at the apex of Han society and government. However, the emperor was often a minor, ruled over by a regent such as the empress dowager or one of her male relatives. Ranked immediately below the emperor were the kings who were of the same Liu family clan. 
The rest of society, including nobles lower than kings and all commoners excluding slaves, belonged to one of twenty ranks (ershi gongcheng 二十公乘). Each successive rank gave its holder greater pensions and legal privileges. The highest rank, of full marquess, came with a state pension and a territorial fiefdom. Holders of the rank immediately below, that of ordinary marquess, received a pension, but had no territorial rule. Officials who served in government belonged to the wider commoner social class and were ranked just below nobles in social prestige. The highest government officials could be enfeoffed as marquesses. By the Eastern Han period, local elites of unattached scholars, teachers, students, and government officials began to identify themselves as members of a larger, nationwide gentry class with shared values and a commitment to mainstream scholarship. When the government became noticeably corrupt in mid-to-late Eastern Han, many gentrymen even considered the cultivation of morally-grounded personal relationships more important than serving in public office. The farmer, or specifically the small landowner-cultivator, was ranked just below scholars and officials in the social hierarchy. Other agricultural cultivators were of a lower status, such as tenants, wage laborers, and slaves. The Han dynasty made adjustments to slavery in China and saw an increase in agricultural slaves. Artisans, technicians, tradespeople, and craftsmen had a legal and socioeconomic status between that of owner-cultivator farmers and common merchants. State-registered merchants, who were forced by law to wear white-colored clothes and pay high commercial taxes, were considered by the gentry as social parasites with a contemptible status. These were often petty shopkeepers of urban marketplaces; merchants such as industrialists and itinerant traders working between a network of cities could avoid registering as merchants and were often wealthier and more powerful than the vast majority of government officials. Wealthy landowners, such as nobles and officials, often provided lodging for retainers who provided valuable work or duties, sometimes including fighting bandits or riding into battle. Unlike slaves, retainers could come and go from their master's home as they pleased. Medical physicians, pig breeders, and butchers had a fairly high social status, while occultist diviners, runners, and messengers had low status. ### Marriage, gender, and kinship The Han-era family was patrilineal and typically had four to five nuclear family members living in one household. Multiple generations of extended family members did not occupy the same house, unlike families of later dynasties. According to Confucian family norms, various family members were treated with different levels of respect and intimacy. For example, there were different accepted time frames for mourning the death of a father versus a paternal uncle. Marriages were highly ritualized, particularly for the wealthy, and included many important steps. The giving of betrothal gifts, known as bridewealth and dowry, were especially important. A lack of either was considered dishonorable and the woman would have been seen not as a wife, but as a concubine. Arranged marriages were normal, with the father's input on his offspring's spouse being considered more important than the mother's. Monogamous marriages were also normal, although nobles and high officials were wealthy enough to afford and support concubines as additional lovers. 
Under certain conditions dictated by custom, not law, both men and women were able to divorce their spouses and remarry. However, a woman who had been widowed continued to belong to her husband's family after his death. In order to remarry, the widow would have to be returned to her family in exchange for a ransom fee. Her children would not be allowed to go with her. Apart from the passing of noble titles or ranks, inheritance practices did not involve primogeniture; each son received an equal share of the family property. Unlike the practice in later dynasties, the father usually sent his adult married sons away with their portions of the family fortune. Daughters received a portion of the family fortune through their marriage dowries, though this was usually much less than the shares of sons. A different distribution of the remainder could be specified in a will, but it is unclear how common this was. Women were expected to obey the will of their father, then their husband, and then their adult son in old age. However, it is known from contemporary sources that there were many deviations to this rule, especially in regard to mothers over their sons, and empresses who ordered around and openly humiliated their fathers and brothers. Women were exempt from the annual corvée labor duties, but often engaged in a range of income-earning occupations aside from their domestic chores of cooking and cleaning. The most common occupation for women was weaving clothes for the family, for sale at market, or for large textile enterprises that employed hundreds of women. Other women helped on their brothers' farms or became singers, dancers, sorceresses, respected medical physicians, and successful merchants who could afford their own silk clothes. Some women formed spinning collectives, aggregating the resources of several different families. ### Education, literature, and philosophy The early Western Han court simultaneously accepted the philosophical teachings of Legalism, Huang-Lao Daoism, and Confucianism in making state decisions and shaping government policy. However, the Han court under Emperor Wu gave Confucianism exclusive patronage. He abolished all academic chairs or erudites (bóshì 博士) not dealing with the Confucian Five Classics in 136 BCE and encouraged nominees for office to receive a Confucian-based education at the Imperial University that he established in 124 BCE. Unlike the original ideology espoused by Confucius, or Kongzi (551–479 BCE), Han Confucianism in Emperor Wu's reign was the creation of Dong Zhongshu (179–104 BCE). Dong was a scholar and minor official who aggregated the ethical Confucian ideas of ritual, filial piety, and harmonious relationships with five phases and yin-yang cosmologies. Much to the interest of the ruler, Dong's synthesis justified the imperial system of government within the natural order of the universe. The Imperial University grew in importance as the student body grew to over 30,000 by the 2nd century CE. A Confucian-based education was also made available at commandery-level schools and private schools opened in small towns, where teachers earned respectable incomes from tuition payments. Schools were established in far southern regions where standard Chinese texts were used to assimilate the local populace. Some important texts were created and studied by scholars. 
Philosophical works written by Yang Xiong (53 BCE – 18 CE), Huan Tan (43 BCE – 28 CE), Wang Chong (27–100 CE), and Wang Fu (78–163 CE) questioned whether human nature was innately good or evil and posed challenges to Dong's universal order. The Records of the Grand Historian by Sima Tan (d. 110 BCE) and his son Sima Qian (145–86 BCE) established the standard model for all of imperial China's Standard Histories, such as the Book of Han written by Ban Biao (3–54 CE), his son Ban Gu (32–92 CE), and his daughter Ban Zhao (45–116 CE). Biographies of important figures were written by various gentrymen. There were also dictionaries published during the Han period such as the Shuowen Jiezi by Xu Shen (c. 58 – c. 147 CE) and the Fangyan by Yang Xiong. Han dynasty poetry was dominated by the fu genre, which achieved its greatest prominence during the reign of Emperor Wu.

### Law and order

Han scholars such as Jia Yi (201–169 BCE) portrayed the previous Qin dynasty as a brutal regime. However, archeological evidence from Zhangjiashan and Shuihudi reveals that many of the statutes in the Han law code compiled by Chancellor Xiao He (d. 193 BCE) were derived from Qin law. Cases of rape, physical abuse, and murder were prosecuted in court. Women, although usually having fewer rights by custom, were allowed to level civil and criminal charges against men. While suspects were jailed, convicted criminals were never imprisoned. Instead, punishments were commonly monetary fines, periods of forced hard labor for convicts, and the penalty of death by beheading. Early Han punishments of torturous mutilation were borrowed from Qin law. A series of reforms replaced mutilation punishments with progressively less severe beatings by the bastinado.

Acting as a judge in lawsuits was one of the many duties of the county magistrate and Administrators of commanderies. Complex, high-profile, or unresolved cases were often deferred to the Minister of Justice in the capital or even the emperor. Each Han county contained several districts, each overseen by a chief of police. Order in the cities was maintained by government officers in the marketplaces and constables in the neighborhoods.

### Food

The most common staple crops consumed during Han were wheat, barley, foxtail millet, proso millet, rice, and beans. Commonly eaten fruits and vegetables included chestnuts, pears, plums, peaches, melons, apricots, strawberries, red bayberries, jujubes, calabash, bamboo shoots, mustard plant, and taro. Domesticated animals that were also eaten included chickens, Mandarin ducks, geese, cows, sheep, pigs, camels, and dogs (various types were bred specifically for food, while most were used as pets). Turtles and fish were taken from streams and lakes. Commonly hunted game, such as owl, pheasant, magpie, sika deer, and Chinese bamboo partridge, were consumed. Seasonings included sugar, honey, salt, and soy sauce. Beer and wine were regularly consumed.

### Clothing

The types of clothing worn and the materials used during the Han period depended upon social class. Wealthy folk could afford silk robes, skirts, socks, and mittens, coats made of badger or fox fur, duck plumes, and slippers with inlaid leather, pearls, and silk lining. Peasants commonly wore clothes made of hemp, wool, and ferret skins.

### Religion, cosmology, and metaphysics

Families throughout Han China made ritual sacrifices of animals and food to deities, spirits, and ancestors at temples and shrines.
They believed that these items could be used by those in the spiritual realm. It was thought that each person had a two-part soul: the spirit-soul (hun 魂) which journeyed to the afterlife paradise of immortals (xian), and the body-soul (po 魄) which remained in its grave or tomb on earth and was only reunited with the spirit-soul through a ritual ceremony. In addition to his many other roles, the emperor acted as the highest priest in the land who made sacrifices to Heaven, the main deities known as the Five Powers, and the spirits (shen 神) of mountains and rivers. It was believed that the three realms of Heaven, Earth, and Mankind were linked by natural cycles of yin and yang and the five phases. If the emperor did not behave according to proper ritual, ethics, and morals, he could disrupt the fine balance of these cosmological cycles and cause calamities such as earthquakes, floods, droughts, epidemics, and swarms of locusts. It was believed that immortality could be achieved if one reached the lands of the Queen Mother of the West or Mount Penglai. Han-era Daoists assembled into small groups of hermits who attempted to achieve immortality through breathing exercises, sexual techniques, and the use of medical elixirs. By the 2nd century CE, Daoists formed large hierarchical religious societies such as the Way of the Five Pecks of Rice. Its followers believed that the sage-philosopher Laozi () was a holy prophet who would offer salvation and good health if his devout followers would confess their sins, ban the worship of unclean gods who accepted meat sacrifices, and chant sections of the Daodejing. Buddhism first entered Imperial China through the Silk Road during the Eastern Han, and was first mentioned in 65 CE. Liu Ying (d. 71 CE), a half-brother to Emperor Ming of Han (r. 57–75 CE), was one of its earliest Chinese adherents, although Chinese Buddhism at this point was heavily associated with Huang-Lao Daoism. China's first known Buddhist temple, the White Horse Temple, was constructed outside the wall of the capital, Luoyang, during Emperor Ming's reign. Important Buddhist canons were translated into Chinese during the 2nd century CE, including the Sutra of Forty-two Chapters, Perfection of Wisdom, Shurangama Sutra, and Pratyutpanna Sutra. ## Government and politics ### Central government In Han government, the emperor was the supreme judge and lawgiver, the commander-in-chief of the armed forces and sole designator of official nominees appointed to the top posts in central and local administrations; those who earned a 600-bushel salary-rank or higher. Theoretically, there were no limits to his power. However, state organs with competing interests and institutions such as the court conference (tíngyì 廷議)—where ministers were convened to reach majority consensus on an issue—pressured the emperor to accept the advice of his ministers on policy decisions. If the emperor rejected a court conference decision, he risked alienating his high ministers. Nevertheless, emperors sometimes did reject the majority opinion reached at court conferences. Below the emperor were his cabinet members known as the Three Councilors of State (Sān gōng 三公). These were the Chancellor or Minister over the Masses (Chéngxiāng 丞相 or Dà sìtú 大司徒), the Imperial Counselor or Excellency of Works (Yùshǐ dàfū 御史大夫 or Dà sìkōng 大司空), and Grand Commandant or Grand Marshal (Tàiwèi 太尉 or Dà sīmǎ 大司馬). 
The Chancellor, whose title was changed to 'Minister over the Masses' in 8 BC, was chiefly responsible for drafting the government budget. The Chancellor's other duties included managing provincial registers for land and population, leading court conferences, acting as judge in lawsuits, and recommending nominees for high office. He could appoint officials below the salary-rank of 600 bushels. The Imperial Counselor's chief duty was to conduct disciplinary procedures for officials. He shared similar duties with the Chancellor, such as receiving annual provincial reports. However, when his title was changed to Minister of Works in 8 BC, his chief duty became the oversight of public works projects. The Grand Commandant, whose title was changed to Grand Marshal in 119 BC before reverting to Grand Commandant in 51 AD, was the irregularly posted commander of the military and then regent during the Western Han period. In the Eastern Han era he was chiefly a civil official who shared many of the same censorial powers as the other two Councilors of State. Ranked below the Three Councilors of State were the Nine Ministers (Jiǔ qīng 九卿), who each headed a specialized ministry. The Minister of Ceremonies (Tàicháng 太常) was the chief official in charge of religious rites, rituals, prayers, and the maintenance of ancestral temples and altars. The Minister of the Household (Guāng lù xūn 光祿勳) was in charge of the emperor's security within the palace grounds, external imperial parks, and wherever the emperor made an outing by chariot. The Minister of the Guards (Wèiwèi 衛尉) was responsible for securing and patrolling the walls, towers, and gates of the imperial palaces. The Minister Coachman (Tàipú 太僕) was responsible for the maintenance of imperial stables, horses, carriages, and coach-houses for the emperor and his palace attendants, as well as the supply of horses for the armed forces. The Minister of Justice (Tíngwèi 廷尉) was the chief official in charge of upholding, administering, and interpreting the law. The Minister Herald (Dà hónglú 大鴻臚) was the chief official in charge of receiving honored guests at the imperial court, such as nobles and foreign ambassadors. The Minister of the Imperial Clan (Zōngzhèng 宗正) oversaw the imperial court's interactions with the empire's nobility and extended imperial family, such as granting fiefs and titles. The Minister of Finance (Dà sìnóng 大司農) was the treasurer for the official bureaucracy and the armed forces who handled tax revenues and set standards for units of measurement. The Minister Steward (Shǎofǔ 少府) served the emperor exclusively, providing him with entertainment and amusements, proper food and clothing, medicine and physical care, valuables and equipment. ### Local government The Han empire, excluding kingdoms and marquessates, was divided, in descending order of size, into political units of provinces, commanderies, and counties. A county was divided into several districts (xiang 鄉), the latter composed of a group of hamlets (li 里), each containing about a hundred families. The heads of provinces, whose official title was changed from Inspector to Governor and vice versa several times during Han, were responsible for inspecting several commandery-level and kingdom-level administrations. On the basis of their reports, the officials in these local administrations would be promoted, demoted, dismissed, or prosecuted by the imperial court. A governor could take various actions without permission from the imperial court. 
The lower-ranked inspector had executive powers only during times of crisis, such as raising militias across the commanderies under his jurisdiction to suppress a rebellion. A commandery consisted of a group of counties, and was headed by an Administrator. He was the top civil and military leader of the commandery and handled defense, lawsuits, seasonal instructions to farmers, and recommendations of nominees for office sent annually to the capital in a quota system first established by Emperor Wu. The head of a large county of about 10,000 households was called a Prefect, while the heads of smaller counties were called Chiefs, and both could be referred to as Magistrates. A Magistrate maintained law and order in his county, registered the populace for taxation, mobilized commoners for annual corvée duties, repaired schools, and supervised public works. ### Kingdoms and marquessates Kingdoms—roughly the size of commanderies—were ruled exclusively by the emperor's male relatives as semi-autonomous fiefdoms. Before 157 BC some kingdoms were ruled by non-relatives, granted to them in return for their services to Emperor Gaozu. The administration of each kingdom was very similar to that of the central government. Although the emperor appointed the Chancellor of each kingdom, kings appointed all the remaining civil officials in their fiefs. However, in 145 BC, after several insurrections by the kings, Emperor Jing removed the kings' rights to appoint officials whose salaries were higher than 400 bushels. The Imperial Counselors and Nine Ministers (excluding the Minister Coachman) of every kingdom were abolished, although the Chancellor was still appointed by the central government. With these reforms, kings were reduced to being nominal heads of their fiefs, gaining a personal income from only a portion of the taxes collected in their kingdom. Similarly, the officials in the administrative staff of a full marquess's fief were appointed by the central government. A marquess's Chancellor was ranked as the equivalent of a county Prefect. Like a king, the marquess collected a portion of the tax revenues in his fief as personal income. Up until the reign of Emperor Jing of Han, the Emperors of the Han had great difficulty bringing the vassal kings under control, as kings often switched their allegiance to the Xiongnu Chanyu whenever threatened by Imperial attempts to centralize power. Within the seven years of Han Gaozu's reign, three vassal kings and one marquess either defected to or allied with the Xiongnu. Even imperial princes in control of fiefdoms would sometimes invite the Xiongnu to invade in response to threats by the Emperor to remove their power. The Han emperors moved to secure a treaty with the Chanyu to demarcate authority between them, recognizing each other as the "two masters" (兩主), the sole representatives of their respective peoples, and cemented it with a marriage alliance (heqin), before eliminating the rebellious vassal kings in 154 BC. This prompted some vassal kings of the Xiongnu to switch their allegiance to the Han emperor from 147 BC. Han court officials were initially hostile to the idea of disrupting the status quo and expanding into the Xiongnu steppe territory. The surrendered Xiongnu were integrated into a parallel military and political structure under the Han Emperor, and opened the avenue for the Han dynasty to challenge the Xiongnu cavalry on the steppe. 
This also introduced the Han to the interstate networks in the Tarim Basin (Xinjiang), allowing for the expansion of the Han dynasty from a limited regional state to a universalist and cosmopolitan empire through further marriage alliances with another steppe power, the Wusun. ### Military At the beginning of the Han dynasty, every male commoner aged twenty-three was liable for conscription into the military. The minimum age for the military draft was reduced to twenty after Emperor Zhao's (r. 87–74 BC) reign. Conscripted soldiers underwent one year of training and one year of service as non-professional soldiers. The year of training was served in one of three branches of the armed forces: infantry, cavalry, or navy. Soldiers who completed their term of service still needed to train to maintain their skill because they were subject to annual military readiness inspections and could be called up for future service - until this practice was discontinued after 30 AD with the abolishment of much of the conscription system. The year of active service was served either on the frontier, in a king's court, or under the Minister of the Guards in the capital. A small professional (full time career) standing army was stationed near the capital. During the Eastern Han, conscription could be avoided if one paid a commutable tax. The Eastern Han court favored the recruitment of a volunteer army. The volunteer army comprised the Southern Army (Nanjun 南軍), while the standing army stationed in and near the capital was the Northern Army (Beijun 北軍). Led by Colonels (Xiaowei 校尉), the Northern Army consisted of five regiments, each composed of several thousand soldiers. When central authority collapsed after 189 AD, wealthy landowners, members of the aristocracy/nobility, and regional military-governors relied upon their retainers to act as their own personal troops. The latter were known as buqu 部曲, a special social class in Chinese history. During times of war, the volunteer army was increased, and a much larger militia was raised across the country to supplement the Northern Army. In these circumstances, a General (Jiangjun 將軍) led a division, which was divided into regiments led by Colonels and sometimes Majors (Sima 司馬). Regiments were divided into companies and led by Captains. Platoons were the smallest units of soldiers. ## Economy ### Currency The Han dynasty inherited the ban liang coin type from the Qin. In the beginning of the Han, Emperor Gaozu closed the government mint in favor of private minting of coins. This decision was reversed in 186 BC by his widow Grand Empress Dowager Lü Zhi (d. 180 BC), who abolished private minting. In 182 BC, Lü Zhi issued a bronze coin that was much lighter in weight than previous coins. This caused widespread inflation that was not reduced until 175 BC when Emperor Wen allowed private minters to manufacture coins that were precisely 2.6 g (0.09 oz) in weight. In 144 BC Emperor Jing abolished private minting in favor of central-government and commandery-level minting; he also introduced a new coin. Emperor Wu introduced another in 120 BC, but a year later he abandoned the ban liangs entirely in favor of the wuzhu (五銖) coin, weighing 3.2 g (0.11 oz). The wuzhu became China's standard coin until the Tang dynasty (618–907 AD). Its use was interrupted briefly by several new currencies introduced during Wang Mang's regime until it was reinstated in 40 AD by Emperor Guangwu. 
Since commandery-issued coins were often of inferior quality and lighter weight, the central government closed commandery mints and monopolized the issue of coinage in 113 BC. This central government issuance of coinage was overseen by the Superintendent of Waterways and Parks, this duty being transferred to the Minister of Finance during the Eastern Han. ### Taxation and property Aside from the landowner's land tax paid in a portion of their crop yield, the poll tax and property taxes were paid in coin cash. The annual poll tax rate for adult men and women was 120 coins and 20 coins for minors. Merchants were required to pay a higher rate of 240 coins. The poll tax stimulated a money economy that necessitated the minting of over 28,000,000,000 coins from 118 BC to 5 AD, an average of 220,000,000 coins a year. The widespread circulation of coin cash allowed successful merchants to invest money in land, empowering the very social class the government attempted to suppress through heavy commercial and property taxes. Emperor Wu even enacted laws which banned registered merchants from owning land, yet powerful merchants were able to avoid registration and own large tracts of land. The small landowner-cultivators formed the majority of the Han tax base; this revenue was threatened during the latter half of Eastern Han when many peasants fell into debt and were forced to work as farming tenants for wealthy landlords. The Han government enacted reforms in order to keep small landowner-cultivators out of debt and on their own farms. These reforms included reducing taxes, temporary remissions of taxes, granting loans, and providing landless peasants temporary lodging and work in agricultural colonies until they could recover from their debts. In 168 BC, the land tax rate was reduced from one-fifteenth of a farming household's crop yield to one-thirtieth, and later to one-hundredth of a crop yield for the last decades of the dynasty. The consequent loss of government revenue was compensated for by increasing property taxes. The labor tax took the form of conscripted labor for one month per year, which was imposed upon male commoners aged fifteen to fifty-six. This could be avoided in Eastern Han with a commutable tax, since hired labor became more popular. ### Private manufacture and government monopolies In the early Western Han, a wealthy salt or iron industrialist, whether a semi-autonomous king or wealthy merchant, could boast funds that rivaled the imperial treasury and amass a peasant workforce of over a thousand. This kept many peasants away from their farms and denied the government a significant portion of its land tax revenue. To eliminate the influence of such private entrepreneurs, Emperor Wu nationalized the salt and iron industries in 117 BC and allowed many of the former industrialists to become officials administering the state monopolies. By Eastern Han times, the central government monopolies were repealed in favor of production by commandery and county administrations, as well as private businessmen. Liquor was another profitable private industry nationalized by the central government in 98 BC. However, this was repealed in 81 BC and a property tax rate of two coins for every 0.2 L (0.05 gallons) was levied for those who traded it privately. By 110 BC Emperor Wu also interfered with the profitable trade in grain when he eliminated speculation by selling government-stored grain at a lower price than that demanded by merchants. 
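The quantitative claims in the taxation and monopoly passages above lend themselves to a quick arithmetic check. The sketch below is purely illustrative and not drawn from the source; the year count and the litre-to-gallon conversion are my own assumptions:

```python
# Illustrative back-of-the-envelope check of the figures quoted above.
# Assumption (mine, not from the source): 118 BC to AD 5 spans 122 calendar
# years, since there is no year 0.
total_coins = 28_000_000_000
years = 118 + 5 - 1                       # 122
print(f"{total_coins / years:,.0f}")      # ~229,508,197 coins per year, on the order
                                          # of the "220,000,000 coins a year" quoted

# The liquor levy of "two coins for every 0.2 L (0.05 gallons)" is also self-consistent:
litres_per_us_gallon = 3.785
print(f"{0.2 / litres_per_us_gallon:.3f}")  # ~0.053 US gallons, rounded to 0.05 in the text
```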
Apart from Emperor Ming's creation of a short-lived Office for Price Adjustment and Stabilization, which was abolished in 68 AD, central-government price control regulations were largely absent during the Eastern Han.

## Science and technology

The Han dynasty was a unique period in the development of premodern Chinese science and technology, comparable to the level of scientific and technological growth during the Song dynasty (960–1279).

### Writing materials

In the 1st millennium BC, typical ancient Chinese writing materials were bronzewares, animal bones, and bamboo slips or wooden boards. By the beginning of the Han dynasty, the chief writing materials were clay tablets, silk cloth, hemp paper, and rolled scrolls made from bamboo strips sewn together with hempen string; these were passed through drilled holes and secured with clay stamps. The oldest known Chinese piece of hempen paper dates to the 2nd century BC. The standard papermaking process was invented by Cai Lun (AD 50–121) in AD 105. The oldest known surviving piece of paper with writing on it was found in the ruins of a Han watchtower that had been abandoned in AD 110, in Inner Mongolia.

### Metallurgy and agriculture

Evidence suggests that blast furnaces, which convert raw iron ore into pig iron that can be remelted in a cupola furnace to produce cast iron by means of a cold blast or hot blast, were operational in China by the late Spring and Autumn period (722–481 BC). The bloomery was nonexistent in ancient China; however, the Han-era Chinese produced wrought iron by injecting excess oxygen into a furnace and causing decarburization. Cast iron and pig iron could be converted into wrought iron and steel using a fining process. The Han dynasty Chinese used bronze and iron to make a range of weapons, culinary tools, carpenters' tools, and domestic wares.

A significant product of these improved iron-smelting techniques was the manufacture of new agricultural tools. The three-legged iron seed drill, invented by the 2nd century BC, enabled farmers to carefully plant crops in rows instead of casting seeds out by hand. The heavy moldboard iron plow, also invented during the Han dynasty, required only one man to control it, with two oxen to pull it. It had three plowshares, a seed box for the drills, and a tool which turned down the soil, and could sow roughly 45,730 m<sup>2</sup> (11.3 acres) of land in a single day. To protect crops from wind and drought, the grain intendant Zhao Guo (趙過) created the alternating fields system (daitianfa 代田法) during Emperor Wu's reign. This system switched the positions of furrows and ridges between growing seasons. Once experiments with this system yielded successful results, the government officially sponsored it and encouraged peasants to use it. Han farmers also used the pit field system (aotian 凹田) for growing crops, which involved heavily fertilized pits that did not require plows or oxen and could be placed on sloping terrain. In the southern and small parts of central Han-era China, paddy fields were chiefly used to grow rice, while farmers along the Huai River used transplantation methods of rice production.

### Structural and geotechnical engineering

Timber was the chief building material during the Han dynasty; it was used to build palace halls, multi-story residential towers and halls, and single-story houses. Because wood decays rapidly, the only remaining evidence of Han wooden architecture is a collection of scattered ceramic roof tiles.
The oldest surviving wooden halls in China date to the Tang dynasty (AD 618–907). Architectural historian Robert L. Thorp points out the scarcity of Han-era archeological remains, and claims that often unreliable Han-era literary and artistic sources are used by historians for clues about lost Han architecture. Though Han wooden structures decayed, some Han-dynasty ruins made of brick, stone, and rammed earth remain intact. These include stone pillar-gates, brick tomb chambers, rammed-earth city walls, rammed-earth and brick beacon towers, rammed-earth sections of the Great Wall, rammed-earth platforms where elevated halls once stood, and two rammed-earth castles in Gansu. The ruins of rammed-earth walls that once surrounded the capitals Chang'an and Luoyang still stand, along with their drainage systems of brick arches, ditches, and ceramic water pipes. Monumental stone pillar-gates, twenty-nine of which survive from the Han period, formed entrances of walled enclosures at shrine and tomb sites. These pillars feature artistic imitations of wooden and ceramic building components such as roof tiles, eaves, and balustrades.

The courtyard house is the most common type of home portrayed in Han artwork. Ceramic architectural models of buildings, like houses and towers, were found in Han tombs, perhaps to provide lodging for the dead in the afterlife. These provide valuable clues about lost wooden architecture. The artistic designs found on ceramic roof tiles of tower models are in some cases exact matches to Han roof tiles found at archeological sites. Over ten Han-era underground tombs have been found, many of them featuring archways, vaulted chambers, and domed roofs. Underground vaults and domes did not require buttress supports since they were held in place by earthen pits. The use of brick vaults and domes in aboveground Han structures is unknown.

From Han literary sources, it is known that wooden-trestle beam bridges, arch bridges, simple suspension bridges, and floating pontoon bridges existed in Han China. However, there are only two known references to arch bridges in Han literature, and only one relief sculpture dated to the Han period that depicts an arch bridge; it is located in Sichuan province. Underground mine shafts, some reaching depths over 100 m (330 ft), were created for the extraction of metal ores. Borehole drilling and derricks were used to lift brine to iron pans where it was distilled into salt. The distillation furnaces were heated by natural gas funneled to the surface through bamboo pipelines. These boreholes perhaps reached a depth of 600 m (2,000 ft).

### Mechanical and hydraulic engineering

What is known of Han-era mechanical engineering comes largely from the occasional observational writings of often-uninterested Confucian scholars, who generally considered scientific and engineering endeavors to be far beneath them. Professional artisan-engineers (jiang 匠) did not leave behind detailed records of their work, and Han scholars, who often had little or no expertise in mechanical engineering, sometimes provided insufficient information on the various technologies they described. Nevertheless, some Han literary sources provide crucial information. For example, in 15 BC the philosopher and poet Yang Xiong described the invention of the belt drive for a quilling machine, which was of great importance to early textile manufacturing. The inventions of mechanical engineer and craftsman Ding Huan are mentioned in the Miscellaneous Notes on the Western Capital.
Around AD 180, Ding created a manually operated rotary fan used for air conditioning within palace buildings. Ding also used gimbals as pivotal supports for one of his incense burners and invented the world's first known zoetrope lamp.

Modern archeology has led to the discovery of Han artwork portraying inventions which are otherwise absent from Han literary sources. As observed in Han miniature tomb models, but not in literary sources, the crank handle was used to operate the fans of winnowing machines that separated grain from chaff. The odometer cart, invented during the Han period, measured journey lengths using mechanical figures that banged drums and gongs to mark the distance traveled. This invention is depicted in Han artwork by the 2nd century, yet detailed written descriptions were not offered until the 3rd century. Modern archeologists have also unearthed specimens of devices used during the Han dynasty, for example a pair of sliding metal calipers used by craftsmen for making minute measurements. These calipers contain inscriptions of the exact day and year they were manufactured, yet the tools are not mentioned in any Han literary sources.

The waterwheel appeared in Chinese records during the Han. As mentioned by Huan Tan in about AD 20, waterwheels were used to turn gears that lifted iron trip hammers for pounding, threshing, and polishing grain. However, there is insufficient evidence for the watermill in China until about the 5th century. The Nanyang Commandery Administrator, mechanical engineer, and metallurgist Du Shi (d. 38 AD) created a waterwheel-powered reciprocator that worked the bellows for the smelting of iron. Waterwheels were also used to power chain pumps that lifted water to raised irrigation ditches. The chain pump was first mentioned in China by the philosopher Wang Chong in his 1st-century Balanced Discourse.

The armillary sphere, a three-dimensional representation of the movements in the celestial sphere, was invented in Han China by the 1st century BC. Using a water clock, waterwheel, and a series of gears, the Court Astronomer Zhang Heng (AD 78–139) was able to mechanically rotate his metal-ringed armillary sphere. To address the problem of slowed timekeeping in the pressure head of the inflow water clock, Zhang was the first in China to install an additional tank between the reservoir and inflow vessel. Zhang also invented a device he termed an "earthquake weathervane" (houfeng didong yi 候風地動儀), which the British biochemist, sinologist, and historian Joseph Needham described as "the ancestor of all seismographs". This device was able to detect the exact cardinal or ordinal direction of earthquakes from hundreds of kilometers away. It employed an inverted pendulum that, when disturbed by ground tremors, would trigger a set of gears that dropped a metal ball from one of eight dragon mouths (representing all eight directions) into a metal toad's mouth. The account of this device in the Book of the Later Han describes how, on one occasion, one of the metal balls was triggered without any of the observers feeling a disturbance. Several days later, a messenger arrived bearing news that an earthquake had struck in Longxi Commandery (in modern Gansu Province), the direction the device had indicated, which forced the officials at court to admit the efficacy of Zhang's device.

### Mathematics

Three Han mathematical treatises still exist.
These are the Book on Numbers and Computation, the Arithmetical Classic of the Gnomon and the Circular Paths of Heaven, and the Nine Chapters on the Mathematical Art. Han-era mathematical achievements include solving problems with right-angle triangles, square roots, cube roots, and matrix methods, finding more accurate approximations for pi, providing mathematical proof of the Pythagorean theorem, use of the decimal fraction, Gaussian elimination to solve linear equations, and continued fractions to find the roots of equations. One of the Han's greatest mathematical advancements was the world's first use of negative numbers. Negative numbers first appeared in the Nine Chapters on the Mathematical Art as black counting rods, where positive numbers were represented by red counting rods. Negative numbers were also used by the Greek mathematician Diophantus around AD 275, and in the 7th-century Bakhshali manuscript of Gandhara, South Asia, but were not widely accepted in Europe until the 16th century.

The Han applied mathematics to a diverse range of disciplines. In musical tuning, Jing Fang (78–37 BC) realized that 53 perfect fifths are approximately equal to 31 octaves. He also created a musical scale of 60 tones, calculating the difference at <sup>177147</sup>⁄<sub>176776</sub> (the same value of 53 equal temperament later discovered by the German mathematician Nicholas Mercator [1620–1687], i.e. 3<sup>53</sup>/2<sup>84</sup>).

### Astronomy

Mathematics was essential in drafting the astronomical calendar, a lunisolar calendar that used the Sun and Moon as time-markers throughout the year. During the Spring and Autumn period, in the 5th century BC, the Chinese established the Sifen calendar (古四分历), which measured the tropical year at 365.25 days. This was replaced in 104 BC with the Taichu calendar (太初曆), which measured the tropical year at 365+385⁄1539 (≈ 365.25016) days and the lunar month at 29+43⁄81 days. However, Emperor Zhang later reinstated the Sifen calendar.

Han dynasty astronomers made star catalogues and detailed records of comets that appeared in the night sky, including recording the 12 BC appearance of the comet now known as Halley's Comet. They adopted a geocentric model of the universe, theorizing that it was shaped like a sphere surrounding the Earth at the center. They assumed that the Sun, Moon, and planets were spherical and not disc-shaped. They also thought that the illumination of the Moon and planets was caused by sunlight, that lunar eclipses occurred when the Earth obstructed sunlight falling onto the Moon, and that a solar eclipse occurred when the Moon obstructed sunlight from reaching the Earth. Although others disagreed with his model, Wang Chong accurately described the water cycle of the evaporation of water into clouds.

### Cartography, ships, and vehicles

Evidence found in Chinese literature and archeological evidence show that cartography existed in China before the Han. Some of the earliest Han maps discovered were ink-penned silk maps found amongst the Mawangdui Silk Texts in a 2nd-century-BC tomb. The general Ma Yuan created the world's first known raised-relief map from rice in the 1st century AD. This date could be revised if the tomb of Emperor Qin Shi Huang is excavated and the account in the Records of the Grand Historian concerning a model map of the empire is proven to be true.
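Before returning to Han map-making, the specific figures quoted in the Mathematics and Astronomy passages above can be verified with elementary arithmetic. The sketch below is purely illustrative; the variable names and the check itself are mine, not from the source:

```python
from fractions import Fraction

# Taichu calendar values quoted above.
tropical_year = 365 + Fraction(385, 1539)
lunar_month = 29 + Fraction(43, 81)
print(float(tropical_year))  # 365.25016..., matching the figure quoted above
print(float(lunar_month))    # 29.53086... days

# Jing Fang's tuning observation: 53 perfect fifths (ratio 3/2) nearly equal
# 31 octaves (ratio 2), so the small discrepancy is (3/2)^53 / 2^31 = 3^53 / 2^84.
discrepancy = Fraction(3, 2) ** 53 / Fraction(2, 1) ** 31
print(discrepancy == Fraction(3 ** 53, 2 ** 84))  # True
print(float(discrepancy))                         # ≈ 1.00209
print(float(Fraction(177147, 176776)))            # ≈ 1.00210, Jing Fang's stated fraction
```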
Although the use of the graduated scale and grid reference for maps was not thoroughly described until the published work of Pei Xiu (AD 224–271), there is evidence that in the early 2nd century, cartographer Zhang Heng was the first to use scales and grids for maps. Han dynasty Chinese sailed in a variety of ships different from those of previous eras, such as the tower ship. The junk design was developed and realized during the Han era. Junk ships featured a square-ended bow and stern, a flat-bottomed hull or carvel-shaped hull with no keel or sternpost, and solid transverse bulkheads in the place of structural ribs found in Western vessels. Moreover, Han ships were the first in the world to be steered using a rudder at the stern, in contrast to the simpler steering oar used for riverine transport, allowing them to sail on the high seas. Although ox-carts and chariots were previously used in China, the wheelbarrow was first used in Han China in the 1st century BC. Han artwork of horse-drawn chariots shows that the Warring-States-Era heavy wooden yoke placed around a horse's chest was replaced by the softer breast strap. Later, during the Northern Wei (386–534), the fully developed horse collar was invented. ### Medicine Han-era medical physicians believed that the human body was subject to the same forces of nature that governed the greater universe, namely the cosmological cycles of yin and yang and the five phases. Each organ of the body was associated with a particular phase. Illness was viewed as a sign that qi or "vital energy" channels leading to a certain organ had been disrupted. Thus, Han-era physicians prescribed medicine that was believed to counteract this imbalance. For example, since the wood phase was believed to promote the fire phase, medicinal ingredients associated with the wood phase could be used to heal an organ associated with the fire phase. Besides dieting, Han physicians also prescribed moxibustion, acupuncture, and calisthenics as methods of maintaining one's health. When surgery was performed by the Chinese physician Hua Tuo (d. AD 208), he used anesthesia to numb his patients' pain and prescribed a rubbing ointment that allegedly sped the process of healing surgical wounds. Whereas the physician Zhang Zhongjing (c. AD 150 – c. 219) is known to have written the Shanghan lun ("Dissertation on Typhoid Fever"), it is thought that both he and Hua Tuo collaborated in compiling the Shennong Ben Cao Jing medical text. ## See also - Battle of Jushi - Campaign against Dong Zhuo - Comparative studies of the Roman and Han empires - Han Emperors family tree - Shuanggudui - Ten Attendants
3,879,675
No. 34 Squadron RAAF
1,161,688,290
Royal Australian Air Force VIP transport squadron
[ "Air transport of heads of state", "Aircraft squadrons of the Royal Australian Air Force in World War II", "Military units and formations established in 1942", "RAAF squadrons" ]
No. 34 Squadron is a Royal Australian Air Force (RAAF) VIP transport squadron. It operates Boeing 737 Business Jets and Dassault Falcon 7Xs from Defence Establishment Fairbairn in Canberra. The squadron was formed in February 1942 for standard transport duties during World War II, initially flying de Havilland DH.84 Dragons in Northern Australia. In 1943 it re-equipped with Douglas C-47 Dakotas, which it operated in New Guinea and the Dutch East Indies prior to disbanding in June 1946. The unit was re-established in March 1948 as No. 34 (Communications) Squadron at RAAF Station Mallala, South Australia, where it supported activities at the Woomera Rocket Range before disbanding in October 1955. It was re-raised as No. 34 (VIP) Flight in March 1956 at RAAF Base Canberra (later Fairbairn). No. 34 Flight was redesignated No. 34 (Special Transport) Squadron in July 1959, and No. 34 Squadron in June 1963. During the 1960s it operated Dakotas, Convair Metropolitans, Vickers Viscounts, Dassault Falcon-Mysteres, Hawker Siddeley HS 748s, and BAC 1-11s, the last three types continuing in service until the late 1980s. The squadron's fleet consisted solely of Dassault Falcon 900s from 1989 until 2002, when it began operating the 737 and Bombardier Challenger 604s. The Challengers were replaced with the Falcon 7Xs in 2019. ## Role and equipment No. 34 Squadron is the Royal Australian Air Force (RAAF) unit responsible for the transport of VIPs, including members of the Australian government, the Governor-General, senior Australian Defence Force personnel, and visiting dignitaries. It is based at Defence Establishment Fairbairn in Canberra, and administered by No. 86 Wing, which is part of Air Mobility Group. The squadron has a secondary role providing emergency transport during humanitarian operations. Its motto is Eo et redeo ("I Go and I Return"). As at 2011, No. 34 Squadron's strength included around thirty pilots and thirty flight attendants. Captains are generally senior pilots who have previously flown the RAAF's Boeing C-17 Globemaster, Lockheed C-130 Hercules, or Lockheed AP-3C Orion. Their co-pilots are new RAAF personnel who have recently graduated from No. 2 Flying Training School, and the crew attendants are posted to the squadron after completing training and a period of service with No. 33 Squadron. The squadron's VIP Operations Cell (VIPOPS) is responsible for managing requests for VIP air transport as well as dedicated security staff. Most logistical support, including meal preparation, is provided under commercial arrangements rather than by RAAF personnel. No. 34 Squadron operates two Boeing 737 Business Jets and three Dassault Falcon 7Xs. The aircraft are leased from, and maintained by, Northrop Grumman Integrated Defence Services (previously Qantas Defence Services). The twin-engined Boeing Business Jet (BBJ) is crewed by two pilots and up to four flight attendants, and can carry thirty passengers. The tri-jet Falcon 7X has a crew of two pilots and one flight attendant, and carries up to fourteen passengers. The jets are classified as "Special Purpose Aircraft", meaning that their tasking is governed by Federal guidelines for carrying "entitled persons" on official business. To minimise government outlay, the jets may not be employed when available commercial flights satisfy the timing, location and security requirements of a given task. No. 34 Squadron conducts between 1,200 and 1,800 flights each year. 
A Schedule of Special Purpose Flights is tabled twice annually in Federal Parliament. VIPOPS usually assigns one of No. 34 Squadron's aircraft to approved tasks, but other Australian Defence Force aircraft are occasionally used for tasks not suited to the BBJ or Challenger; for instance, Prime Minister Julia Gillard travelled to China on board a No. 33 Squadron Airbus KC-30A Multi Role Tanker Transport in April 2013. ## History ### World War II and aftermath During February and March 1942, the RAAF formed four transport units: Nos. 33, 34, 35 and 36 Squadrons. No. 34 (Transport) Squadron was established on 23 February at RAAF Station Darwin, Northern Territory, four days after the city was bombed for the first time. Coming under the control of North-Western Area Command, the squadron's initial strength was six personnel and two de Havilland DH.84 Dragons. They were immediately tasked with transport duties in northern Australia. As well as carrying freight, this involved collecting the first Japanese prisoner of war to be captured in Australia, a navy fighter pilot who had crashlanded during the raid on Darwin. One of the squadron's two officers, Flight Lieutenant J.W. Warwick, became the first (acting) commanding officer on 2 March. The following day, one of the Dragons was destroyed on the ground at Wyndham, Western Australia, by enemy air attack. With its other aircraft unserviceable, and accommodation at Darwin's civil airfield inadequate, squadron headquarters relocated to Daly Waters Airfield on 5 March. On 14 March another Dragon was allocated; this was joined by two Avro Ansons and two de Havilland Tiger Moths in mid-May, by which time the squadron had moved to Batchelor Airfield. By the end of the month, the squadron had thirty-four personnel, including six officers. It lost one of the Tiger Moths to a bushfire on 1 July, a few days after the plane crashlanded south of Katherine. The squadron relocated again on 15 July, this time to Hughes Airfield. It remained at Hughes until 27 August, when it transferred to Manbulloo Airfield; it operated from Manbulloo until it was temporarily disbanded on 13 December and its aircraft transferred to No. 6 Communications Flight. No. 34 Squadron was re-formed on 3 January 1943 at Parafield Airport, South Australia, from elements of No. 36 Squadron formerly based at Essendon, Victoria. Initially comprising ninety-six personnel and eight aircraft, by the end of the month the squadron's strength had been reduced to seventy personnel and three Dragons operating in South Australia and the Northern Territory. On 11 March one of the Dragons was destroyed on takeoff at Parafield, causing two deaths—No. 34 Squadron's first fatalities. Another Dragon was lost in a fire after it crashlanded near Tennant Creek in April. Beginning in May 1943, the Dragons were augmented by Douglas C-47 Dakotas, giving the squadron a total strength of three Dakotas and two Dragons by the following month. By July, No. 34 Squadron was operating five Dakotas, which had fully replaced the Dragons, and in August its strength stood at seven Dakotas and 153 personnel, including forty-seven officers. It subsequently received an Airspeed Oxford and a Douglas DC-2, and began making supply drops and medical evacuations as far north as Port Moresby, New Guinea. The squadron had its busiest month in May 1944, transporting almost 1,900 passengers and over 1,000,000 pounds (450,000 kg) of cargo. 
On 1 June it became the first operational RAAF squadron to have personnel of the Women's Auxiliary Australian Air Force (WAAAF) in its ranks, a contingent made up of an officer and twenty airwomen. The WAAAF had been formed in 1941 and eventually made up thirty-one per cent of RAAF ground staff; its members were primarily employed in technical trades and were not permitted to serve in combat theatres. October 1944 saw a detachment of the squadron operating from Cape York in Far North Queensland to bases in the Dutch East Indies. Other detachments were located at Townsville, Queensland, and Coomalie Creek, Northern Territory. In February 1945, No. 34 Squadron commenced a relocation to Morotai in the Dutch East Indies, under the control of the Australian First Tactical Air Force, and was fully established at its new base by mid-April. The squadron supported the invasion of Borneo, and its Dakotas were the first Allied aircraft to land at Labuan and Tarakan after the islands were captured. It remained at Morotai until the end of the war, at which time it became involved in repatriating Australian former prisoners of war from Singapore, and then in courier flights supporting the formation of the British Commonwealth Occupation Force in Japan. No. 34 Squadron returned to Australia between January and March 1946 and disbanded at RAAF Station Richmond, New South Wales, on 6 June. The squadron was re-established at RAAF Station Mallala, South Australia, on 1 March 1948, when No. 2 (Communications) Squadron was renamed No. 34 (Communications) Squadron. It operated as a VIP transport, courier and reconnaissance unit, primarily in support of the Woomera rocket range, focal point of the Anglo-Australian Long Range Weapons Project during the Cold War. No. 34 (Communications) Squadron flew the only Vickers Viking to be taken on strength by the RAAF, and was also the only RAAF squadron to operate the Bristol Freighter. Three Freighters were taken on strength in April and May 1949, and a fourth in September 1951; one was lost with all three crew members in a crash near Mallala on 25 November 1953 after its wing failed in flight. The squadron also operated Percival Prince, Auster, Dakota and Anson aircraft, undertaking regular transport duties and disaster relief along with its Woomera support work before disbanding at Mallala on 28 October 1955. ### VIP operations No. 34 (VIP) Flight was established at RAAF Base Canberra on 12 March 1956, and charged with the safe carriage of the Governor-General, senior Australian politicians and military officers, and visiting foreign dignitaries. It was formed from the VIP Flight of No. 36 Squadron, under No. 86 (Transport) Wing. The VIP Flight had absorbed the RAAF's Governor-General's Flight in October 1950. In its first year of operation, No. 34 Flight carried the Duke of Edinburgh on his tour of Australia. It was equipped with two Convair 440 Metropolitans, as well as Dakotas. The flight remained in Canberra when No. 86 Wing relocated to Richmond in 1958. On 1 July 1959, it was re-formed as No. 34 (Special Transport) Squadron, leaving the control of No. 86 Wing to become an independent unit directly administered by Home Command and tasked by RAAF Base Canberra. "Possibly because of the rank of its clients", contended the official history of the post-war Air Force, the squadron maintained higher standards than other transport units, adopting some procedures from the civil aviation world. 
It also benefitted from the personal interest of senior officials when it came to upgrading its equipment, though this had some negative aspects. The acquisition of the Metropolitans, the first pressurised aircraft in the VIP fleet, was organised by Minister for Air Athol Townley without any advance discussion with the RAAF. Although the Air Force raised performance and safety concerns, the type's entry into service was a fait accompli, and it remained on strength for twelve years. Until the early 1960s, the VIP unit also operated two de Havilland Vampire jets and two CAC Winjeel trainers to allow staff officers at Canberra's Department of Air to maintain their flying proficiency. No. 34 (Special Transport) Squadron's home in Canberra was renamed RAAF Base Fairbairn in March 1962, and the unit was redesignated No. 34 Squadron on 13 June 1963. That year, the squadron carried Queen Elizabeth II for the first time. In October 1964, two second-hand Vickers Viscount turboprop transports were obtained to supplement the Dakotas and Convairs; the two piston-engine types were withdrawn after the delivery of two Hawker Siddeley HS 748s beginning in April 1967 and three Dassault Falcon 20 jets (known as Mystere in RAAF service) in June. Two BAC 1-11 jets joined the squadron on 19 January 1968, and the two Viscounts were retired in March the following year. The wholesale re-equipment of the VIP fleet in the late 1960s was controversial, and questions were raised in Parliament regarding its cost and operations. The so-called "VIP affair" led to more stringent guidelines governing No. 34 Squadron's tasking, requiring approval for flights to be made by the British Royal Family, the Governor-General, the Prime Minister, or the Minister for Air. Eligibility criteria were also codified, and potential passengers included Federal ministers, opposition leaders, "individuals of similar status and importance visiting Australia", two-star officers and above, and other dignitaries of similar status. During the 1970s one of No. 34 Squadron's BAC 1-11s experienced an engine failure over the Tasman Sea while carrying Prime Minister Gough Whitlam to New Zealand. The aircraft made a safe landing in Australia, but the incident led the RAAF to investigate using three- or four-engined aircraft for future VIP flights involving long over-water legs. The government eventually purchased two Boeing 707s from Qantas to perform long-range VIP flights and to improve the RAAF's strategic transport capabilities. Entering service in 1979, they joined the newly established No. 33 Flight (later No. 33 Squadron) in 1981. More 707s were acquired between 1983 and 1988, and four were converted for air-to-air refuelling in the early 1990s. In 1984, No. 34 Squadron was awarded the Gloucester Cup for its proficiency. The squadron again became part of No. 86 Wing in June 1988, though its tasking continued to be controlled by the Governor-General, the Prime Minister, and the Minister for Defence. Commencing in September 1989, the twinjet Mysteres and BAC 1-11s were replaced by five trijet Dassault Falcon 900s leased from Hawker Pacific, the first time the RAAF had leased aircraft from a commercial company. The two HS 748s were transferred to the newly formed No. 32 Squadron at RAAF Base East Sale, Victoria. Responsibility for servicing the Falcon 900s was shared by No. 34 Squadron and Hawker Pacific, the latter performing heavy maintenance. 
In an unusual operation for the squadron, one of the Falcons was dispatched to Jordan in September 1990 to evacuate thirteen Australian citizens who had been held hostage in Iraq. On 21 December 1992, a Falcon 900 became the first RAAF aircraft to take part in United Nations peacekeeping efforts in Somalia, when it departed RAAF Base Amberley, Queensland, with a team of Australian Army personnel to reconnoitre the theatre of operations. The unit received a commendation from the Chief of the Defence Force, General Peter Gration, shortly before his retirement in 1993. In January 1998, No. 84 Wing was organised as a special transport wing under Air Lift Group (renamed Air Mobility Group in April 2014). The term "special transport" referred to activities not directly related to army support, such as carrying VIPs. Headquartered at Richmond, No. 84 Wing took control of Nos. 32, 33 and 34 Squadrons. A flight by one of No. 34 Squadron's Falcons preceded INTERFET operations in East Timor in 1999, carrying senior Australian military and diplomatic staff to Dili on a goodwill mission. The Falcon 900s were replaced by two Boeing 737 Business Jets and three Bombardier Challenger 604s in July 2002. The new aircraft also replaced the two Boeing 707s operated by No. 33 Squadron in the VIP transport role. The 707s had permitted journalists to travel with the Prime Minister on international flights, and in replacing the bigger jets with 737s the Liberal government of the time determined that media contingents covering VIP trips should travel on civil aircraft. This decision led to controversy in 2007, after the crash of a Garuda airliner killed four Australian government officials and a journalist travelling in connection with a visit to Indonesia by Foreign Minister Alexander Downer, who had flown on a Challenger. No. 34 Squadron and Qantas Defence Services marked 20,000 incident-free flying hours with the 737s and Challengers on 21 October 2008. The following year saw further controversy when Prime Minister Kevin Rudd had to apologise for remarks to a cabin attendant over the meal he was served on one of the jets. In 2011, the squadron provided VIP transport during tours of Australia by Queen Elizabeth, Prince William, and Frederik and Mary of Denmark, as well as support for US President Barack Obama's visit to Canberra. It also flew senior government and military personnel in support of relief efforts during the Queensland floods, and was again awarded the Gloucester Cup for proficiency. No. 34 Squadron celebrated its 70th anniversary at Parliament House, Canberra, on 18 February 2012; the following day, a memorial to its first fatalities in March 1942 was unveiled at Fairbairn. On 13 October 2017, No. 34 Squadron was transferred from No. 84 Wing to No. 86 Wing. The squadron's Challengers were replaced with three Dassault Falcon 7Xs in 2019. The new aircraft are larger and longer-ranged than the Challengers, and carry more advanced communications equipment. In April 2020, No. 34 Squadron was awarded the Gloucester Cup for its performance the previous year. The 737s failed to achieve their programmed flying hours from around 2020 due to the age of the aircraft and need for maintenance. They also suffered from reliability problems; one incident caused an important National Cabinet meeting to be delayed when Prime Minister Scott Morrison was unable to depart from Cairns. As a result, in December 2021 the government decided to replace the 737s with two Boeing 737 MAX 8 aircraft. 
These aircraft will be provided by the National Australia Bank. They are scheduled to enter service in 2024 and be retained until 2036. ## See also - Royal Australian Air Force VIP aircraft - Air transports of heads of state and government
47,263
243 Ida
1,170,014,308
Main-belt asteroid
[ "Astronomical objects discovered in 1884", "Binary asteroids", "Discoveries by Johann Palisa", "Galileo program", "Koronis asteroids", "Minor planets visited by spacecraft", "Named minor planets", "S-type asteroids (SMASS)", "S-type asteroids (Tholen)" ]
Ida, minor planet designation 243 Ida, is an asteroid in the Koronis family of the asteroid belt. It was discovered on 29 September 1884 by Austrian astronomer Johann Palisa at Vienna Observatory and named after a nymph from Greek mythology. Later telescopic observations categorized Ida as an S-type asteroid, the most numerous type in the inner asteroid belt. On 28 August 1993, Ida was visited by the uncrewed Galileo spacecraft while en route to Jupiter. It was the second asteroid visited by a spacecraft and the first found to have a natural satellite. Ida's orbit lies between the planets Mars and Jupiter, like all main-belt asteroids. Its orbital period is 4.84 years, and its rotation period is 4.63 hours. Ida has an average diameter of 31.4 km (19.5 mi). It is irregularly shaped and elongated, apparently composed of two large objects connected together. Its surface is one of the most heavily cratered in the Solar System, featuring a wide variety of crater sizes and ages. Ida's moon Dactyl was discovered by mission member Ann Harch in images returned from Galileo. It was named after the Dactyls, creatures which inhabited Mount Ida in Greek mythology. Dactyl is only 1.4 kilometres (0.87 mi) in diameter, about 1/20 the size of Ida. Its orbit around Ida could not be determined with much accuracy, but the constraints of possible orbits allowed a rough determination of Ida's density and revealed that it is depleted of metallic minerals. Dactyl and Ida share many characteristics, suggesting a common origin. The images returned from Galileo and the subsequent measurement of Ida's mass provided new insights into the geology of S-type asteroids. Before the Galileo flyby, many different theories had been proposed to explain their mineral composition. Determining their composition permits a correlation between meteorites falling to the Earth and their origin in the asteroid belt. Data returned from the flyby pointed to S-type asteroids as the source for the ordinary chondrite meteorites, the most common type found on the Earth's surface. ## Discovery and observations Ida was discovered on 29 September 1884 by Austrian astronomer Johann Palisa at the Vienna Observatory. It was his 45th asteroid discovery. Ida was named by Moriz von Kuffner, a Viennese brewer and amateur astronomer. In Greek mythology, Ida was a nymph of Crete who raised the god Zeus. Ida was recognized as a member of the Koronis family by Kiyotsugu Hirayama, who proposed in 1918 that the group comprised the remnants of a destroyed precursor body. Ida's reflection spectrum was measured on 16 September 1980 by astronomers David J. Tholen and Edward F. Tedesco as part of the eight-color asteroid survey (ECAS). Its spectrum matched those of the asteroids in the S-type classification. Many observations of Ida were made in early 1993 by the US Naval Observatory in Flagstaff and the Oak Ridge Observatory. These improved the measurement of Ida's orbit around the Sun and reduced the uncertainty of its position during the Galileo flyby from 78 to 60 km (48 to 37 mi). ## Exploration ### Galileo flyby Ida was visited in 1993 by the Jupiter-bound space probe Galileo. Its encounters of the asteroids Gaspra and Ida were secondary to the Jupiter mission. These were selected as targets in response to a new NASA policy directing mission planners to consider asteroid flybys for all spacecraft crossing the belt. No prior missions had attempted such a flyby. Galileo was launched into orbit by the Space Shuttle Atlantis mission STS-34 on 18 October 1989. 
Changing Galileo's trajectory to approach Ida required that it consume 34 kg (75 lb) of propellant. Mission planners delayed the decision to attempt a flyby until they were certain that this would leave the spacecraft enough propellant to complete its Jupiter mission. Galileo's trajectory carried it into the asteroid belt twice on its way to Jupiter. During its second crossing, it flew by Ida on 28 August 1993 at a speed of 12,400 m/s (41,000 ft/s) relative to the asteroid. The onboard imager observed Ida from a distance of 240,350 km (149,350 mi) to its closest approach of 2,390 km (1,490 mi). Ida was the second asteroid, after Gaspra, to be imaged by a spacecraft. About 95% of Ida's surface came into view of the probe during the flyby. Transmission of many Ida images was delayed due to a permanent failure in the spacecraft's high-gain antenna. The first five images were received in September 1993. These comprised a high-resolution mosaic of the asteroid at a resolution of 31–38 m/pixel. The remaining images were sent in February 1994, when the spacecraft's proximity to the Earth allowed higher speed transmissions. ### Discoveries The data returned from the Galileo flybys of Gaspra and Ida, and the later NEAR Shoemaker asteroid mission, permitted the first study of asteroid geology. Ida's relatively large surface exhibited a diverse range of geological features. The discovery of Ida's moon Dactyl, the first confirmed satellite of an asteroid, provided additional insights into Ida's composition. Ida is classified as an S-type asteroid based on ground-based spectroscopic measurements. The composition of S-types was uncertain before the Galileo flybys, but was interpreted to be either of two minerals found in meteorites that had fallen to the Earth: ordinary chondrite (OC) and stony-iron. Estimates of Ida's density are constrained to less than 3.2 g/cm<sup>3</sup> by the long-term stability of Dactyl's orbit. This all but rules out a stony-iron composition; were Ida made of 5 g/cm<sup>3</sup> iron- and nickel-rich material, it would have to contain more than 40% empty space. The Galileo images also led to the discovery that space weathering was taking place on Ida, a process which causes older regions to become more red in color over time. The same process affects both Ida and its moon, although Dactyl shows a lesser change. The weathering of Ida's surface revealed another detail about its composition: the reflection spectra of freshly exposed parts of the surface resembled that of OC meteorites, but the older regions matched the spectra of S-type asteroids. Both of these discoveries—the space weathering effects and the low density—led to a new understanding about the relationship between S-type asteroids and OC meteorites. S-types are the most numerous kind of asteroid in the inner part of the asteroid belt. OC meteorites are, likewise, the most common type of meteorite found on the Earth's surface. The reflection spectra measured by remote observations of S-type asteroids, however, did not match that of OC meteorites. The Galileo flyby of Ida found that some S-types, particularly the Koronis family, could be the source of these meteorites. ## Physical characteristics Ida's mass is between 3.65 and 4.99 × 10<sup>16</sup> kg. Its gravitational field produces an acceleration of about 0.3 to 1.1 cm/s<sup>2</sup> over its surface. 
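As a rough cross-check of these figures, a minimal sketch can treat Ida as a uniform sphere of mean diameter 31.4 km — a deliberate simplification, since the elongated shape and rapid spin are what spread the true surface acceleration over the quoted 0.3–1.1 cm/s² range:

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
R = 31.4e3 / 2       # mean radius in metres, from the 31.4 km mean diameter

for M in (3.65e16, 4.99e16):          # the mass range quoted above, in kg
    g = G * M / R**2                  # surface acceleration of an equivalent uniform sphere
    v_esc = (2 * G * M / R) ** 0.5    # corresponding escape speed
    print(f"M = {M:.2e} kg: g ~ {100*g:.1f} cm/s^2, escape speed ~ {v_esc:.0f} m/s")

# Prints roughly 1.0-1.4 cm/s^2 and 18-21 m/s: the spherical estimate sits near the
# top of the quoted gravity range because Ida's extremities lie far from its centre
# of mass and their effective gravity is further reduced by the asteroid's rotation.
```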
This field is so weak that an astronaut standing on its surface could leap from one end of Ida to the other, and an object moving in excess of 20 m/s (70 ft/s) could escape the asteroid entirely. Ida is a distinctly elongated asteroid, with an irregular surface. Ida is 2.35 times as long as it is wide, and a "waist" separates it into two geologically dissimilar halves. This constricted shape is consistent with Ida being made of two large, solid components, with loose debris filling the gap between them. However, no such debris was seen in high-resolution images captured by Galileo. Although there are a few steep slopes tilting up to about 50° on Ida, the slope generally does not exceed 35°. Ida's irregular shape is responsible for the asteroid's very uneven gravitational field. The surface acceleration is lowest at the extremities because of their high rotational speed. It is also low near the "waist" because the mass of the asteroid is concentrated in the two halves, away from this location. ## Surface features Ida's surface appears heavily cratered and mostly gray, although minor color variations mark newly formed or uncovered areas. Besides craters, other features are evident, such as grooves, ridges, and protrusions. Ida is covered by a thick layer of regolith, loose debris that obscures the solid rock beneath. The largest, boulder-sized, debris fragments are called ejecta blocks, several of which have been observed on the surface. ### Regolith The surface of Ida is covered in a blanket of pulverized rock, called regolith, about 50–100 m (160–330 ft) thick. This material is produced in impact events and redistributed across Ida's surface by geological processes. Galileo observed evidence of recent downslope regolith movement. Ida's regolith is composed of the silicate minerals olivine and pyroxene. Its appearance changes over time through a process called space weathering. Because of this process, older regolith appears more red in color compared to freshly exposed material. About 20 large (40–150 m across) ejecta blocks have been identified, embedded in Ida's regolith. Ejecta blocks constitute the largest pieces of the regolith. Because ejecta blocks are expected to break down quickly by impact events, those present on the surface must have been either formed recently or uncovered by an impact event. Most of them are located within the craters Lascaux and Mammoth, but they may not have been produced there. This area attracts debris due to Ida's irregular gravitational field. Some blocks may have been ejected from the young crater Azzurra on the opposite side of the asteroid. ### Structures Several major structures mark Ida's surface. The asteroid appears to be split into two halves, here referred to as region 1 and region 2, connected by a "waist". This feature may have been filled in by debris, or blasted out of the asteroid by impacts. Region 1 of Ida contains two major structures. One is a prominent 40 km (25 mi) ridge named Townsend Dorsum that stretches 150 degrees around Ida's surface. The other structure is a large indentation named Vienna Regio. Ida's region 2 features several sets of grooves, most of which are 100 m (330 ft) wide or less and up to 4 km (2.5 mi) long. They are located near, but are not connected with, the craters Mammoth, Lascaux, and Kartchner. Some grooves are related to major impact events, for example a set opposite Vienna Regio. 
### Craters

Ida is one of the most densely cratered bodies yet explored in the Solar System, and impacts have been the primary process shaping its surface. Cratering has reached the saturation point, meaning that new impacts erase evidence of old ones, leaving the total crater count roughly the same. It is covered with craters of all sizes and stages of degradation, ranging in age from fresh to as old as Ida itself. The oldest may have been formed during the breakup of the Koronis family parent body. The largest crater, Lascaux, is almost 12 km (7.5 mi) across. Region 2 contains nearly all of the craters larger than 6 km (3.7 mi) in diameter, but Region 1 has no large craters at all. Some craters are arranged in chains. Ida's major craters are named after caves and lava tubes on Earth. The crater Azzurra, for example, is named after a submerged cave on the island of Capri, also known as the Blue Grotto. Azzurra seems to be the most recent major impact on Ida. The ejecta from this collision is distributed discontinuously over Ida and is responsible for the large-scale color and albedo variations across its surface. An exception to the typical crater morphology is the fresh, asymmetric crater Fingal, which has a sharp boundary between floor and wall on one side. Another significant crater is Afon, which marks Ida's prime meridian. The craters are simple in structure: bowl-shaped with no flat bottoms and no central peaks. They are distributed evenly around Ida, except for a protrusion north of the crater Choukoutien, which is smoother and less cratered. The ejecta excavated by impacts is deposited differently on Ida than on planets because of its rapid rotation, low gravity and irregular shape. Ejecta blankets settle asymmetrically around their craters, but fast-moving ejecta that escapes from the asteroid is permanently lost.

## Composition

Ida was classified as an S-type asteroid based on the similarity of its reflectance spectrum to those of other S-type asteroids. S-types may share their composition with stony-iron or ordinary chondrite (OC) meteorites. The composition of the interior has not been directly analyzed, but is assumed to be similar to OC material based on observed surface color changes and Ida's bulk density of 2.27–3.10 g/cm<sup>3</sup>. OC meteorites contain varying amounts of the silicates olivine and pyroxene, iron, and feldspar. Olivine and pyroxene were detected on Ida by Galileo. The mineral content appears to be homogeneous throughout its extent. Galileo found minimal variations on the surface, and the asteroid's spin indicates a consistent density. Assuming that its composition is similar to OC meteorites, which range in density from 3.48 to 3.64 g/cm<sup>3</sup>, Ida would have a porosity of 11–42%. Ida's interior probably contains some amount of impact-fractured rock, called megaregolith. The megaregolith layer of Ida extends from hundreds of meters below the surface down to a few kilometers. Some rock in Ida's core may have been fractured below the large craters Mammoth, Lascaux, and Undara.

## Orbit and rotation

Ida is a member of the Koronis family of main-belt asteroids. It orbits the Sun at an average distance of 2.862 AU (428.1 Gm), between the orbits of Mars and Jupiter, and takes 4.84089 years to complete one orbit. Ida's rotation period is 4.63 hours, making it one of the fastest rotating asteroids yet discovered. The calculated maximum moment of inertia of a uniformly dense object the same shape as Ida coincides with the spin axis of the asteroid. 
This suggests that there are no major variations of density within the asteroid. Ida's axis of rotation precesses with a period of 77 thousand years, due to the gravity of the Sun acting upon the nonspherical shape of the asteroid. ## Origin Ida originated in the breakup of the roughly 120 km (75 mi) diameter Koronis parent body. The progenitor asteroid had partially differentiated, with heavier metals migrating to the core. Ida carried away insignificant amounts of this core material. It is uncertain how long ago the disruption event occurred. According to an analysis of Ida's cratering processes, its surface is more than a billion years old. However, this is inconsistent with the estimated age of the Ida–Dactyl system of less than 100 million years; it is unlikely that Dactyl, due to its small size, could have escaped being destroyed in a major collision for longer. The difference in age estimates may be explained by an increased rate of cratering from the debris of the Koronis parent body's destruction. ## Dactyl Ida has a moon named Dactyl, official designation (243) Ida I Dactyl. It was discovered in images taken by the Galileo spacecraft during its flyby in 1993. These images provided the first direct confirmation of an asteroid moon. At the time, it was separated from Ida by a distance of 90 kilometres (56 mi), moving in a prograde orbit. Dactyl is heavily cratered, like Ida, and consists of similar materials. Its origin is uncertain, but evidence from the flyby suggests that it originated as a fragment of the Koronis parent body. ### Discovery Dactyl was found on 17 February 1994 by Galileo mission member Ann Harch, while examining delayed image downloads from the spacecraft. Galileo recorded 47 images of Dactyl over an observation period of 5.5 hours in August 1993. The spacecraft was 10,760 kilometres (6,690 mi) from Ida and 10,870 kilometres (6,750 mi) from Dactyl when the first image of the moon was captured, 14 minutes before Galileo made its closest approach. Dactyl was initially designated 1993 (243) 1. It was named by the International Astronomical Union in 1994, for the mythological dactyls who inhabited Mount Ida on the island of Crete. ### Physical characteristics Dactyl is an "egg-shaped" but "remarkably spherical" object measuring 1.6 by 1.4 by 1.2 kilometres (0.99 by 0.87 by 0.75 mi). It is oriented with its longest axis pointing towards Ida. Like Ida, Dactyl's surface exhibits saturation cratering. It is marked by more than a dozen craters with a diameter greater than 80 m (260 ft), indicating that the moon has suffered many collisions during its history. At least six craters form a linear chain, suggesting that it was caused by locally produced debris, possibly ejected from Ida. Dactyl's craters may contain central peaks, unlike those found on Ida. These features, and Dactyl's spheroidal shape, imply that the moon is gravitationally controlled despite its small size. Like Ida, its average temperature is about 200 K (−73 °C; −100 °F). Dactyl shares many characteristics with Ida. Their albedos and reflection spectra are very similar. The small differences indicate that the space weathering process is less active on Dactyl. Its small size would make the formation of significant amounts of regolith impossible. This contrasts with Ida, which is covered by a deep layer of regolith. The two largest imaged craters on Dactyl were named Acmon /ˈækmən/ and Celmis /ˈsɛlmɪs/, after two of the mythological dactyls. 
Acmon, the larger of the two at 300 meters in diameter, is the most prominent crater visible in the Galileo images; Celmis, 200 meters in diameter, lies mostly in shadow in those images.

### Orbit

Dactyl's orbit around Ida is not precisely known. Galileo was in the plane of Dactyl's orbit when most of the images were taken, which made determining its exact orbit difficult. Dactyl orbits in the prograde direction and is inclined about 8° to Ida's equator. Based on computer simulations, Dactyl's pericenter must be more than about 65 km (40 mi) from Ida for it to remain in a stable orbit. The range of orbits produced by the simulations was narrowed by requiring them to pass through the point at which Galileo observed Dactyl at 16:52:05 UT on 28 August 1993, about 90 km (56 mi) from Ida at longitude 85°. On 26 April 1994, the Hubble Space Telescope observed Ida for eight hours and was unable to spot Dactyl. It would have been able to detect the moon had it been more than about 700 km (430 mi) from Ida. If Dactyl were in a circular orbit at the distance at which it was seen, its orbital period would be about 20 hours. Its orbital speed is roughly 10 m/s (33 ft/s), "about the speed of a fast run or a slowly thrown baseball".

### Age and origin

Dactyl may have originated at the same time as Ida, from the disruption of the Koronis parent body. However, it may have formed more recently, perhaps as ejecta from a large impact on Ida. It is extremely unlikely that it was captured by Ida. Dactyl may have suffered a major impact around 100 million years ago, which reduced its size.

## See also

- List of geological features on 243 Ida and Dactyl
- List of minor planets
31,757,964
1986–87 Gillingham F.C. season
1,136,196,585
null
[ "1986–87 Football League Third Division by team", "Gillingham F.C. seasons" ]
During the 1986–87 English football season, Gillingham F.C. competed in the Football League Third Division. It was the 55th season in which the club competed in the Football League, and the 37th since the club was voted back into the league in 1950. Gillingham began the season strongly and were top of the Third Division table shortly before the mid-point of the season. The team's form declined in the second half of the season; to qualify for the play-offs for promotion to the Football League Second Division, the team needed to win their final game and both Bristol City and Notts County had to fail to win theirs. A victory over Bolton Wanderers, combined with both the other teams being held to draws, meant that Gillingham finished in fifth place and qualified for the play-offs. After beating Sunderland in the semi-finals, Gillingham faced Swindon Town in the final. The two teams drew 2–2 on aggregate, necessitating a replay at a neutral venue, which Swindon won 2–0 to claim a place in the Second Division. During the season, Gillingham also reached the third round of the FA Cup, the second round of the Football League Cup, and the southern section semi-finals of the Associate Members' Cup. The team played 63 competitive matches, winning 31, drawing 12 (including one decided by a penalty shoot-out), and losing 20. Tony Cascarino was the club's leading goalscorer, with 30 goals in all competitions. Howard Pritchard and Paul Haylock made the most appearances; both played in 62 of the club's 63 matches. The highest attendance recorded at the club's home ground, Priestfield Stadium, was 16,775, for the home leg of the play-off final. ## Background and pre-season The 1986–87 season was Gillingham's 55th season playing in the Football League and the 37th since the club was elected back into the League in 1950 after being voted out in 1938. It was the club's 13th consecutive season in the Football League Third Division, the third tier of the English football league system, since the team gained promotion from the Fourth Division as runners-up in 1974. In the 12 seasons since then, the team had achieved a best finish of fourth place, one position away from promotion to the Second Division, a feat achieved in both the 1978–79 and 1984–85 seasons. The club had never reached the second level of English football in its history. In the 1985–86 season, Gillingham had finished fifth and missed out on promotion by two places. Keith Peacock was the club's manager for a sixth season, having been appointed in July 1981. Paul Taylor served as assistant manager, Bill Collins, who had been with the club in a variety of roles since the early 1960s, held the post of first-team trainer, and John Gorman managed the youth team. Mark Weatherly took over as team captain, replacing Keith Oakes, who was only named as a substitute for the opening game of the season and left the club soon afterwards to join Fulham. Before the season began, a series of disputes took place involving the club's board of directors. In late June, chairman Charles Cox announced that he had dismissed three directors from their posts; the following day, the ousted trio gave an interview to the press and claimed that under his chairmanship the club's debt had reached £700,000 and that it faced the threat of a potential liquidation order from the Inland Revenue. 
Four days later, following a showdown meeting between the two parties, Cox resigned as chairman and the three deposed directors returned to the board with one of them, Roy Wood, becoming the new chairman. The directors then issued a statement to Gillingham supporters stating that the club's finances were under control and that money would be available to manager Peacock to sign new players in anticipation of another challenge for promotion. In July, the club's financial director announced that a settlement had been reached with the Inland Revenue. Following the resolution of the issues behind the scenes, Peacock signed six new players before the season began. In July, midfielder Trevor Quow joined from Peterborough United for a transfer fee of £8,500. The following month, Gillingham signed Howard Pritchard, a winger who had made one appearance for the Welsh national team in 1985, from Bristol City for a fee of £22,500. Defender Graham Pearce and midfielder Mel Eves arrived from Brighton & Hove Albion and Sheffield United respectively on free transfers, and the club paid semi-professional club Welling United a fee of £3,000 to sign winger Dave Smith. Defender Paul Haylock signed for £25,000 from Norwich City, having rejected a new contract shortly after helping the team win the Second Division championship. Several players left the club, including defender Mel Sage, one of the club's most promising young players, who joined Derby County of the Second Division for a fee of £60,000. Gillingham had hoped for a significantly higher fee, but with the two clubs unable to agree on terms, the transfer fee had to be set by an independent tribunal. Karl Elsey came close to leaving the club, but failed to agree a contract with Reading and so remained at Gillingham. The team prepared for the new season with several friendly matches, including a testimonial match for the long-serving Weatherly, for which Tottenham Hotspur of the First Division provided the opposition. The team retained the first-choice kit worn in the previous season of blue shirts with a white panel down each side. The second-choice shirts to be worn in the event of a clash of colours with the opposition changed from plain red to white with a blue zig-zag band across the chest. ## Third Division ### August–December The team's first game of the season was an away match against Newport County; Haylock, Pearce, Pritchard and Quow all made their debuts in a 2–1 victory. Weatherly scored the team's first goal of the season and Dave Shearer scored the winner. The first home league game took place at Priestfield Stadium seven days later against Bristol City in front of a crowd of 4,185, the largest attendance for Gillingham's opening home game since 1981. Shearer scored in a 1–1 draw and then got the only goal of the game away to Rotherham United to give Gillingham the win and seven points out of a possible nine from the first three games of the season. The team's unbeaten run in the league extended to six games with a goalless draw against Middlesbrough and 2–0 wins against both York City and Brentford, before the first defeat of the season came against Mansfield Town. Colin Greenall, a highly rated defender who had been signed from Blackpool for £40,000 at the start of September, made his debut in the Middlesbrough game. A game against Chester City which should have taken place in late September was postponed because of an outbreak of illness among the opposing players. 
Following the defeat at Mansfield, Gillingham were unbeaten for the next seven league games, winning five and drawing two. Shearer scored in four consecutive games, taking his total for the season to seven goals. The team had no game on October 11, as the scheduled match away to AFC Bournemouth was postponed because Dorset Constabulary did not have sufficient manpower to police both the match and the Conservative Party Conference, which was taking place in the town. The unbeaten run came to an end with a 2–0 defeat away to Doncaster Rovers on 7 November, but the team then won 3–1 against fellow promotion-chasers Notts County, a game in which Shearer scored twice. Despite a defeat away to Wigan Athletic on 29 November, in a match which was unusually played in the morning to avoid a clash with an international rugby league match taking place in the town, Gillingham ended the month in second place in the league table. The postponed game away to AFC Bournemouth was played on 2 December; since the start of the season Bournemouth had won every league game played at their home stadium, Dean Court. Gillingham, however, secured a 2–0 win with goals from Martin Robinson and Pritchard, which took the team to the top of the Third Division table. In the next league game, Gillingham lost 3–0 away to mid-table Bolton Wanderers, a game in which Tony Cascarino was sent off. Gillingham bounced back from the defeat with a 4–1 victory over Bristol Rovers which ensured they were back on top of the division heading into the Christmas period. The team ended 1986 with two games on consecutive days; a draw with Fulham on Boxing Day followed by a defeat to Swindon Town the next day left Gillingham in third place in the Third Division table going into the new year. The game against Swindon, regarded by fans as one of Gillingham's rivals since the 1970s, drew an attendance of 9,982, more than 4,000 higher than that at any other match at Priestfield to that point of the season. ### January–May Gillingham began 1987 with a home win over Walsall on New Year's Day. Pritchard scored a hat-trick before half-time in a 4–0 victory which brought the team to within one point of league leaders Middlesbrough, who lost away to York City. After this the team lost four of their next six matches and increasingly began to lose touch with the teams at the top of the league; following a defeat against Brentford on 21 February Gillingham had dropped to sixth in the table. In February, Shearer sustained an injury, so Peacock signed Colin Gordon on loan from Wimbledon; the striker scored twice in four games before returning to his parent club. In the same month, goalkeeper Phil Kite joined from Southampton, initially on loan, after Ron Hillyard was injured; the transfer was made permanent after some impressive performances and Kite retained the goalkeeping position for the remainder of the season, playing in every match. Midfielder Steve Jacobs, who had joined the club from Charlton Athletic in December, made his debut in February and played six consecutive league games, but then did not play again for over a month. In March, Gillingham won three consecutive league matches for the first time since November, beating Carlisle United, Darlington, and Bournemouth. Shearer, in his first start after his injury, scored twice against Carlisle. Against Darlington, Cascarino became the second Gillingham player of the season to score a hat-trick, with three goals in a 4–1 victory. 
The Bournemouth game, played on Easter Monday, drew the club's largest home crowd since the Swindon game in December. Experienced defender Les Berry joined Gillingham from Brighton & Hove Albion during March; he made his debut in an away defeat to Bury at the end of the month and was an ever-present for the remainder of the season. The team began April with two consecutive wins against Doncaster Rovers and Blackpool but then lost to Walsall. On his return to the team against Walsall, Jacobs was sent off for retaliating after being fouled by an opponent. He did not play in any of the team's remaining games and left the club at the end of the season. A win against Bristol Rovers on 25 April left Gillingham in fifth place in the league table, but the next three games produced two draws and one defeat, after which the team had fallen to seventh position with one game remaining. At the start of the season, the Football League had introduced a new play-off system, under which the teams which finished just below the automatic promotion places in the Second, Third, and Fourth Divisions would have the opportunity to compete for one further promotion place with one team from the division above; in the Third Division this meant that the teams finishing third, fourth, and fifth in the final table would take part. To finish in fifth position and qualify for the play-offs, the team needed to defeat Bolton Wanderers on the last day of the league season and needed both Bristol City and Notts County not to win. A goal from Cascarino secured a 1–0 win, and as both of their rivals were held to 1–1 draws, Gillingham clinched a play-off place. Cascarino's goal was his 16th of the season in Third Division matches, tying him with Shearer as the club's top goalscorer in league matches. ### Match results Key - In result column, Gillingham's score shown first - H = Home match - A = Away match - pen. = Penalty kick - o.g. = Own goal Results ### Partial league table ### Play-offs Under the original format of the play-offs, the club which had finished immediately above the automatic relegation places in the Second Division competed with the three clubs which had finished immediately below the automatic promotion places in the Third Division for a place in the second tier of English football for the following season. At the semi-final stage, Gillingham were paired with Sunderland, who had finished the season in 20th place in the Second Division, in a two-legged tie. Sunderland took a 1–0 lead in the first half of the first leg at Priestfield, but Cascarino scored a hat-trick after the interval to put Gillingham 3–1 up. A late goal for Sunderland made the final score 3–2 to Gillingham. Three days later at Roker Park in Sunderland, Pritchard scored for Gillingham inside the first five minutes to give his team a two-goal lead on aggregate, but Sunderland then scored twice. In the second half, Cascarino made the score 2–2 on the day and 5–4 on aggregate, but Sunderland got a third goal in the final minute to bring the aggregate scores level at 5–5 and send the game into extra time. Both teams got one more goal in the extra period, at the end of which the score was 6–6 on aggregate, but Gillingham progressed to the final because they had scored more away goals. The final against Swindon Town was also played over two legs and again the first match took place at Priestfield, where the attendance of 16,775 was the largest crowd of the season at the stadium. 
Violence broke out before the game between the two sets of fans and two British Transport Police officers were injured. In the match itself, Swindon were "superior in all departments except the telling ones – finishing and goalkeeping" according to David Powell of The Times. The game remained goal-less until the 81st minute when Smith scored following a free kick from Quow to give Gillingham a single-goal lead going into the second leg. Three days later at Swindon's County Ground, Elsey volleyed the ball into the goal to give Gillingham a two-goal lead on aggregate, but goals from Peter Coyne and Charlie Henry for Swindon made the result on the day 2–1 to Swindon and the aggregate score across the two legs 2–2. Unlike in the semi-final, away goals were not used as a tiebreaker in the final; instead the rules stated that, in the event of the scores finishing level after the two legs, a replay would take place at a neutral stadium. Robert Armstrong of The Guardian described the second leg as "an epic battle, in the best Anglo-Saxon tradition of the knockout competition". The replay took place at Selhurst Park, home of Crystal Palace. It was Gillingham's 63rd match of the season, a new record for the highest number of games the team had played in a season since joining the Football League. Swindon took the lead after only two minutes following a defensive error by Gillingham. Although Gillingham were the stronger team in the second half, they could not manage to score a goal, and Steve White of Swindon got his second goal of the match in the second half. Despite further pressure on their goal, Swindon held on for a 2–0 victory and promotion to the Second Division. ### Match results Key - In result column, Gillingham's score shown first - H = Home match - A = Away match - N = Match at a neutral venue - pen. = Penalty kick - o.g. = Own goal Results ## Cup matches ### FA Cup As a Third Division team, Gillingham entered the 1986–87 FA Cup in the first round, where they were drawn to play Kettering Town of the Football Conference, the highest level of non-League football. Gillingham won 3–0 with goals from Robinson, Hinnigan and an own goal from a Kettering player. In the second round, Gillingham played another non-League team, Chelmsford City of the Southern League. Cascarino scored twice in a 2–0 victory. Gillingham's third round opponents were fellow Third Division team Wigan Athletic. The game was twice postponed due to heavy snow in the south of England, the club at one point hiring a police Land Rover to pick up players who lived in outlying areas after the football authorities initially refused the second postponement. After that decision was reversed, the match finally took place on 19 January. Greenall scored a goal from a penalty kick but Wigan scored twice to end Gillingham's cup run. #### Match results Key - In result column, Gillingham's score shown first - H = Home match - A = Away match - pen. = Penalty kick - o.g. = Own goal Results ### Football League Cup Gillingham entered the 1986–87 Football League Cup in the first round, being drawn against Northampton Town of the Fourth Division. The first round was played over two legs; Gillingham won the first leg at Priestfield 1–0 and drew the second leg at the County Ground 2–2 for a 3–2 aggregate win. In the second round, Gillingham were drawn against the reigning cup-holders, Oxford United of the First Division. 
The first leg was played at Oxford's home ground, the Manor Ground, where Gillingham were comprehensively outplayed, losing 6–0. Republic of Ireland international striker John Aldridge scored four of the Oxford goals. This was the most goals conceded by Gillingham in a match since a 7–1 defeat by York City in November 1984. Although Gillingham managed to hold their First Division opponents to a 1–1 draw at Priestfield in the second leg, they lost 7–1 on aggregate and were eliminated from the League Cup. #### Match results Key - In result column, Gillingham's score shown first - H = Home match - A = Away match - pen. = Penalty kick - o.g. = Own goal Results ### Associate Members' Cup The 1986–87 Associate Members' Cup, a tournament exclusively for Third and Fourth Division teams, began with a preliminary round in which the teams were drawn into groups of three, contested on a round-robin basis. Gillingham were drawn with Notts County of the Third Division and Northampton Town of the Fourth and won both games without conceding a goal, defeating Notts County 5–0 away from home and Northampton 1–0 at Priestfield Stadium. The team thus qualified for the first round, where they were paired with Colchester United of the Fourth Division. Goals from Smith and Cascarino gave Gillingham a 2–0 win in front of 1,984 fans, the smallest crowd to attend a match at Priestfield during the season. In the southern section quarter-final, Gillingham played Port Vale; the match finished 3–3 after extra time, meaning that a penalty shoot-out was required to determine which team would progress to the next round. Gillingham scored all five of their penalties and then Hillyard saved Port Vale's final penalty meaning that Gillingham won the shoot-out 5–4. Now just two wins away from the final, Gillingham's next opponents were fellow Third Division promotion challengers Bristol City; it was the second consecutive season in which the two teams had met at this stage of the competition. The attendance of 10,540 at Bristol City's home stadium, Ashton Gate, was the largest crowd in front of which Gillingham had played up to that point in the season. Gillingham lost 2–0, meaning that their participation in the Associate Members' Cup was ended by Bristol City for the second season in a row. #### Match results Key - In result column, Gillingham's score shown first - H = Home match - A = Away match - pen. = Penalty kick - o.g. = Own goal Results ## Players Pritchard and Haylock made the most appearances of any Gillingham player during the season, both missing only a single game. Pritchard was in the starting line-up for 44 of the 46 league games and came on as a substitute in both the others. He also played in all of the club's games in the FA Cup, League Cup, Associate Members' Cup and play-offs with the sole exception of the match against Chelmsford City in the FA Cup, for a total of 62 games in all competitions. Haylock was absent for one league game, against Middlesbrough in April, but played in every match in the other competitions and thus finished the season with the same number of appearances; unlike Pritchard, he was in the starting line-up for all 62 games in which he played. Cascarino had the next highest number of appearances, with 60; he missed three consecutive league games between December 26 and January 1 but started every other match. Oakes, Graham Westley and youth-team manager Gorman made the fewest appearances, each playing twice. 
Gorman, aged 37, was named as a substitute in the first leg of the League Cup tie against Oxford United and was in the starting line-up for the second leg; he had not played a professional match in England since 1979. Westley's two appearances were both as a substitute, making him the only player to play for Gillingham during the season without starting a game. The veteran Weatherly made his 500th appearance for the team in April, only the fourth player in the club's history to reach this milestone. Cascarino was the team's leading scorer when considering goals in all competitions. The striker scored 16 goals in Third Division matches, 2 in the FA Cup, 3 in the League Cup, 4 in the Associate Members' Cup and 5 in the play-offs for a total of 30 goals. Shearer scored the same number of goals as Cascarino in Third Division matches but only added 2 in other competitions for a total of 18. Cascarino scored a hat-trick on two occasions, once in the Third Division and once in the play-offs. Pritchard scored the team's only other hat-trick of the season and was the only other player to reach double figures, scoring 12 goals in the Third Division and 14 overall. Both Cascarino and Greenall were elected by their fellow professionals into the PFA Team of the Year for the Third Division. ## Aftermath Gillingham manager Peacock noted that he felt "as low as I have ever felt in football" after the play-off final defeat. He also rued the absence of Shearer for parts of the season, contending that if the Scottish striker had been fit throughout, his goalscoring partnership with Cascarino would have secured an automatic promotion place. It had been speculated during the season that if Gillingham again failed to gain promotion, Cascarino, seen as the team's most valued player, would be signed by a club in a higher division, officials from several top clubs having visited Gillingham matches to watch him in action. Shortly after the play-off final defeat he joined Millwall of the Second Division for a transfer fee of £225,000. This was at the time the highest fee which Gillingham had ever received for a player. He would go on to play at the highest level in both England and Scotland and represent the Republic of Ireland at Euro 1988, the 1990 World Cup and the 1994 World Cup. Robinson, who had been a regular starter in the first half of the season but featured less frequently in the latter stages, also moved on, joining Southend United for £25,000. Gillingham began the following season mounting another challenge for promotion, and in the early part of the season beat Southend United 8–1 and Chesterfield 10–0 on consecutive Saturdays. The team's form quickly declined and Peacock was sacked in December 1987, to be replaced by his former assistant Taylor. The team finished the 1987–88 season in 13th place in the Third Division.
5,076,476
Super Mario Galaxy
1,173,716,099
2007 video game
[ "2007 video games", "3D platform games", "Asymmetrical multiplayer video games", "British Academy Games Award for Best Game winners", "Cooperative video games", "D.I.C.E. Award for Adventure Game of the Year winners", "Multiplayer and single-player video games", "Nintendo Entertainment Analysis and Development games", "Science fantasy video games", "Spike Video Game Award winners", "Super Mario", "Video games about dinosaurs", "Video games about extraterrestrial life", "Video games developed in Japan", "Video games produced by Shigeru Miyamoto", "Video games scored by Koji Kondo", "Video games scored by Mahito Yokota", "Video games set in outer space", "Video games set on fictional planets", "Wii games", "Wii games re-released on the Nintendo eShop", "Wii-only games" ]
Super Mario Galaxy is a 2007 platform game developed and published by Nintendo for the Wii. It is the third 3D game in the Super Mario series. As Mario, the player embarks on a quest to rescue Princess Peach, save the universe from Bowser, and collect 120 Power Stars, after which the player can play the game as Luigi for a more difficult experience. The levels consist of galaxies filled with minor planets and worlds, with different variations of gravity, the central element of gameplay. The player character is controlled using the Wii Remote and Nunchuk and completes missions, fights bosses, and reaches certain areas to collect Power Stars. Certain levels use the motion-based Wii Remote functions.

Nintendo EAD Tokyo began developing Super Mario Galaxy after the release of Donkey Kong Jungle Beat in late 2004, when Shigeru Miyamoto suggested that Nintendo commission a large-scale Mario game. The concept of spherical platforms originated from Super Mario 128, a GameCube tech demo shown at Nintendo Space World in 2000. Nintendo aimed to make the game appeal to players of all ages, and the team had more freedom in designing it compared to other Super Mario games because of the outer space setting. The game was directed by Yoshiaki Koizumi, and its soundtrack was composed by Mahito Yokota and Koji Kondo, using a symphony orchestra for the first time in the series.

Super Mario Galaxy was a critical and commercial success, hailed as one of the best games in the series and one of the greatest video games of all time. At the time of its closure in 2019, Super Mario Galaxy was the highest-rated game of all time on the review-aggregating site GameRankings. The game's graphics, gravity mechanics, level design, soundtrack, setting, and story all received high praise. It won several awards from top gaming publications, including multiple "Game of the Year" titles, and became the first Nintendo title to win the BAFTA Award for Best Game. The game is the ninth best-selling Wii game worldwide, with sales of 12.80 million copies. The game was released as a Nintendo Selects title in 2011, as a download via the Wii U's eShop in 2015, on the Nvidia Shield in China in 2018, and as part of the Super Mario 3D All-Stars collection for the Nintendo Switch in 2020. A sequel, Super Mario Galaxy 2, was released for the Wii in 2010.

## Gameplay

### Premise and setting

Super Mario Galaxy is set in outer space, where Mario travels through different galaxies to collect Power Stars, earned by completing missions, defeating a boss, or reaching a particular area. Each galaxy contains planetoids and orbiting structures for the player to explore. Each astronomical object has its own gravitational force, allowing the player to completely circumnavigate the planetoids, walking sideways or upside down. The player can usually jump from one independent object and fall towards another one nearby. Although the main gameplay is in 3D, there are several areas in the game in which the player's movements are restricted to a two-dimensional plane.

The game's main hub is the Comet Observatory, a spaceship which contains six domes that provide access to most of the game's 42 available galaxies, with each dome except one holding five. Five domes end with a boss level in which the objective is to defeat Bowser or Bowser Jr. and earn a special Power Star, known as a Grand Star, that gives the player access to the next dome. The player only has access to one galaxy when they begin the game; as more Power Stars are collected, more galaxies and Stars become available.
The player is awarded the ability to play as Luigi after collecting 120 Power Stars as Mario. Once 120 Power Stars are collected with both characters, the player is rewarded one further challenge which, upon completion, awards the player with the final two stars, and two commemorative pictures of the characters that can be sent to the Wii Message Board. ### Controls The player-character is controlled via the Wii Remote and Nunchuk. While most of Mario's abilities are taken directly from Super Mario 64, such as the long jump, wall jumps, and a variety of somersaults, a new feature called the Star Pointer that uses the Wii Remote's motion sensor is included. It is a small blue cursor that appears when the Wii Remote pointer is pointed at the screen. The Star Pointer is used to pick up special konpeito-shaped objects called "Star Bits", which can then be shot to stun enemies, manipulate obstacles, or feed Hungry Lumas (star-shaped sentient beings). The pointer can also latch onto small blue objects called "Pull Stars", which can pull Mario through space. In certain levels that encase the player in a floating bubble, the Star Pointer is used to blow wind and maneuver the bubble. Early in the game, the player learns a new ability known as the "Spin" technique, which has appeared in varying forms throughout the Super Mario franchise. In Super Mario Galaxy, the "spin" is primarily used for melee attacks to stun enemies and shatter objects, as well as triggering special propellers called "Sling Stars" or "Launch Stars" that launch Mario across large distances through space. The "spin" utility is also used for climbing vines, ice skating, flipping switches, unscrewing bolts, and for activating several power-ups. Other Wii Remote functions are available for smaller quests, such as surfing aboard a manta ray or balancing atop a large ball and rolling it through an obstacle course. ### Power-ups and lives Nine power-ups grant Mario temporary abilities. For example, special mushrooms bestow the player with a Bee, Boo, or Spring Mushroom. The Bee Mushroom allows Mario to hover through the air, climb on honeycombs, and walk on clouds and flowers; the Boo Mushroom allows him to float through the air, become transparent, and move through certain obstacles; and the Spring Mushroom allows him to jump to high areas that would be otherwise inaccessible. The Fire Flower allows Mario to throw fireballs at enemies, and the Ice Flower grants flame attack immunity and allows Mario to create hexagonal ice tiles to cover any liquid surface he walks on. The Rainbow Star grants Mario invincibility and lets him run faster. Mario's health consists of a three-piece health meter, which is depleted through contact with enemies and hazards. When swimming underwater, Mario has an air supply meter, which quickly depletes his main health meter if it runs out. Mario's health and air supply can be restored by collecting coins, or through touching bubbles if underwater. When the health meter becomes empty, the player loses a life and must go back to a predetermined checkpoint. The health meter can temporarily expand to six units through the use of a Life Mushroom. Instant death can occur by being swallowed by quicksand or dark matter, by being crushed by hazards, or by falling into black holes or other bottomless pits. The player can obtain extra lives by collecting 1-Up Mushrooms, 50 coins without losing a life, or 50 Star Bits. 
### Multiplayer

Super Mario Galaxy has a co-operative two-player option called "Co-Star" mode, in which one player controls Mario while the other uses only the Wii Remote to control a second Star Pointer on-screen to gather Star Bits and shoot them at enemies. The second player can also make Mario jump, and the height of Mario's jump can be increased if the first and second player press the A button at the same time. The second player can prevent some enemies from moving by aiming the Star Pointer at them and holding the A button.

## Plot

The centennial Star Festival is held in the Mushroom Kingdom to watch a comet. On the night of the Star Festival, Princess Peach discovers a star-shaped creature called a Luma and invites Mario to come to the festival to see the Luma she discovered. Just as Mario arrives at the town, Bowser invades the Mushroom Kingdom in a fleet of airships, littering the landscape with fireballs and petrifying its citizens in crystals. He invites Peach to the creation of his galaxy, removes her castle from its foundations using a giant flying saucer, and lifts it into outer space. When Mario tries to rescue Peach, Kamek, one of Bowser's minions, uses his magic to launch him into space and onto a small planet, while the Luma escapes from Peach's hands.

On the planet, Mario wakes up after being knocked unconscious and meets the Luma which Peach found earlier, along with the space princess Rosalina and her star-shaped companions, the Lumas. Rosalina describes herself as a watcher of the stars who uses the Comet Observatory to travel across the universe. However, Bowser has stolen all of the Power Stars that act as the Observatory's power source, rendering it immobile. Bestowed with the power to travel through space by one of the Lumas, Mario sets off on a journey across the universe to reclaim the Power Stars and restore power to Rosalina's observatory. Along the way, he finds friends from the Mushroom Kingdom such as Luigi and the Toads while fighting Bowser and Bowser Jr. at certain points.

Upon collecting enough Power Stars, the Comet Observatory flies to the center of the universe, where Bowser is holding Peach captive. While confronting Bowser, Mario learns that he plans to rule the entire universe with Peach at his side. Mario defeats Bowser and frees Peach, but one of the galaxy's planets collapses on itself, becoming a supermassive black hole that begins consuming the entire universe. The Lumas sacrifice themselves by jumping into the black hole to destroy it, causing it to collapse into a singularity; the singularity then explodes in a huge supernova that recreates the universe. Rosalina appears to Mario, revealing that dying stars are later reborn as new stars. Mario awakens in the recreated Mushroom Kingdom alongside Peach and Bowser, and he celebrates the new galaxy that has emerged in the skies. If the player collects 120 stars, Rosalina will thank the player and, with the reborn Lumas, leave aboard the Comet Observatory to travel the cosmos again.

## Development

The concept for Super Mario Galaxy's gameplay originated from ideas taken from Super Mario 128, a technology demonstration shown at Nintendo Space World in 2000 to exemplify the processing power of the GameCube.
The demonstration's director (and future director of Super Mario Galaxy), Yoshiaki Koizumi, wanted one of its distinguishing features, spherical platforms, to be used in a future game, but was held back by the belief that such a feature would be impossible for technical reasons. Super Mario creator Shigeru Miyamoto suggested that the studio work on the next large-scale Mario game after Nintendo EAD Tokyo finished development of Donkey Kong Jungle Beat in late 2004, pushing for the spherical platform concept to be realised. A prototype of the game's physics system took three months to build, during which it was decided that the game's use of spherical platforms would be best suited to planetoids in an outer space environment, with the concept of gravity as a major feature. During development, the designers would often exchange ideas with Miyamoto from his office in Kyoto, where he would make suggestions about the game design. According to Koizumi, many ideas were conceived before development of the Wii console itself began.

The idea for Mario to have a "spin" attack came during the early stages of development, when it was decided that jumping on enemies on a spherical map would be difficult for some players – at one point, Koizumi remarked that making characters jump in a 3D environment was "absurd". Takao Shimizu, the game's producer and programmer, noted that the most basic action in a 3D action game was to simply run, and concluded that the easiest way to attack was to "spin", not jump. Before the development team shifted its focus to the Wii and realised the potential of its different controls, the "spin" attack was originally planned to be executed by swivelling the analogue stick on the GameCube controller. The "spin" was initially activated via rotation of the Nunchuk's control stick, but after motion sensing was confirmed to be implemented in the Wii Remote, the "spin" was changed to be activated by shaking the latter.

Nintendo president Satoru Iwata wanted to prioritise the game's "fun factor" by giving the player a sense of achievement after they completed a difficult task; Iwata noted that an increasing number of consumers were giving up partway through video games and thus wanted Super Mario Galaxy to appeal to that audience. In response, the development team created a co-operative mode which allowed one player to control Mario whilst the other controlled the pointer with the Wii Remote, thereby enabling less experienced players to enjoy themselves in the game. The development team wanted the game to be enjoyed by players from the ages of "5 to 95", so during the early stages of development they took steps to ensure that the player would adjust to the game without difficulty. However, Miyamoto thought that it was too easy and lacked intensity, asserting that a game loses its excitement when it is made unchallenging. To balance out the difficulty, Koizumi suggested that Mario's health meter should have a maximum capacity of three instead of eight, but that at the same time more 1-Up Mushrooms and checkpoints would be placed in the game. Koizumi said that he wanted to alter the game's "intensity factor" by limiting the number of hits the player could take to three, as opposed to Super Mario 64 and Super Mario Sunshine, which featured eight. Retrospectively, Iwata added that decreasing the health meter to three from eight was "representative of the things that players do not notice that actually changes the gameplay dramatically".
With the concept of gravity and spherical platforms being the central elements of gameplay, the development team drafted several ideas on how to implement them into the game. Koichi Hayashida, a co-designer of the game, initially expressed scepticism of incorporating a spherical playing field into a jump-based platform game, stating that it would be "a bad match". Shimizu also had a negative reaction to the idea, with his main concern being that the implementation of spherical platforms would be impossible to achieve due to technical reasons, and "felt a sense of danger" when the plan was eventually approved. However, once Shimizu started debugging the game he realised that the experience felt "totally fresh" and thought that he was "playing a game like nothing that's come before it". Futoshi Shirai, the game's level designer, stated that unlike Hayashida and Shimizu, he had a positive impression of the new gameplay elements. Shirai liked the idea of being able to run on different types of planetoids, and came up with designs such as planets in the shape of ice cream and apples. Because the game was in outer space, the team could devise ideas that would have otherwise been hard to implement in other Super Mario games. Shirai said that the benefit of working with a spherical-shaped world was that they could design and discover new things, with Kenta Motokura, the game's artist, similarly stating that the player would be continuously enjoying their adventure by travelling to new planets. Koizumi appreciated the "free and open" feel of developing the game, saying that it enabled the team to make the game more fun for the player. Throughout development, staff members enjoyed the level of freedom the game offered, in particular the transforming abilities of Mario. Iwata noted that Mario's Bee Suit was popular with women, and also stated that the titular character's other suits were designed to add variations to the gameplay. According to Hayashida, the idea to include transformations in the game came from Koizumi. One of the female members of staff who worked on Super Mario Galaxy wrote a note saying "I want a Bee Mario" when asked by Koizumi what they wanted to transform Mario into. Shirai stated that the development team always discussed their ideas together, and devised ways to incorporate an idea into the game and make it more entertaining. Iwata concluded that having the game take place in space was advantageous, as it was flexible enough to accommodate a wide range of ideas. After development was finished, the team reflected that the fundamental part of a Super Mario game was to make the player think about how "fun" it was to play the game itself, rather than simply finishing it. To accomplish this, Koizumi made sure that there were certain areas of the game which could be enjoyed by all types of people, including children. Shimizu added that Super Mario Galaxy's ulterior motive was to have everybody "gather around the TV", as he felt that a game starring Mario was not necessarily something which could be enjoyed by playing alone. The game was made to support six different save files – Shimizu liked the idea of one player looking at the progress of another player and seeing how they compared against their own. Iwata stated that when the first Super Mario game was released, there used to be "many more" people gathering around the television who would enjoy watching the gameplay experience. 
Iwata asserted that well-made video games were more enjoyable to spectate, and hoped that Super Mario Galaxy's co-operative mode would tempt someone who does not usually play video games to join.

### Music

During development, Mahito Yokota, who was in charge of the musical direction, originally wanted Super Mario Galaxy to have a Latin American style of music, and had even composed 28 tracks in that style. Latin American percussion instruments, such as steelpans, bongo drums, and congas, had already been featured in previous Super Mario instalments. For Super Mario Galaxy's theme, Yokota used Latin American instruments and a synthesiser to replicate the sounds featured in old science fiction films. The composition was approved by Yoshiaki Koizumi, the game's director and designer, but when Yokota presented it to the game's sound supervisor, Koji Kondo, he stated that it was "no good". When asked why his music was rejected, Kondo responded: "if somewhere in your mind you have an image that Mario is cute, please get rid of it". Incensed by the rejection, Yokota almost resigned from his job, but Kondo implied that Mario's character was "cool" and instructed him to try again. According to Yokota, he was under the impression that Mario was suited for children, causing him to create "cute" music that would appeal to the targeted audience. After Yokota's music was rejected, Koizumi consoled him, telling him "It wasn't so bad". Three months later, Yokota presented three different styles of music to Miyamoto: one piece had an orchestral sound, another was in a pop style, and the last featured a mix of both orchestral and pop music. Miyamoto chose the orchestral piece, as it sounded the most "space-like". Yokota stated that Miyamoto chose the piece without knowing that Kondo actually wrote it. In a retrospective interview, Satoru Iwata said that Miyamoto chose the music that sounded "space-like" because he was looking for a sound that would express the game, in contrast to the tropical sounds of Super Mario Bros. Yokota revealed that he initially struggled to create music that sounded like a Super Mario game, but said that as time progressed the songs he made for the game had "become natural". To create a sense of variety with the soundtrack, Yokota and Kondo wrote pieces individually; Kondo composed four pieces for the game whereas Yokota composed the rest. Kondo composed the pieces that Yokota specifically requested, as he thought that the game's soundtrack would "end up all sounding the same" if it were composed by one person. The game originally heavily utilised the Wii Remote speaker for "all sorts of sound [effects]", but Masafumi Kawamura, the game's sound director, decided they were redundant when played in tandem with those from the television. Kawamura decided to restrict Wii Remote sound effects to those triggered by Mario's actions, such as hitting an enemy, feeling that it better immersed the player.

The game's soundtrack features 28 orchestral songs performed by a 50-person symphony orchestra. Yokota initially had concerns about whether orchestral music would fit in with the rhythm of a Mario game, but thought that such music would make the scale of the game "seem more epic". Kondo, on the other hand, believed that if orchestral music were used, the player would be "obligated to play the game in time to the music".
To synchronise the soundtrack to gameplay, Kawamura utilised techniques similar to those he had used to synchronise sound effects in The Legend of Zelda: The Wind Waker and Donkey Kong Jungle Beat — in which the game synchronises MIDI data with streaming data, resulting in sound effects playing at the same time as background music. To make synchronisation possible, the audio team asked the orchestra to perform at different tempos set with a metronome. The official soundtrack was released on 24 January 2008. It was initially exclusive to Club Nintendo subscribers in Japan, although the soundtrack became available to European Club Nintendo members in November 2008. The soundtrack was released in two versions: the Original Soundtrack, which only contains 28 tracks from the game, and the Platinum Edition, which contains another 53 tracks on a second disc for a total of 81 tracks. In North America, the Original Soundtrack was included in a black Wii Family Edition console bundle alongside New Super Mario Bros. Wii in 2011.

## Reception

Super Mario Galaxy has received critical acclaim, becoming the sixth-highest-rated game of all time on review aggregator Metacritic with an aggregate score of 97 out of 100 based on 73 reviews. Before review-aggregate website GameRankings shut down in December 2019, it was listed as the highest-rated game with at least 20 reviews, having a 97.64% ranking based on 78 reviews.

The visuals and presentation were the most praised aspects of the game. Chris Scullion of the Official Nintendo Magazine asserted that the graphics pushed the Wii to its full potential, and stated that its visual effects and large playing areas would constantly astound the player. Jeremy Parish from 1UP.com noted that despite the Wii's limitations, the visuals were "absolutely impressive", especially when modified at a higher resolution. Computer and Video Games's Andrew Robinson opined that Nintendo favoured gameplay over graphics, but thought Super Mario Galaxy "got both perfect". Margaret Robertson of Eurogamer called the visuals an "explosion of inventiveness", stating that the game's detail is only matched by its mission design ingenuity. Andrew Reiner of Game Informer approved of the game's portrayal of water and particle effects, but noted the visuals were in similar detail to Super Mario Sunshine. Patrick Shaw from GamePro opined that the game takes full advantage of the Wii's capabilities, both in terms of presentation and control schemes. Regarding the presentation, Game Revolution's Chris Hudak thought that Super Mario Galaxy was a "next-gen reincarnation" of Super Mario 64, stating the game was polished, engaging and evocative. Alex Navarro of GameSpot commended the colourful and vibrant level details, animations and character designs, saying that "there simply isn't a better-looking Wii game available". Furthermore, Navarro praised the game engine's ability to keep frame rate drops to "infrequent bouts". Bryn Williams of GameSpy asserted that the game had the best visuals on the Wii, saying that the graphics "are out of this world" and that its wide range of colours produces "better-than-expected" texturing. A reviewer from GamesRadar stated that "words simply can't describe" the game's visual concepts. Louis Bedigan from GameZone thought the visualisations from Super Mario Galaxy contrasted with the blocky characters of previous Super Mario games, praising the planet designs as beautiful and everything else as "pure eye candy".
Matt Casamassina of IGN thought Super Mario Galaxy was the only game that pushed the Wii console, stating it combines "great art" with "great tech", resulting in what he described as "stunning results". David Halverson of Play opined that the game was "supremely" polished and featured "gorgeous next-gen" graphics.

The gameplay, in particular the gravity mechanics and use of the Wii Remote, was also praised. A reviewer from Famitsu commented on the game's tempo, believing it was "abnormally good" and that the different variations in level design and difficulty gradually "builds things up". A reviewer from Edge praised the game's use of the Wii Remote, stating the control schemes were more subtle and persuasive as opposed to the "vigorous literalism" of The Legend of Zelda: Twilight Princess. Scullion was initially sceptical about using the Wii Remote as a pointer, but admitted that "within mere minutes it felt like we'd been doing this since the days of Mario 64". Scullion also thought that the game's strongest aspect was the "incomparable" gameplay. Parish praised the fluctuating gravity that was featured in the game, stating that it "makes even the wildest challenge feel almost second nature". Robinson similarly commended the gravity, saying that the different uses of the game's gravitational pulls allow the scale to grow to "genuinely jaw-dropping proportions". Robertson regarded the use of gravity as an "explosion of inventiveness". Reiner thought that the game reinvented the platform genre for the seventh generation of video game consoles, stating that Super Mario Galaxy was both nostalgic and new by breaking the laws of physics. Shaw asserted that the new gameplay mechanics reinvigorated the Super Mario franchise, and summarised by saying it was the best title since Super Mario 64. Similarly, Hudak thought that the game was a reincarnation of Super Mario 64, whilst stating that the variety of gameplay had a "signature Miyamoto style". Navarro said that the level designs were "top flight in every regard" and also praised the game's introduction of suits, adding that they brought a "great dimension" to gameplay. Williams opined that the game's "shallow" two-player mode did not add anything to the overall experience. He did praise the various gameplay components and the use of both the Wii Remote and Nunchuk, stating that the setup was "pinpoint accurate". A reviewer from GamesRadar thought that the control scheme had a fluid response that improved over the controls of its predecessor, Super Mario Sunshine. Regarding the controls and world designs, Bedigan stated that both aspects are as "close to perfection as a game can get". Casamassina found the gameplay mechanics, in particular the varying physics, "ridiculously entertaining". He also regarded the motion control as being well implemented, stating that the player would appreciate the change of pace that the levels offer. Halverson particularly commended the innovative controls, saying the Wii Remote and Nunchuk combination was "at its finest" and that it was difficult to imagine playing it in another fashion.

The soundtrack and audio were well received by critics. Scullion believed it to be the best out of any Super Mario game, declaring that each track matches the environments featured throughout the game. Parish considered the orchestrated music superior to the visuals, saying that the dynamic sounds were "quintessentially Mario" yet uncharacteristically sophisticated.
Reiner stated that the orchestrated soundtrack was beautiful as well as nostalgic, with Robinson similarly citing it as "amazing". Navarro praised the modernised orchestrated soundtrack, stating that it was both excellent and "top-notch". Williams thought the game featured the best sound on the Wii, stating that the original soundtrack would "go down in history" as Nintendo's best first-party effort. A reviewer from GamesRadar stated that Super Mario Galaxy featured the finest orchestral bombast ever heard in a game. Bedigan asserted that the soundtrack was "another step forward" in video game music, praising the music as moving and breathtaking. Casamassina judged the game's music "so exceptional" and "absolutely superb", summarising that it had the best music out of any Nintendo game to date. Hudak criticised the "traditional Mario-esque" lack of voice acting, despite admitting that if the game did feature voice acting it would "probably seem lame and wrong".

### Sales

Super Mario Galaxy has been a commercial success, selling 350,000 units in Japan within its first few weeks of sale. In the United States, the game sold 500,000 units within its first week of release, earning it the highest first-week sales for a Mario game in the country at the time. The NPD Group reported that 1.4 million copies of the game were sold in the US in December 2007, making it the best-selling game of the month, and that the game had become the fifth best-selling game of 2007 with 2.52 million units sold since its release. After 13 months, it had sold 7.66 million copies worldwide. By January 2010, the game had sold 4.1 million units in the US, and by February it had become one of nine Wii titles to surpass 5 million unit sales in the country. By the end of March 2020, Nintendo had sold 12.80 million copies of the game worldwide, making it the third best-selling non-bundled Wii game and the ninth best-selling Nintendo-published game for the Wii.

### Awards

Super Mario Galaxy received Game of the Year 2007 awards from IGN, GameSpot, Nintendo Power, Kotaku, and Yahoo! Games. It was also the highest-ranked title of 2007 according to the review aggregator GameRankings. In February 2008, Super Mario Galaxy received the "Adventure Game of the Year" award from the Academy of Interactive Arts & Sciences at the 11th Annual Interactive Achievement Awards (now known as the D.I.C.E. Awards); it also received nominations for "Overall Game of the Year", "Console Game of the Year", "Outstanding Achievement in Game Design", "Outstanding Achievement in Gameplay Engineering" and "Outstanding Innovation in Gaming". Super Mario Galaxy placed third in the Official Nintendo Magazine's "100 greatest Nintendo games of all time" list. In 2009, the game won the "Game of the Year" BAFTA at the 5th British Academy Games Awards, surpassing Call of Duty 4: Modern Warfare and becoming the first Nintendo game to win this award, whilst in the same year, Super Mario Galaxy was named the number one Wii game by IGN. It was also named by Eurogamer and IGN as the "Game of the Generation". In 2015, the game placed 11th on USgamer's "15 Best Games Since 2000" list. Guinness World Records ranked Super Mario Galaxy 29th in their list of top 50 console games of all time based on initial impact and lasting legacy. In their final issue, the Official Nintendo Magazine ranked Super Mario Galaxy as the greatest Nintendo game of all time. The soundtrack also won the "Best Design in Audio" award from Edge.
## Legacy

In the 1,000th issue of Famitsu, Miyamoto expressed his interest in making a sequel to Super Mario Galaxy. The sequel was originally called "Super Mario Galaxy More" during development, and was initially going to feature variations of planets featured in Super Mario Galaxy. Over time, new elements and ideas were brought into the game, and it was decided that the game would be a full sequel. Super Mario Galaxy 2 was announced during the Nintendo conference at E3 2009 held in Los Angeles. It was released on 23 May 2010 in North America, 27 May 2010 in Japan and on 11 June 2010 in Europe. The sequel was met with as much critical acclaim as its predecessor, and had sold 6.36 million copies worldwide as of April 2011.

Super Mario Galaxy, as well as several other Wii games, was rereleased for the Nvidia Shield TV in China on 22 March 2018 as the result of a partnership between Nintendo, Nvidia and iQiyi. The game runs on the Shield via an emulator, but has interface and control modifications, and support for 1080p resolution. Due to the lack of motion controls on the Shield, some controls are remapped; for example, the on-screen pointer is remapped to the right analogue stick and the button to choose a galaxy is remapped to the right trigger. The game is included alongside Super Mario 64 and Super Mario Sunshine in the Super Mario 3D All-Stars collection on Nintendo Switch. It was released on 18 September 2020.
1,040,599
Pre-dreadnought battleship
1,170,530,330
Battleships built from the 1880s to 1905
[ "19th-century military equipment", "19th-century military history", "20th-century military equipment", "20th-century military history", "Battleships", "Naval history", "Ship types" ]
Pre-dreadnought battleships were sea-going battleships built from the mid- to late 1880s to the early 1900s. Their designs were conceived before the appearance of HMS Dreadnought in 1906 and their classification as 'pre-dreadnought' is retrospectively applied. In their day, they were simply known as 'battleships' or else by more rank-specific terms such as 'first-class battleship' and so forth. The pre-dreadnought battleships were the pre-eminent warships of their time and replaced the ironclad battleships of the 1870s and 1880s. In contrast to the multifarious development of ironclads in preceding decades, the 1890s saw navies worldwide start to build battleships to a common design as dozens of ships essentially followed the design of the Royal Navy's Majestic class. Built from steel, protected by compound, nickel steel or case-hardened steel armour, pre-dreadnought battleships were driven by coal-fired boilers powering compound reciprocating steam engines which turned underwater screws. These ships distinctively carried a main battery of very heavy guns upon the weather deck, in large rotating mounts either fully or partially armoured over, and supported by one or more secondary batteries of lighter weapons on broadside. The similarity in appearance of battleships in the 1890s was underlined by the increasing number of ships being built. New naval powers such as Germany, Japan, the United States, and to a lesser extent Italy and Austria-Hungary, began to establish themselves with fleets of pre-dreadnoughts. Meanwhile, the battleship fleets of the United Kingdom, France, and Russia expanded to meet these new threats. The last decisive clash of pre-dreadnought fleets was between the Imperial Japanese Navy and the Imperial Russian Navy at the Battle of Tsushima on 27 May 1905.

These battleships were abruptly made obsolete by the arrival of HMS Dreadnought in 1906. Dreadnought followed the trend in battleship design to heavier, longer-ranged guns by adopting an "all-big-gun" armament scheme of ten 12-inch guns. Her innovative steam turbine engines also made her faster. The existing battleships were decisively outclassed, with no more being designed to their format thereafter; the new, larger and more powerful battleships built from then on were known as dreadnoughts. This was the point at which the ships that had been laid down before were redesignated 'pre-dreadnoughts'.

## Evolution

The pre-dreadnought developed from the ironclad battleship. The first ironclads—the French Gloire and HMS Warrior—looked much like sailing frigates, with three tall masts and broadside batteries, when they were commissioned in the early 1860s. HMVS Cerberus, the first breastwork monitor, was launched in 1868, followed in 1871 by HMS Devastation, a turreted ironclad which resembled a pre-dreadnought more closely than did the earlier and contemporary turretless ironclads. Both ships dispensed with masts and carried four heavy guns in two turrets fore and aft. Devastation was the first ocean-worthy breastwork monitor; because of her very low freeboard, her decks were subject to being swept by water and spray, interfering with the working of her guns. Navies worldwide continued to build masted, turretless battleships which had sufficient freeboard and were seaworthy enough to fight on the high seas. The distinction between coast-assault battleship and cruising battleship became blurred with the Admiral-class ironclads, ordered in 1880.
These ships reflected developments in ironclad design, being protected by iron-and-steel compound armour rather than wrought iron. Equipped with breech-loading guns of between 12-inch and 16 1⁄4-inch (305 mm and 413 mm) calibre, the Admirals continued the trend of ironclad warships mounting gigantic weapons. The guns were mounted in open barbettes to save weight. Some historians see these ships as a vital step towards pre-dreadnoughts; others view them as a confused and unsuccessful design. The subsequent Royal Sovereign class of 1889 retained barbettes but were uniformly armed with 13.5-inch (343 mm) guns; they were also significantly larger (at 14,000 tons displacement) and faster (because of triple-expansion steam engines) than the Admirals. Just as importantly, the Royal Sovereigns had a higher freeboard, making them unequivocally capable of the high-seas battleship role. The pre-dreadnought design reached maturity in 1895 with the Majestic class. These ships were built and armoured entirely of steel, and their guns were now mounted in fully-enclosed rotating turrets. They also adopted 12-inch (305 mm) main guns, which, because of advances in gun construction and the use of cordite propellant, were lighter and more powerful than the previous guns of larger calibre. The Majestics provided the model for battleship building in the Royal Navy and many other navies for years to come. ## Armament Pre-dreadnoughts carried guns of several different calibres, for different roles in ship-to-ship combat. ### Main battery Very few pre-dreadnoughts deviated from what became the classic arrangement of heavy weaponry: A main battery of four heavy guns mounted in two centre-line gunhouses fore and aft (these could be either fully enclosed barbettes or true turrets but, regardless of type, were later to be universally referred to as 'turrets'). These main guns were slow-firing, and initially of limited accuracy; but they were the only guns heavy enough to penetrate the thick armour which protected the engines, magazines, and main guns of enemy battleships. The most common calibre for this main armament was 12-inch (305 mm), although earlier ships often had larger-calibre weapons of lower muzzle velocity (guns in the 13-inch to 14-inch range) and some designs used smaller guns because they could attain higher rates of fire. All British first-class battleships from the Majestic class onwards carried 12-inch weapons, as did French battleships from the Charlemagne class, laid down in 1894. Japan, importing most of its guns from Britain, used this calibre also. The United States used both 12-inch (305 mm) and 13-inch (330 mm) guns for most of the 1890s until the Maine class, laid down in 1899 (not the earlier Maine of Spanish–American War notoriety), after which the 12-inch gun was universal. The Russians used both 12 and 10-inch (254 mm) as their main armament; the Petropavlovsk class, Retvizan, Tsesarevich, and Borodino class had 12-inch (305 mm) main batteries while the Peresvet class mounted 10-inch (254 mm) guns. The first German pre-dreadnought class used an 11-inch (279 mm) gun but decreased to a 9.4-inch (239 mm) gun for the two following classes and returned to 11-inch guns with the Braunschweig class. While the calibre of the main battery remained quite constant, the performance of the guns improved as longer barrels were introduced. 
The introduction of slow-burning nitrocellulose and cordite propellant allowed the employment of a longer barrel, and therefore higher muzzle velocity—giving greater range and penetrating power for the same calibre of shell. Between the Majestic class and Dreadnought, the length of the British 12-inch gun increased from 35 calibres to 45, and muzzle velocity increased from 706 metres (2,317 ft) per second to 770 metres (2,525 ft) per second.

### Secondary battery

Pre-dreadnoughts also carried a secondary battery of smaller guns, typically 6-inch (152 mm), though calibres from 4 to 9.4 inches (100 to 240 mm) were used. Virtually all secondary guns were "quick firing", employing a number of innovations to increase the rate of fire. The propellant was provided in a brass cartridge, and both the breech mechanism and the mounting were suitable for rapid aiming and reloading. A principal role of the secondary battery was to damage the less armoured parts of an enemy battleship; while unable to penetrate the main armour belt, it might score hits on lightly armoured areas like the bridge, or start fires. Equally important, the secondary armament was to be used against smaller enemy vessels such as cruisers, destroyers, and even torpedo boats. A medium-calibre gun could expect to penetrate the light armour of smaller ships, while the rate of fire of the secondary battery was important in scoring a hit against a small, manoeuvrable target. Secondary guns were mounted in a variety of ways; sometimes carried in turrets, they were just as often positioned in fixed armoured casemates in the side of the hull, or in unarmoured positions on upper decks.

### Intermediate battery

Some of the pre-dreadnoughts carried an "intermediate" battery, typically of 8-to-10-inch (200 to 250 mm) calibre. The intermediate battery was a method of packing more heavy firepower into the same battleship, principally of use against battleships or at long ranges. The United States Navy pioneered the intermediate battery concept in the Indiana, Iowa, and Kearsarge classes, but not in the battleships laid down between 1897 and 1901. Shortly after the USN re-adopted the intermediate battery, the British, Italian, Russian, French, and Japanese navies laid down intermediate-battery ships. Almost all of this later generation of intermediate-battery ships finished building after Dreadnought, and hence were obsolescent before completion.

### Tertiary battery

The pre-dreadnought's armament was completed by a tertiary battery of light, rapid-fire guns, of any calibre from 3-inch (76 mm) down to machine guns. Their role was to give short-range protection against torpedo boats, or to rake the deck and superstructure of a battleship.

### Torpedoes

In addition to their gun armament, many pre-dreadnought battleships were armed with torpedoes, fired from fixed tubes located either just above or below the waterline. By the pre-dreadnought era the torpedo was typically 18 inches (457 mm) in diameter and had an effective range of several thousand metres. However, it was virtually unknown for a battleship to score a hit with a torpedo.

### Range of combat

During the ironclad age, the range of engagements increased; in the Sino-Japanese War of 1894–95 battles were fought at around 1 mile (1.5 km), while in the Battle of the Yellow Sea in 1904, the Russian and Japanese fleets fought at ranges of 3.5 miles (5.5 km). The increase in engagement range was due in part to the longer range of torpedoes, and in part to improved gunnery and fire control.
In consequence, shipbuilders tended towards heavier secondary armament, of the same calibre as the "intermediate" battery had been; the Royal Navy's last pre-dreadnought class, the Lord Nelson class, carried ten 9.2-inch guns as secondary armament. Ships with a uniform, heavy secondary battery are often referred to as "semi-dreadnoughts".

## Protection

Pre-dreadnought battleships carried a considerable weight of steel armour, providing them with effective defence against the great majority of naval guns in service during the period. 'Medium'-calibre guns of up to 8 to 9.4 inches generally proved incapable of piercing their thickest armour, which also offered some measure of defence against even the 'heavy' guns of the day, although these were nominally capable of piercing such plates.

### Vertical side armour

Experience with the first generations of ironclads showed that rather than giving the ship's entire length uniform armour protection, it was best to concentrate armour in greater thickness over limited but critical areas. Therefore, the central section of the hull, which housed the boilers and engines, was protected by the main belt, which ran from just below the waterline to some distance above it. This "central citadel" was intended to protect the engines from even the most powerful shells. Yet the emergence of the quick-firing gun and high-explosive shells in the 1880s meant that the pure central-citadel concept of the 1870s and early 1880s was inadequate by the 1890s, and that thinner armour extensions towards the extremities would greatly aid the ship's defensive qualities. Thus, the main belt armour would normally taper to a lesser thickness along the side of the hull towards bow and stern; it might also taper up from the central citadel towards the superstructure.

### Other armour

The main armament and the magazines were protected by projections of thick armour from the main belt. The beginning of the pre-dreadnought era was marked by a move from mounting the main armament in open barbettes to an all-enclosed, turret mounting. The deck was typically lightly armoured with 2 to 4 inches (51 to 102 mm) of steel. This lighter armour was to prevent high-explosive shells from wrecking the superstructure of the ship. The majority of battleships during this period of construction were fitted with a heavily-armoured conning tower, or CT, which was intended for the use of the command staff during battle. This was protected by a vertical, full-height ring of armour nearly equivalent in thickness to the main battery gunhouses and provided with observation slits. A narrow armoured tube extended down below this to the citadel; this contained and protected the various voice-tubes used for communication from the CT to various key stations during battle.

### Metallurgical advances in armour

The battleships of the late 1880s, for instance the Royal Sovereign class, were armoured with iron and steel compound armour. This was soon replaced with more effective case-hardened steel armour made using the Harvey process developed in the United States. First tested in 1891, Harvey armour was commonplace in ships laid down from 1893 to 1895. However, its reign was brief; in 1895, the German Kaiser Friedrich III pioneered the superior Krupp armour. Europe adopted Krupp plate within five years, and only the United States persisted in using Harvey steel into the 20th century.
The improving quality of armour plate meant that new ships could have better protection from a thinner and lighter armour belt; 12 inches (305 mm) of compound armour provided the same protection as just 7.5 inches (190 mm) of Harvey or 5.75 inches (133 mm) of Krupp.

## Propulsion

Almost all pre-dreadnoughts were powered by reciprocating steam engines. Most were capable of top speeds between 16 and 18 knots (18 to 21 mph; 30 to 33 km/h). The ironclads of the 1880s used compound engines, and by the end of the 1880s the even more efficient triple-expansion compound engine was in use. Some fleets, though not the British, adopted the quadruple-expansion steam engine. The main improvement in engine performance during the pre-dreadnought period came from the adoption of increasingly higher-pressure steam from the boiler. Scotch marine boilers were superseded by more compact water-tube boilers, allowing higher-pressure steam to be produced with less fuel consumption. Water-tube boilers were also safer, with less risk of explosion, and more flexible than fire-tube types. The Belleville-type water-tube boiler had been introduced in the French fleet as early as 1879, but it took until 1894 for the Royal Navy to adopt it for armoured cruisers and pre-dreadnoughts; other water-tube boilers followed in navies worldwide. The engines drove either two or three screw propellers. France and Germany preferred the three-screw approach, which allowed the engines to be shorter and hence more easily protected; they were also more manoeuvrable and had better resistance to accidental damage. Triple screws were, however, generally larger and heavier than the twin-screw arrangements preferred by most other navies. Coal was the almost exclusive fuel for the pre-dreadnought period, though navies made the first experiments with oil propulsion in the late 1890s. An extra knot or two of speed could be gained for short bursts by applying a 'forced draught', in which air was pumped into the furnaces, but this risked damage to the boilers if used for prolonged periods. The French built the only class of turbine-powered pre-dreadnought battleships, the Danton class of 1907.

## Pre-dreadnought fleets and battles

The pre-dreadnought battleship in its heyday was the core of a very diverse navy. Many older ironclads were still in service. Battleships served alongside cruisers of many descriptions: modern armoured cruisers which were essentially cut-down battleships, lighter protected cruisers, and even older unarmoured cruisers, sloops and frigates whether built out of steel, iron or wood. The battleships were threatened by torpedo boats; it was during the pre-dreadnought era that the first destroyers were constructed to deal with the torpedo-boat threat, though at the same time the first effective submarines were being developed. The pre-dreadnought age saw the beginning of the end of the 19th-century naval balance of power in which France and Russia vied with the massive Royal Navy, and saw the start of the rise of the 'new naval powers' of Germany, Japan and the United States. The new ships of the Imperial Japanese Navy and to a lesser extent the U.S. Navy supported those powers' colonial expansion. While pre-dreadnoughts were adopted worldwide, there were no clashes between pre-dreadnought battleships until the very end of their period of dominance.
The First Sino-Japanese War in 1894–95 influenced pre-dreadnought development, but this had been a clash between Chinese battleships and a Japanese fleet consisting of mostly cruisers. The Spanish–American War of 1898 was also a mismatch, with the American pre-dreadnought fleet engaging Spanish shore batteries at San Juan and then a Spanish squadron of armoured cruisers and destroyers at the Battle of Santiago de Cuba. Not until the Russo-Japanese War of 1904–05 did pre-dreadnoughts engage on an equal footing. This happened in three battles: the Russian tactical victory during the Battle of Port Arthur on 8–9 February 1904, the indecisive Battle of the Yellow Sea on 10 August 1904, and the decisive Japanese victory at the Battle of Tsushima on 27 May 1905. These battles upended prevailing theories of how naval battles would be fought, as the fleets began firing at one another at much greater distances than before; naval architects realized that plunging fire (explosive shells falling on their targets largely from above, instead of from a trajectory close to horizontal) was a much greater threat than had been thought. Gunboat diplomacy was typically conducted by cruisers or smaller warships. A British squadron of three protected cruisers and two gunboats brought about the capitulation of Zanzibar in 1896; and while battleships participated in the combined fleet Western powers deployed during the Boxer Rebellion, the naval part of the action was performed by gunboats, destroyers and sloops. ### Europe European navies remained dominant in the pre-dreadnought era. The Royal Navy remained the world's largest fleet, though both Britain's traditional naval rivals and the new European powers increasingly asserted themselves against its supremacy. In 1889, Britain formally adopted a 'two power standard' committing it to building enough battleships to exceed the two largest other navies combined; at the time, this meant France and Russia, which became formally allied in the early 1890s. The Royal Sovereign and Majestic classes were followed by a regular programme of construction at a much quicker pace than in previous years. The Canopus, Formidable, Duncan and King Edward VII classes appeared in rapid succession from 1897 to 1905. Counting two ships ordered by Chile but taken over by the British, the Royal Navy had 50 pre-dreadnought battleships ready or being built by 1904, from the 1889 Naval Defence Act's ten units onwards. Over a dozen older battleships remained in service. The last two British pre-dreadnoughts, the 'semi-dreadnought' Lord Nelsons, appeared after Dreadnought herself. France, Britain's traditional naval rival, had paused its battleship building during the 1880s because of the influence of the Jeune École doctrine, which favoured torpedo boats to battleships. After the Jeune École's influence faded, the first French battleship laid down was Brennus, in 1889. Brennus and the ships which followed her were individual, as opposed to the large classes of British ships; they also carried an idiosyncratic arrangement of heavy guns, with Brennus carrying three 13.4-inch (340 mm) guns and the ships which followed carrying two 12-inch and two 10.8-inch guns in single turrets. The Charlemagne class, laid down 1894–1896, were the first to adopt the standard four 12-inch (305 mm) gun heavy armament. The Jeune École retained a strong influence on French naval strategy, and by the end of the 19th century France had abandoned competition with Britain in battleship numbers. 
The French suffered the most from the dreadnought revolution, with four ships of the Liberté class still under construction when Dreadnought was launched, and a further six of the Danton class begun afterwards. Germany's first pre-dreadnoughts, the Brandenburg class, were laid down in 1890. By 1905, a further 19 battleships were built or under construction, thanks to the sharp increase in naval expenditure justified by the 1898 and 1900 Navy Laws. This increase was due to the determination of the navy chief Alfred von Tirpitz and the growing sense of national rivalry with the UK. Besides the Brandenburg class, German pre-dreadnoughts included the ships of the Kaiser Friedrich III, Wittelsbach, and Braunschweig classes—culminating in the Deutschland class, which served in both world wars. On the whole, the German ships were less powerful than their British equivalents but equally robust. Russia likewise entered into a programme of naval expansion in the 1890s; one of Russia's main objectives was to maintain its interests against Japanese expansion in the Far East. The Petropavlovsk class, begun in 1892, took after the British Royal Sovereigns; later ships, such as the Borodino class, showed more French influence in their design. The weakness of Russian shipbuilding meant that many ships were built overseas for Russia; the best of them, Retvizan, was largely constructed in the United States. The Russo-Japanese War of 1904–05 was a disaster for the Russian pre-dreadnoughts; of the 15 battleships completed since Petropavlovsk, 11 were sunk or captured during the war. One of these, the famous Potemkin, was taken over by her mutinous crew, who surrendered her to Romania at the end of the mutiny. However, she was soon recovered and recommissioned as Panteleimon. Russia completed four more pre-dreadnoughts after the war. Between 1893 and 1904, Italy laid down eight battleships; the two later classes of ship were remarkably fast, though the Regina Margherita class was poorly protected and the Regina Elena class lightly armed. In some ways, these ships presaged the concept of the battlecruiser. The Austro-Hungarian Empire also saw a naval renaissance during the 1890s, though of the nine pre-dreadnought battleships ordered, only the three of the Habsburg class arrived before Dreadnought herself made them obsolete. ### America and the Pacific The United States started building its first battleships in 1891. These ships were short-range coast-defence battleships that were similar to the British HMS Hood except for an innovative intermediate battery of 8-inch guns. The US Navy continued to build ships that were relatively short-range and poor in heavy seas until the Virginia class, laid down in 1901–02. Nevertheless, it was these earlier ships that ensured American naval dominance against the antiquated Spanish fleet—which included no pre-dreadnoughts—in the Spanish–American War, most notably at the Battle of Santiago de Cuba. The final two classes of American pre-dreadnoughts (the Connecticuts and Mississippis) were completed after Dreadnought herself and after the start of design work on the USN's own initial class of dreadnoughts. The US Great White Fleet of 16 pre-dreadnought battleships circumnavigated the world from 16 December 1907 to 22 February 1909. Japan was involved in two of the three major naval wars of the pre-dreadnought era. 
The first Japanese pre-dreadnought battleships, the Fuji class, were still being built at the outbreak of the First Sino-Japanese War of 1894–95, which saw Japanese armoured cruisers and protected cruisers defeat the Chinese Beiyang Fleet, composed of a mixture of old ironclad battleships and cruisers, at the Battle of the Yalu River. Following their victory, and facing Russian pressure in the region, the Japanese placed orders for four more pre-dreadnoughts; along with the two Fujis, these battleships formed the core of the fleet which twice engaged the numerically superior Russian fleets at the Battle of the Yellow Sea and the Battle of Tsushima. After capturing eight Russian battleships of various ages, Japan built several more classes of pre-dreadnoughts after the Russo-Japanese War. ## Obsolescence In 1906, the commissioning of HMS Dreadnought brought about the obsolescence of all existing battleships. By dispensing with the heavy secondary battery, Dreadnought was able to carry ten 12-inch (305 mm) guns rather than four. She could fire eight heavy guns broadside, as opposed to four from a pre-dreadnought, and six guns ahead, as opposed to two. The move to an 'all-big-gun' design was a logical conclusion of the increasingly long engagement ranges and heavier secondary batteries of the last pre-dreadnoughts; Japan and the United States had designed ships with a similar armament before Dreadnought, but were unable to complete them before the British ship. It was felt that because of the longer distances at which battles could be fought, only the largest guns remained effective, and by mounting more 12-inch guns Dreadnought was two to three times more effective in combat than an existing battleship. The armament of the new breed of ships was not their only crucial advantage. Dreadnought used steam turbines for propulsion, giving her a top speed of 21 knots, against the 18 knots typical of the pre-dreadnought battleships. Able both to outgun and outmanoeuvre their opponents, the dreadnought battleships decisively outclassed earlier battleship designs. Nevertheless, pre-dreadnoughts continued in active service and saw significant combat use even when obsolete. Dreadnoughts and battlecruisers were believed vital for the decisive naval battles which at the time all nations expected, hence they were jealously guarded against the risk of damage by mines or submarine attack, and kept close to home as much as possible. The obsolescence and consequent expendability of the pre-dreadnoughts meant that they could be deployed into more dangerous situations and more far-flung areas. ### World War I During World War I, a large number of pre-dreadnoughts remained in service. The advances in machinery and armament meant that a pre-dreadnought was not necessarily the equal of even a modern armoured cruiser, and was totally outclassed by a modern dreadnought battleship or battlecruiser. Nevertheless, the pre-dreadnought played a major role in the war. This was first illustrated in the skirmishes between the British and German navies around South America in 1914. While two German cruisers menaced British shipping, the Admiralty insisted that no battlecruisers could be spared from the main fleet to be sent to the other side of the world to deal with them. Instead, the British dispatched a pre-dreadnought of 1896 vintage, HMS Canopus. She was intended to stiffen the British cruisers in the area, but in fact her slow speed meant that she was left behind at the disastrous Battle of Coronel. 
Canopus redeemed herself at the Battle of the Falkland Islands, but only when grounded to act as a harbour-defence vessel; she fired at extreme range (13,500 yards, 12,300 m) on the German cruiser SMS Gneisenau, and while the only hit was from an inert practice shell which had been left loaded from the previous night (the 'live' shells of the salvo broke up on contact with water; one inert shell ricocheted into one of Gneisenau's funnels), this certainly deterred Gneisenau. The subsequent battle was decided by the two Invincible-class battlecruisers which had been dispatched after Coronel. This appears to have been the only meaningful engagement of an enemy ship by a British pre-dreadnought. In the Black Sea five Russian pre-dreadnoughts saw brief action against the Ottoman battlecruiser Yavuz during the Battle of Cape Sarych in November 1914. The principle that disposable pre-dreadnoughts could be used where no modern ship could be risked was affirmed by British, French and German navies in subsidiary theatres of war. The German navy used its pre-dreadnoughts frequently in the Baltic campaign. However, the largest number of pre-dreadnoughts was engaged at the Gallipoli campaign. Twelve British and French pre-dreadnoughts formed the bulk of the force which attempted to 'force the Dardanelles' in March 1915. The role of the pre-dreadnoughts was to support the brand-new dreadnought HMS Queen Elizabeth engaging the Turkish shore defences. Three of the pre-dreadnoughts were sunk by mines, and several more badly damaged. However, it was not the damage to the pre-dreadnoughts which led to the operation being called off. The two battlecruisers were also damaged; since Queen Elizabeth could not be risked in the minefield, and the pre-dreadnoughts would be unable to deal with the Turkish battlecruiser lurking on the other side of the straits, the operation had failed. Pre-dreadnoughts were also used to support the Gallipoli landings, with the loss of three more: HMS Goliath, HMS Triumph and HMS Majestic. A squadron of German pre-dreadnoughts was present at the Battle of Jutland in 1916; German sailors called them the "five-minute ships", which was the amount of time they were expected to survive in a pitched battle. In spite of their limitations, the pre-dreadnought squadron played a useful role. As the German fleet disengaged from the battle, the pre-dreadnoughts risked themselves by turning on the British battlefleet as dark set. Nevertheless, only one of the pre-dreadnoughts was sunk: SMS Pommern went down in the confused night action as the battlefleets disengaged. Following the November 1918 Armistice, the U.S. Navy converted fifteen older battleships, eight armoured cruisers and two larger protected cruisers for temporary service as transports. These ships made one to six trans-Atlantic round-trips each, bringing home a total of more than 145,000 passengers. ### World War II After World War I, most battleships, dreadnought and pre-dreadnought alike, were disarmed under the terms of the Washington Naval Treaty. Largely this meant the ships being broken up for scrap; others were destroyed in target practice or relegated to training and supply duties. One, Mikasa, was given a special exemption to the Washington Treaty and was maintained as a museum and memorial ship. 
Germany, which lost most of its fleet under the terms of the Versailles treaty, was allowed to keep eight pre-dreadnoughts (of which only six could be in active service at any one time) which were counted as armoured coast-defence ships; two of these were still in use at the beginning of World War II. One of these, Schleswig-Holstein, shelled the Polish Westerplatte peninsula, opening the German invasion of Poland and firing the first shots of the Second World War. Schleswig-Holstein served for most of the war as a training ship; she was sunk while under refit in December 1944, and broken up in situ in January 1945. The other, Schlesien, was mined and then scuttled in March 1945. A number of the inactive or disarmed pre-dreadnoughts were nevertheless sunk in action during World War II, such as the Greek pre-dreadnoughts Kilkis and Lemnos, bought from the U.S. Navy in 1914. While neither of the ships was in active service, they were both sunk by German dive bombers after the German invasion in 1941. In the Pacific, the U.S. Navy submarine USS Salmon sank the disarmed Japanese pre-dreadnought Asahi in May 1942. A veteran of the Battle of Tsushima, she was serving as a repair ship. ### Post World War II No pre-dreadnoughts served post–World War II as armed ships; the last serving pre-dreadnought was the former SMS Hessen, which was used as a target ship by the Soviet Union into the early 1960s as the Tsel. The hull of the former USS Kearsarge served as a crane ship from 1920 until its scrapping in 1955. The hulk of the ex-USS Oregon was used as an ammunition barge at Guam until 1948, after which she was scrapped in 1956. ## Survivors There is only one pre-dreadnought preserved today: the Imperial Japanese Navy's flagship at the Battle of Tsushima, Mikasa, which is now located in Yokosuka, where she has been a museum ship since 1925. ## See also
184,399
Halifax Explosion
1,170,875,630
1917 maritime disaster in Halifax, Nova Scotia, Canada
[ "1910s fires in North America", "1910s tsunamis", "1917 disasters in Canada", "1917 fires", "1917 in Nova Scotia", "20th century in Halifax, Nova Scotia", "20th-century fires in Canada", "December 1917 events", "Disasters in Nova Scotia", "Events of National Historic Significance (Canada)", "Explosions in 1917", "Explosions in Canada", "Firefighting memorials", "Industrial fires and explosions in Canada", "Maritime incidents in 1917", "Maritime incidents in Canada", "Ship fires", "Urban fires in Canada" ]
On the morning of 6 December 1917, the French cargo ship SS Mont-Blanc collided with the Norwegian vessel SS Imo in the harbour of Halifax, Nova Scotia, Canada. The Mont-Blanc, laden with high explosives, caught fire and exploded, devastating the Richmond district of Halifax. At least 1,782 people were killed, largely in Halifax and Dartmouth, by the blast, debris, fires, or collapsed buildings, and an estimated 9,000 others were injured. The blast was the largest human-made explosion at the time. It released the equivalent energy of roughly 2.9 kilotons of TNT (12 TJ). Mont-Blanc was under orders from the French government to carry her cargo from New York City via Halifax to Bordeaux, France. At roughly 8:45 am, she collided at low speed, approximately one knot (1.2 mph or 1.9 km/h), with the unladen Imo, chartered by the Commission for Relief in Belgium to pick up a cargo of relief supplies in New York. On the Mont-Blanc, the impact damaged benzol barrels stored on deck, leaking vapours which were ignited by sparks from the collision, setting off a fire on board that quickly grew out of control. Approximately 20 minutes later at 9:04:35 am, the Mont-Blanc exploded. Nearly all structures within an 800-metre (half-mile) radius, including the community of Richmond, were obliterated. A pressure wave snapped trees, bent iron rails, demolished buildings, grounded vessels (including Imo, which was washed ashore by the ensuing tsunami), and scattered fragments of Mont-Blanc for kilometres. Across the harbour, in Dartmouth, there was also widespread damage. A tsunami created by the blast wiped out the community of the Mi'kmaq First Nation who had lived in the Tufts Cove area for generations. Relief efforts began almost immediately, and hospitals quickly became full. Rescue trains began arriving the day of the explosion from across Nova Scotia and New Brunswick while other trains from central Canada and the Northeastern United States were impeded by blizzards. Construction of temporary shelters to house the many people left homeless began soon after the disaster. The initial judicial inquiry found Mont-Blanc to have been responsible for the disaster, but a later appeal determined that both vessels were to blame. The North End of Halifax has several memorials to the victims of the explosion. ## Background Dartmouth lies on the east shore of Halifax Harbour, and Halifax is on the west shore. By 1917, "Halifax's inner harbour had become a principal assembly point for merchant convoys leaving for Britain and France." Halifax and Dartmouth had thrived during times of war; the harbour was one of the British Royal Navy's most important bases in North America, a centre for wartime trade, and a home to privateers who harried the British Empire's enemies during the American Revolution, the Napoleonic Wars, and the War of 1812. The completion of the Intercolonial Railway and its Deep Water Terminal in 1880 allowed for increased steamship trade and led to accelerated development of the port area, but Halifax faced an economic downturn in the 1890s as local factories struggled to compete with businesses in central Canada. The British garrison left the city in late 1905 and early 1906. The Canadian government took over the Halifax Dockyard (now CFB Halifax) from the Royal Navy. This dockyard later became the command centre of the Royal Canadian Navy upon its founding in 1910. 
Just before the First World War, the Canadian government began a determined, costly effort to develop the harbour and waterfront facilities. The outbreak of the war brought Halifax back to prominence. As the Royal Canadian Navy had virtually no seaworthy ships of its own, the Royal Navy assumed responsibility for maintaining Atlantic trade routes by re-adopting Halifax as its North American base of operations. In 1915, management of the harbour fell under the control of the Royal Canadian Navy; by 1917 there was a growing naval fleet in Halifax, including patrol ships, tugboats, and minesweepers. The population of Halifax/Dartmouth had increased to between 60,000 and 65,000 people by 1917. Convoys carried men, animals, and supplies to the European theatre of war. The two main points of departure were in Nova Scotia at Sydney, on Cape Breton Island, and Halifax. Hospital ships brought the wounded to the city, so a new military hospital was constructed. The success of German U-boat attacks on ships crossing the Atlantic Ocean led the Allies to institute a convoy system to reduce losses while transporting goods and soldiers to Europe. Merchant ships gathered at Bedford Basin on the northwestern end of the harbour, which was protected by two sets of anti-submarine nets and guarded by patrol ships of the Royal Canadian Navy. The convoys departed under the protection of British cruisers and destroyers. A large army garrison protected the city with forts, gun batteries, and anti-submarine nets. These factors drove a major military, industrial, and residential expansion of the city, and the weight of goods passing through the harbour increased nearly ninefold. All neutral ships bound for ports in North America were required to report to Halifax for inspection. ## Disaster The Norwegian ship SS Imo had sailed from the Netherlands en route to New York to take on relief supplies for Belgium, under the command of Haakon From. The ship arrived in Halifax on 3 December for neutral inspection and spent two days in Bedford Basin awaiting refuelling supplies. Though she had been given clearance to leave the port on 5 December, Imo's departure was delayed because her coal load did not arrive until late that afternoon. The loading of fuel was not completed until after the anti-submarine nets had been raised for the night. Therefore, the vessel could not depart until the next morning. The French cargo ship SS Mont-Blanc arrived from New York late on 5 December, under the command of Aimé Le Medec. The vessel was fully loaded with the explosives TNT and picric acid, the highly flammable fuel benzol and guncotton. She intended to join a slow convoy gathering in Bedford Basin readying to depart for Europe but was too late to enter the harbour before the nets were raised. Ships carrying dangerous cargo were not allowed into the harbour before the war, but the risks posed by German submarines had resulted in a relaxation of regulations. Navigating into or out of Bedford Basin required passage through a strait called the Narrows. Ships were expected to keep close to the side of the channel situated on their starboard ("right"), and pass oncoming vessels "port to port", that is to keep them on their "left" side. Ships were restricted to a speed of 5 knots (9.3 km/h; 5.8 mph) within the harbour. ### Collision and fire Imo was granted clearance to leave Bedford Basin by signals from the guard ship HMCS Acadia at approximately 7:30 on the morning of 6 December, with Pilot William Hayes on board. 
The ship entered the Narrows well above the harbour's speed limit in an attempt to make up for the delay experienced in loading her coal. Imo met American tramp steamer SS Clara being piloted up the wrong (western) side of the harbour. The pilots agreed to pass starboard-to-starboard. Soon afterwards, Imo was forced to head even further towards the Dartmouth shore after passing the tugboat Stella Maris, which was travelling up the harbour to Bedford Basin near mid-channel. Horatio Brannen, the captain of Stella Maris, saw Imo approaching at excessive speed and ordered his ship closer to the western shore to avoid an accident. Francis Mackey, an experienced harbour pilot, had boarded Mont-Blanc on the evening of 5 December 1917; he had asked about "special protections" such as a guard ship, given the Mont-Blanc's cargo, but no protections were put in place. Mont-Blanc started moving at 7:30 am on 6 December and was the second ship to enter the harbour as the anti-submarine net between Georges Island and Pier 21 opened for the morning. Mont-Blanc headed towards Bedford Basin on the Dartmouth side of the harbour. Mackey kept his eye on the ferry traffic between Halifax and Dartmouth and other small boats in the area. He first spotted Imo when she was about 1.21 kilometres (0.75 mi) away and became concerned as her path appeared to be heading towards his ship's starboard side, as if to cut him off. Mackey gave a short blast of his ship's signal whistle to indicate that he had the right of way but was met with two short blasts from Imo, indicating that the approaching vessel would not yield its position. The captain ordered Mont-Blanc to halt her engines and angle slightly to starboard, closer to the Dartmouth side of the Narrows. He let out another single blast of his whistle, hoping the other vessel would likewise move to starboard but was again met with a double-blast. Sailors on nearby ships heard the series of signals and, realizing that a collision was imminent, gathered to watch as Imo bore down on Mont-Blanc. Both ships had cut their engines by this point, but their momentum carried them towards each other at slow speed. Unable to ground his ship for fear of a shock that would set off his explosive cargo, Mackey ordered Mont-Blanc to steer hard to port (starboard helm) and crossed the bow of Imo in a last-second bid to avoid a collision. The two ships were almost parallel to each other, when Imo suddenly sent out three signal blasts, indicating the ship was reversing its engines. The combination of the cargoless ship's height in the water and the transverse thrust of her right-hand propeller caused the ship's head to swing into Mont-Blanc. Imo's prow pushed into the No. 1 hold of Mont Blanc, on her starboard side. The collision occurred at 8:45 am. The damage to Mont Blanc was not severe, but barrels of deck cargo toppled and broke open. This flooded the deck with benzol that quickly flowed into the hold. As Imo's engines kicked in, she disengaged, which created sparks inside Mont-Blanc's hull. These ignited the vapours from the benzol. A fire started at the water line and travelled quickly up the side of the ship. Surrounded by thick black smoke, and fearing she would explode almost immediately, the captain ordered the crew to abandon ship. A growing number of Halifax citizens gathered on the street or stood at the windows of their homes or businesses to watch the spectacular fire. 
The frantic crew of Mont-Blanc shouted from their two lifeboats to some of the other vessels that their ship was about to explode, but they could not be heard above the noise and confusion. As the lifeboats made their way across the harbour to the Dartmouth shore, the abandoned ship continued to drift and beached herself at Pier 6 near the foot of Richmond Street. Towing two scows at the time of the collision, Stella Maris responded immediately to the fire, anchoring the barges and steaming back towards Pier 6 to spray the burning ship with her fire hose. The tug's captain, Horatio H. Brannen, and his crew realized that the fire was too intense for their single hose and backed off from the burning Mont-Blanc. They were approached by a whaler from HMS Highflyer and later a steam pinnace belonging to HMCS Niobe. Captain Brannen and Albert Mattison of Niobe agreed to secure a line to the French ship's stern so as to pull her away from the pier and keep the fire from spreading to it. The five-inch (125 mm) hawser initially produced was deemed too small, and orders for a ten-inch (250 mm) hawser came down. It was at this point that the blast occurred. ### Explosion At 9:04:35 am, the out-of-control fire on board Mont-Blanc set off her cargo of high explosives. The ship was completely blown apart and a powerful blast wave radiated away from the explosion initially at more than 1,000 metres (3,300 ft) per second. Temperatures of 5,000 °C (9,000 °F) and pressures of thousands of atmospheres accompanied the moment of detonation at the centre of the explosion. White-hot shards of iron fell down upon Halifax and Dartmouth. Mont-Blanc's forward 90-mm gun landed approximately 5.6 kilometres (3.5 mi) north of the explosion site near Albro Lake in Dartmouth with its barrel bent and half torn away by the force of the blast, and the shank of Mont-Blanc's anchor, weighing half a ton, landed 3.2 kilometres (2.0 mi) south at Armdale. A cloud of white smoke rose to at least 3,600 metres (11,800 ft). The blast was felt as far away as Cape Breton (207 kilometres or 129 miles) and Prince Edward Island (180 kilometres or 110 miles). An area of over 1.6 square kilometres (400 acres) was completely destroyed by the explosion, and the harbour floor was momentarily exposed by the volume of water that was displaced. A tsunami was formed by water surging in to fill the void; it rose as high as 18 metres (60 ft) above the high-water mark on the Halifax side of the harbour. Imo was carried onto the shore at Dartmouth by the tsunami. The blast killed all but one of the men on the whaler, everyone on the pinnace, and 21 of the 26 men on Stella Maris; she ended up on the Dartmouth shore, severely damaged. The captain's son, First Mate Walter Brannen, who had been thrown into the hold by the blast, survived, as did four others. All but one of the Mont-Blanc crew members survived. Over 1,600 people were killed instantly and 9,000 were injured, more than 300 of whom later died. Every building within a 2.6-kilometre (1.6 mi) radius, over 12,000 in total, was destroyed or badly damaged. Hundreds of people who had been watching the fire from their homes were blinded when the blast wave shattered the windows in front of them. Overturned stoves and lamps started fires throughout Halifax, particularly in the North End, where entire city blocks burned, trapping residents inside their houses. 
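The yield quoted in the article's lead, roughly 2.9 kilotons of TNT, can be cross-checked against the 12 TJ figure with a simple unit conversion. The sketch below is illustrative only and assumes the conventional TNT equivalence of 4.184 GJ per tonne, which is a standard convention rather than a figure taken from this article's sources.

```python
# Illustrative cross-check of the quoted yield; assumes the conventional
# TNT equivalence of 4.184 GJ per tonne (a standard convention, not a
# value drawn from the article's sources).
TNT_GJ_PER_TONNE = 4.184

def kilotons_tnt_to_terajoules(kilotons: float) -> float:
    tonnes = kilotons * 1_000
    gigajoules = tonnes * TNT_GJ_PER_TONNE
    return gigajoules / 1_000  # 1 TJ = 1,000 GJ

print(round(kilotons_tnt_to_terajoules(2.9), 1))  # ~12.1, consistent with the ~12 TJ quoted
```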
Firefighter Billy Wells, who was thrown away from the explosion and had his clothes torn from his body, described the devastation survivors faced: "The sight was awful, with people hanging out of windows dead. Some with their heads missing, and some thrown onto the overhead telegraph wires." He was the only member of the eight-man crew of the fire engine Patricia to survive. Large brick and stone factories near Pier 6, such as the Acadia Sugar Refinery, disappeared into unrecognizable heaps of rubble, killing most of their workers. The Nova Scotia cotton mill located 1.5 km (0.93 mile) from the blast was destroyed by fire and the collapse of its concrete floors. The Royal Naval College of Canada building was badly damaged, and several cadets and instructors maimed. The Richmond Railway Yards and station were destroyed, killing 55 railway workers and destroying and damaging over 500 railway cars. The North Street Station, one of the busiest in Canada, was badly damaged. The death toll could have been worse had it not been for the self-sacrifice of an Intercolonial Railway dispatcher, Patrick Vincent (Vince) Coleman, operating at the railyard about 230 metres (750 ft) from Pier 6, where the explosion occurred. He and his co-worker, William Lovett, learned of the dangerous cargo aboard the burning Mont-Blanc from a sailor and began to flee. Coleman remembered that an incoming passenger train from Saint John, New Brunswick, was due to arrive at the railyard within minutes. He returned to his post alone and continued to send out urgent telegraph messages to stop the train. Several variations of the message have been reported, among them this from the Maritime Museum of the Atlantic: "Hold up the train. Ammunition ship afire in harbor making for Pier 6 and will explode. Guess this will be my last message. Good-bye boys." Coleman's message was responsible for bringing all incoming trains around Halifax to a halt. It was heard by other stations all along the Intercolonial Railway, helping railway officials to respond immediately. Passenger Train No. 10, the overnight train from Saint John, is believed to have heeded the warning and stopped a safe distance from the blast at Rockingham, saving the lives of about 300 railway passengers. Coleman was killed at his post. ## Rescue efforts First rescue efforts came from surviving neighbours and co-workers who pulled and dug out victims from buildings. The initial informal response was soon joined by surviving policemen, firefighters and military personnel who began to arrive, as did anyone with a working vehicle; cars, trucks and delivery wagons of all kinds were enlisted to collect the wounded. A flood of victims soon began to arrive at the city's hospitals, which were quickly overwhelmed. The new military hospital, Camp Hill, admitted approximately 1,400 victims on 6 December. Firefighters were among the first to respond to the disaster, rushing to Mont-Blanc to attempt to extinguish the blaze before the explosion even occurred. They also played a role after the blast, with fire companies arriving to assist from across Halifax, and by the end of the day from as far away as Amherst, Nova Scotia, (200 kilometres or 120 miles) and Moncton, New Brunswick, (260 kilometres or 160 miles) on relief trains. Halifax Fire Department's West Street Station 2 was the first to arrive at Pier 6 with the crew of the Patricia, the first motorized fire engine in Canada. In the final moments before the explosion, hoses were being unrolled as the fire spread to the docks. 
Nine members of the Halifax Fire Department lost their lives performing their duty that day. Royal Navy cruisers in port sent some of the first organized rescue parties ashore. HMS Highflyer, along with the armed merchant cruisers HMS Changuinola, HMS Knight Templar and HMS Calgarian, sent boats ashore with rescue parties and medical personnel and soon began to take wounded aboard. A US Coast Guard cutter, , also sent a rescue party ashore. Out at sea, the American cruiser USS Tacoma and armed merchant cruiser USS Von Steuben (formerly SS Kronprinz Wilhelm) were passing Halifax en route to the United States. Tacoma was rocked so severely by the blast wave that her crew went to general quarters. Spotting the large and rising column of smoke, Tacoma altered course and arrived to assist rescue at 2 pm. Von Steuben arrived a half-hour later. The American steamship Old Colony, docked in Halifax for repairs, suffered little damage and was quickly converted to serve as a hospital ship, staffed by doctors and orderlies from the British and American navy vessels in the harbour. Dazed survivors immediately feared that the explosion was the result of a bomb dropped from a German plane. Troops at gun batteries and barracks immediately turned out in case the city was under attack, but within an hour switched from defence to rescue roles as the cause and location of the explosion were determined. All available troops were called in from harbour fortifications and barracks to the North End to rescue survivors and provide transport to the city's hospitals, including the two army hospitals in the city. Adding to the chaos were fears of a potential second explosion. A cloud of steam shot out of ventilators at the ammunition magazine at Wellington Barracks as naval personnel extinguished a fire by the magazine. The fire was quickly put out; the cloud was seen from blocks away and quickly led to rumours that another explosion was imminent. Uniformed officers ordered everyone away from the area. As the rumour spread across the city, many families fled their homes. The confusion hampered efforts for over two hours until fears were dispelled by about noon. Many rescuers ignored the evacuation, and naval rescue parties continued working uninterrupted at the harbour. Surviving railway workers in the railyards at the heart of the disaster carried out rescue work, pulling people from the harbour and from under debris. The overnight train from Saint John was just approaching the city when hit by the blast but was only slightly damaged. It continued into Richmond until the track was blocked by wreckage. Passengers and soldiers aboard used the emergency tools from the train to dig people out of houses and bandaged them with sheets from the sleeping cars. The train was loaded with injured and left the city at 1:30 with a doctor aboard, to evacuate the wounded to Truro. Led by Lieutenant Governor MacCallum Grant, leading citizens formed the Halifax Relief Commission at around noon. The committee organized members in charge of organizing medical relief for both Halifax and Dartmouth, supplying transportation, food and shelter, and covering medical and funeral costs for victims. The commission would continue until 1976, participating in reconstruction and relief efforts and later distributing pensions to survivors. Men and women turned out to serve as everything from hospital aides to shelter staff, while children contributed to the relief effort by carrying messages from site to site. 
Community facilities like the Young Men's Christian Association (YMCA) were rapidly converted to emergency hospital facilities, with medical students providing care. Rescue trains were dispatched from across Atlantic Canada, as well as the northeastern United States. The first left Truro around 10 am carrying medical personnel and supplies, arrived in Halifax by noon and returned to Truro with the wounded and homeless by 3 pm. The track had become impassable after Rockingham, on the western edge of Bedford Basin. To reach the wounded, rescue personnel had to walk through parts of the devastated city until they reached a point where the military had begun to clear the streets. By nightfall, a dozen trains had reached Halifax from the Nova Scotian towns of Truro, Kentville, Amherst, Stellarton, Pictou, and Sydney and from New Brunswick, including the town of Sackville and the cities of Moncton and Saint John. Relief efforts were hampered the following day by a blizzard that blanketed Halifax with 16 inches (41 cm) of heavy snow. Trains en route from other parts of Canada and from the United States were stalled in snowdrifts, and telegraph lines that had been hastily repaired following the explosion were again knocked down. Halifax was isolated by the storm, and while rescue committees were forced to suspend the search for survivors, the storm also aided efforts to put out fires throughout the city. ## Destruction and loss of life The exact number killed by the disaster is unknown. The Halifax Explosion Remembrance Book, an official database of the Nova Scotia Archives and Records Management, identified 1,782 victims. As many as 1,600 people died immediately in the blast, tsunami, and collapse of buildings. The last body, a caretaker killed at the Exhibition Grounds, was not recovered until summer 1919. An additional 9,000 were injured. Some 1,630 homes were destroyed in the explosion and fires, and another 12,000 damaged; roughly 6,000 people were left homeless and 25,000 had insufficient shelter. The city's industrial sector was in large part gone, with many workers among the casualties and the dockyard heavily damaged. A mortuary committee chaired by Alderman R. B. Coldwell was quickly formed at Halifax City Hall on the morning of the disaster. The Chebucto Road School (now the Maritime Conservatory of Performing Arts) in Halifax's west end was chosen as a central morgue. A company of the Royal Canadian Engineers (RCE) repaired and converted the school's basement to serve as a morgue and its classrooms to serve as offices for the Halifax coroner. Trucks and wagons soon began to arrive with bodies. Arthur S. Barnstead took over from Coldwell as the morgue went into operation and implemented a system based on the one his father, John Henry Barnstead, developed to catalogue the dead in the aftermath of the sinking of RMS Titanic in 1912. Many of the wounds inflicted by the blast were permanently debilitating, such as those caused by flying glass or by the flash of the explosion. Thousands of people had stopped to watch the ship burning in the harbour, many from inside buildings, leaving them directly in the path of glass fragments from shattered windows. Roughly 5,900 eye injuries were reported, and 41 people lost their sight permanently. An estimated CA\$35 million in damage resulted. 
About \$30 million in financial aid was raised from various sources, including \$18 million from the federal government, over \$4 million from the British government, and \$750,000 from the Commonwealth of Massachusetts. ### Dartmouth Dartmouth was not as densely populated as Halifax and was separated from the blast by the width of the harbour, but still suffered heavy damage. Almost 100 people were estimated to have died on the Dartmouth side. Windows were shattered and many buildings were damaged or destroyed, including the Oland Brewery and parts of the Starr Manufacturing Company. Nova Scotia Hospital was the only hospital in Dartmouth and many of the victims were treated there. ### Mi'kmaq settlement There were small enclaves of Mi'kmaq in and around the coves of Bedford Basin on the Dartmouth shore. Directly opposite to Pier 9 on the Halifax side sat a community in Tufts Cove which included the Mi'kmaq community of Turtle Grove. In the years and months preceding the explosion, the Department of Indian Affairs had been actively trying to force the Mi'kmaq to give up their land and move to a reserve, but this had not occurred by the time of the explosion. Turtle Grove was close to the centre of the blast and the physical structures of the settlement were obliterated by the explosion and tsunami. A precise Mi'kmaq death toll is unknown as the Department of Indian Affairs and census records for the community were incomplete. Nine bodies were recovered from Turtle Grove and there were eleven known survivors. The Halifax Remembrance Book lists 16 members of the Tufts Cove Community as dead; not all the dead listed as in Tufts Cove were Indigenous. The Turtle Grove settlement was not rebuilt in the wake of the disaster. Survivors were housed in a racially segregated building under generally poor conditions and most were eventually dispersed around Nova Scotia. ### Africville The black community of Africville, on the southern shores of Bedford Basin adjacent to the Halifax Peninsula, was spared the direct force of the blast by the shadow effect of the raised ground to the south. Africville's small and frail homes were heavily damaged by the explosion. Families recorded the deaths of five residents. A combination of persistent racism and a growing conviction that Africville should be demolished to make way for industrial development resulted in the people of Africville receiving no police or fire protection; they had to make do without water mains and sewer lines, despite paying city taxes. Africville received little of the donated relief funds and none of the progressive reconstruction invested in other parts of the city after the explosion. ## Investigation Many people in Halifax first thought the explosion to be the result of a German attack. The Halifax Herald continued to propagate this belief for some time, reporting, for example, that Germans had mocked victims of the explosion. While John Johansen, the Norwegian helmsman of Imo, was being treated for serious injuries sustained during the explosion, it was reported to the military police that he had been behaving suspiciously. Johansen was arrested on suspicions of being a German spy when a search turned up a letter on his person, supposedly written in German. It turned out that the letter was actually written in Norwegian. Immediately following the explosion, most of the German survivors in Halifax had been rounded up and imprisoned. 
Eventually the fear dissipated as the real cause of the explosion became known, although rumours of German involvement persisted. A judicial inquiry known as the Wreck Commissioner's Inquiry was formed to investigate the causes of the collision. Proceedings began at the Halifax Court House on 13 December 1917, presided over by Justice Arthur Drysdale. The inquiry's report of 4 February 1918 blamed Mont-Blanc's captain, Aimé Le Médec, the ship's pilot, Francis Mackey, and Commander F. Evan Wyatt, the Royal Canadian Navy's chief examining officer in charge of the harbour, gates and anti-submarine defences, for causing the collision. Drysdale agreed with Dominion Wreck Commissioner L. A. Demers' opinion that "it was the Mont-Blanc's responsibility alone to ensure that she avoided a collision at all costs" given her cargo; he was likely influenced by local opinion, which was strongly anti-French, as well as by the "street fighter" style of argumentation used by Imo lawyer Charles Burchell. According to Crown counsel W. A. Henry, this was "a great surprise to most people", who had expected the Imo to be blamed for being on the wrong side of the channel. All three men were charged with manslaughter and criminal negligence at a preliminary hearing heard by Stipendiary Magistrate Richard A. McLeod, and bound over for trial. A Nova Scotia Supreme Court justice, Benjamin Russell, found there was no evidence to support these charges. Mackey was discharged on a writ of habeas corpus and the charges dropped. Because the pilot and the captain were arrested on the same warrant, the charges against Le Médec were also dismissed. On 17 April 1918, a jury acquitted Wyatt in a trial that lasted less than a day. Drysdale oversaw the first civil litigation trial, in which the owners of the two ships sought damages from each other. His decision (27 April 1918) found Mont-Blanc entirely at fault. Subsequent appeals to the Supreme Court of Canada (19 May 1919), and the Judicial Committee of the Privy Council in London (22 March 1920), determined Mont-Blanc and Imo were equally to blame for navigational errors that led to the collision. No party was ever convicted for any crime or otherwise successfully prosecuted for any actions that precipitated the disaster. ## Reconstruction Efforts began shortly after the explosion to clear debris, repair buildings, and establish temporary housing for survivors left homeless by the explosion. By late January 1918, around 5,000 were still without shelter. A reconstruction committee under Colonel Robert Low constructed 832 new housing units, which were furnished by the Massachusetts-Halifax Relief Fund. Partial train service resumed from a temporary rail terminal in the city's South End on 7 December. Full service resumed on 9 December when tracks were cleared and the North Street Station reopened. The Canadian Government Railways created a special unit to clear and repair railway yards as well as rebuild railway piers and the Naval Dockyard. Most piers returned to operation by late December and were repaired by January. The North End Halifax neighbourhood of Richmond bore the brunt of the explosion. In 1917, Richmond was considered a working-class neighbourhood and had few paved roads. After the explosion, the Halifax Relief Commission approached the reconstruction of Richmond as an opportunity to improve and modernize the city's North End. 
English town planner Thomas Adams and Montreal architectural firm Ross and Macdonald were recruited to design a new housing plan for Richmond. Adams, inspired by the Victorian garden city movement, aimed to provide public access to green spaces and to create a low-rise, low-density, and multifunctional urban neighbourhood. The planners designed 326 large homes that each faced a tree-lined, paved boulevard. They specified that the homes be built with a new and innovative fireproof material, blocks of compressed cement called Hydrostone. The first of these homes was occupied by March 1919, just a few months before Prince Edward, Prince of Wales, visited the site on 17 August, touring many of the houses and hearing stories about the impacts of the tragedy and "of the kindness of the people who quickly came to their aid." Once finished, the Hydrostone neighbourhood consisted of homes, businesses, and parks, which helped create a new sense of community in the North End of Halifax. It has now become an upscale neighbourhood and shopping district. In contrast, the equally poor and underdeveloped area of Africville was not included in reconstruction efforts. Every building in the Halifax dockyard required some degree of rebuilding, as did HMCS Niobe and the docks themselves; all of the Royal Canadian Navy's minesweepers and patrol boats were undamaged. Prime Minister Robert Borden pledged that the government would be "co-operating in every way to reconstruct the Port of Halifax: this was of utmost importance to the Empire". Captain Symington of USS Tacoma speculated that the port would not be operational for months, but a convoy departed on 11 December and dockyard operations resumed before Christmas. ## Legacy The Halifax Explosion was one of the largest artificial non-nuclear explosions. An extensive comparison of 130 major explosions by Halifax historian Jay White in 1994 concluded that it "remains unchallenged in overall magnitude as long as five criteria are considered together: number of casualties, force of blast, radius of devastation, quantity of explosive material, and total value of property destroyed." For many years afterward, the Halifax Explosion was the standard by which all large blasts were measured. For instance, in its report on the atomic bombing of Hiroshima, Time wrote that the explosive power of the Little Boy bomb was seven times that of the Halifax Explosion. The many eye injuries resulting from the disaster led to better understanding of how to care for damaged eyes, and "with the recently formed Canadian National Institute for the Blind, Halifax became internationally known as a centre for care for the blind", according to Dalhousie University professor Victoria Allen. The lack of coordinated pediatric care in such a disaster was noted by William Ladd, a surgeon from Boston who had arrived to help. His insights from the explosion are generally credited with inspiring him to pioneer the specialty of pediatric surgery in North America. The Halifax Explosion inspired a series of health reforms, including around public sanitation and maternity care. The event was traumatic for the whole surviving community, so the memory was largely suppressed. After the first anniversary, the city stopped commemorating the explosion for decades. The second official commemoration did not take place before the 50th anniversary in 1967, and even after that, the activities stopped again. 
Construction began in 1964 on the Halifax North Memorial Library, designed to commemorate the victims of the explosion. The library entrance featured the first monument built to mark the explosion, the Halifax Explosion Memorial Sculpture, created by artist Jordi Bonet. The sculpture was dismantled by the Halifax Regional Municipality in 2004. The Halifax Explosion Memorial Bells were built in 1985, relocating memorial carillon bells from a nearby church to a large concrete sculpture on Fort Needham Hill, facing the "ground zero" area of the explosion. The Bell Tower is the location of an annual civic ceremony every 6 December. A memorial at the Halifax Fire Station on Lady Hammond Road honours the firefighters killed while responding to the explosion. Fragments of Mont-Blanc have been mounted as neighbourhood monuments to the explosion at Albro Lake Road in Dartmouth, at Regatta Point, and elsewhere in the area. Simple monuments mark the mass graves of explosion victims at the Fairview Lawn Cemetery and the Bayers Road Cemetery. A Memorial Book listing the names of all the known victims is displayed at the Halifax North Memorial Library and at the Maritime Museum of the Atlantic, which has a large permanent exhibit about the Halifax Explosion. Harold Gilman was commissioned to create a painting memorializing the event; his work, Halifax Harbour at Sunset, "tells very little about the recent devastation, as the viewpoint is set back so that the harbour appears undisturbed". Hugh MacLennan's novel Barometer Rising (1941) is set in Halifax at the time of the explosion and includes a carefully researched description of its impact on the city. Following in MacLennan's footsteps, journalist Robert MacNeil penned Burden of Desire (1992) and used the explosion as a metaphor for the societal and cultural changes of the day. MacLennan and MacNeil's use of the romance genre to fictionalize the explosion is similar to the first attempt by Lieutenant-Colonel Frank McKelvey Bell, author of the novella A Romance of the Halifax Disaster (1918). This work follows the love affair of a young woman and an injured soldier. Keith Ross Leckie wrote a miniseries entitled Shattered City: The Halifax Explosion (2003), which took the title but has no relationship to Janet Kitz's non-fiction book Shattered City: The Halifax Explosion and the Road to Recovery (1990). The film was criticized for distortions and inaccuracies. The response to the explosion from Boston and the appreciation in Halifax cemented ongoing warm Boston–Halifax relations. In 1918, Halifax sent a Christmas tree to Boston in thanks and remembrance for the help that the Boston Red Cross and the Massachusetts Public Safety Committee provided immediately after the disaster. That gift was revived in 1971 by the Lunenburg County Christmas Tree Producers Association, which began an annual donation of a large tree to promote Christmas tree exports as well as acknowledge Boston's support after the explosion. The gift was later taken over by the Nova Scotia government to continue the goodwill gesture and to promote trade and tourism. The tree is Boston's official Christmas tree and is lit on Boston Common throughout the holiday season. In deference to its symbolic importance for both cities, the Nova Scotia Department of Natural Resources has specific guidelines for selecting the tree and has tasked an employee to oversee the selection. 
## See also - List of accidents and incidents involving transport or storage of ammunition - Black Tom explosion of 1916 - Port Chicago disaster in World War II - Bombay Explosion (1944), explosion on a ship in Bombay Harbour - Explosion of the RFA Bedenham, explosion of an ammunition ship in the Port of Gibraltar - Explosion in Bergen 1944
4,173
Babe Ruth
1,172,727,925
American baseball player (1895–1948)
[ "1895 births", "1948 deaths", "American League All-Stars", "American League ERA champions", "American League RBI champions", "American League batting champions", "American League home run champions", "American people of German descent", "American people of Prussian descent", "American sportsmen", "Babe Ruth", "Baltimore Orioles (International League) players", "Baseball players from Baltimore", "Boston Braves players", "Boston Red Sox players", "Brooklyn Dodgers coaches", "Burials at Gate of Heaven Cemetery (Hawthorne, New York)", "Catholics from Maryland", "Deaths from cancer in New York (state)", "Deaths from esophageal cancer", "Major League Baseball first base coaches", "Major League Baseball left fielders", "Major League Baseball pitchers", "Major League Baseball players with retired numbers", "Major League Baseball right fielders", "National Baseball Hall of Fame inductees", "New York Yankees players", "Presidential Medal of Freedom recipients", "Providence Grays (minor league) players", "Vaudeville performers" ]
George Herman "Babe" Ruth (February 6, 1895 – August 16, 1948) was an American professional baseball player whose career in Major League Baseball (MLB) spanned 22 seasons, from 1914 through 1935. Nicknamed "the Bambino" and "the Sultan of Swat", he began his MLB career as a star left-handed pitcher for the Boston Red Sox, but achieved his greatest fame as a slugging outfielder for the New York Yankees. Ruth is regarded as one of the greatest sports heroes in American culture and is considered by many to be the greatest baseball player of all time. In 1936, Ruth was elected into the Baseball Hall of Fame as one of its "first five" inaugural members. At age seven, Ruth was sent to St. Mary's Industrial School for Boys, a reformatory where he was mentored by Brother Matthias Boutlier of the Xaverian Brothers, the school's disciplinarian and a capable baseball player. In 1914, Ruth was signed to play Minor League baseball for the Baltimore Orioles but was soon sold to the Red Sox. By 1916, he had built a reputation as an outstanding pitcher who sometimes hit long home runs, a feat unusual for any player in the dead-ball era. Although Ruth twice won 23 games in a season as a pitcher and was a member of three World Series championship teams with the Red Sox, he wanted to play every day and was allowed to convert to an outfielder. With regular playing time, he broke the MLB single-season home run record in 1919 with 29. After that season, Red Sox owner Harry Frazee sold Ruth to the Yankees amid controversy. The trade fueled Boston's subsequent 86-year championship drought and popularized the "Curse of the Bambino" superstition. In his 15 years with the Yankees, Ruth helped the team win seven American League (AL) pennants and four World Series championships. His big swing led to escalating home run totals that not only drew fans to the ballpark and boosted the sport's popularity but also helped usher in baseball's live-ball era, which evolved from a low-scoring game of strategy to a sport where the home run was a major factor. As part of the Yankees' vaunted "Murderers' Row" lineup of 1927, Ruth hit 60 home runs, which extended his own MLB single-season record by a single home run. Ruth's last season with the Yankees was 1934; he retired from the game the following year, after a short stint with the Boston Braves. In his career, he led the American League in home runs twelve times. During Ruth's career, he was the target of intense press and public attention for his baseball exploits and off-field penchants for drinking and womanizing. After his retirement as a player, he was denied the opportunity to manage a major league club, most likely because of poor behavior during parts of his playing career. In his final years, Ruth made many public appearances, especially in support of American efforts in World War II. In 1946, he became ill with nasopharyngeal cancer and died from the disease two years later. Ruth remains a major figure in American culture. ## Early years George Herman Ruth Jr. was born on February 6, 1895, at 216 Emory Street in the Pigtown section of Baltimore, Maryland. Ruth's parents, Katherine (née Schamberger) and George Herman Ruth Sr., were both of German ancestry. According to the 1880 census, his parents were both born in Maryland. His paternal grandparents were from Prussia and Hanover, Germany. Ruth Sr. worked a series of jobs that included lightning rod salesman and streetcar operator. 
The elder Ruth then became a counterman in a family-owned combination grocery and saloon business on Frederick Street. George Ruth Jr. was born in the house of his maternal grandfather, Pius Schamberger, a German immigrant and trade unionist. Only one of young Ruth's seven siblings, his younger sister Mamie, survived infancy. Many details of Ruth's childhood are unknown, including the date of his parents' marriage. As a child, Ruth spoke German. When Ruth was a toddler, the family moved to 339 South Woodyear Street, not far from the rail yards; by the time he was six years old, his father had a saloon with an upstairs apartment at 426 West Camden Street. Details are equally scanty about why Ruth was sent at the age of seven to St. Mary's Industrial School for Boys, a reformatory and orphanage. However, according to a 1999 account by Julia Ruth Stevens, George Sr. was a saloon owner in Baltimore who gave Ruth little supervision growing up, and the boy became a delinquent. Ruth was sent to St. Mary's because George Sr. ran out of ideas to discipline and mentor his son. As an adult, Ruth admitted that as a youth he ran the streets, rarely attended school, and drank beer when his father was not looking. Some accounts say that following a violent incident at his father's saloon, the city authorities decided that this environment was unsuitable for a small child. Ruth entered St. Mary's on June 13, 1902. He was recorded as "incorrigible" and spent much of the next 12 years there. Although St. Mary's boys received an education, students were also expected to learn work skills and help operate the school, particularly once the boys turned 12. Ruth became a shirtmaker and was also proficient as a carpenter. He would adjust his own shirt collars, rather than having a tailor do so, even during his well-paid baseball career. The boys, aged 5 to 21, did most of the work around the facility, from cooking to shoemaking, and renovated St. Mary's in 1912. The food was simple, and the Xaverian Brothers who ran the school insisted on strict discipline; corporal punishment was common. Ruth's nickname there was "Niggerlips", as he had large facial features and was darker than most boys at the all-white reformatory. Ruth was sometimes allowed to rejoin his family or was placed at St. James's Home, a supervised residence with work in the community, but he was always returned to St. Mary's. He was rarely visited by his family; his mother died when he was 12 and, by some accounts, he was permitted to leave St. Mary's only to attend the funeral. How Ruth came to play baseball there is uncertain: according to one account, his placement at St. Mary's was due in part to repeatedly breaking Baltimore's windows with long hits while playing street ball; by another, he was told to join a team on his first day at St. Mary's by the school's athletic director, Brother Herman, becoming a catcher even though left-handers rarely play that position. During his time there he also played third base and shortstop, again unusual for a left-hander, and was forced to wear mitts and gloves made for right-handers. He was encouraged in his pursuits by the school's Prefect of Discipline, Brother Matthias Boutlier, a native of Nova Scotia. A large man, Brother Matthias was greatly respected by the boys both for his strength and for his fairness. For the rest of his life, Ruth would praise Brother Matthias, and his running and hitting styles closely resembled his teacher's.
Ruth stated, "I think I was born as a hitter the first day I ever saw him hit a baseball." The older man became a mentor and role model to Ruth; biographer Robert W. Creamer commented on the closeness between the two: > Ruth revered Brother Matthias ... which is remarkable, considering that Matthias was in charge of making boys behave and that Ruth was one of the great natural misbehavers of all time. ... George Ruth caught Brother Matthias' attention early, and the calm, considerable attention the big man gave the young hellraiser from the waterfront struck a spark of response in the boy's soul ... [that may have] blunted a few of the more savage teeth in the gross man whom I have heard at least a half-dozen of his baseball contemporaries describe with admiring awe and wonder as "an animal." The school's influence remained with Ruth in other ways. He was a lifelong Catholic who would sometimes attend Mass after carousing all night, and he became a well-known member of the Knights of Columbus. He would visit orphanages, schools, and hospitals throughout his life, often avoiding publicity. He was generous to St. Mary's as he became famous and rich, donating money and his presence at fundraisers, and spending \$5,000 to buy Brother Matthias a Cadillac in 1926—subsequently replacing it when it was destroyed in an accident. Nevertheless, his biographer Leigh Montville suggests that many of the off-the-field excesses of Ruth's career were driven by the deprivations of his time at St. Mary's. Most of the boys at St. Mary's played baseball in organized leagues at different levels of proficiency. Ruth later estimated that he played 200 games a year as he steadily climbed the ladder of success. Although he played all positions at one time or another, he gained stardom as a pitcher. According to Brother Matthias, Ruth was standing to one side laughing at the bumbling pitching efforts of fellow students, and Matthias told him to go in and see if he could do better. Ruth had become the best pitcher at St. Mary's, and when he was 18 in 1913, he was allowed to leave the premises to play weekend games on teams that were drawn from the community. He was mentioned in several newspaper articles, for both his pitching prowess and ability to hit long home runs. ## Professional baseball ### Minor leagues: Baltimore Orioles In early 1914, Ruth signed a professional baseball contract with Jack Dunn, who owned and managed the minor-league Baltimore Orioles, an International League team. The circumstances of Ruth's signing are not known with certainty. By some accounts, Dunn was urged to attend a game between an all-star team from St. Mary's and one from another Xaverian facility, Mount St. Mary's College. Some versions have Ruth running away before the eagerly awaited game, to return in time to be punished, and then pitching St. Mary's to victory as Dunn watched. Others have Washington Senators pitcher Joe Engel, a Mount St. Mary's graduate, pitching in an alumni game after watching a preliminary contest between the college's freshmen and a team from St. Mary's, including Ruth. Engel watched Ruth play, then told Dunn about him at a chance meeting in Washington. Ruth, in his autobiography, stated only that he worked out for Dunn for a half hour, and was signed. According to biographer Kal Wagenheim, there were legal difficulties to be straightened out as Ruth was supposed to remain at the school until he turned 21, though SportsCentury stated in a documentary that Ruth had already been discharged from St. 
Mary's when he turned 19, and earned a monthly salary of \$100. The train journey to spring training in Fayetteville, North Carolina, in early March was likely Ruth's first outside the Baltimore area. The rookie ballplayer was the subject of various pranks by veteran players, who were probably also the source of his famous nickname. There are various accounts of how Ruth came to be called "Babe", but most center on his being referred to as "Dunnie's babe" (or some variant). SportsCentury reported that his nickname was gained because he was the new "darling" or "project" of Dunn, not only because of Ruth's raw talent, but also because of his lack of knowledge of the proper etiquette of eating out in a restaurant, being in a hotel, or being on a train. "Babe" was, at that time, a common nickname in baseball, with perhaps the most famous to that point being Pittsburgh Pirates pitcher and 1909 World Series hero Babe Adams, who appeared younger than his actual age. Ruth made his first appearance as a professional ballplayer in an inter-squad game on March 7, 1914. He played shortstop and pitched the last two innings of a 15–9 victory. In his second at-bat, Ruth hit a long home run to right field; the blast was locally reported to be longer than a legendary shot hit by Jim Thorpe in Fayetteville. Ruth made his first appearance against a team in organized baseball in an exhibition game versus the major-league Philadelphia Phillies. Ruth pitched the middle three innings and gave up two runs in the fourth, but then settled down and pitched a scoreless fifth and sixth innings. In a game against the Phillies the following afternoon, Ruth entered during the sixth inning and did not allow a run the rest of the way. The Orioles scored seven runs in the bottom of the eighth inning to overcome a 6–0 deficit, and Ruth was the winning pitcher. Once the regular season began, Ruth was a star pitcher who was also dangerous at the plate. The team performed well, yet received almost no attention from the Baltimore press. A third major league, the Federal League, had begun play, and the local franchise, the Baltimore Terrapins, restored that city to the major leagues for the first time since 1902. Few fans visited Oriole Park, where Ruth and his teammates labored in relative obscurity. Ruth may have been offered a bonus and a larger salary to jump to the Terrapins; when rumors to that effect swept Baltimore, giving Ruth the most publicity he had experienced to date, a Terrapins official denied it, stating it was their policy not to sign players under contract to Dunn. The competition from the Terrapins caused Dunn to sustain large losses. Although by late June the Orioles were in first place, having won over two-thirds of their games, the paid attendance dropped as low as 150. Dunn explored a possible move by the Orioles to Richmond, Virginia, as well as the sale of a minority interest in the club. These possibilities fell through, leaving Dunn with little choice other than to sell his best players to major league teams to raise money. He offered Ruth to the reigning World Series champions, Connie Mack's Philadelphia Athletics, but Mack had his own financial problems. The Cincinnati Reds and New York Giants expressed interest in Ruth, but Dunn sold his contract, along with those of pitchers Ernie Shore and Ben Egan, to the Boston Red Sox of the American League (AL) on July 4. 
The sale price was announced as \$25,000 but other reports lower the amount to half that, or possibly \$8,500 plus the cancellation of a \$3,000 loan. Ruth remained with the Orioles for several days while the Red Sox completed a road trip, and reported to the team in Boston on July 11. ### Boston Red Sox (1914–1919) #### Developing star On July 11, 1914, Ruth arrived in Boston with Egan and Shore. Ruth later told the story of how that morning he had met Helen Woodford, who would become his first wife. She was a 16-year-old waitress at Landers Coffee Shop, and Ruth related that she served him when he had breakfast there. Other stories, though, suggested that the meeting occurred on another day, and perhaps under other circumstances. Regardless of when he began to woo his first wife, he won his first game as a pitcher for the Red Sox that afternoon, 4–3, over the Cleveland Naps. His catcher was Bill Carrigan, who was also the Red Sox manager. Shore was given a start by Carrigan the next day; he won that and his second start and thereafter was pitched regularly. Ruth lost his second start, and was thereafter little used. In his major league debut as a batter, Ruth went 0-for-2 against left-hander Willie Mitchell, striking out in his first at bat before being removed for a pinch hitter in the seventh inning. Ruth was not much noticed by the fans, as Bostonians watched the Red Sox's crosstown rivals, the Braves, begin a legendary comeback that would take them from last place on the Fourth of July to the 1914 World Series championship. Egan was traded to Cleveland after two weeks on the Boston roster. During his time with the Red Sox, he kept an eye on the inexperienced Ruth, much as Dunn had in Baltimore. When he was traded, no one took his place as supervisor. Ruth's new teammates considered him brash and would have preferred him as a rookie to remain quiet and inconspicuous. When Ruth insisted on taking batting practice despite being both a rookie who did not play regularly and a pitcher, he arrived to find his bats sawed in half. His teammates nicknamed him "the Big Baboon", a name the swarthy Ruth, who had disliked the nickname "Niggerlips" at St. Mary's, detested. Ruth had received a raise on promotion to the major leagues and quickly acquired tastes for fine food, liquor, and women, among other temptations. Manager Carrigan allowed Ruth to pitch two exhibition games in mid-August. Although Ruth won both against minor-league competition, he was not restored to the pitching rotation. It is uncertain why Carrigan did not give Ruth additional opportunities to pitch. There are legends—filmed for the screen in The Babe Ruth Story (1948)—that the young pitcher had a habit of signaling his intent to throw a curveball by sticking out his tongue slightly, and that he was easy to hit until this changed. Creamer pointed out that it is common for inexperienced pitchers to display such habits, and the need to break Ruth of his would not constitute a reason to not use him at all. The biographer suggested that Carrigan was unwilling to use Ruth because of the rookie's poor behavior. On July 30, 1914, Boston owner Joseph Lannin had purchased the minor-league Providence Grays, members of the International League. The Providence team had been owned by several people associated with the Detroit Tigers, including star hitter Ty Cobb, and as part of the transaction, a Providence pitcher was sent to the Tigers. 
To soothe Providence fans upset at losing a star, Lannin announced that the Red Sox would soon send a replacement to the Grays. This was intended to be Ruth, but his departure for Providence was delayed when Cincinnati Reds owner Garry Herrmann claimed him off of waivers. After Lannin wrote to Herrmann explaining that the Red Sox wanted Ruth in Providence so he could develop as a player, and would not release him to a major league club, Herrmann allowed Ruth to be sent to the minors. Carrigan later stated that Ruth was not sent down to Providence to make him a better player, but to help the Grays win the International League pennant (league championship). Ruth joined the Grays on August 18, 1914. After Dunn's deals, the Baltimore Orioles managed to hold on to first place until August 15, after which they continued to fade, leaving the pennant race between Providence and Rochester. Ruth was deeply impressed by Providence manager "Wild Bill" Donovan, previously a star pitcher with a 25–4 win–loss record for Detroit in 1907; in later years, he credited Donovan with teaching him much about pitching. Ruth was often called upon to pitch, in one stretch starting (and winning) four games in eight days. On September 5 at Maple Leaf Park in Toronto, Ruth pitched a one-hit 9–0 victory, and hit his first professional home run, his only one as a minor leaguer, off Ellis Johnson. Recalled to Boston after Providence finished the season in first place, he pitched and won a game for the Red Sox against the New York Yankees on October 2, getting his first major league hit, a double. Ruth finished the season with a record of 2–1 as a major leaguer and 23–8 in the International League (for Baltimore and Providence). Once the season concluded, Ruth married Helen in Ellicott City, Maryland. Creamer speculated that they did not marry in Baltimore, where the newlyweds boarded with George Ruth Sr., to avoid possible interference from those at St. Mary's—both bride and groom were not yet of age and Ruth remained on parole from that institution until his 21st birthday. In March 1915, Ruth reported to Hot Springs, Arkansas, for his first major league spring training. Despite a relatively successful first season, he was not slated to start regularly for the Red Sox, who already had two "superb" left-handed pitchers, according to Creamer: the established stars Dutch Leonard, who had broken the record for the lowest earned run average (ERA) in a single season; and Ray Collins, a 20-game winner in both 1913 and 1914. Ruth was ineffective in his first start, taking the loss in the third game of the season. Injuries and ineffective pitching by other Boston pitchers gave Ruth another chance, and after some good relief appearances, Carrigan allowed Ruth another start, and he won a rain-shortened seven inning game. Ten days later, the manager had him start against the New York Yankees at the Polo Grounds. Ruth took a 3–2 lead into the ninth, but lost the game 4–3 in 13 innings. Ruth, hitting ninth as was customary for pitchers, hit a massive home run into the upper deck in right field off of Jack Warhop. At the time, home runs were rare in baseball, and Ruth's majestic shot awed the crowd. The winning pitcher, Warhop, would in August 1915 conclude a major league career of eight seasons, undistinguished but for being the first major league pitcher to give up a home run to Babe Ruth. Carrigan was sufficiently impressed by Ruth's pitching to give him a spot in the starting rotation. 
Ruth finished the 1915 season 18–8 as a pitcher; as a hitter, he batted .315 and had four home runs. The Red Sox won the AL pennant, but with the pitching staff healthy, Ruth was not called upon to pitch in the 1915 World Series against the Philadelphia Phillies. Boston won in five games. Ruth was used as a pinch hitter in Game Five, but grounded out against Phillies ace Grover Cleveland Alexander. Despite his success as a pitcher, Ruth was acquiring a reputation for long home runs; at Sportsman's Park against the St. Louis Browns, a Ruth hit soared over Grand Avenue, breaking the window of a Chevrolet dealership. In 1916, attention focused on Ruth's pitching as he engaged in repeated pitching duels with Washington Senators' ace Walter Johnson. The two met five times during the season with Ruth winning four and Johnson one (Ruth had a no decision in Johnson's victory). Two of Ruth's victories were by the score of 1–0, one in a 13-inning game. Of the 1–0 shutout decided without extra innings, AL president Ban Johnson stated, "That was one of the best ball games I have ever seen." For the season, Ruth went 23–12, with a 1.75 ERA and nine shutouts, both of which led the league. Ruth's nine shutouts in 1916 set a league record for left-handers that would remain unmatched until Ron Guidry tied it in 1978. The Red Sox won the pennant and World Series again, this time defeating the Brooklyn Robins (as the Dodgers were then known) in five games. Ruth started and won Game 2, 2–1, in 14 innings. Until another game of that length was played in 2005, this was the longest World Series game, and Ruth's pitching performance is still the longest postseason complete game victory. Carrigan retired as player and manager after 1916, returning to his native Maine to be a businessman. Ruth, who played under four managers who are in the National Baseball Hall of Fame, always maintained that Carrigan, who is not enshrined there, was the best skipper he ever played for. There were other changes in the Red Sox organization that offseason, as Lannin sold the team to a three-man group headed by New York theatrical promoter Harry Frazee. Jack Barry was hired by Frazee as manager. #### Emergence as a hitter Ruth went 24–13 with a 2.01 ERA and six shutouts in 1917, but the Sox finished in second place in the league, nine games behind the Chicago White Sox in the standings. On June 23 at Washington, when home plate umpire 'Brick' Owens called the first four pitches as balls, Ruth was ejected from the game and threw a punch at him, and was later suspended for ten days and fined \$100. Ernie Shore was called in to relieve Ruth, and was allowed eight warm-up pitches. The runner who had reached base on the walk was caught stealing, and Shore retired all 26 batters he faced to win the game. Shore's feat was listed as a perfect game for many years. In 1991, Major League Baseball's (MLB) Committee on Statistical Accuracy amended it to be listed as a combined no-hitter. In 1917, Ruth was used little as a batter, other than for his plate appearances while pitching, and hit .325 with two home runs. The United States' entry into World War I occurred at the start of the season and overshadowed baseball. Conscription was introduced in September 1917, and most baseball players in the big leagues were of draft age. This included Barry, who was a player-manager, and who joined the Naval Reserve in an attempt to avoid the draft, only to be called up after the 1917 season. 
Frazee hired International League President Ed Barrow as Red Sox manager. Barrow had spent the previous 30 years in a variety of baseball jobs, though he never played the game professionally. With the major leagues shorthanded because of the war, Barrow had many holes in the Red Sox lineup to fill. Ruth also noticed these vacancies in the lineup. He was dissatisfied in the role of a pitcher who appeared every four or five days and wanted to play every day at another position. Barrow used Ruth at first base and in the outfield during the exhibition season, but he restricted him to pitching as the team moved toward Boston and the season opener. At the time, Ruth was possibly the best left-handed pitcher in baseball, and allowing him to play another position was an experiment that could have backfired. Inexperienced as a manager, Barrow had player Harry Hooper advise him on baseball game strategy. Hooper urged his manager to allow Ruth to play another position when he was not pitching, arguing to Barrow, who had invested in the club, that the crowds were larger on days when Ruth played, as they were attracted by his hitting. In early May, Barrow gave in; Ruth promptly hit home runs in four consecutive games (one an exhibition), the last off of Walter Johnson. For the first time in his career (disregarding pinch-hitting appearances), Ruth was assigned a place in the batting order higher than ninth. Although Barrow predicted that Ruth would beg to return to pitching the first time he experienced a batting slump, that did not occur. Barrow used Ruth primarily as an outfielder in the war-shortened 1918 season. Ruth hit .300, with 11 home runs, enough to secure him a share of the major league home run title with Tilly Walker of the Philadelphia Athletics. He was still occasionally used as a pitcher, and had a 13–7 record with a 2.22 ERA. In 1918, the Red Sox won their third pennant in four years and faced the Chicago Cubs in the World Series, which began on September 5, the earliest date in history. The season had been shortened because the government had ruled that baseball players who were eligible for the military would have to be inducted or work in critical war industries, such as armaments plants. Ruth pitched and won Game One for the Red Sox, a 1–0 shutout. Before Game Four, Ruth injured his left hand in a fight but pitched anyway. He gave up seven hits and six walks, but was helped by outstanding fielding behind him and by his own batting efforts, as a fourth-inning triple by Ruth gave his team a 2–0 lead. The Cubs tied the game in the eighth inning, but the Red Sox scored to take a 3–2 lead again in the bottom of that inning. After Ruth gave up a hit and a walk to start the ninth inning, he was relieved on the mound by Joe Bush. To keep Ruth and his bat in the game, he was sent to play left field. Bush retired the side to give Ruth his second win of the Series, and the third and last World Series pitching victory of his career, against no defeats, in three pitching appearances. Ruth's effort gave his team a three-games-to-one lead, and two days later the Red Sox won their third Series in four years, four-games-to-two. Before allowing the Cubs to score in Game Four, Ruth pitched 29+2⁄3 consecutive scoreless innings, a record for the World Series that stood for more than 40 years until 1961, broken by Whitey Ford after Ruth's death. Ruth was prouder of that record than he was of any of his batting feats. 
With the World Series over, Ruth gained exemption from the war draft by accepting a nominal position with a Pennsylvania steel mill. Many industrial establishments took pride in their baseball teams and sought to hire major leaguers. The end of the war in November set Ruth free to play baseball without such contrivances. During the 1919 season, Ruth was used as a pitcher in only 17 of his 130 games and compiled a 9–5 record. Barrow used him as a pitcher mostly in the early part of the season, when the Red Sox manager still had hopes of a second consecutive pennant. By late June, the Red Sox were clearly out of the race, and Barrow had no objection to Ruth concentrating on his hitting, if only because it drew people to the ballpark. Ruth had hit a home run against the Yankees on Opening Day, and another during a month-long batting slump that soon followed. Relieved of his pitching duties, Ruth began an unprecedented spell of slugging home runs, which gave him widespread public and press attention. Even his failures were seen as majestic—one sportswriter said, "When Ruth misses a swipe at the ball, the stands quiver." Two home runs by Ruth on July 5, and one in each of two consecutive games a week later, raised his season total to 11, tying his career best from 1918. The first record to fall was the AL single-season mark of 16, set by Ralph "Socks" Seybold in 1902. Ruth matched that on July 29, then pulled ahead toward the major league record of 25, set by Buck Freeman in 1899. By the time Ruth reached this in early September, writers had discovered that Ned Williamson of the 1884 Chicago White Stockings had hit 27—though in a ballpark where the distance to right field was only 215 feet (66 m). On September 20, "Babe Ruth Day" at Fenway Park, Ruth won the game with a home run in the bottom of the ninth inning, tying Williamson. He broke the record four days later against the Yankees at the Polo Grounds, and hit one more against the Senators to finish with 29. The home run at Washington made Ruth the first major league player to hit a home run at all eight ballparks in his league. In spite of Ruth's hitting heroics, the Red Sox finished sixth, 20+1⁄2 games behind the league champion White Sox. In his six seasons with Boston, he won 89 games and recorded a 2.19 ERA. He had a four-year stretch where he was second in the AL in wins and ERA behind Walter Johnson, and Ruth had a winning record against Johnson in head-to-head matchups. ### Sale to New York As an out-of-towner from New York City, Frazee had been regarded with suspicion by Boston's sportswriters and baseball fans when he bought the team. He won them over with success on the field and a willingness to build the Red Sox by purchasing or trading for players. He offered the Senators \$60,000 for Walter Johnson, but Washington owner Clark Griffith was unwilling. Even so, Frazee was successful in bringing other players to Boston, especially as replacements for players in the military. This willingness to spend for players helped the Red Sox secure the 1918 title. The 1919 season saw record-breaking attendance, and Ruth's home runs for Boston made him a national sensation. In March 1919 Ruth was reported as having accepted a three-year contract for a total of \$27,000, after protracted negotiations. Nevertheless, on December 26, 1919, Frazee sold Ruth's contract to the New York Yankees. 
Not all the circumstances concerning the sale are known, but brewer and former congressman Jacob Ruppert, the New York team's principal owner, reportedly asked Yankee manager Miller Huggins what the team needed to be successful. "Get Ruth from Boston", Huggins supposedly replied, noting that Frazee was perennially in need of money to finance his theatrical productions. In any event, there was precedent for the Ruth transaction: when Boston pitcher Carl Mays left the Red Sox in a 1919 dispute, Frazee had settled the matter by selling Mays to the Yankees, though over the opposition of AL President Johnson. According to one of Ruth's biographers, Jim Reisler, "why Frazee needed cash in 1919—and large infusions of it quickly—is still, more than 80 years later, a bit of a mystery". The often-told story is that Frazee needed money to finance the musical No, No, Nanette, which was a Broadway hit and brought Frazee financial security. That play did not open until 1925, however, by which time Frazee had sold the Red Sox. Still, the story may be true in essence: No, No, Nanette was based on a Frazee-produced play, My Lady Friends, which opened in 1919. There were other financial pressures on Frazee, despite his team's success. Ruth, fully aware of baseball's popularity and his role in it, wanted to renegotiate his contract, signed before the 1919 season for \$10,000 per year through 1921. He demanded that his salary be doubled, or he would sit out the season and cash in on his popularity through other ventures. Ruth's salary demands were causing other players to ask for more money. Additionally, Frazee still owed Lannin as much as \$125,000 from the purchase of the club. Although Ruppert and his co-owner, Colonel Tillinghast Huston, were both wealthy, and had aggressively purchased and traded for players in 1918 and 1919 to build a winning team, Ruppert faced losses in his brewing interests as Prohibition was implemented, and if their team left the Polo Grounds, where the Yankees were the tenants of the New York Giants, building a stadium in New York would be expensive. Nevertheless, when Frazee, who moved in the same social circles as Huston, hinted to the colonel that Ruth was available for the right price, the Yankees owners quickly pursued the purchase. Frazee sold the rights to Babe Ruth for \$100,000, the largest sum ever paid for a baseball player. The deal also involved a \$350,000 loan from Ruppert to Frazee, secured by a mortgage on Fenway Park. Once it was agreed, Frazee informed Barrow, who, stunned, told the owner that he was getting the worse end of the bargain. Cynics have suggested that Barrow may have played a larger role in the Ruth sale, as less than a year after, he became the Yankee general manager, and in the following years made a number of purchases of Red Sox players from Frazee. The \$100,000 price included \$25,000 in cash, and notes for the same amount due November 1 in 1920, 1921, and 1922; Ruppert and Huston assisted Frazee in selling the notes to banks for immediate cash. The transaction was contingent on Ruth signing a new contract, which was quickly accomplished—Ruth agreed to fulfill the remaining two years on his contract, but was given a \$20,000 bonus, payable over two seasons. The deal was announced on January 6, 1920. Reaction in Boston was mixed: some fans were embittered at the loss of Ruth; others conceded that Ruth had become difficult to deal with. 
The New York Times suggested that "The short right field wall at the Polo Grounds should prove an easy target for Ruth next season and, playing seventy-seven games at home, it would not be surprising if Ruth surpassed his home run record of twenty-nine circuit clouts next Summer." According to Reisler, "The Yankees had pulled off the sports steal of the century." According to Marty Appel in his history of the Yankees, the transaction, "changed the fortunes of two high-profile franchises for decades". The Red Sox, winners of five of the first 16 World Series, those played between 1903 and 1919, would not win another pennant until 1946, or another World Series until 2004, a drought attributed in baseball superstition to Frazee's sale of Ruth and sometimes dubbed the "Curse of the Bambino". Conversely, the Yankees had not won the AL championship prior to their acquisition of Ruth. They won seven AL pennants and four World Series with him, and lead baseball with 40 pennants and 27 World Series titles in their history. ### New York Yankees (1920–1934) #### Initial success (1920–1923) When Ruth signed with the Yankees, he completed his transition from a pitcher to a power-hitting outfielder. His fifteen-season Yankee career consisted of over 2,000 games, and Ruth broke many batting records while making only five widely scattered appearances on the mound, winning all of them. At the end of April 1920, the Yankees were 4–7, with the Red Sox leading the league with a 10–2 mark. Ruth had done little, having injured himself swinging the bat. Both situations began to change on May 1, when Ruth hit a tape measure home run that sent the ball completely out of the Polo Grounds, a feat believed to have been previously accomplished only by Shoeless Joe Jackson. The Yankees won, 6–0, taking three out of four from the Red Sox. Ruth hit his second home run on May 2, and by the end of the month had set a major league record for home runs in a month with 11, and promptly broke it with 13 in June. Fans responded with record attendance figures. On May 16, Ruth and the Yankees drew 38,600 to the Polo Grounds, a record for the ballpark, and 15,000 fans were turned away. Large crowds jammed stadiums to see Ruth play when the Yankees were on the road. The home runs kept on coming. Ruth tied his own record of 29 on July 15 and broke it with home runs in both games of a doubleheader four days later. By the end of July, he had 37, but his pace slackened somewhat after that. Nevertheless, on September 4, he both tied and broke the organized baseball record for home runs in a season, snapping Perry Werden's 1895 mark of 44 in the minor Western League. The Yankees played well as a team, battling for the league lead early in the summer, but slumped in August in the AL pennant battle with Chicago and Cleveland. The pennant and the World Series were won by Cleveland, who surged ahead after the Black Sox Scandal broke on September 28 and led to the suspension of many of Chicago's top players, including Shoeless Joe Jackson. The Yankees finished third, but drew 1.2 million fans to the Polo Grounds, the first time a team had drawn a seven-figure attendance. The rest of the league sold 600,000 more tickets, many fans there to see Ruth, who led the league with 54 home runs, 158 runs, and 137 runs batted in (RBIs). In 1920 and afterwards, Ruth was aided in his power hitting by the fact that A.J. 
Reach Company—the maker of baseballs used in the major leagues—was using a more efficient machine to wind the yarn found within the baseball. The new baseballs went into play in 1920 and ushered in the live-ball era; the number of home runs across the major leagues increased by 184 over the previous year. Baseball statistician Bill James pointed out that while Ruth was likely aided by the change in the baseball, there were other factors at work, including the gradual abolition of the spitball (accelerated after the death of Ray Chapman, struck by a pitched ball thrown by Mays in August 1920) and the more frequent use of new baseballs (also a response to Chapman's death). Nevertheless, James theorized that Ruth's 1920 explosion might have happened in 1919, had a full season of 154 games been played rather than 140, had Ruth refrained from pitching 133 innings that season, and if he were playing at any other home field but Fenway Park, where he hit only 9 of 29 home runs. Yankees business manager Harry Sparrow had died early in the 1920 season. Ruppert and Huston hired Barrow to replace him. The two men quickly made a deal with Frazee for New York to acquire some of the players who would be mainstays of the early Yankee pennant-winning teams, including catcher Wally Schang and pitcher Waite Hoyt. The 21-year-old Hoyt became close to Ruth: > The outrageous life fascinated Hoyt, the don't-give-a-shit freedom of it, the nonstop, pell-mell charge into excess. How did a man drink so much and never get drunk? ... The puzzle of Babe Ruth never was dull, no matter how many times Hoyt picked up the pieces and stared at them. After games he would follow the crowd to the Babe's suite. No matter what the town, the beer would be iced and the bottles would fill the bathtub. In the offseason, Ruth spent some time in Havana, Cuba, where he was said to have lost \$35,000 betting on horse races. Ruth hit home runs early and often in the 1921 season, during which he broke Roger Connor's career record of 138 home runs. Each of the almost 600 home runs Ruth hit in his career after that extended his own record. After a slow start, the Yankees were soon locked in a tight pennant race with Cleveland, winners of the 1920 World Series. On September 15, Ruth hit his 55th home run, breaking his year-old single-season record. In late September, the Yankees visited Cleveland and won three out of four games, giving them the upper hand in the race, and clinched their first pennant a few days later. Ruth finished the regular season with 59 home runs, batting .378 and with a slugging percentage of .846. Ruth's 177 runs scored, 119 extra-base hits, and 457 total bases set modern-era records that still stand as of 2023. The Yankees had high expectations when they met the New York Giants in the 1921 World Series, every game of which was played in the Polo Grounds. The Yankees won the first two games with Ruth in the lineup. However, Ruth badly scraped his elbow during Game 2 when he slid into third base (he had walked and stolen both second and third bases). After the game, he was told by the team physician not to play the rest of the series. Despite this advice, he did play in the next three games, and pinch-hit in Game Eight of the best-of-nine series, but the Yankees lost, five games to three. Ruth hit .316, drove in five runs and hit his first World Series home run. After the Series, Ruth and teammates Bob Meusel and Bill Piercy participated in a barnstorming tour in the Northeast.
A rule then in force prohibited World Series participants from playing in exhibition games during the offseason, the purpose being to prevent Series participants from replicating the Series and undermining its value. Baseball Commissioner Kenesaw Mountain Landis suspended the trio until May 20, 1922, and fined them their 1921 World Series checks. In August 1922, the rule was changed to allow limited barnstorming for World Series participants, with Landis's permission required. On March 4, 1922, Ruth signed a new contract for three years at \$52,000 a year. This was more than two times the largest sum ever paid to a ballplayer up to that point, and it represented 40% of the team's player payroll. Despite his suspension, Ruth was named the Yankees' new on-field captain prior to the 1922 season. During the suspension, he worked out with the team in the morning and played exhibition games with the Yankees on their off days. He and Meusel returned on May 20 to a sellout crowd at the Polo Grounds, but Ruth went 0-for-4 and was booed. On May 25, he was thrown out of the game for throwing dust in umpire George Hildebrand's face, then climbed into the stands to confront a heckler. Ban Johnson ordered him fined, suspended, and stripped of his position as team captain. In his shortened season, Ruth appeared in 110 games, batted .315 with 35 home runs, and drove in 99 runs, but the 1922 season was a disappointment in comparison to his two previous dominating years. Despite Ruth's off-year, the Yankees managed to win the pennant and faced the New York Giants in the World Series for the second consecutive year. In the Series, Giants manager John McGraw instructed his pitchers to throw Ruth nothing but curveballs, and Ruth never adjusted. Ruth had just two hits in 17 at bats, and the Yankees lost to the Giants for the second straight year, by 4–0 (with one tie game). Sportswriter Joe Vila called him "an exploded phenomenon". After the season, Ruth was a guest at an Elks Club banquet, set up by Ruth's agent with Yankee team support. There, each speaker, concluding with future New York mayor Jimmy Walker, censured him for his poor behavior. An emotional Ruth promised reform, and, to the surprise of many, followed through. When he reported to spring training, he was in his best shape as a Yankee, weighing only 210 pounds (95 kg). The Yankees' status as tenants of the Giants at the Polo Grounds had become increasingly uneasy, and in 1922, Giants owner Charles Stoneham said the Yankees' lease, expiring after that season, would not be renewed. Ruppert and Huston had long contemplated a new stadium, and had taken an option on property at 161st Street and River Avenue in the Bronx. Yankee Stadium was completed in time for the home opener on April 18, 1923, at which Ruth hit the first home run in what was quickly dubbed "the House that Ruth Built". The ballpark was designed with Ruth in mind: although the venue's left-field fence was farther from home plate than at the Polo Grounds, Yankee Stadium's right-field fence was closer, making home runs easier to hit for left-handed batters. To spare Ruth's eyes, right field—his defensive position—was not pointed into the afternoon sun, as was traditional; left fielder Meusel soon developed headaches from squinting toward home plate. During the 1923 season, the Yankees were never seriously challenged and won the AL pennant by 17 games.
Ruth finished the season with a career-high .393 batting average and 41 home runs, which tied Cy Williams for the most in the major-leagues that year. Ruth hit a career-high 45 doubles in 1923, and he reached base 379 times, then a major league record. For the third straight year, the Yankees faced the Giants in the World Series, which Ruth dominated. He batted .368, walked eight times, scored eight runs, hit three home runs and slugged 1.000 during the series, as the Yankees christened their new stadium with their first World Series championship, four games to two. #### Batting title and "bellyache" (1924–1925) In 1924, the Yankees were favored to become the first team to win four consecutive pennants. Plagued by injuries, they found themselves in a battle with the Senators. Although the Yankees won 18 of 22 at one point in September, the Senators beat out the Yankees by two games. Ruth hit .378, winning his only AL batting title, with a league-leading 46 home runs. Ruth did not look like an athlete; he was described as "toothpicks attached to a piano", with a big upper body but thin wrists and legs. Ruth had kept up his efforts to stay in shape in 1923 and 1924, but by early 1925 weighed nearly 260 pounds (120 kg). His annual visit to Hot Springs, Arkansas, where he exercised and took saunas early in the year, did him no good as he spent much of the time carousing in the resort town. He became ill while there, and relapsed during spring training. Ruth collapsed in Asheville, North Carolina, as the team journeyed north. He was put on a train for New York, where he was briefly hospitalized. A rumor circulated that he had died, prompting British newspapers to print a premature obituary. In New York, Ruth collapsed again and was found unconscious in his hotel bathroom. He was taken to a hospital where he had multiple convulsions. After sportswriter W. O. McGeehan wrote that Ruth's illness was due to binging on hot dogs and soda pop before a game, it became known as "the bellyache heard 'round the world". However, the exact cause of his ailment has never been confirmed and remains a mystery. Glenn Stout, in his history of the Yankees, writes that the Ruth legend is "still one of the most sheltered in sports"; he suggests that alcohol was at the root of Ruth's illness, pointing to the fact that Ruth remained six weeks at St. Vincent's Hospital but was allowed to leave, under supervision, for workouts with the team for part of that time. He concludes that the hospitalization was behavior-related. Playing just 98 games, Ruth had his worst season as a Yankee; he finished with a .290 average and 25 home runs. The Yankees finished next to last in the AL with a 69–85 record, their last season with a losing record until 1965. #### Murderers' Row (1926–1928) Ruth spent part of the offseason of 1925–26 working out at Artie McGovern's gym, where he got back into shape. Barrow and Huggins had rebuilt the team and surrounded the veteran core with good young players like Tony Lazzeri and Lou Gehrig, but the Yankees were not expected to win the pennant. Ruth returned to his normal production during 1926, when he batted .372 with 47 home runs and 146 RBIs. The Yankees built a 10-game lead by mid-June and coasted to win the pennant by three games. The St. Louis Cardinals had won the National League with the lowest winning percentage for a pennant winner to that point (.578) and the Yankees were expected to win the World Series easily. Although the Yankees won the opener in New York, St. 
Louis took Games Two and Three. In Game Four, Ruth hit three home runs—the first time this had been done in a World Series game—to lead the Yankees to victory. In the fifth game, Ruth caught a ball as he crashed into the fence. The play was described by baseball writers as a defensive gem. New York took that game, but Grover Cleveland Alexander won Game Six for St. Louis to tie the Series at three games each, then reportedly celebrated heavily that night. He was nevertheless inserted into Game Seven in the seventh inning and shut down the Yankees to win the game, 3–2, and clinch the Series. Ruth had hit his fourth home run of the Series earlier in the game and was the only Yankee to reach base off Alexander; he walked in the ninth inning before being thrown out to end the game when he attempted to steal second base. Although Ruth's attempt to steal second is often deemed a baserunning blunder, Creamer pointed out that the Yankees' chances of tying the game would have been greatly improved with a runner in scoring position. The 1926 World Series was also known for Ruth's promise to Johnny Sylvester, a hospitalized 11-year-old boy. Sylvester had been injured in a fall from a horse, and a friend of Sylvester's father gave the boy two autographed baseballs signed by Yankees and Cardinals. The friend relayed a promise from Ruth (who did not know the boy) that he would hit a home run for him. After the Series, Ruth visited the boy in the hospital. When the matter became public, the press greatly inflated it, and by some accounts, Ruth allegedly saved the boy's life by visiting him, emotionally promising to hit a home run, and doing so. Ruth's 1926 salary of \$52,000 was far more than that of any other baseball player, but he made at least twice as much in other income, including \$100,000 from 12 weeks of vaudeville. The 1927 New York Yankees team is considered one of the greatest squads to ever take the field. Known as Murderers' Row because of the power of its lineup, the team clinched first place on Labor Day, won a then-AL-record 110 games and took the AL pennant by 19 games. There was no suspense in the pennant race, and the nation turned its attention to Ruth's pursuit of his own single-season home run record of 59 round trippers. Ruth was not alone in this chase. Teammate Lou Gehrig proved to be a slugger who was capable of challenging Ruth for his home run crown; he tied Ruth with 24 home runs late in June. Through July and August, the dynamic duo was never separated by more than two home runs. Gehrig took the lead, 45–44, in the first game of a doubleheader at Fenway Park early in September; Ruth responded with two blasts of his own to retake the lead for good—Gehrig finished with 47. Even so, as of September 6, Ruth was still several games off his 1921 pace, and going into the final series against the Senators, had only 57. He hit two in the first game of the series, including one off Paul Hopkins, who was facing his first major league batter, to tie the record. The following day, September 30, he broke it with his 60th homer, in the eighth inning off Tom Zachary to break a 2–2 tie. "Sixty! Let's see some son of a bitch try to top that one", Ruth exulted after the game. In addition to his career-high 60 home runs, Ruth batted .356, drove in 164 runs and slugged .772.
In the 1927 World Series, the Yankees swept the Pittsburgh Pirates in four games; the National Leaguers were disheartened after watching the Yankees take batting practice before Game One, with ball after ball leaving Forbes Field. According to Appel, "The 1927 New York Yankees. Even today, the words inspire awe ... all baseball success is measured against the '27 team." The following season started off well for the Yankees, who led the league in the early going. But the Yankees were plagued by injuries, erratic pitching and inconsistent play. The Philadelphia Athletics, rebuilding after some lean years, erased the Yankees' big lead and even took over first place briefly in early September. The Yankees, however, regained first place when they beat the Athletics three out of four games in a pivotal series at Yankee Stadium later that month, and clinched the pennant in the final weekend of the season. Ruth's play in 1928 mirrored his team's performance. He got off to a hot start and on August 1, he had 42 home runs. This put him ahead of his 60 home run pace from the previous season. He then slumped for the latter part of the season, and he hit just twelve home runs in the last two months. Ruth's batting average also fell to .323, well below his career average. Nevertheless, he ended the season with 54 home runs. The Yankees swept the favored Cardinals in four games in the World Series, with Ruth batting .625 and hitting three home runs in Game Four, including one off Alexander. #### "Called shot" and final Yankee years (1929–1934) Before the 1929 season, Ruppert (who had bought out Huston in 1923) announced that the Yankees would wear uniform numbers to allow fans at cavernous Yankee Stadium to easily identify the players. The Cardinals and Indians had each experimented with uniform numbers; the Yankees were the first to use them on both home and away uniforms. Ruth batted third and was given number 3. According to a long-standing baseball legend, the Yankees adopted their now-iconic pinstriped uniforms in hopes of making Ruth look slimmer. In truth, though, they had been wearing pinstripes since 1915. Although the Yankees started well, the Athletics soon proved they were the better team in 1929, splitting two series with the Yankees in the first month of the season, then taking advantage of a Yankee losing streak in mid-May to gain first place. Although Ruth performed well, the Yankees were not able to catch the Athletics—Connie Mack had built another great team. Tragedy struck the Yankees late in the year as manager Huggins died at 51 of erysipelas, a bacterial skin infection, on September 25, only ten days after he had last directed the team. Despite their past differences, Ruth praised Huggins and described him as a "great guy". The Yankees finished second, 18 games behind the Athletics. Ruth hit .345 during the season, with 46 home runs and 154 RBIs. On October 17, the Yankees hired Bob Shawkey as manager; he was their fourth choice. Ruth had politicked for the job of player-manager, but Ruppert and Barrow never seriously considered him for the position. Stout deemed this the first hint Ruth would have no future with the Yankees once he retired as a player. Shawkey, a former Yankees player and teammate of Ruth, would prove unable to command Ruth's respect. On January 7, 1930, salary negotiations between the Yankees and Ruth quickly broke down. 
Having just concluded a three-year contract at an annual salary of \$70,000, Ruth promptly rejected both the Yankees' initial proposal of \$70,000 for one year and their 'final' offer of two years at \$75,000 per year—the latter figure equaling the annual salary of then US President Herbert Hoover; instead, Ruth demanded at least \$85,000 and three years. When asked why he thought he was "worth more than the President of the United States," Ruth responded: "Say, if I hadn't been sick last summer, I'd have broken hell out of that home run record! Besides, the President gets a four-year contract. I'm only asking for three." Exactly two months later, a compromise was reached, with Ruth settling for two years at an unprecedented \$80,000 per year. Ruth's salary was more than 2.4 times the next-highest salary that season, a record margin. In 1930, Ruth hit .359 with 49 home runs (his highest total after 1928) and 153 RBIs, and pitched his first game in nine years, a complete-game victory. Nevertheless, the Athletics won their second consecutive pennant and World Series, as the Yankees finished in third place, sixteen games back. At the end of the season, Shawkey was fired and replaced with former Cubs manager Joe McCarthy, though Ruth again unsuccessfully sought the job. McCarthy was a disciplinarian, but chose not to interfere with Ruth, who did not seek conflict with the manager. The team improved in 1931, but was no match for the Athletics, who won 107 games, 13+1⁄2 games in front of the Yankees. Ruth, for his part, hit .373 with 46 home runs and 163 RBIs. He had 31 doubles, his most since 1924. In the 1932 season, the Yankees went 107–47 and won the pennant. Ruth's effectiveness had decreased somewhat, but he still hit .341 with 41 home runs and 137 RBIs. Nevertheless, he was sidelined twice because of injuries during the season. The Yankees faced the Cubs, McCarthy's former team, in the 1932 World Series. There was bad blood between the two teams, as the Yankees resented the Cubs awarding only half a World Series share to Mark Koenig, a former Yankee. The games at Yankee Stadium had not been sellouts; both were won by the home team, with Ruth collecting two singles, but scoring four runs as he was walked four times by the Cubs pitchers. In Chicago, Ruth was resentful of the hostile crowds that met the Yankees' train and jeered them at the hotel. The crowd for Game Three included New York Governor Franklin D. Roosevelt, the Democratic candidate for president, who sat with Chicago Mayor Anton Cermak. Many in the crowd threw lemons at Ruth, a sign of derision, and others (as well as the Cubs themselves) shouted abuse at Ruth and other Yankees. They were briefly silenced when Ruth hit a three-run home run off Charlie Root in the first inning, but soon revived, and the Cubs tied the score at 4–4 in the fourth inning, partly due to Ruth's fielding error in the outfield. When Ruth came to the plate in the top of the fifth, the Chicago crowd and players, led by pitcher Guy Bush, were screaming insults at Ruth. With the count at two balls and one strike, Ruth gestured, possibly in the direction of center field, and after the next pitch (a strike), may have pointed there with one hand. Ruth hit the fifth pitch over the center field fence; estimates were that it traveled nearly 500 feet (150 m).
Whether or not Ruth intended to indicate where he planned to (and did) hit the ball (Charlie Devens, who, in 1999, was interviewed as Ruth's surviving teammate in that game, did not think so), the incident has gone down in legend as Babe Ruth's called shot. The Yankees won Game Three, and the following day clinched the Series with another victory. During that game, Bush hit Ruth on the arm with a pitch, causing words to be exchanged and provoking a game-winning Yankee rally. Ruth remained productive in 1933. He batted .301, with 34 home runs, 103 RBIs, and a league-leading 114 walks, as the Yankees finished in second place, seven games behind the Senators. Athletics manager Connie Mack selected him to play right field in the first Major League Baseball All-Star Game, held on July 6, 1933, at Comiskey Park in Chicago. He hit the first home run in the All-Star Game's history, a two-run blast against Bill Hallahan during the third inning, which helped the AL win the game 4–2. During the final game of the 1933 season, as a publicity stunt organized by his team, Ruth was called upon and pitched a complete game victory against the Red Sox, his final appearance as a pitcher. Despite unremarkable pitching numbers, Ruth had a 5–0 record in five games for the Yankees, raising his career totals to 94–46. In 1934, Ruth played in his last full season with the Yankees. By this time, years of high living were starting to catch up with him. His conditioning had deteriorated to the point that he could no longer field or run. He accepted a pay cut to \$35,000 from Ruppert, but he was still the highest-paid player in the major leagues. He could still handle a bat, recording a .288 batting average with 22 home runs. However, Reisler described these statistics as "merely mortal" by Ruth's previous standards. Ruth was selected to the AL All-Star team for the second consecutive year, even though he was in the twilight of his career. During the game, New York Giants pitcher Carl Hubbell struck out Ruth and four other future Hall-of-Famers consecutively. The Yankees finished second again, seven games behind the Tigers. ### Boston Braves (1935) By this time, Ruth knew he was nearly finished as a player. He desired to remain in baseball as a manager. He was often spoken of as a possible candidate as managerial jobs opened up, but in 1932, when he was mentioned as a contender for the Red Sox position, Ruth stated that he was not yet ready to leave the field. There were rumors that Ruth was a likely candidate each time when the Cleveland Indians, Cincinnati Reds, and Detroit Tigers were looking for a manager, but nothing came of them. Just before the 1934 season, Ruppert offered to make Ruth the manager of the Yankees' top minor-league team, the Newark Bears, but he was talked out of it by his wife, Claire, and his business manager, Christy Walsh. Tigers owner Frank Navin seriously considered acquiring Ruth and making him player-manager. However, Ruth insisted on delaying the meeting until he came back from a trip to Hawaii. Navin was unwilling to wait. Ruth opted to go on his trip, despite Barrow advising him that he was making a mistake; in any event, Ruth's asking price was too high for the notoriously tight-fisted Navin. The Tigers' job ultimately went to Mickey Cochrane. Early in the 1934 season, Ruth openly campaigned to become the Yankees manager. However, the Yankee job was never a serious possibility. Ruppert always supported McCarthy, who would remain in his position for another 12 seasons. 
The relationship between Ruth and McCarthy had been lukewarm at best, and Ruth's managerial ambitions further chilled their interpersonal relations. By the end of the season, Ruth hinted that he would retire unless Ruppert named him manager of the Yankees. When the time came, Ruppert wanted Ruth to leave the team without drama or hard feelings. During the 1934–35 offseason, Ruth circled the world with his wife; the trip included a barnstorming tour of the Far East. At his final stop in the United Kingdom before returning home, Ruth was introduced to cricket by Australian player Alan Fairfax, and after having little luck in a cricketer's stance, he stood as a baseball batter and launched some massive shots around the field, destroying the bat in the process. Although Fairfax regretted that he did not have the time to make Ruth a cricket player, Ruth had lost any interest in such a career upon learning that the best batsmen made only about \$40 per week. Also during the offseason, Ruppert had been sounding out the other clubs in hopes of finding one that would be willing to take Ruth as a manager and/or a player. However, the only serious offer came from Athletics owner-manager Connie Mack, who gave some thought to stepping down as manager in favor of Ruth. Mack later dropped the idea, saying that Ruth's wife would be running the team in a month if Ruth ever took over. While the barnstorming tour was underway, Ruppert began negotiating with Boston Braves owner Judge Emil Fuchs, who wanted Ruth as a gate attraction. The Braves had enjoyed modest recent success, finishing fourth in the National League in both 1933 and 1934, but the team drew poorly at the box office. Unable to afford the rent at Braves Field, Fuchs had considered holding dog races there when the Braves were not at home, only to be turned down by Landis. After a series of phone calls, letters, and meetings, the Yankees traded Ruth to the Braves on February 26, 1935. Ruppert had stated that he would not release Ruth to go to another team as a full-time player. For this reason, it was announced that Ruth would become a team vice president and would be consulted on all club transactions, in addition to playing. He was also made assistant manager to Braves skipper Bill McKechnie. In a long letter to Ruth a few days before the press conference, Fuchs promised Ruth a share in the Braves' profits, with the possibility of becoming co-owner of the team. Fuchs also raised the possibility of Ruth succeeding McKechnie as manager, perhaps as early as 1936. Ruppert called the deal "the greatest opportunity Ruth ever had". There was considerable attention as Ruth reported for spring training. He did not hit his first home run of the spring until after the team had left Florida and was beginning the trip north, in Savannah. He hit two in an exhibition game against the Bears. Amid much press attention, Ruth played his first home game in Boston in over 16 years. Before an opening-day crowd of over 25,000, including five of New England's six state governors, Ruth accounted for all the Braves' runs in a 4–2 defeat of the New York Giants, hitting a two-run home run, singling to drive in a third run and later in the inning scoring the fourth. Although age and weight had slowed him, he made a running catch in left field that sportswriters deemed the defensive highlight of the game. Ruth had two hits in the second game of the season, but it quickly went downhill both for him and the Braves from there.
The season soon settled down to a routine of Ruth performing poorly on the few occasions he even played at all. As April passed into May, Ruth's physical deterioration became even more pronounced. While he remained productive at the plate early on, he could do little else. His conditioning had become so poor that he could barely trot around the bases. He made so many errors that three Braves pitchers told McKechnie they would not take the mound if he was in the lineup. Before long, Ruth stopped hitting as well. He grew increasingly annoyed that McKechnie ignored most of his advice. McKechnie later said that Ruth's presence made enforcing discipline nearly impossible. Ruth soon realized that Fuchs had deceived him, and had no intention of making him manager or giving him any significant off-field duties. He later said his only duties as vice president consisted of making public appearances and autographing tickets. Ruth also found out that far from giving him a share of the profits, Fuchs wanted him to invest some of his money in the team in a last-ditch effort to improve its balance sheet. As it turned out, Fuchs and Ruppert had both known all along that Ruth's non-playing positions were meaningless. By the end of the first month of the season, Ruth concluded he was finished even as a part-time player. As early as May 12, he asked Fuchs to let him retire. Ultimately, Fuchs persuaded Ruth to remain at least until after the Memorial Day doubleheader in Philadelphia. In the interim came a western road trip, during which the rival teams had scheduled days to honor him. In Chicago and St. Louis, Ruth performed poorly, and his batting average sank to .155, with only two additional home runs for a total of three on the season so far. In the first two games in Pittsburgh, Ruth had only one hit, though a long fly caught by Paul Waner probably would have been a home run in any other ballpark besides Forbes Field. Ruth played in the third game of the Pittsburgh series on May 25, 1935, and added one more tale to his playing legend. Ruth went 4-for-4, including three home runs, though the Braves lost the game 11–7. The last two were off Ruth's old Cubs nemesis, Guy Bush. The final home run, both of the game and of Ruth's career, sailed out of the park over the right field upper deck–the first time anyone had hit a fair ball completely out of Forbes Field. Ruth was urged to make this his last game, but he had given his word to Fuchs and played in Cincinnati and Philadelphia. The first game of the doubleheader in Philadelphia—the Braves lost both—was his final major league appearance. Ruth retired on June 2 after an argument with Fuchs. He finished 1935 with a .181 average—easily his worst as a full-time position player—and the final six of his 714 home runs. The Braves, 10–27 when Ruth left, finished 38–115, at .248 the worst winning percentage in modern National League history. Insolvent like his team, Fuchs gave up control of the Braves before the end of the season; the National League took over the franchise at the end of the year. Of the five members of the inaugural class of the Baseball Hall of Fame in 1936 (Ty Cobb, Honus Wagner, Christy Mathewson, Walter Johnson and Ruth himself), only Ruth was not given an offer to manage a baseball team. ## Retirement Although Fuchs had given Ruth his unconditional release, no major league team expressed an interest in hiring him in any capacity.
Ruth still hoped to be hired as a manager if he could not play anymore, but only one managerial position, Cleveland, became available between Ruth's retirement and the end of the 1937 season. Asked if he had considered Ruth for the job, Indians owner Alva Bradley replied negatively. Team owners and general managers assessed Ruth's flamboyant personal habits as a reason to exclude him from a managerial job; Barrow said of him, "How can he manage other men when he can't even manage himself?" Creamer believed Ruth was unfairly treated in never being given an opportunity to manage a major league club. The author believed there was not necessarily a relationship between personal conduct and managerial success, noting that John McGraw, Billy Martin, and Bobby Valentine were winners despite character flaws. Ruth played much golf and in a few exhibition baseball games, where he demonstrated a continuing ability to draw large crowds. This appeal contributed to the Dodgers hiring him as first base coach in 1938. When Ruth was hired, Brooklyn general manager Larry MacPhail made it clear that Ruth would not be considered for the manager's job if, as expected, Burleigh Grimes retired at the end of the season. Although much was said about what Ruth could teach the younger players, in practice, his duties were to appear on the field in uniform and encourage base runners—he was not called upon to relay signs. In August, shortly before the baseball rosters expanded, Ruth sought an opportunity to return as an active player in a pinch hitting role. Ruth often took batting practice before games and felt that he could take on the limited role. Grimes denied his request, citing Ruth's poor vision in his right eye, his inability to run the bases, and the risk of an injury to Ruth. Ruth got along well with everyone except team captain Leo Durocher, who was hired as Grimes' replacement at season's end. Ruth then left his job as a first base coach and would never again work in any capacity in the game of baseball. On July 4, 1939, Ruth spoke on Lou Gehrig Appreciation Day at Yankee Stadium as members of the 1927 Yankees and a sellout crowd turned out to honor the first baseman, who was forced into premature retirement by ALS, which would kill him two years later. The next week, Ruth went to Cooperstown, New York, for the formal opening of the Baseball Hall of Fame. Three years earlier, he was one of the first five players elected to the hall. As radio broadcasts of baseball games became popular, Ruth sought a job in that field, arguing that his celebrity and knowledge of baseball would assure large audiences, but he received no offers. During World War II, he made many personal appearances to advance the war effort, including his last appearance as a player at Yankee Stadium, in a 1943 exhibition for the Army-Navy Relief Fund. He hit a long fly ball off Walter Johnson; the blast left the field, curving foul, but Ruth circled the bases anyway. In 1946, he made a final effort to gain a job in baseball when he contacted new Yankees boss MacPhail, but he was sent a rejection letter. In 1999, Ruth's granddaughter, Linda Tosetti, and his stepdaughter, Julia Ruth Stevens, said that Babe's inability to land a managerial role with the Yankees caused him to feel hurt and slump into a severe depression. Ruth started playing golf when he was 20 and continued playing the game throughout his life. His appearance at many New York courses drew spectators and headlines. 
Rye Golf Club was among the courses he played with teammate Lyn Lary in June 1933. With birdies on three holes, Ruth posted the best score. In retirement, he became one of the first celebrity golfers participating in charity tournaments, including one where he was pitted against Ty Cobb. ## Personal life Ruth met Helen Woodford (1897–1929), by some accounts, in a coffee shop in Boston, where she was a waitress. They married as teenagers on October 17, 1914. Although Ruth later claimed to have been married in Elkton, Maryland, records show that they were married at St. Paul's Catholic Church in Ellicott City. They adopted a daughter, Dorothy (1921–1989), in 1921. Ruth and Helen separated around 1925, reportedly because of Ruth's repeated infidelities and neglect. They appeared in public as a couple for the last time during the 1926 World Series. Helen died in January 1929, at age 31, in a house fire in Watertown, Massachusetts; the house was owned by Edward Kinder, a dentist with whom she had been living as "Mrs. Kinder". In her book, My Dad, the Babe, Dorothy claimed that she was Ruth's biological child by a mistress named Juanita Jennings. In 1980, Juanita admitted this to Dorothy and to Dorothy's stepsister, Julia Ruth Stevens, who was at the time already very ill. On April 17, 1929, three months after the death of his first wife, Ruth married actress and model Claire Merritt Hodgson (1897–1976) and adopted her daughter Julia (1916–2019). It was the second and final marriage for both parties. Claire, much unlike Helen, was well-travelled and educated, and went on to put structure into Ruth's life, as Miller Huggins did with him on the field. By one account, Julia and Dorothy were, through no fault of their own, the reason for the seven-year rift in Ruth's relationship with teammate Lou Gehrig. Sometime in 1932, during a conversation that she assumed was private, Gehrig's mother remarked, "It's a shame [Claire] doesn't dress Dorothy as nicely as she dresses her own daughter." When the comment got back to Ruth, he angrily told Gehrig to tell his mother to mind her own business. Gehrig, in turn, took offense at what he perceived as Ruth's comment about his mother. The two men reportedly never spoke off the field until they reconciled at Yankee Stadium on Lou Gehrig Appreciation Day, July 4, 1939, shortly after Gehrig's retirement from baseball. Although Ruth was married throughout most of his baseball career, when team co-owner Tillinghast 'Cap' Huston asked him to tone down his lifestyle, Ruth said, "I'll promise to go easier on drinking and to get to bed earlier, but not for you, fifty thousand dollars, or two-hundred and fifty thousand dollars will I give up women. They're too much fun." A detective that the Yankees hired to follow him one night in Chicago reported that Ruth had been with six women. Ping Bodie said that he was not Ruth's roommate while traveling; "I room with his suitcase". Before the start of the 1922 season, Ruth had signed a three-year contract at \$52,000 per year with an option to renew for two additional years. His performance during the 1922 season had been disappointing, attributed in part to his drinking and late-night hours. After the end of the 1922 season, he was asked to sign a contract addendum with a morals clause. Ruth and Ruppert signed it on November 11, 1922. It called for Ruth to abstain entirely from the use of intoxicating liquors, and to not stay up later than 1:00 a.m. during the training and playing season without permission of the manager.
Ruth was also enjoined from any action or misbehavior that would compromise his ability to play baseball. ## Cancer and death (1946–1948) As early as the war years, doctors had cautioned Ruth to take better care of his health, and he grudgingly followed their advice, limiting his drinking and not going on a proposed trip to support the troops in the South Pacific. In 1946, Ruth began experiencing severe pain over his left eye and had difficulty swallowing. In November 1946, Ruth entered French Hospital in New York for tests, which revealed that he had an inoperable malignant tumor at the base of his skull and in his neck. The malady was a lesion known as nasopharyngeal carcinoma, or "lymphoepithelioma." His name and fame gave him access to experimental treatments, and he was one of the first cancer patients to receive both drugs and radiation treatment simultaneously. Having lost 80 pounds (36 kg), he was discharged from the hospital in February and went to Florida to recuperate. He returned to New York and Yankee Stadium after the season started. The new commissioner, Happy Chandler (Judge Landis had died in 1944), proclaimed April 27, 1947, Babe Ruth Day around the major leagues, with the most significant observance to be at Yankee Stadium. A number of teammates and others spoke in honor of Ruth, who briefly addressed the crowd of almost 60,000. By then, his voice was a soft whisper with a very low, raspy tone. Around this time, developments in chemotherapy offered some hope for Ruth. The doctors had not told Ruth he had cancer because of his family's fear that he might do himself harm. They treated him with pteroyl triglutamate (Teropterin), a folic acid derivative; he may have been the first human subject. Ruth showed dramatic improvement during the summer of 1947, so much so that his case was presented by his doctors at a scientific meeting, without using his name. He was able to travel around the country, doing promotional work for the Ford Motor Company on American Legion Baseball. He appeared at another day in his honor at Yankee Stadium in September, but was not well enough to pitch in an old-timers game as he had hoped. The improvement was only a temporary remission, and by late 1947, Ruth was unable to help with the writing of his autobiography, The Babe Ruth Story, which was almost entirely ghostwritten. In and out of the hospital in Manhattan, he left for Florida in February 1948, doing what activities he could. After six weeks he returned to New York to appear at a book-signing party. He also traveled to California to witness the filming of the movie based on the book. On June 5, 1948, a "gaunt and hollowed out" Ruth visited Yale University to donate a manuscript of The Babe Ruth Story to its library. At Yale, he met with future president George H. W. Bush, who was the captain of the Yale baseball team. On June 13, Ruth visited Yankee Stadium for the final time in his life, appearing at the 25th-anniversary celebrations of "The House that Ruth Built". By this time he had lost much weight and had difficulty walking. Introduced along with his surviving teammates from 1923, Ruth used a bat as a cane. Nat Fein's photo of Ruth taken from behind, standing near home plate and facing "Ruthville" (right field), became one of baseball's most famous and widely circulated photographs, and won the Pulitzer Prize. Ruth made one final trip on behalf of American Legion Baseball, then entered Memorial Hospital, where he would die. He was never told he had cancer.
But before his death, he surmised it. He was able to leave the hospital for a few short trips, including a final visit to Baltimore. On July 26, 1948, Ruth left the hospital to attend the premiere of the film The Babe Ruth Story. Shortly thereafter, he returned to the hospital for the final time. He was barely able to speak. Ruth's condition gradually grew worse, and only a few visitors were permitted to see him, one of whom was National League president and future Commissioner of Baseball Ford Frick. "Ruth was so thin it was unbelievable. He had been such a big man and his arms were just skinny little bones, and his face was so haggard", Frick said years later. Thousands of New Yorkers, including many children, stood vigil outside the hospital during Ruth's final days. On August 16, 1948, at 8:01 p.m., Ruth died in his sleep at the age of 53. His open casket was placed on display in the rotunda of Yankee Stadium, where it remained for two days; 77,000 people filed past to pay him tribute. His Requiem Mass was celebrated by Francis Cardinal Spellman at St. Patrick's Cathedral; a crowd estimated at 75,000 waited outside. Ruth is buried with his second wife, Claire, on a hillside in Section 25 at the Gate of Heaven Cemetery in Hawthorne, New York. ## Memorial and museum On April 19, 1949, the Yankees unveiled a granite monument in Ruth's honor in center field of Yankee Stadium. The monument was located in the field of play next to a flagpole and similar tributes to Huggins and Gehrig until the stadium was remodeled from 1974 to 1975, which resulted in the outfield fences moving inward and enclosing the monuments from the playing field. This area was known thereafter as Monument Park. Yankee Stadium, "the House that Ruth Built", was replaced after the 2008 season with a new Yankee Stadium across the street from the old one; Monument Park was subsequently moved to the new venue behind the center field fence. Ruth's uniform number 3 has been retired by the Yankees, and he is one of five Yankees players or managers to have a granite monument within the stadium. The Babe Ruth Birthplace Museum is located at 216 Emory Street, a Baltimore row house where Ruth was born, and three blocks west of Oriole Park at Camden Yards, where the AL's Baltimore Orioles play. The property was restored and opened to the public in 1973 by the non-profit Babe Ruth Birthplace Foundation, Inc. Ruth's widow, Claire, his two daughters, Dorothy and Julia, and his sister, Mamie, helped select and install exhibits for the museum. ## Impact Ruth was the first baseball star to be the subject of overwhelming public adulation. Baseball had been known for star players such as Ty Cobb and "Shoeless Joe" Jackson, but both men had uneasy relations with fans. In Cobb's case, the incidents were sometimes marked by violence. Ruth's biographers agreed that he benefited from the timing of his ascension to "Home Run King". The country had been hit hard by both the war and the 1918 flu pandemic and longed for something to help put these traumas behind it. Ruth also resonated in a country which felt, in the aftermath of the war, that it took second place to no one. Montville argued that Ruth was a larger-than-life figure who was capable of unprecedented athletic feats in the nation's largest city. Ruth became an icon of the social changes that marked the early 1920s. 
In his history of the Yankees, Glenn Stout writes that "Ruth was New York incarnate—uncouth and raw, flamboyant and flashy, oversized, out of scale, and absolutely unstoppable". During his lifetime, Ruth became a symbol of the United States. During World War II Japanese soldiers yelled in English, "To hell with Babe Ruth", to anger American soldiers. Ruth replied that he hoped "every Jap that mention[ed] my name gets shot". Creamer recorded that "Babe Ruth transcended sport and moved far beyond the artificial limits of baselines and outfield fences and sports pages". Wagenheim stated, "He appealed to a deeply rooted American yearning for the definitive climax: clean, quick, unarguable." According to Glenn Stout, "Ruth's home runs were exalted, uplifting experience that meant more to fans than any runs they were responsible for. A Babe Ruth home run was an event unto itself, one that meant anything was possible." Although Ruth was not just a power hitter—he was the Yankees' best bunter, and an excellent outfielder—his penchant for hitting home runs altered how baseball is played. Prior to 1920, home runs were unusual, and managers tried to win games by getting a runner on base and bringing him around to score through such means as the stolen base, the bunt, and the hit and run. Advocates of what was dubbed "inside baseball", such as Giants manager McGraw, disliked the home run, considering it a blot on the purity of the game. According to sportswriter W. A. Phelon, writing after the 1920 season, Ruth's breakout performance that season and the response in excitement and attendance "settled, for all time to come, that the American public is nuttier over the Home Run than the Clever Fielding or the Hitless Pitching. Viva el Home Run and two times viva Babe Ruth, exponent of the home run, and overshadowing star." Bill James states, "When the owners discovered that the fans liked to see home runs, and when the foundations of the games were simultaneously imperiled by disgrace [in the Black Sox Scandal], then there was no turning back." While a few, such as McGraw and Cobb, decried the passing of the old-style play, teams quickly began to seek and develop sluggers. According to sportswriter Grantland Rice, only two sports figures of the 1920s approached Ruth in popularity—boxer Jack Dempsey and racehorse Man o' War. One of the factors that contributed to Ruth's broad appeal was the uncertainty about his family and early life. Ruth appeared to exemplify the American success story, that even an uneducated, unsophisticated youth, without any family wealth or connections, can do something better than anyone else in the world. Montville writes that "the fog [surrounding his childhood] will make him forever accessible, universal. He will be the patron saint of American possibility." Similarly, the fact that Ruth played in the pre-television era, when a relatively small portion of his fans had the opportunity to see him play, allowed his legend to grow through word of mouth and the hyperbole of sports reporters. Reisler states that recent sluggers who surpassed Ruth's 60-home run mark, such as Mark McGwire and Barry Bonds, generated much less excitement than when Ruth repeatedly broke the single-season home run record in the 1920s. Ruth dominated a relatively small sports world, while Americans of the present era have many sports available to watch. ## Legacy Creamer describes Ruth as "a unique figure in the social history of the United States".
Thomas Barthel describes him as one of the first celebrity athletes; numerous biographies have portrayed him as "larger than life". He entered the language: a dominant figure in a field, whether within or outside sports, is often referred to as "the Babe Ruth" of that field. Similarly, "Ruthian" has come to mean, in sports, "colossal, dramatic, prodigious, magnificent; with great power". He was the first athlete to make more money from endorsements and other off-the-field activities than from his sport. In 2006, Montville stated that more books have been written about Ruth than about any other member of the Baseball Hall of Fame. At least five of these books (including Creamer's and Wagenheim's) were written in 1973 and 1974. The books were timed to capitalize on the increase in public interest in Ruth as Hank Aaron approached his career home run mark, which he broke on April 8, 1974. As he approached Ruth's record, Aaron stated, "I can't remember a day this year or last when I did not hear the name of Babe Ruth." Montville suggested that Ruth is probably even more popular today than he was when his career home run record was broken by Aaron. The long ball era that Ruth started continues in baseball, to the delight of the fans. Owners build ballparks to encourage home runs, which are featured on SportsCenter and Baseball Tonight each evening during the season. The questions of performance-enhancing drug use, which dogged later home run hitters such as McGwire and Bonds, do nothing to diminish Ruth's reputation; his overindulgences with beer and hot dogs seem part of a simpler time. In various surveys and rankings, Ruth has been named the greatest baseball player of all time. In 1998, The Sporting News ranked him number one on the list of "Baseball's 100 Greatest Players". In 1999, baseball fans named Ruth to the Major League Baseball All-Century Team. He was named baseball's Greatest Player Ever in a ballot commemorating the 100th anniversary of professional baseball in 1969. The Associated Press reported in 1993 that Muhammad Ali was tied with Babe Ruth as the most recognized athlete in America. In a 1999 ESPN poll, he was ranked as the second-greatest U.S. athlete of the century, behind Michael Jordan. In 1983, the United States Postal Service honored Ruth with the issuance of a twenty-cent stamp. Several of the most expensive items of sports memorabilia and baseball memorabilia ever sold at auction are associated with Ruth. As of May 2022, Ruth's 1920 Yankees jersey, which sold for \$4,415,658 in 2012, is the third most expensive piece of sports memorabilia ever sold, after Diego Maradona's 1986 World Cup jersey and Pierre de Coubertin's original 1892 Olympic Manifesto. The bat with which he hit the first home run at Yankee Stadium is in The Guinness Book of World Records as the most expensive baseball bat sold at auction, having fetched \$1.265 million on December 2, 2004. A hat of Ruth's from the 1934 season set a record for a baseball cap when David Wells sold it at auction for \$537,278 in 2012. In 2017, Charlie Sheen sold Ruth's 1927 World Series ring for \$2,093,927 at auction. It easily broke the record for a championship ring previously set when Julius Erving's 1974 ABA championship ring sold for \$460,741 in 2011. One long-term survivor of the craze over Ruth may be the Baby Ruth candy bar.
The original company to market the confectionery, the Curtiss Candy Company, maintained that the bar was named after Ruth Cleveland, daughter of former president Grover Cleveland. She died in 1904 and the bar was first marketed in 1921, at the height of the craze over Ruth. He later sought to market candy bearing his name; he was refused a trademark because of the Baby Ruth bar. Corporate files from 1921 are no longer extant; the brand has changed hands several times and is now owned by Ferrara Candy Company. The Ruth estate licensed his likeness for use in an advertising campaign for Baby Ruth in 1995. In 2005, the Baby Ruth bar became the official candy bar of Major League Baseball in a marketing arrangement. In 2018, President Donald Trump announced that Ruth, along with Elvis Presley and Antonin Scalia, would posthumously receive the Presidential Medal of Freedom. Montville describes the continuing relevance of Babe Ruth in American culture, more than three-quarters of a century after he last swung a bat in a major league game: > The fascination with his life and career continues. He is a bombastic, sloppy hero from our bombastic, sloppy history, origins undetermined, a folk tale of American success. His moon face is as recognizable today as it was when he stared out at Tom Zachary on a certain September afternoon in 1927. If sport has become the national religion, Babe Ruth is the patron saint. He stands at the heart of the game he played, the promise of a warm summer night, a bag of peanuts, and a beer. And just maybe, the longest ball hit out of the park. ## See also - List of career achievements by Babe Ruth - Babe Ruth Award - Babe Ruth Home Run Award - Babe Ruth League - DHL Hometown Heroes - List of Major League Baseball home run records - List of Major League Baseball runs batted in records - The Year Babe Ruth Hit 104 Home Runs - Babe's Dream statue in Baltimore, Maryland
67,423,996
September 2019 events in the U.S. repo market
1,137,073,659
Financial event affecting interest rates
[ "2010s economic history", "2019 controversies in the United States", "2019 in economics", "Banking controversies", "Economic history of the United States", "Federal Reserve Bank of New York", "Federal Reserve System", "Financial controversies", "Pricing controversies", "September 2019 events in the United States" ]
On September 17, 2019, interest rates on overnight repurchase agreements (or "repos"), which are short-term loans between financial institutions, experienced a sudden and unexpected spike. A measure of the interest rate on overnight repos in the United States, the Secured Overnight Financing Rate (SOFR), increased from 2.43 percent on September 16 to 5.25 percent on September 17. During the trading day, interest rates reached as high as 10 percent. The activity also affected the interest rates on unsecured loans between financial institutions, and the Effective Federal Funds Rate (EFFR), which serves as a measure for such interest rates, moved above its target range determined by the Federal Reserve. This activity prompted an emergency intervention by the Federal Reserve Bank of New York, which injected \$75 billion in liquidity into the repo markets on September 17 and continued to do so every morning for the rest of the week. On September 19, the Federal Reserve's Federal Open Market Committee also lowered the interest paid on bank reserves. These actions were ultimately successful in calming the markets and, by September 20, rates had returned to a stable level. The Federal Reserve Bank of New York continued to regularly provide liquidity to the repo market until June 2020. The causes of the rate spike were not immediately clear. Economists later identified its main cause to be a temporary shortage of cash available in the financial system, which was itself caused by two events taking place on September 16: the deadline for the payment of quarterly corporate taxes and the issuing of new Treasury securities. The effects of this temporary shortage were exacerbated by the declining level of reserves in the banking system. Other contributing factors have been suggested by economists and observers. ## Background ### Overnight lending Banks and financial institutions analyze their cash reserves on a daily basis, and assess whether they have an excess or a deficit of cash with respect to their needs. Banks that do not have sufficient cash to meet their liquidity needs borrow it from banks and money market funds with excess cash. This type of lending generally takes place overnight, which means that the cash is repaid the next day. ### The repo market Repurchase agreements, commonly referred to as repos, are a type of loan that is collateralized by securities and is generally provided for a short period of time. Although repos are economically equivalent to secured loans, they are legally structured as a sale and subsequent repurchase of securities. There are two steps in a repo transaction. First, the borrower sells their securities to the lender and receives cash in exchange. Second, the borrower repurchases the securities from the lender by repaying the cash amount they received plus an additional amount, which is the interest. This structure allows lenders to provide loans with very little risk, and borrowers to borrow at low rates. The repo market is used by banks, financial institutions and institutional investors to borrow cash to meet their overnight liquidity needs or to finance positions in the market. In this context, the repurchased securities are most often Treasury securities, but can also be agency securities and mortgage-backed securities. The broad measure of the interest rate for overnight loans collateralized by Treasury securities is the Secured Overnight Financing Rate (SOFR), which is administered by the Federal Reserve Bank of New York.
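To make the two-step structure concrete, the sketch below works through the cash flows of a single overnight repo. It is an illustrative example only: the principal amount is invented, and the ACT/360 day-count and simple-interest calculation are assumed conventions for the sake of the example, not a description of how any particular trade was documented. The rates used are the SOFR prints quoted above (2.43 and 5.25 percent) and the 10 percent intraday high.

```python
# Illustrative overnight repo cash flows (hypothetical principal; assumes an
# ACT/360 day count and simple interest -- conventions assumed for this
# example rather than taken from the article).

def repurchase_price(cash_borrowed: float, annual_rate: float, days: int = 1) -> float:
    """Amount the borrower pays to buy back its securities: principal plus interest."""
    return cash_borrowed * (1 + annual_rate * days / 360)

if __name__ == "__main__":
    principal = 100_000_000  # $100 million borrowed against Treasury collateral
    for label, rate in [("Sept 16 SOFR", 0.0243), ("Sept 17 SOFR", 0.0525), ("intraday high", 0.10)]:
        cost = repurchase_price(principal, rate) - principal
        print(f"{label:>14}: overnight interest of roughly ${cost:,.0f}")
```

At this hypothetical size, the move from 2.43 percent to the intraday high of 10 percent roughly quadruples the one-night cost of funding a position, which is why even a short-lived spike matters to institutions financing large Treasury inventories.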
The daily volume of repo transactions is generally estimated to be around \$1 trillion; hence, according to economists at the Bank for International Settlements, "any sustained disruption in this market [...] could quickly ripple through the financial system". The U.S. repo market is broadly divided into two segments: the tri-party market and the bilateral market. The tri-party market involves large, high-quality dealers borrowing cash from money market funds. This segment is called "tri-party" because a third party, the bank BNY Mellon, provides various services to market participants. The bilateral market involves large dealers lending to borrowers, such as smaller dealers and hedge funds. A common practice is for dealers to borrow cash on the tri-party market to lend it to their clients on the bilateral market. ### The federal funds market Federal funds are funds that are loaned or borrowed by financial institutions overnight to meet their liquidity needs. Unlike repos, federal funds are unsecured. According to economist Frederic Mishkin and finance professor Stanley Eakins, the term "federal funds" is misleading: "federal funds have nothing to do with the federal government", and the term comes from the fact that these funds are held at Federal Reserve banks. The repo market and the federal funds market are theoretically separate. However, there are significant links and interactions between the two, and shocks in one market can transmit themselves to the other. The interest rate on federal funds is an important component of U.S. monetary policy. To implement its monetary policy, the Federal Reserve's Federal Open Market Committee determines a target range for the federal funds rate. Although the Federal Reserve cannot directly control the rate, which is primarily determined by the forces of supply and demand, it can influence it by adjusting the interest rate on reserve balances held by banks at the Federal Reserve, or by buying (or selling) securities from (or to) banks. The measure of the interest rate on federal funds is the Effective Federal Funds Rate (EFFR), which is calculated as the effective median interest rate of overnight federal funds transactions on any business day. It is published by the Federal Reserve Bank of New York. ## Events ### Rates increase Before September 2019, both the SOFR and the EFFR were quite stable. The EFFR had remained within the FOMC's target range on all but one day since 2015. The SOFR was slightly more volatile, especially around quarter-end reporting dates, but had rarely moved more than 0.2 percentage points on a single day. On Monday, September 16, the SOFR was at 2.43 percent, an increase of 0.13 percentage points compared to the previous business day (Friday, September 13). The EFFR was at 2.25 percent, an increase of 0.11 percentage points from September 13. The EFFR was trading at the upper limit of the Federal Reserve's target range, which was 2 to 2.25 percent. On the morning of Tuesday, September 17, interest rates on overnight repo transactions experienced a sudden and unexpected increase. During the trading day, interest rates on overnight repo transactions went as high as 10 percent, with the top 1 percent of transactions reaching 9 percent. The SOFR benchmark rose by 2.82 percentage points, reaching 5.25 percent for the day. The strains in the repo market quickly spilled into the federal funds market, and the EFFR moved above the top of its target range, to 2.3 percent.
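A note on how the benchmarks quoted here are compiled: both SOFR and the EFFR are published as volume-weighted medians of the day's reported transactions, which is why the benchmark printed at 5.25 percent even though the most extreme trades cleared near 10 percent. The sketch below illustrates a volume-weighted median on invented transaction data; the (rate, volume) pairs are hypothetical and are not the actual September 17 trade distribution.

```python
# Volume-weighted median of transaction rates, the statistic behind the
# published SOFR/EFFR figures. The (rate, volume) pairs below are invented.

def volume_weighted_median(trades: list[tuple[float, float]]) -> float:
    """Return the rate at which half of the total traded volume clears at or below."""
    trades = sorted(trades)                      # sort by rate
    half_volume = sum(v for _, v in trades) / 2
    cumulative = 0.0
    for rate, volume in trades:
        cumulative += volume
        if cumulative >= half_volume:
            return rate
    raise ValueError("no trades supplied")

if __name__ == "__main__":
    # rate in percent, volume in $ billions -- hypothetical distribution
    trades = [(2.5, 350.0), (3.0, 250.0), (4.0, 200.0), (6.0, 120.0), (9.5, 80.0)]
    print(volume_weighted_median(trades))  # 3.0: the handful of extreme trades barely move the median
```

This is also why the intraday peaks of 9 to 10 percent appear in the narrative of the day but not in the published benchmark itself.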
### Response by the Federal Reserve Beginning on the morning of Tuesday, September 17, the Federal Reserve Bank of New York (or New York Fed) began to take action to restore market stability. Shortly after 9 a.m., it announced that it would begin to lend cash to borrowers on the repo market, in an amount of up to \$75 billion. The New York Fed would accept as collateral Treasury securities, agency debt securities and agency mortgage-backed securities. Interest rates began to decrease shortly after the announcement. Most repo trading occurs early in the morning, and had therefore taken place before the New York Fed's announcement: as a result, only \$53 billion was borrowed from the New York Fed by market participants. On the afternoon of September 17, repo rates remained relatively elevated, since market participants were uncertain whether the New York Fed would continue its intervention on the following days. These concerns were alleviated when the New York Fed announced at 8:15 a.m. the following morning (Wednesday, September 18) that it would conduct a second \$75 billion overnight lending operation. Repo rates then stabilized and federal funds rates returned closer to the Federal Reserve's target range. On September 19, the Federal Open Market Committee lowered the interest rate paid on reserve balances held by banks, in an effort to lower the EFFR, which tends to trade slightly above the rate paid on bank reserves. This decision also reduced the chance that the EFFR would return to levels above the Federal Reserve's target range. Meanwhile, the New York Fed continued to lend a daily amount of \$75 billion overnight to market participants every morning of the week, through Friday, September 20. All three operations were fully subscribed. On September 20, the New York Fed announced that it would continue to perform daily overnight operations through October 10. The actions of the Federal Reserve and the New York Fed were successful in calming the market activity: by September 20, the rates on overnight repo transactions had sunk to 1.75 percent and the rates on federal funds decreased to 1.9 percent. ### Aftermath The New York Fed continued to offer liquidity to market participants for several months, in an effort to control and limit volatility. In June 2020, the New York Fed tightened its operations on the repo market, after seeing "substantial improvement" in market conditions. From June 2020, market participants stopped using the Fed's liquidity facility. In January 2021, the New York Fed discontinued its repo facility altogether, citing a "sustained smooth functioning" of the market. ## Suggested causes The cause of the September 2019 market events was not immediately clear, with The Wall Street Journal characterizing the event as a "mystery". Over time, market observers and economists have suggested a combination of several factors as the causes of the rate spike. ### Temporary cash shortage Two developments took place in mid-September that reduced the amount of cash available in the system and thus put stress on the overnight funding market. Firstly, quarterly corporate taxes were due on September 16, 2019. As a result, a substantial amount of cash was withdrawn from clients' accounts and was paid to the Treasury. Over a period of a few days, taxpayers withdrew more than \$100 billion out of the banking system and money market funds to pay their taxes.
This reduced the amount of cash available in the system and, specifically, in the repo market, since banks and money market funds generally lend their excess cash in the repo market. Secondly, new Treasury securities were settled on September 16, meaning that their price was paid by their purchasers on this date. The total amount paid by buyers in exchange for Treasury securities was \$54 billion, which was withdrawn from their bank and money market accounts. However, "[a] substantial share of newly issued Treasury debt is typically purchased by securities dealers, who then gradually sell the bonds to their customers." Between the moment dealers purchase newly issued Treasury securities and the moment they are able to sell them to customers, they finance their purchase by lending the securities on the repo market. Thus, there were more Treasury securities to be financed in the market on September 16, but less cash available to borrowers to purchase them. As a result, the increased rate for overnight funding seemed to stem from a temporary increase in the demand for cash and a simultaneous, temporary decline in the supply of cash, leading to a shortage of cash available in the system. ### Other causes The temporary cash shortage is nevertheless insufficient to explain the intensity of the movements observed in September 2019. The effects of the temporary cash shortage seemed to have been exacerbated by broader market trends. #### Declining bank reserves The events of September 2019 have been associated with the declining level of reserves in the banking system. "Reserves", in this context, means the cash held by banks in accounts at the central bank. The main function of reserves is for banks to make payments to each other, generally as a way to settle transactions that have taken place between their customers. Reserves can be increased by government spending, which results in cash being transferred from government accounts to bank accounts. By contrast, the government can decrease reserves by selling government bonds, such as Treasury securities, to investors, which results in cash being transferred from bank accounts (and thus reserves) to government accounts. During and after the financial crisis of 2007–2008, the Federal Reserve stimulated the economy by purchasing trillions of dollars of Treasury securities and mortgage-backed securities from banks and investors. As a result, reserves increased from around \$10 billion at the end of 2007 to a peak of \$2.8 trillion in 2014. In October 2017, the Federal Reserve began to reduce the size of its assets, most notably by stopping its purchases of Treasury securities and by letting its existing stock of Treasuries mature. As a result, reserves began to decline gradually, and financial institutions began to hold an increasing amount of Treasury securities themselves. According to economists at the Bank for International Settlements, this trend was particularly pronounced in the four main banks that are active as lenders in the repo market: since 2018, their holdings of liquid assets became increasingly skewed towards Treasury securities, which made it more difficult for them to lend their cash when demand rose. In mid-September 2019, the supply of reserves in the banking system amounted to \$1.4 trillion, its lowest point since 2011. Economists and analysts have suggested that such a low amount of reserves may have exacerbated the liquidity shortage experienced on September 17.
#### Liquidity regulations and management According to Jamie Dimon, the CEO of JP Morgan, the bank had the cash and the willingness to deploy liquidity in the repo markets but was prevented from doing so by regulations on bank liquidity. Liquidity regulations require banks to hold a stock of liquid assets (such as cash) at all times to survive crisis scenarios, such as bank runs. Some economists have acknowledged that liquidity regulations may have prevented banks from lending more cash on the repo markets in September 2019, thus contributing to the cash shortage. Other researchers have taken a different view. They have argued that the inability of banks to deploy liquidity quickly to profit from the high rates was not caused by the liquidity regulations themselves, but by the more prudent risk-management framework put in place by banks after the 2007–08 crisis. They have also pointed out that other significant lenders on the repo market, such as money market funds and pension funds, were similarly reluctant to lend in mid-September 2019, despite not being subject to banking regulations. #### Other suggested causes Economists and market observers have suggested other factors as possible causes of the mid-September spike: - The inelasticity of the demand for funding in the tri-party segment of the repo market, such that "even small changes in the supply and demand for cash could result in large interest rate increases", according to economists at the New York Fed - The surprise caused by the sudden increase in interest rates on the morning of September 17, which may have led lenders to halt their lending until they could gather more information about the market conditions - A general decrease in the amount of repo lending by money market funds beginning in August 2019, caused by a shift of the funds' portfolios to Treasury securities, which were expected to provide higher returns - The increasing complexity of cash management at multinational U.S. banks ## See also - Central banking - Financial stability - Money market - Open market operation
25,790,729
Wintjiya Napaltjarri
1,173,060,355
Australian artist
[ "20th-century Australian painters", "20th-century Australian women artists", "20th-century births", "21st-century Australian painters", "21st-century Australian women artists", "Artists from the Northern Territory", "Australian Aboriginal artists", "Australian women painters", "Living people", "Pintupi", "Year of birth uncertain" ]
Wintjiya Napaltjarri (born between ca. 1923 and 1934) (also spelt Wentjiya, Wintjia or Wentja), and also known as Wintjia Napaltjarri No. 1, is a Pintupi-speaking Indigenous artist from Australia's Western Desert region. She is the sister of artist Tjunkiya Napaltjarri; both were wives of Toba Tjakamarra, with whom Wintjiya had five children. Wintjiya's involvement in contemporary Indigenous Australian art began in 1994 at Haasts Bluff, when she participated in a group painting project and in the creation of batik fabrics. She has also been a printmaker, using drypoint etching. Her paintings typically use an iconography that represents the eggs of the flying ant (waturnuma) and hair-string skirts (nyimparra). Her palette generally involves strong red or black against a white background. A finalist in the 2007 and 2008 National Aboriginal & Torres Strait Islander Art Awards, Wintjiya has work held in several of Australia's public collections including the Art Gallery of New South Wales, the Museum and Art Gallery of the Northern Territory, the National Gallery of Australia and the National Gallery of Victoria. Her work is also held in the Kluge-Ruhe Aboriginal Art Collection of the University of Virginia. ## Life A 2004 reference work on Western Desert painters suggests Wintjiya was born in about 1923; the Art Gallery of New South Wales suggests 1932; expert Vivien Johnson reports two possible years: 1932 or 1934. The ambiguity around the year of birth is in part because Indigenous Australians have a different conception of time, often estimating dates by comparisons with the occurrence of other events. Napaljarri (in Warlpiri) or Napaltjarri (in Western Desert dialects) is a skin name, one of sixteen used to denote the subsections or subgroups in the kinship system of central Australian Indigenous people. These names define kinship relationships that influence preferred marriage partners and may be associated with particular totems. Although often used as terms of address, they are not surnames in the sense used by Europeans. Thus Wintjiya is the element of the artist's name that is specifically hers. She is sometimes referred to as Wintjia Napaltjarri No. 1; there is another artist from the same region, Wintjiya Morgan Napaljarri (also called Wintjiya Reid Napaltjarri), who is known as Wintjiya No. 2. Wintjiya came from an area north-west or north-east of Walungurru (the Pintupi-language name for Kintore, Northern Territory). Johnson reports that Wintjiya was born at Mulparingya, "a swamp and spring to the northeast of Kintore", west of Alice Springs. As was the case for a number of artists from the region, Wintjiya's family walked into the Haasts Bluff settlement in the 1950s, moving to Papunya in the 1960s. In 1981, Kintore was established and the family moved there. Her native language is Pintupi, and she speaks almost no English. She is the sister of artist Tjunkiya Napaltjarri, the two women being the second and third wives of Toba Tjakamarra, father (by his first wife, Nganyima Napaltjarri) of one of the prominent founders of the Papunya Tula art movement, Turkey Tolson Tjupurrula. Wintjiya and Toba had five children: sons Bundy (born 1953) and Lindsay (born 1961 and now deceased); and daughters Rubilee (born 1955), Claire (born 1958) and Eileen (born 1960). Superficially frail by 2008, she nevertheless had the stamina and agility to teach her granddaughter the skills of chasing and capturing goannas.
## Art ### Background Contemporary Indigenous art of the western desert began in 1971 when Indigenous men at Papunya created murals and canvases using western art materials, assisted by teacher Geoffrey Bardon. Their work, which used acrylic paints to create designs representing body painting and ground sculptures, rapidly spread across Indigenous communities of central Australia, particularly after the introduction of a government-sanctioned art program in central Australia in 1983. By the 1980s and '90s, such work was being exhibited internationally. The first artists, including all of the founders of the Papunya Tula artists' company, were men, and there was resistance among the Pintupi men of central Australia to women also painting. However, many of the women wished to participate, and in the 1990s many of them began to paint. In the western desert communities such as Kintore, Yuendumu, Balgo, and on the outstations, people were beginning to create art works expressly for exhibition and sale. ### Career Since the 1970s Wintjiya had created artefacts such as ininti seed necklaces, mats and baskets, using traditional artistic techniques including weaving of spinifex grass. When the women of Kintore, including sisters Wintjiya and Tjunkiya, started creating canvasses, their works bore little resemblance to those of their male peers (who had been painting for some years). Wintjiya's first efforts were collaborative, as one of a group of women who created murals on the Kintore Women's Centre walls in 1992. She then joined a painting camp with other women from Kintore and Haasts Bluff to produce "a series of very large collaborative canvases of the group's shared Dreamings" (dreamings are stories used to pass "important knowledge, cultural values and belief systems" from generation to generation). Twenty-five women were involved in planning the works, which included three canvases that were 3 metres (9.8 ft) square, as well as two that were 3 by 1.5 metres (9.8 by 4.9 ft); Tjunkiya and Wintjiya performed a ceremonial dance as part of the preparations. Wintjiya and her sister were determined to participate in the project despite cataracts interfering with their vision. As was the case for Makinti Napanangka, an operation to remove cataracts resulted in a new brightness to Wintjiya's compositions. Sources differ on when Wintjiya and her sister Tjunkiya had their cataracts removed: Johnson suggests 1999, but art centre coordinator Marina Strocchi, who worked closely with the women, states that it was 1994. In the early 2000s Wintjiya and her sister painted at Kintore, but in 2008 they were working from their home: "the widows' camp outside her 'son' Turkey Tolson's former residence". Tjunkiya and her sister Wintjiya did not confine their activities to painting canvases. In 2001 the National Gallery of Victoria purchased a collaborative batik, created by the sisters in cooperation with several other artists, together with one completed by Wintjiya alone. These works were the product of a batik workshop run for the women of Haasts Bluff by Northern Territory Education Department staff Jill Squires and Therese Honan in the months following June 1994. The works, including several by Wintjiya, were not completed until 1995. Circular markings, used by Wintjiya in both these batiks and her subsequent paintings, represent the eggs of the flying ant (waturnuma), one of the main subjects of her art. She also portrays "tree-like organic motifs" and representations of hair-string skirts (nyimparra). 
The sisters also gained experience with drypoint etching; works produced by Wintjiya in 2004 – Watiyawanu and Nyimpara – are held by the National Gallery of Australia. Wintjiya's work was included in a survey of the history of Papunya Tula painting hosted by Flinders University in the late 1990s. Reviewing the exhibition, Christine Nicholls remarked of Wintjiya's Watanuma that it was a germinal painting, with fine use of muted colour, and showed sensitivity to the relationships between objects and spaces represented in the work. Likewise, Marina Strocchi has noted the contrast between some of the subtle colours used in batik and Wintjiya's characteristic painting palette, which is "almost exclusively stark white with black or red". Hetti Perkins and Margie West have suggested that in paintings by Kintore women artists such as Wintjiya and Tjunkiya, "the viscosity of the painting's surface seems to mimic the generous application of body paint in women's ceremonies". Wintjiya's painting Rock holes west of Kintore was a finalist in the 2007 National Aboriginal & Torres Strait Islander Art Award. Another of her works, Country west of Kintore, was accepted as a finalist in 2008. Works by Wintjiya have appeared in many significant exhibitions including: Papunya Women group exhibition (Utopia Art Gallery, Sydney, 1996); Raiki Wara: Long Cloth from Aboriginal Australia and the Torres Strait (National Gallery of Victoria 1998–99); Twenty-five Years and Beyond: Papunya Tula Painting (Flinders University Art Museum, 1999); Papunya Tula: Genesis and Genius (Art Gallery of New South Wales, 2000) and Land Marks (National Gallery of Victoria, 2006). Her first solo exhibition was at Woolloongabba Art Gallery in Brisbane in 2005; a second was held at a Melbourne gallery in 2010. Also in 2010, a print by Wintjiya was selected for inclusion in the annual Fremantle Arts Centre's Print Award. In 2013, she was one of sixteen finalists in the Western Australian Indigenous Art Awards. Works by Wintjiya are held in major private collections such as Nangara (also known as the Ebes Collection). Her work has been acquired by several major public art institutions including the Art Gallery of New South Wales, the Museum and Art Gallery of the Northern Territory, and the National Gallery of Victoria. Internationally, her work is held in the Aboriginal Art Museum at Utrecht in the Netherlands, and the Kluge-Ruhe Aboriginal Art Collection at the University of Virginia. Works by Wintjiya and her sister Tjunkiya are traded in the auction market, fetching prices of a few thousand dollars. In 2018 Wintjiya's work was included in the exhibition Marking the Infinite: Contemporary Women Artists from Aboriginal Australia at The Phillips Collection. ## Collections - Aboriginal Art Museum, The Netherlands - Art Gallery of New South Wales - Artbank - Museum and Art Gallery of the Northern Territory - National Gallery of Australia - National Gallery of Victoria - Supreme Court of the Northern Territory - Kluge-Ruhe Aboriginal Art Collection, University of Virginia ## Awards - 2007 – finalist, 24th National Aboriginal & Torres Strait Islander Art Award - 2008 – finalist, 25th National Aboriginal & Torres Strait Islander Art Award
1,088,510
55th (West Lancashire) Infantry Division
1,150,833,220
British Army Second World War division
[ "Infantry divisions of the British Army in World War II", "Military units and formations disestablished in 1945", "Military units and formations in Lancashire" ]
The 55th (West Lancashire) Infantry Division was an infantry division of the British Army's Territorial Army (TA) that was formed in 1920 and existed through the Second World War, although it did not see combat. The division had originally been raised in 1908 as the West Lancashire Division, part of the British Army's Territorial Force (TF). It fought in the First World War, as the 55th (West Lancashire) Division, and demobilised following the fighting.

In 1920, the 55th (West Lancashire) Division started to reform. It was stationed in the county of Lancashire throughout the 1920s and 1930s, and was under-funded and under-staffed. In the late 1930s, the division was reduced from three to two infantry brigades and gave up some artillery and other support units to become a motorised formation, the 55th (West Lancashire) Motor Division. This was part of a British Army doctrine change that was intended to increase battlefield mobility. Following the German occupation of Czechoslovakia, the division created new units around cadres of its own personnel, a process called "duplicating". The division then used these new formations to create its "duplicate", the 59th (Staffordshire) Motor Division.

The 55th remained in the United Kingdom, in a defensive role, after the outbreak of the Second World War. In 1940, following the Battle of France, the motor division concept was abandoned. The division regained its third infantry brigade, and became the 55th (West Lancashire) Infantry Division. It remained within the United Kingdom, training for future operations as well as training replacements for combat units, and assigned to anti-invasion duties. By 1944, the division had been drained of many of its assets. The remnant of the division was used in Operation Fortitude, a deception effort that supported the Allied invasion of France. At the end of the war, the division was demobilised and not reformed.

## Background

The West Lancashire Division was formed in 1908, following the passing of the Territorial and Reserve Forces Act 1907 that created the Territorial Force (TF). The division was broken up between 1914 and 1915, to provide reinforcements for the British Expeditionary Force that was fighting in France during the First World War. It was reformed as the 55th (West Lancashire) Division in late 1915, deployed to the Western Front and fought during the Battles of the Somme, Passchendaele, and Estaires, and took part in the Hundred Days Offensive. During two years of war, 63,923 men served in the division, over half becoming casualties. Following the end of the war, in 1918, and through 1919, the division was demobilised. In April 1920, the division started the process of reforming in Lancashire, as part of Western Command. In 1921, the TF was reconstituted as the Territorial Army (TA) following the passage of the Territorial Army and Militia Act 1921.

## Interwar period

The 55th (West Lancashire) Division was headquartered and primarily based in Liverpool, although it had units throughout Lancashire. At various times units were located in Chester, Lancaster, Lichfield, Seaforth, Southport, and Warrington. The division was reformed with the 164th (North Lancashire), the 165th (Liverpool), and the 166th (South Lancashire and Cheshire) Infantry Brigades. On 19 July 1924, the division was reviewed by George V, during a visit to Liverpool. During the interwar period, TA formations and units were only permitted to recruit up to 60 per cent of their establishment.
Due to chronic underfunding, the lack of a pressing national threat, and a diminished level of prestige associated with serving in the TA, it was rare for units to reach even this level of manpower. By the 1930s, this resulted in the TA having limited access to modern equipment, under-trained men, and officers with inadequate experience in command. ### Motor division The development of British military doctrine during the interwar period resulted in three types of division by the end of the 1930s: the infantry division; the mobile division (later called the armoured division); and the motor division. Historian David French wrote "the main role of the infantry ... was to break into the enemy's defensive position." This would then be exploited by the mobile division, followed by the motor divisions that would "carry out the rapid consolidation of the ground captured by the mobile divisions" therefore "transform[ing] the 'break-in' into a 'break-through'." French wrote that the motor division had a similar role to the German Army's motorised and light divisions, "but there the similarities ended." German motorised divisions contained three regiments (akin to a British brigade) and were equipped similarly to a regular infantry division, while their smaller light divisions contained a tank battalion. The British motor division, while being fully motorised and capable of transporting all their infantry, was "otherwise much weaker than normal infantry divisions" or their German counterparts as it was made up of only two brigades, had two artillery regiments as opposed to an infantry division's three, and contained no tanks. In 1938, the army decided to create six motor divisions from TA units. Only three infantry divisions were converted before the war, including the 55th (West Lancashire). This resulted in the removal of infantry and artillery elements from the division. Many of the division's battalions were converted to new roles, and transferred to other branches of the army. For example: the 6th Liverpool Rifles were retrained and transferred to the Royal Engineers (RE), becoming the 38th (The King's Regiment) Anti-Aircraft Battalion, RE; the 5th King's Own Royal Regiment (Lancaster) was converted to artillery, becoming the 56th (King's Own) Anti-Tank Regiment, Royal Artillery; the 7th King's Regiment (Liverpool) became the 40th (The King's) Royal Tank Regiment. The division retained three brigades until March 1939, when the 164th Brigade was disbanded, bringing the division into line with the intention of the new organisation. Now the 55th (West Lancashire) Motor Division, it comprised the 165th (Liverpool) and the 166th (South Lancashire and Cheshire) Infantry Brigades. ### Rearmament During the 1930s, tensions increased between Germany and the United Kingdom and its allies. In late 1937 and throughout 1938, German demands for the annexation of the Sudetenland in Czechoslovakia led to an international crisis. To avoid war, the British Prime Minister Neville Chamberlain met with German Chancellor Adolf Hitler in September and brokered the Munich Agreement. The agreement averted a war and allowed Germany to annexe the Sudetenland. Although Chamberlain had intended the agreement to lead to further peaceful resolution of issues, relations between the two countries soon deteriorated. On 15 March 1939, Germany breached the terms of the agreement by invading and occupying the remnants of the Czech state. 
On 29 March, British Secretary of State for War Leslie Hore-Belisha announced plans to increase the TA from 130,000 to 340,000 men and double the number of TA divisions. The plan was for existing TA divisions, referred to as the first-line, to recruit over their establishments (aided by an increase in pay for Territorials, the removal of restrictions on promotion which had hindered recruiting, construction of better-quality barracks and an increase in supper rations) and then form a new division, known as the second-line, from cadres around which the new divisions could be expanded. This process was dubbed "duplicating".

The 55th (West Lancashire) Motor Division provided cadres to create a second-line "duplicate" formation, which became the 59th (Staffordshire) Motor Division. By September, the 55th (West Lancashire) Motor Division had also reformed the 164th Brigade. Despite the intention for the army to grow, the programme was complicated by a lack of central guidance on the expansion and duplication process and a lack of facilities, equipment and instructors. In April 1939, limited conscription was introduced. At that time 34,500 men, all aged 20, were conscripted into the regular army, initially to be trained for six months before being deployed to the forming second-line units. It had been envisioned by the War Office that the duplicating process and recruiting the required numbers of men would take no more than six months. The process varied widely between the TA divisions. Some were ready in weeks while others had made little progress by the time the Second World War began on 1 September.

## Second World War

### Home defence

On 4 September, the division established the second-line duplicate of the 166th Brigade, the 177th. On 15 September, the 166th Infantry Brigade (renamed the 176th Infantry Brigade) and the 177th Brigade were transferred to the 59th (Staffordshire) Motor Division. This left the 55th (West Lancashire) Motor Division with the 164th and 165th Brigades. The former consisted of the 9th Battalion, King's Regiment (Liverpool), the 1/4th Battalion, the South Lancashire Regiment, and the 2/4th Battalion, South Lancashire Regiment. The 165th Brigade was made up of the 5th Battalion, King's Regiment (Liverpool), and the 1st and the 2nd Battalions, Liverpool Scottish (Queen's Own Cameron Highlanders). Major-General Vivian Majendie was the division's general officer commanding (GOC), and had been in command since 1938.

The division's initial war-time duties included deploying guards to the docks at Birkenhead, the Port of Liverpool, and the naval defences at Crosby, while also assisting the civilian authorities during air raids. On 6 September, the division fired its first shots of the war. Divisional anti-aircraft and machine guns fired on three aircraft flying low over the River Mersey. The shots missed, and the aircraft were later determined to be Royal Air Force Handley Page Hampden bombers.

The war deployment plan for the TA envisioned its divisions being sent overseas, as equipment became available, to reinforce the British Expeditionary Force (BEF) that had already been dispatched to Europe. The TA would join regular army divisions in waves as its divisions completed their training, the final divisions deploying a year after the war began. In October 1939, the Commander-in-Chief, Home Forces, General Walter Kirke, was tasked with drawing up a plan, codenamed Julius Caesar, to defend the United Kingdom from a potential German invasion.
As part of this plan, the division was assigned to Home Forces' reserve. It was transferred to Northern Command and moved to Charnwood Forest in Leicestershire. Here the division furthered its training, while also having to be ready to act as a counter-attack force for Julius Caesar in case of a German invasion between the Humber and The Wash. Other duties included the protection of RAF Finningley. In January 1940, the division was used to obtain drafts for formations overseas as well as volunteers to man anti-aircraft guns on small ships. In March, the division was relieved as a reserve formation. It was assigned to Eastern Command the following month, and transferred to defend the coastline of Suffolk and then Essex. These moves were part of a larger effort by Kirke to reinforce the defences in the east of England, which he believed would be the location most in danger of an invasion as a result of the German operations on mainland Europe. Other than coastal defence, the division was also responsible for guarding Ipswich Airport, constructing roadblocks inland from potential invasion beaches, and providing mobile detachments to respond to any German airborne landings. In April, following the start of the Norwegian campaign, the division organised No. 4 Independent Company, which departed for Norway on 7 May. Following the conclusion of that campaign, many of these men joined the Commandos.

As a result of the German victory in France and the return of the BEF following the Dunkirk evacuation, the division was not deployed overseas per the original TA deployment timeline. The British Army began implementing lessons learnt from the campaign in France. This included a decision to base the standard division around three brigades, and the abandonment of the motor division concept. This process involved breaking up four second-line territorial divisions to reinforce depleted formations and aid in transforming the Army's five motor divisions, each made up of two brigades, into infantry divisions made up of three brigades. As part of this process, on 23 June, the 66th Infantry Division was disbanded. This freed up the 199th Infantry Brigade and an artillery regiment to be transferred to the 55th (West Lancashire) Motor Division, which became the 55th (West Lancashire) Infantry Division.

General Edmund Ironside, who had replaced Kirke, believed the division (along with the others which had remained in the UK) to be insufficiently trained and equipped, and unable to undertake offensive operations. The division was therefore assigned a static coastal defence role in Essex, while leaving enough troops available to deal with any German paratrooper landings that might occur in its area. Duties also included the digging and improving of defensive positions, and ongoing training. On paper, an infantry division was to have seventy-two 25-pounder field guns. By 31 May, the division only had eight such modern guns. These were supplemented by four First World War-vintage 18-pounder field guns, and eight 4.5 in (110 mm) howitzers of similar vintage. The division had only two anti-tank guns, against a nominal establishment of 48, and only 47 of the required 307 Boys anti-tank rifles. General Alan Brooke, who replaced Ironside, reviewed the division on 1 August. He recorded in his diary that the 55th (West Lancashire) Infantry Division "should be quite good with a bit of training." The division remained in Essex until November 1940, when it was assigned to IV Corps.
This was a reserve formation, based away from the coast, the intended role being to counterattack German landings in East Anglia. Elements of the division moved to more central locations, for example the two Liverpool Scottish battalions took up winter quarters in Oxfordshire. While based there, they conducted training in a counterattack role that involved moving to concentration areas behind units based along the south and southeast coasts. In February 1941, the 55th (West Lancashire) Infantry Division moved south to defend the Sussex coast. This included manning coastal defensive positions, being assigned to hunt down any German paratroopers, improving and expanding defences in their sector, and training. With the arrival of increased levels of ammunition, the men of the division were able to considerably improve their proficiency in the use of small arms and mortars. On 1 June 1941, Major-General William Duthie Morgan replaced Majendie as GOC. In July, the division was relieved from coastal defence. It relocated to Aldershot to act as a reserve formation, and increased the tempo of training. Morgan maintained his position until October, when he was wounded during a training exercise, and was replaced by Major-General Frederick Morgan. During the final months of 1941, the 55th (West Lancashire) Infantry Division started to provide drafts of men to other formations. This was followed by the division being placed on the lower establishment in January 1942. In December 1941, the 55th (West Lancashire) Infantry Division relocated to Yorkshire and was reassigned to Northern Command, and was spread out with troops based in the East Riding of Yorkshire and North Yorkshire. The intention of this deployment was to counter-attack any German landings along the coast or at nearby airfields. The 165th Brigade also spent some time at Catterick Garrison. During its stay with Northern Command, the majority of the time was spent training, from the battalion to the brigade level. The division relocated to Devon in January 1943, and was assigned to the South West District. The primary role now was to counter any raids conducted by German forces along the coast. This was in addition to continued training, guarding vulnerable points, and rendering assistance to nearby civilian authorities as needed after air raids. In June, the division lost five men killed following a German bombing raid. In December 1943, the division received drafts from anti-aircraft regiments. These men were then given a ten-week training course to make them viable drafts for infantry units. The same month, the 55th (West Lancashire) Infantry Division transferred to Northern Ireland, under the command of British Troops Northern Ireland. In Northern Ireland, the soldiers aided farmers, helped train elements of the reforming Belgian Army, and trained with newly arrived troops from the United States Army. The division continued to provide men to other formations through 1944. ### Wind down and deception In May 1944, the 55th (West Lancashire) Infantry Division was raised to higher establishment. The division did not increase in size; the war establishment (the paper strength) of a higher establishment infantry division, in this period, was 18,347 men. The 55th (West Lancashire), the 38th (Welsh), the 45th, the 47th (London), and the 61st Infantry Divisions had a combined total of 17,845 men. 
The division remained within the United Kingdom and was drained of manpower to a point that it was all but disbanded, and was then maintained as a deception formation. Of these 17,845 men, around 13,000 were available as replacements for the 21st Army Group fighting in France. The remaining 4,800 men were considered ineligible at that time for service abroad for a variety of reasons, including a lack of training or being medically unfit. Over the following six months, up to 75 per cent of these men would be deployed to reinforce 21st Army Group following the completion of their training and certification of fitness. For example, the two Liverpool Scottish battalions were used as training units and a source of reinforcements for other Scottish regiments. Entire units were also stripped from the division and deployed abroad; the 2nd Loyal Regiment (North Lancashire) (previously the 10th Battalion, Loyal Regiment) was transferred to Italy. While the 199th Brigade remained part of the division, it was attached to Northern Ireland District in July 1944. The same month, the division, minus the 199th Brigade, returned to the mainland and moved to southern Wales. The 199th Brigade, renumbered the 166th Brigade, physically rejoined the division in June 1945. In April 1945, the 304th and the 305th Infantry Brigades were attached to the division. These were recently converted anti-aircraft formations. The latter remained with the division for seventeen days, before being sent to the 21st Army Group. The former stayed with the division into May, and then deployed to Norway. In the final months of 1943 and through June 1944, the division's actual and notional moves were deliberately leaked by double agents as part of the "Fortitude North" segment of the Operation Fortitude deception, the effort to make the Germans believe that the notional 250,000-strong Fourth Army, based in Scotland, would assault Norway. The division was assigned to the fictional II Corps, which was notionally preparing to assault Stavanger. The division participated in this deception effort by maintaining wireless signals suggesting it was moving around the United Kingdom as part of the Fourth Army. The overall ruse of an attack on Norway was maintained through July 1944, the plan officially coming to an end in September. Historian Mary Barbier wrote "the evidence seems to indicate that [Fortitude North] was only partially successful" and "a heated debate has erupted over whether or not [the operation] was a success." The division then joined the II Corps's notional move south from Scotland to England, in June 1944, becoming part of "Fortitude South" to convince the Germans that the Normandy landings were a feint and the main Allied invasion would take place in the Pas-de-Calais with a force of 500,000 men. The deception aimed to persuade the Germans not to move the 18 divisions of the 15th Army from the Pas-de-Calais to Normandy. The division also provided the signal and headquarters staff to create the phantom 55th US Infantry Division. In July, the division was reported as an assault division training near Southampton. In September, as the "Fortitude" deception was wound down and the Fourth Army dispersed, it was allowed to be known that the division had reverted to a training role. Historian Gerhard Weinberg wrote that the Germans readily believed in the threat to the Pas de Calais and "it was only at the end of July" that they realised a second assault was not coming, and "by that time, it was too late to move reinforcements". 
Nevertheless, Barbier concludes "that the importance of the deception has been overrated". The 15th Army was largely immobile, and not combat-ready. Despite the deception, several German divisions, including the 1st SS Panzer Division in reserve behind the 15th Army, were transferred to Normandy. The Germans had realised, as early as May, that the threat to Normandy was real. Barbier concluded that while the Germans believed the deception due to "preconceived ideas about the importance of the Pas De Calais", the Allied staff had overestimated the effectiveness of the deception in causing the inaction of the 15th Army, because they also held a "preconceived notion of what [Operation Fortitude] would accomplish".

The British army demobilised after the war. The TA was reformed in 1947, on a much smaller scale of nine divisions and did not include the 55th (West Lancashire) Infantry Division. In 1947, the division's insignia was temporarily adopted by the 87th Army Group Royal Artillery, but was replaced at some point before the unit was disbanded in 1955. This formation was based in Liverpool and was made up primarily of units from the West Lancashire area, creating a connection with the division.

## Order of battle

## See also

- Altcar Training Camp, a training facility that was used by the division.
- British Army Order of Battle (September 1939)
- Everton Road drill hall, Liverpool
- List of commanders of the British 55th Division
- List of British divisions in World War II
628,446
Courageous-class battlecruiser
1,153,163,826
Ship class built for the Royal Navy during the First World War
[ "Battlecruiser classes", "Ship classes of the Royal Navy", "World War I battlecruisers of the United Kingdom" ]
The Courageous class consisted of three battlecruisers known as "large light cruisers" built for the Royal Navy during the First World War. The class was nominally designed to support the Baltic Project, a plan by Admiral of the Fleet Lord Fisher that was intended to land troops on the German Baltic Coast. Ships of this class were fast but very lightly armoured, with only a few heavy guns. They were given a shallow draught, in part to allow them to operate in the shallow waters of the Baltic but also reflecting experience gained earlier in the war. To maximize their speed, the Courageous-class battlecruisers were the first capital ships of the Royal Navy to use geared steam turbines and small-tube boilers. The first two ships, Courageous and Glorious, were commissioned in 1917 and spent the war patrolling the North Sea. They participated in the Second Battle of Heligoland Bight in November 1917 and were present when the High Seas Fleet surrendered a year later. Their half-sister Furious was designed with a pair of 18-inch (457 mm) guns, the largest guns ever fitted on a ship of the Royal Navy, but was modified during construction to take a flying-off deck and hangar in lieu of her forward turret and barbette. After some patrols in the North Sea, her rear turret was removed and another flight deck added. Her aircraft attacked the Zeppelin sheds during the Tondern raid in July 1918. All three ships were laid up after the war, but were rebuilt into the Courageous-class aircraft carriers during the 1920s. Glorious and Courageous were sunk early in the Second World War and Furious was sold for scrap in 1948. ## Design and description The first two Courageous-class battlecruisers were designed in 1915 to meet a set of requirements laid down by the First Sea Lord, Admiral Fisher, with his Baltic Project in mind. They were to be large enough to ensure that they could maintain their speed in heavy weather, have a powerful armament and a speed of at least 32 knots (59 km/h; 37 mph) to allow them to outrun enemy light cruisers. Their protection was to be light for a cruiser, with 3 inches (76 mm) of armour between the waterline and the forecastle deck, anti-torpedo bulges amidships and the machinery as far inboard as possible, protected by triple torpedo bulkheads. Shallow draught was of the utmost importance and all other factors were to be subordinated to this. The Director of Naval Construction (DNC), Sir Eustace Tennyson-d'Eyncourt, responded on 23 February 1915 with a smaller version of the Renown-class battlecruisers with one less gun turret and reduced armour protection. The Chancellor of the Exchequer had forbidden any further construction of ships larger than light cruisers in 1915, so Fisher designated the ships as large light cruisers to evade this prohibition. If this restriction had not been in place, the ships would have been built as improved versions of the preceding Renown class. The two ships were laid down a few months later under a veil of secrecy, so they became known in the Royal Navy as "Lord Fisher's hush-hush cruisers" and their odd design also earned them the nickname of the Outrageous class. Their half-sister Furious was designed a few months later to meet a revised requirement specifying an armament of two BL 18-inch Mk I guns, the largest guns ever fitted on a Royal Navy ship, in single turrets with the ability to use twin 15-inch (381 mm) gun turrets if the 18-inch guns were unsatisfactory. 
Gunnery experts criticized this decision because the long time between salvoes would make spotting corrections useless and reduce the rate of fire and thus the probability of a direct hit. Her secondary armament was upgraded to BL 5.5-inch (140 mm) Mk I guns, rather than the 4-inch (102 mm) guns used by the first two ships, to compensate for the weakness of the two main guns against fast-moving targets like destroyers. Her displacement and beam were increased over that of her half-sisters with slightly less draught. The Baltic Project was only one justification for the ships. Admiral Fisher wrote in a letter to the DNC on 16 March 1915: "I've told the First Lord that the more that I consider the qualities of your design of the Big Light Battle Cruisers, the more that I am impressed by its exceeding excellence and simplicity—all the three vital requisites of gunpower, speed and draught so well balanced!" In fact they could be considered the epitome of Fisher's belief in the paramount importance of speed over everything else. Fisher's adherence to this principle is highlighted in a letter he wrote to Churchill concerning the battleships of the 1912–13 Naval Estimates. In the letter, dated April 1912, Fisher stated: "There must be sacrifice of armour ... There must be further VERY GREAT INCREASE IN SPEED ... your speed must vastly exceed [that of] your possible enemy!" Fisher's desire for a shallow draught was not merely based on the need to allow for inshore operations; ships tended to operate closer to deep load than anticipated and were often found lacking in freeboard, reserve buoyancy and safety against underwater attack. This experience led the DNC to reconsider the proportions of the hull to rectify the problems identified thus far. The Courageous-class ships were the first products of that re-evaluation. ### General characteristics The Courageous-class ships had an overall length of 786 feet 9 inches (239.8 m), a beam of 81 feet (24.7 m), and a draught of 25 feet 10 inches (7.9 m) at deep load. They displaced 19,180 long tons (19,490 t) normally and 22,560 long tons (22,922 t) at deep load. They had a metacentric height of 6 feet (1.8 m) at deep load and a complete double bottom. Their half-sister Furious was the same length, but had a beam of 88 feet (26.8 m) and a draught of 24 feet 11 inches (7.6 m) at deep load. She displaced 19,513 long tons (19,826 t) at load and 22,890 long tons (23,257 t) at deep load. She had a metacentric height of 5.33 feet (1.6 m) at deep load. ### Propulsion To save weight and space the Courageous-class ships were the first large warships in the Royal Navy to have geared steam turbines and small-tube boilers despite the latter's significantly heavier maintenance requirements. Furthermore, to save design time, the turbine installation used in the light cruiser Champion, the navy's first cruiser with geared turbines, was simply doubled. The Parsons turbines were arranged in two engine rooms and each of the turbines drove one of the four propeller shafts. Furious's propellers were 11 feet 6 inches (3.5 m) in diameter. The turbines were powered by eighteen Yarrow boilers equally divided among three boiler rooms. They were designed to produce a total of 90,000 shaft horsepower (67,113 kW) at a working pressure of 235 psi (1,620 kPa; 17 kgf/cm<sup>2</sup>), but achieved slightly more than that during Glorious's trials, although she did not reach her designed speed of 32 knots (59 km/h; 37 mph). 
They were designed to normally carry 750 long tons (762 t) of fuel oil, but could carry a maximum of 3,160 long tons (3,211 t). At full capacity, they could steam for an estimated 6,000 nautical miles (11,110 km; 6,900 mi) at a speed of 20 knots (37 km/h; 23 mph). ### Armament The Courageous-class ships mounted four BL 15-inch Mark I guns in two twin hydraulically powered Mark I\* turrets, one each fore (designated the 'A' turrets) and aft (the 'Y' turrets). These turrets were originally intended for a Revenge-class battleship that was canceled shortly after the war began. The guns could be depressed to −3° and elevated to 20°; they could be loaded at any angle up to 20°, although loading at high angles tended to slow the gun's return to battery (firing position). The ships carried 120 shells per gun. They fired 1,910-pound (866 kg) projectiles at a muzzle velocity of 2,575 ft/s (785 m/s); this provided a maximum range of 23,734 yd (21,702 m) with armour-piercing shells. The Courageous-class ships were designed with 18 BL 4-inch Mark IX guns, fitted in six triple mounts. These were manually powered and quite cumbersome in use as they required a crew of thirty-two men to load and train the guns. The gun's rate of fire was only 10 to 12 rounds per minute as the loaders kept getting in each other's way. They had a maximum depression of −10° and a maximum elevation of 30°. They fired a 22-pound (10.0 kg) high explosive shell at a muzzle velocity of 2,625 ft/s (800 m/s). At maximum elevation the guns had a maximum range of 13,500 yards (12,344 m). The ships carried 120 rounds for each gun. Each ship mounted a pair of QF 3 inch 20 cwt anti-aircraft guns on single high-angle Mark II mountings. These were mounted abreast the mainmast in the Courageous-class ships and before the funnel on Furious. The gun had a maximum depression of 10° and a maximum elevation of 90°. It fired a 12.5-pound (5.7 kg) shell at a muzzle velocity of 2,500 ft/s (760 m/s) at a rate of fire of 12–14 rounds per minute. They had a maximum effective ceiling of 23,500 ft (7,200 m). All three ships carried ten torpedoes and mounted two 21-inch (533 mm) submerged side-loading torpedo tubes fitted near 'A' turret. They were loaded and traversed by hydraulic power, but fired by compressed air. The 18-inch BL Mark I gun carried by Furious was derived from the 15-inch Mark I gun used in her half-sisters. It was intended to be mounted in two single-gun turrets derived from the twin-gun 15-inch Mark I/N turret, and her barbettes were designed to accommodate either turret in case problems arose with the 18-inch gun's development, but only one turret was actually fitted. The gun could depress to −3° and elevate to a maximum of 30°. It fired a 3,320-pound (1,510 kg), 4 crh armour-piercing, capped shell at a muzzle velocity of 2,270 ft/s (690 m/s) to a distance of 28,900 yards (26,400 m). It could fire one round per minute and the ship carried sixty rounds of ammunition. The turret's revolving mass was 826 long tons (839 t), only slightly more than the 810 long tons (823 t) of its predecessor. Furious's secondary armament consisted of 11 BL 5.5-inch Mk I guns. The guns had a maximum elevation of 25° on their pivot mounts. They fired 82-pound (37 kg) projectiles at a muzzle velocity of 2,790 ft/s (850 m/s) at a rate of 12 rounds per minute. Their maximum range was 16,000 yd (15,000 m) at 25° elevation. ### Fire control The main guns of the Courageous-class ships could be controlled from either of the two fire-control directors. 
The primary director was mounted above the conning tower in an armoured hood and the other was in the fore-top on the foremast. The secondary armament was also director-controlled. Each turret was provided with a 15-foot (4.6 m) rangefinder in an armoured housing on the turret roof. The fore-top was equipped with a 9-foot (2.7 m) rangefinder as was the torpedo control tower above the rear superstructure. The anti-aircraft guns were controlled by a simple 2-metre (6 ft 7 in) rangefinder mounted on the aft superstructure.

### Protection

Unlike on other British battlecruisers, the bulk of the armour of the Courageous-class ships was made from high-tensile steel, a type of steel used structurally in other ships. Their waterline belt consisted of 2 inches (51 mm) covered by a 1-inch (25 mm) skin. It ran from barbette to barbette with a one-inch extension forward to the two-inch forward bulkhead well short of the bow. The belt had a height of 23 feet (7.0 m), of which 18 inches (0.5 m) was below the designed waterline. From the forward barbette a three-inch bulkhead extended out to the ship's side between the upper and lower decks and a comparable bulkhead was in place at the rear barbette as well. Four decks were armoured with thicknesses varying from 0.75 to 3 inches (19 to 76 mm), with the greatest thicknesses over the magazines and the steering gear. After the loss of three battlecruisers to magazine explosions during the Battle of Jutland, 110 long tons (112 t) of extra protection was added to the deck around the magazines.

The turrets, barbettes and conning tower were made from Krupp cemented armour. The turret faces were 9 inches (229 mm) thick while their sides ranged from 7 to 9 inches (178 to 229 mm) in thickness and the roof was 4.5 inches (114 mm) thick. The barbettes had a maximum thickness of 6 to 7 inches (152 to 178 mm) above the main deck, but reduced in thickness to 3 to 4 inches (76 to 102 mm) between the lower and main decks. The conning tower armour was 10 inches (254 mm) thick and it had a three-inch roof. The primary fire-control director atop the conning tower was protected by an armoured hood. The face of the hood was six inches thick, its sides were two inches thick and its roof was protected by three inches of armour. A communications tube with three-inch sides ran from the conning tower down to the lower conning position on the main deck.

The torpedo bulkheads were increased during building from 0.75 inches (19 mm) to 1.5 inches (38 mm) in thickness. All three ships were fitted with a shallow anti-torpedo bulge integral to the hull which was intended to explode the torpedo before it hit the hull proper and vent the underwater explosion to the surface rather than into the ship. However, later testing proved that it was not deep enough to accomplish its task as it lacked the layers of empty and full compartments that were necessary to absorb the force of the explosion.

## Ships

## Service

During her sea trials in November 1916 off the River Tyne, Courageous sustained structural damage while running at full speed in a rough head sea. The forecastle deck was deeply buckled in three places between the breakwater and the forward turret. In addition, the side plating was visibly buckled between the forecastle and upper decks. Water had entered the submerged torpedo room and rivets had sheared in the vertical flange of the angle iron securing the deck armour in place.
The exact cause remains uncertain, but Courageous received 130 long tons (132 t) of stiffening in response; Glorious did not receive her stiffening until 1918. Courageous also was temporarily fitted as a minelayer in April 1917, but never actually laid any mines. In mid-1917 both ships received a dozen torpedo tubes in pairs: one mount on each side of the mainmast on the upper deck and two mounts on each side of the rear turret on the quarterdeck. Courageous and Glorious served together throughout the war. Both ships were initially assigned to the 3rd Light Cruiser Squadron and later reconstituted the 1st Cruiser Squadron (CS). Even as she was being built, Furious was modified with a large hangar capable of housing ten aircraft on her forecastle replacing the forward turret. A 160-foot (49 m) flight deck was built along its roof. Aircraft were flown off and, less successfully, landed on this deck. Although the aft turret was fitted and the gun trialled, it was not long before Furious returned to her builders for further modifications. In November 1917 the rear turret was replaced by a 300-foot (91 m) deck for landing aircraft over another hangar. Her funnel and superstructure remained intact, with a narrow strip of decking around them to connect the fore and aft flight decks. Turbulence from the funnel and superstructure was severe enough that only three landing attempts were successful before further attempts were forbidden. Her 18-inch guns were reused on the Lord Clive-class monitors General Wolfe and Lord Clive during the war. All three ships were in the 1st CS of which Courageous was flagship when the Admiralty received word of German ship movements on 16 October 1917, possibly indicating some sort of raid. Admiral Beatty, commander of the Grand Fleet, ordered most of his light cruisers and destroyers to sea in an effort to locate the enemy ships. Furious was detached from the 1st CS and ordered to sweep along the 56th parallel as far as 4° East and to return before dark. The other two ships were not initially ordered to sea, but were sent to reinforce the 2nd Light Cruiser Squadron patrolling the central part of the North Sea later that day. Two German Brummer-class light cruisers managed to slip through the gaps in the British patrols and destroyed a convoy headed to Scandinavia during the morning of 17 October, but no word was received of the engagement until that afternoon. The 1st CS was ordered to attempt to intercept the German ships, but they proved to be too fast and the British ships were unsuccessful. ### Second Battle of Heligoland Bight Over the course of 1917 the Admiralty was becoming more concerned about German efforts in the North Sea to sweep paths through the British-laid minefields intended to restrict the actions of the High Seas Fleet and German submarines. A preliminary raid on German minesweeping forces on 31 October by light forces destroyed ten small ships and the Admiralty decided on a larger operation to destroy the minesweepers and their escorting light cruisers. Based on intelligence reports the Admiralty decided on 17 November 1917 to allocate two light cruiser squadrons, the 1st CS covered by the reinforced 1st Battlecruiser Squadron and, more distantly, the battleships of the 1st Battle Squadron to the operation. 
The German ships, four light cruisers of II Scouting Force, eight destroyers, three divisions of minesweepers, eight Sperrbrechers (cork-filled trawlers, used to detonate mines without sinking) and two trawlers to mark the swept route, were spotted at 7:30 a.m., silhouetted by the rising sun. Courageous and the light cruiser Cardiff opened fire with their forward guns seven minutes later. The Germans responded by laying an effective smoke screen. The British continued in pursuit, but lost track of most of the smaller ships in the smoke and concentrated fire on the light cruisers as opportunity permitted. One 15-inch hit was made on a gun shield of SMS Pillau, but it did not affect her speed. At 8:33 the left-hand gun in Glorious's forward turret was wrecked when a shell detonated inside the gun barrel. At 9:30 the 1st CS broke off their pursuit so they would not enter a minefield marked on their maps; the ships turned south, playing no further role in the battle. The German ships had too much of a lead to be caught by the British ships before they had to turn to avoid the minefield.

Both ships had taken minor damage from their own muzzle blasts, and Glorious required five days of repairs. Courageous fired 92 rounds of 15-inch while Glorious fired 57, scoring only the single hit on Pillau between them. They also fired 180 and 213 four-inch shells respectively. Courageous's mine fittings were removed after the battle and both ships received flying-off platforms on top of their turrets in 1918. A Sopwith Camel was carried on the rear turret and a Sopwith 1½ Strutter on the forward turret. Furious was recommissioned on 15 March 1918 and her embarked aircraft were used on anti-Zeppelin patrols in the North Sea after May. In July 1918 she flew off seven Sopwith Camels which participated in the Tondern raid, attacking the Zeppelin sheds at Tondern with moderate success. All three ships were present at the surrender of the German fleet on 21 November 1918.

### Post-war history

Courageous was reduced to reserve at Rosyth on 1 February 1919 before being assigned to the Gunnery School at Devonport the following year as a turret drill ship. She became flagship of the Rear-Admiral Commanding the Reserve at Devonport in March 1920. Glorious was also reduced to reserve at Rosyth on 1 February and served as a turret-drill ship, but succeeded her sister as flagship between 1921 and 1922. Furious was placed in reserve on 21 November 1919 before beginning reconstruction as an aircraft carrier in 1921.

The Washington Naval Treaty of 1922 required the signatory nations to severely curtail their plans for new warships and scrap many existing warships to meet its tonnage limits. Up to 66,000 long tons (67,000 t) of existing ships, however, could be converted into aircraft carriers, and the Royal Navy chose to convert the Courageous-class ships because of their high speed. Each ship was reconstructed with a full-length flight deck during the 1920s. Their 15-inch turrets were placed into storage and later reused during the Second World War for HMS Vanguard, the Royal Navy's last battleship.

As the first large, or "fleet", carrier completed by the Royal Navy, Furious was extensively used to evaluate aircraft handling and landing procedures, including the first ever carrier night-landing in 1926. Courageous became the first warship lost by the Royal Navy in the Second World War when she was torpedoed in September 1939. Glorious unsuccessfully hunted the Admiral Graf Spee in the Indian Ocean in 1939.
She participated in the Norwegian Campaign in 1940, but was sunk by the German battleships Scharnhorst and Gneisenau on 8 June 1940 in the North Sea. Furious spent the first months of the war hunting for German raiders and escorting convoys before she began to support British forces in Norway. She spent most of 1940 in Norwegian waters making attacks on German installations and shipping, and most of 1941 ferrying aircraft to West Africa, Gibraltar and Malta before refitting in the United States. She ferried aircraft to Malta during 1942 and provided air support to British forces during Operation Torch. Furious spent most of 1943 training with the Home Fleet, but made numerous air strikes against the German battleship Tirpitz and other targets in Norway in 1944. She was worn out by late 1944 and was reduced to reserve in September before being decommissioned the following year. Furious was sold in 1948 for scrap.
1,126,782
William T. Anderson
1,171,806,875
Confederate guerrilla fighter
[ "1840 births", "1864 deaths", "American guerrillas killed in action", "American mass murderers", "American rapists", "Bushwhackers", "Confederate States of America military personnel killed in the American Civil War", "Deaths by firearm in Missouri", "James–Younger Gang", "People from Hopkins County, Kentucky", "People from Huntsville, Missouri", "People of Missouri in the American Civil War", "People with sadistic personality disorder", "Proslavery activists killed in the American Civil War", "War criminals" ]
William T. Anderson (c. 1840 – October 26, 1864), known by the nickname "Bloody Bill" Anderson, was a soldier who was one of the deadliest and most notorious Confederate guerrilla leaders in the American Civil War. Anderson led a band of volunteer partisan raiders who targeted Union loyalists and federal soldiers in the states of Missouri and Kansas. Raised by a family of Southerners in Kansas, Anderson began to support himself by stealing and selling horses in 1862. After a former friend and secessionist turned Union loyalist judge killed his father, Anderson killed the judge and fled to Missouri. There he robbed travelers and killed several Union soldiers. In early 1863 he joined Quantrill's Raiders, a group of Confederate guerrillas which operated along the Kansas–Missouri border. He became a skilled bushwhacker, earning the trust of the group's leaders, William Quantrill and George M. Todd. Anderson's bushwhacking marked him as a dangerous man and eventually led the Union to imprison his sisters. After a building collapse in the makeshift jail in Kansas City, Missouri, left one of them dead in custody and the other permanently maimed, Anderson devoted himself to revenge. He took a leading role in the Lawrence Massacre and later took part in the Battle of Baxter Springs, both in 1863. In late 1863, while Quantrill's Raiders spent the winter in Sherman, Texas, animosity developed between Anderson and Quantrill. Anderson, perhaps falsely, implicated Quantrill in a murder, leading to the latter's arrest by Confederate authorities. Anderson subsequently returned to Missouri as the leader of his own group of raiders and became the most feared guerrilla in the state, robbing and killing a large number of Union soldiers and civilian sympathizers. Although Union supporters viewed him as incorrigibly evil, Confederate supporters in Missouri saw his actions as justifiable. In September 1864, Anderson led a raid on the town of Centralia, Missouri. Unexpectedly, his men were able to capture a passenger train, the first time Confederate guerrillas had done so. In what became known as the Centralia Massacre, Anderson's bushwhackers killed 24 unarmed Union soldiers on the train and set an ambush later that day which killed over a hundred Union militiamen. Anderson himself was killed a month later in battle. Historians have made disparate appraisals of Anderson; some see him as a sadistic, psychopathic killer, while others put his actions into the perspective of the general desperation and lawlessness of the time and the brutalization effect of war. ## Early life William T. Anderson was born around 1840 in Hopkins County, Kentucky, to William C. and Martha Anderson. His siblings were Jim, Ellis, Mary Ellen, Josephine and Janie. His schoolmates recalled him as a well-behaved, reserved child. During his childhood, Anderson's family moved to Huntsville, Missouri, where his father found employment on a farm and the family became well-respected. In 1857, they relocated to the Kansas Territory, traveling southwest on the Santa Fe Trail and settling 13 miles (21 km) east of Council Grove. The Anderson family supported slavery, though they did not own slaves. Their move to Kansas was likely for economic rather than political reasons. Kansas was at the time embroiled in an ideological conflict regarding its admission to the Union as slave or free, and both pro-slavery activists and abolitionists had moved there in attempts to influence its ultimate status. 
Animosity and violence between the two sides quickly developed in what was called Bleeding Kansas, but there was little unrest in the Council Grove area. After settling there, the Anderson family became friends with A.I. Baker, a local judge who was a Confederate sympathizer. By 1860, the young William T. Anderson was a joint owner of a 320-acre (1.3 km<sup>2</sup>) property that was worth \$500; his family had a total net worth of around \$1,000. On June 28, 1860, William's mother, Martha Anderson, died after being struck by lightning.

In the late 1850s, Ellis Anderson fled to Iowa after killing a Native American. Around the same time, William T. Anderson fatally shot a member of the Kaw tribe outside Council Grove; he claimed that the man had tried to rob him. He joined the freight shipping operation for which his father worked and was given a position known as "second boss" for a wagon trip to New Mexico. The trip was not successful and he returned to Missouri without the shipment, saying his horses had disappeared with the cargo. After he returned to Council Grove he began horse trading, taking horses from towns in Kansas, transporting them to Missouri and returning with more horses.

## Horse trading and outlawry

After the Civil War began in 1861, the demand for horses increased and Anderson transitioned from trading horses to stealing them, reselling them as far away as New Mexico. He worked with his brother Jim, their friend Lee Griffith and several accomplices strung along the Santa Fe Trail.

In late 1861, Anderson traveled south with Jim and Judge Baker in an apparent attempt to join the Confederate Army. Anderson had told a neighbor that he sought to fight for financial reasons rather than out of loyalty to the Confederacy. However, the group was attacked by the Union's 6th Regiment Kansas Volunteer Cavalry in Vernon County, Missouri; the cavalry likely assumed they were Confederate guerrillas. The Anderson brothers escaped, but Baker was captured and spent four months in prison before returning to Kansas, professing loyalty to the Union. One way he sought to prove that loyalty was by severing his ties with Anderson's sister Mary, his former lover.

Upon his return to Kansas, Anderson continued horse trafficking, but ranchers in the area soon became aware of his operations. In May 1862, Judge Baker issued an arrest warrant for Griffith, whom Anderson helped hide. Some local citizens suspected the Anderson family was assisting Griffith and traveled to their house to confront the elder William Anderson. After hearing their accusations against his sons, he was incensed; he found Baker's involvement particularly infuriating. The next day, the elder Anderson traveled to the Council Grove courthouse with a gun, intending to force Baker to withdraw the warrant. As he entered the building he was restrained by a constable and fatally shot by Baker.

The younger Anderson buried his father and was subsequently arrested for assisting Griffith. However, he was quickly released owing to a problem with the warrant, and fled to Agnes City, fearing he would be lynched. There he met Baker, who temporarily placated him by providing a lawyer. Anderson remained in Agnes City until he learned that Baker would not be charged, as the judge's claim of self-defense had been accepted by legal authorities. Anderson was outraged and went to Missouri with his siblings. William and Jim Anderson then traveled southwest of Kansas City, robbing travelers to support themselves.
On July 2, 1862, William and Jim Anderson returned to Council Grove and sent an accomplice to Baker's house claiming to be a traveler seeking supplies. Baker and his brother-in-law brought the man to a store, where they were ambushed by the Anderson brothers. After a brief gunfight, Baker and his brother-in-law fled into the store's basement. The Andersons barricaded the door to the basement and set the store on fire, killing Baker and his brother-in-law. They also burnt Baker's home and stole two of his horses before returning to Missouri on the Santa Fe Trail. William and Jim Anderson soon formed a gang with a man named Bill Reed; in February 1863, the Lexington Weekly Union recorded that Reed was the leader of the gang. William Quantrill, a Confederate guerrilla leader, later claimed to have encountered Reed's company in July and rebuked them for robbing Confederate sympathizers; in their biography of Anderson, Albert Castel and Tom Goodrich speculate that this rebuke may have resulted in a deep resentment of Quantrill by Anderson. Anderson and his gang subsequently traveled east of Jackson County, Missouri, avoiding territory where Quantrill operated and continuing to support themselves by robbery. They also attacked Union soldiers, killing seven by early 1863. ## Quantrill's Raiders Missouri had a large Union presence throughout the Civil War, but was also inhabited by many civilians whose sympathies lay with the Confederacy. From July 1861 until the end of the war, the state suffered up to 25,000 deaths from guerrilla warfare, more than any other state. Confederate General Sterling Price failed to gain control of Missouri in his 1861 offensive and retreated into Arkansas, leaving only partisan rangers and local guerrillas known as "bushwhackers" to challenge Union dominance. Quantrill was at the time the most prominent guerrilla leader in the Kansas–Missouri area. In early 1863, William and Jim Anderson traveled to Jackson County, Missouri, to join him. William Anderson was initially given a chilly reception from other raiders, who perceived him to be brash and overconfident. In May 1863, Anderson joined members of Quantrill's Raiders on a foray near Council Grove, Kansas, in which they robbed a store 15 miles (24 km) west of the town. After the robbery, the group was intercepted by a United States Marshal accompanied by a large posse, about 150 miles (240 km) from the Kansas–Missouri border. In the resulting skirmish, several raiders were captured or killed and the rest of the guerrillas, including Anderson, split into small groups to return to Missouri. Castel and Goodrich speculated that this raid may have given Quantrill the idea of launching an attack deep in Kansas, as it demonstrated that the state's border was poorly defended and that guerrillas could travel deep into the state's interior before Union forces were alerted. In early summer 1863, Anderson was made a lieutenant, serving in a unit led by George M. Todd. In June and July, Anderson took part in several raids that killed Union soldiers, in Westport, Kansas City and Lafayette County, Missouri. The first reference to Anderson in Official Records of the American Civil War concerns his activities at this time, describing him as the captain of a band of guerrillas. He commanded 30–40 men, one of whom was Archie Clement, an 18-year-old with a predilection for torture and mutilation who was loyal only to Anderson. By late July, Anderson led groups of guerrillas on raids and was often pursued by Union volunteer cavalry. 
Anderson was under Quantrill's command, but independently organized some attacks. Quantrill's Raiders had an extensive support network in Missouri that provided them with numerous hiding places. Biographer Larry Wood claimed that Anderson's sisters aided the guerrillas by gathering information inside Union-controlled territory. In August 1863, however, Union General Thomas Ewing, Jr. attempted to thwart the guerrillas by arresting their female relatives, and Anderson's sisters were confined in a three-story building on Grand Avenue in Kansas City with a number of other girls. While they were confined, the building collapsed, killing one of Anderson's sisters. In the aftermath, rumors that the building had been intentionally sabotaged by Union soldiers spread quickly; Anderson was convinced it had been a deliberate act. Biographer Larry Wood wrote that Anderson's motivation shifted after the death of his sister, arguing that killing then became his focus, and an enjoyable act. Castel and Goodrich maintain that by then killing had become more than a means to an end for Anderson: it became an end in itself. ### Lawrence Massacre Although Quantrill had considered the idea of a raid on the pro-Union stronghold that was the town of Lawrence, Kansas before the building collapsed in Kansas City, the deaths convinced the guerrillas to make a bold strike. Quantrill attained near-unanimous consent to travel 40 miles (64 km) into Union territory to strike Lawrence. The guerrillas gathered at the Blackwater River in Johnson County, Missouri. Anderson was placed in charge of 40 men, of which he was perhaps the angriest and most motivated—his fellow guerrillas considered him one of the deadliest fighters there. On August 19, the group, which proved to be the most guerrillas under one commander in the war, began the trip to Lawrence. En route, some guerrillas robbed a Union supporter, but Anderson knew the man and reimbursed him. Arriving in Lawrence on August 21, the guerrillas immediately killed a number of Union Army recruits and one of Anderson's men took their flag. The Provost Marshal of Kansas, a Union captain who commanded military police, surrendered to the guerrillas and Anderson took his uniform (guerrillas often wore uniforms stolen from Union soldiers). They proceeded to pillage and burn many buildings, killing almost every man they found, but taking care not to shoot women. Anderson personally killed 14 people. Although some men begged him to spare them, he persisted, only relenting when a woman pleaded with him not to torch her house. The guerrillas under Anderson's command, notably including Archie Clement and Frank James, killed more than any of the other group. They left town at 9:00 am after a company of Union soldiers approached the town. The raiding party was pursued by Union forces but eventually managed to break contact with the soldiers and scatter into the Missouri woods. After a dead raider was scalped by a Union-allied Lenape Indian during the pursuit, one guerrilla leader pledged to adopt the practice of scalping. ## Texas Four days after the Lawrence Massacre, on August 25, 1863, General Ewing retaliated against the Confederate guerrillas by issuing General Order No. 11, an evacuation order that evicted almost 20,000 people from four counties in rural western Missouri and burned many of their homes. The order was intended to undermine the guerrillas' support network in Missouri. 
On October 2, a group of 450 guerrillas under Quantrill's leadership met at Blackwater River in Jackson County and left for Texas. They departed earlier in the year than they had planned, owing to increased Union pressure. En route, they entered Baxter Springs, Kansas, the site of Fort Blair. They attacked the fort on October 6, but the 90 Union troops there quickly took refuge inside, suffering minimal losses. Shortly after the initial assault, a larger group of Union troops approached Fort Blair, unaware the fort had been attacked and that the men they saw outside the fort dressed in Union uniforms were actually disguised guerrillas. The guerrillas charged the Union forces, killing about 100. Anderson and his men were in the rear of the charge, but gathered a large amount of plunder from the dead soldiers, irritating some guerrillas from the front line of the charge. Not satisfied with the number killed, Anderson and Todd wished to attack the fort again, but Quantrill considered another attack too risky. He angered Anderson by ordering his forces to withdraw. On October 12, Quantrill and his men met General Samuel Cooper at the Canadian River and proceeded to Mineral Springs, Texas, to rest for the winter. During the winter, Anderson married Bush Smith, a woman from Sherman, Texas. Anderson ignored Quantrill's request to wait until after the war and a dispute erupted, which resulted in Anderson separating his men from Quantrill's band. The tension between the two groups markedly increased—some feared open warfare would result—but by the time of the wedding, relations had improved. In March 1864, at the behest of General Sterling Price, Quantrill reassembled his men, sending most of them into active duty with the regular Confederate Army. He retained 84 men and reunited with Anderson. Quantrill appointed him a first lieutenant, subordinate only to himself and to Todd. A short time later, one of Anderson's men was accused of stealing from one of Quantrill's men. Quantrill expelled him and warned him not to come back, and the man was fatally shot by some of Quantrill's men when he attempted to return. It is likely that this incident angered Anderson, who then took 20 men to visit the town of Sherman. They told General Cooper that Quantrill was responsible for the death of a Confederate officer; the general had Quantrill arrested. Sutherland described Anderson's betrayal of Quantrill as a "Judas" turn. Quantrill was taken into custody but soon escaped. Anderson was told to recapture him and gave chase, but he was unable to locate his former commander and stopped at a creek. There, his men briefly engaged a group of guerrillas loyal to Quantrill, but no one was injured in the confrontation. Upon returning to the Confederate leadership, Anderson was commissioned as a captain by General Price. ## Return to Missouri Anderson and his men rested in Texas for several months before returning to Missouri. Although he learned that Union General Egbert B. Brown had devoted significant attention to the border area, Anderson led raids in Cooper County and Johnson County, Missouri, robbing local residents. On June 12, 1864, Anderson and 50 of his men engaged 15 members of the Missouri State Militia, killing and robbing 12. After the attack, one of Anderson's guerrillas scalped a dead militiaman. The next day, in southeast Jackson County, Anderson's group ambushed a wagon train carrying members of the Union 1st Northeast Missouri Cavalry, killing nine. 
The attacks prompted the Kansas City Daily Journal of Commerce to declare that rebels had taken over the area. Anderson and his men dressed as Union soldiers, wearing uniforms taken from those they killed. In response, Union militias developed hand signals to verify that approaching men in Union uniforms were not guerrillas. The guerrillas, however, quickly learned the signals, and local citizens became wary of Union troops, fearing that they were disguised guerrillas. On July 6, a Confederate sympathizer brought Anderson newspapers containing articles about him. Anderson was upset by the critical tone of the coverage and sent letters to the publications. In the letters, Anderson took an arrogant and threatening yet playful tone, boasting of his attacks. He protested the execution of guerrillas and their sympathizers, and threatened to attack Lexington, Missouri. He concluded the letters by describing himself as the commander of "Kansas First Guerrillas" and requesting that local newspapers publish his replies. The letters were given to Union generals and were not published for 20 years. In early July, Anderson's group robbed and killed several Union sympathizers in Carroll and Randolph counties. On July 15, Anderson and his men entered Huntsville, Missouri, and occupied the town's business district. Anderson killed one hotel guest whom he suspected of being a U.S. Marshal, but spoke amicably with an acquaintance he found there. Anderson's men robbed the town's depository of about \$40,000, although Anderson returned some money to the friend he had met at the hotel.

### Growing infamy

In June 1864, George M. Todd usurped Quantrill's leadership of their group and forced him to leave the area. Todd rested his men in July to allow them to prepare for a Confederate invasion of Missouri. As Quantrill and Todd became less active, "Bloody Bill" Anderson emerged as the best-known, and most feared, Confederate guerrilla in Missouri. By August, the St. Joseph Herald, a Missouri newspaper, was describing him as "the Devil". As Anderson's profile increased, he was able to recruit more guerrillas. Anderson was selective, turning away all but the fiercest applicants, as he sought fighters similar to himself. His fearsome reputation gave a fillip to his recruiting efforts. Jesse James and his brother Frank were among the Missourians who joined Anderson; both of them later became notorious outlaws. General Clinton B. Fisk ordered his men to find and kill Anderson, but they were thwarted by Anderson's support network and his forces' superior training and arms. Many militia members had been conscripted and lacked the guerrillas' boldness and resolve. In 1863, most Union troops left Missouri and only four regiments remained there. These regiments were composed of troops from out of state, who sometimes mistreated local residents, further motivating the guerrillas and their supporters. The Union militias sometimes rode slower horses and may have been intimidated by Anderson's reputation. On July 23, 1864, Anderson led 65 men to Renick, Missouri, robbing stores and tearing down telegraph wires on the way. They had hoped to attack a train, but its conductor learned of their presence and turned back before reaching the town. The guerrillas then attacked Allen, Missouri. At least 40 members of the 17th Illinois Cavalry and the Missouri State Militia were in town and took shelter in a fort. 
The guerrillas were only able to shoot the Union horses before reinforcements arrived; three of Anderson's men were killed in the confrontation. In late July, the Union military sent a force of 100 well-equipped soldiers and 650 other men after Anderson. On July 30, Anderson and his men kidnapped the elderly father of the local Union militia's commanding officer. They tortured him until he was near death and sent word to the man's son in an unsuccessful attempt to lure him into an ambush, before releasing the father with instructions to spread word of his mistreatment. On August 1, while searching for militia members, Anderson and some of his men stopped at a house full of women and requested food. While they rested at the house, a group of local men attacked. The guerrillas quickly forced the attackers to flee, and Anderson shot and injured one woman as she fled the house. This action angered his men, who saw themselves as the protectors of women, but Anderson dismissed their concerns, saying such things were inevitable. They chased the men who had attacked them, killing one and mutilating his body. By August 1864, they were regularly scalping the men they killed. In early August, Anderson and his men traveled to Clay County. Around that time, he received further media coverage: the St. Joseph Morning Herald deemed him a "heartless scoundrel", publishing an account of his torture of a captured Union soldier. On August 10, while traveling through Clay County, Anderson and his men engaged 25 militia members, killing five of them and forcing the rest to flee. After hearing of the engagement, General Fisk commanded a colonel to lead a party with the sole aim of killing Anderson. ### Missouri River and Fayette On August 13, Anderson and his men traveled through Ray County, Missouri, to the Missouri River, where they engaged Union militia. Although they forced the Union soldiers to flee, Anderson and Jesse James were injured in the encounter and the guerrillas retired to Boone County to rest. On August 27, Union soldiers killed at least three of Anderson's men in an engagement near Rocheport. The next day, the 4th Missouri Volunteer Cavalry pursued them, but Anderson launched an ambush that killed seven Union soldiers. Anderson's men mutilated the bodies, earning the guerrillas the description of "incarnate fiends" from the Columbia Missouri Statesman. On August 30, Anderson and his men attacked a steamboat on the Missouri River, killing the captain and gaining control of the boat. They used it to attack other boats, bringing river traffic to a virtual halt. In mid-September, Union soldiers ambushed two of Anderson's parties traveling through Howard County, killing five men in one day. They found the guerrillas' horses decorated with the scalps of Union soldiers. A short time later, another six of Anderson's men were ambushed and killed by Union troops; after learning of these events, Anderson was outraged and left the area to seek revenge. Anderson met Todd and Quantrill on September 24, 1864; although they had clashed in the past, they agreed to work together again. Anderson suggested that they attack Fayette, Missouri, targeting the 9th Missouri Cavalry, which was based at the town. Quantrill disliked the idea because the town was fortified, but Anderson and Todd prevailed. Clad in Union uniforms, the guerrillas generated little suspicion as they approached the town, even though it had received warning of nearby guerrillas. 
However, a guerrilla fired his weapon before they reached the town, and the cavalry garrisoned in the town quickly withdrew into their fort while civilians hid. Anderson and Todd launched an unsuccessful attack against the fort, leading charge after futile charge without being hit themselves. The defeat resulted in the deaths of five guerrillas but only two Union soldiers, further maddening Anderson. On September 26, Anderson and his men reached Monroe County, Missouri, and traveled towards Paris, but learned of other nearby guerrillas and rendezvoused with them near Audrain County. Anderson and his men camped with at least 300 men, including Todd. Although a large group of guerrillas was assembled, their leaders felt there were no promising targets to attack because all of the large towns nearby were heavily guarded.

## Raid on Centralia

On the morning of September 27, 1864, Anderson left his camp with about 75 men to scout for Union forces. They soon arrived at the small town of Centralia and proceeded to loot it, robbing people and searching the town for valuables. They found a large supply of whiskey and all began drinking. Anderson retreated into the lobby of the town hotel to drink and rest. A stagecoach soon arrived, and Anderson's men robbed the passengers, including Congressman James S. Rollins and a plainclothes sheriff. The two were prominent Unionists and hid their identities from the guerrillas. As the guerrillas robbed the stagecoach passengers, a train arrived. The guerrillas blocked the railroad, forcing the train to stop. Anderson's men quickly took control of the train, which included 23 off-duty, unarmed Union soldiers as passengers. This was the first capture of a Union passenger train in the war. Anderson ordered his men not to harass the women on the train, but the guerrillas robbed all of the men, finding over \$9,000 and taking the soldiers' uniforms. Anderson forced the captured Union soldiers to form a line and announced that he would keep one for a prisoner exchange but would execute the rest. He addressed the prisoners, castigating them for the treatment of guerrillas by Union troops. After selecting a sergeant for a potential prisoner swap, Anderson's men shot the rest. Anderson gave the civilian hostages permission to leave but warned them not to put out fires or move bodies. Although he was alerted to the congressman's presence in the town, he opted not to search for him. The guerrillas set the passenger train on fire and derailed an approaching freight train. Anderson's band then rode back to their camp, taking a large amount of looted goods.

### Battle with Union soldiers

Anderson arrived at the guerrilla camp and described the day's events, the brutality of which unsettled Todd. By mid-afternoon, the 39th Missouri Volunteer Infantry had arrived in Centralia. From the town, they saw a group of about 120 guerrillas and pursued them. The guerrillas heard that the Union troops were approaching, and Anderson sent a party to set an ambush. They drew the Union troops to the top of a hill; a group of guerrillas led by Anderson had been stationed at the bottom and other guerrillas hid nearby. Anderson then led a charge up the hill. Although five guerrillas were killed by the first volley of Union fire, the Union soldiers were quickly overwhelmed by the well-armed guerrillas, and those who fled were pursued. One Union officer reached Centralia and gave word of the ambush, allowing a few Union soldiers who had remained there to escape. However, most were hunted down and killed. 
Anderson's men mutilated the bodies of the dead soldiers and tortured some survivors. By the end of the day, Anderson's men had killed 22 soldiers from the train and 125 soldiers in the ensuing battle in one of the most decisive guerrilla victories of the entire war. It was Anderson's greatest victory, surpassing Lawrence and Baxter Springs in brutality and the number of casualties. The attack led to a near-complete halt in rail traffic in the area and a dramatic increase in Union rail security. Anderson achieved the same notoriety Quantrill had previously enjoyed, and he began to refer to himself as "Colonel Anderson", partly in an effort to supplant Quantrill. Sutherland saw the massacre as the last battle in the worst phase of the war in Missouri, and Castel and Goodrich described the slaughter as the Civil War's "epitome of savagery". However, Frank James, who participated in the attack, later defended the guerrillas' actions, arguing that the federal troops were marching under a black flag, indicating that they intended to show no mercy. ### Aftermath Anderson left the Centralia area on September 27, pursued for the first time by Union forces equipped with artillery. Anderson evaded the pursuit, leading his men into ravines the Union troops would not enter for fear of ambush. In the aftermath of the massacre, Union soldiers committed several revenge killings of Confederate-sympathizing civilians. They burned Rocheport to the ground on October 2; the town was under close scrutiny by Union forces, owing to the number of Confederate sympathizers there, but General Fisk maintained that the fire was accidental. Anderson watched the fire from nearby bluffs. Anderson visited Confederate sympathizers as he traveled, some of whom viewed him as a hero for fighting the Union, whom they deeply hated. Many of Anderson's men also despised the Union, and he was adept at tapping into this emotion. The Union soldier held captive at Centralia was impressed with the control Anderson exercised over his men. Although many of them wished to execute this Union hostage, Anderson refused to allow it. On October 6, Anderson and his men began travelling to meet General Price in Boonville, Missouri; they arrived and met the general on October 11. Price was disgusted that Anderson used scalps to decorate his horse, and would not speak with him until he removed them. He was, however, impressed by the effectiveness of Anderson's attacks. Anderson presented him with a gift of fine Union pistols, likely captured at Centralia. Price instructed Anderson to travel to the Missouri railroad and disrupt rail traffic, making Anderson a de facto Confederate captain. Anderson traveled 70 miles (110 km) east with 80 men to New Florence, Missouri. The group then traveled west, disregarding the mission assigned by General Price in favor of looting. Anderson reached a Confederate Army camp; although he hoped to kill some injured Union prisoners there, he was prevented from doing so by camp doctors. After Confederate forces under General Joseph O. Shelby conquered Glasgow, Anderson traveled to the city to loot. He visited the house of a well-known Union sympathizer, the wealthiest resident of the town, brutally beat him, and raped his 12- or 13-year-old black servant. Anderson indicated that he was particularly angry that the man had freed his slaves, then trampled him with a specially trained horse. Local residents gathered \$5,000, which they gave to Anderson; he then released the man, who died of his injuries in 1866. 
Anderson killed several other Union loyalists and some of his men returned to the wealthy resident's house to rape more of his female servants. He left the area with 150 men. ## Death Union military leaders assigned Lieutenant Colonel Samuel P. Cox to kill Anderson, providing him with a group of experienced soldiers. Soon after Anderson left Glasgow, a local woman saw him and told Cox of his presence. On October 26, 1864, he pursued Anderson's group with 150 men and engaged them in a battle called the Skirmish at Albany, Missouri. Anderson and his men charged the Union forces, killing five or six of them, but turned back under heavy fire. Only Anderson and one other man, the son of a Confederate general, continued to charge after the others had retreated. Anderson was hit by a bullet behind an ear, likely killing him instantly. Four other guerrillas were killed in the attack. The victory made a hero of Cox and led to his promotion. Union soldiers identified Anderson by a letter found in his pocket and paraded his body through the streets of Richmond, Missouri. The corpse was photographed and displayed at a local courthouse for public viewing, along with Anderson's possessions. Union soldiers claimed that Anderson was found with a string that had 53 knots, symbolizing each person he had killed. Union soldiers buried Anderson's body in a field near Richmond in a fairly well-built coffin. Some of them cut off one of his fingers to steal a ring. Flowers were placed at his grave, to the chagrin of Union soldiers. In 1908, Cole Younger, a former guerrilla who served under Quantrill, reburied Anderson's body in the Old Pioneer Cemetery in Richmond, Missouri. In 1967, a memorial stone was placed at the grave. Archie Clement led the guerrillas after Anderson's death, but the group splintered by mid-November. Most Confederate guerrillas had lost heart by then, owing to a cold winter and the simultaneous failure of General Price's 1864 invasion of Missouri, which ensured the state would remain securely under Union control for the rest of the war. As the Confederacy collapsed, most of Anderson's men joined Quantrill's forces or traveled to Texas. Jim Anderson moved to Sherman, Texas, with his two sisters. ## Legacy After the war, information about Anderson initially spread through memoirs of Civil War combatants and works by amateur historians. He was later discussed in biographies of Quantrill, which typically cast Anderson as an inveterate murderer. Three biographies of Anderson were written after 1975. Asa Earl Carter's novel The Rebel Outlaw: Josey Wales (1972) features Anderson as a main character. In 1976, the book was adapted into a film, The Outlaw Josey Wales, which portrays a man who joins Anderson's gang after his wife is killed by Union-backed raiders. James Carlos Blake's novel Wildwood Boys (2000) is a fictional biography of Anderson. He also appears as a character in several films about Jesse James. Historians have been mixed in their appraisal of Anderson. Wood describes him as the "bloodiest man in America's deadliest war" and characterizes him as the clearest example of the war's "dehumanizing influence". Castel and Goodrich view Anderson as one of the war's most savage and bitter combatants, but they also argue that the war made savages of many others. According to journalist T.J. Stiles, Anderson was not necessarily a "sadistic fiend", but illustrated how young men became part of a "culture of atrocity" during the war. 
He maintains that Anderson's acts were seen as particularly shocking in part because his cruelty was directed towards white Americans of equivalent social standing, rather than targets deemed acceptable by American society, such as Native Americans or foreigners. In a study of 19th-century warfare, historian James Reid posited that Anderson suffered from delusional paranoia, which exacerbated his aggressive, sadistic personality. He sees Anderson as obsessed with, and greatly enjoying, the ability to inflict fear and suffering on his victims, and suggests he suffered from the most severe type of sadistic personality disorder. Reid draws a parallel between the bashi-bazouks of the Ottoman Army and Anderson's guerrillas, arguing that they behaved similarly. Anderson is loosely portrayed by Jim Caviezel as "Black John Ambrose" in the 1999 Ang Lee film Ride with the Devil.

## See also

- William Quantrill
- George M. Todd
- Partisan Ranger Act
7,027,786
Live and Let Die (novel)
1,165,505,047
1954 James Bond novel by Ian Fleming
[ "1954 British novels", "British novels adapted into films", "For Your Eyes Only (film)", "James Bond books", "Jonathan Cape books", "Licence to Kill", "Live and Let Die (film)", "Novels by Ian Fleming", "Novels set in Florida", "Novels set in Jamaica", "Underwater adventure novels" ]
Live and Let Die is the second novel in Ian Fleming's James Bond series of stories. Set in London, the United States and Jamaica, it was first published in the UK by Jonathan Cape on 5 April 1954. Fleming wrote the novel at his Goldeneye estate in Jamaica before his first book, Casino Royale, was published; much of the background came from Fleming's travel in the US and knowledge of Jamaica. The story centres on Bond's pursuit of "Mr Big", a criminal who has links to the American criminal network, the world of voodoo and SMERSH—an arm of the Soviet secret service—all of which are threats to the First World. Bond becomes involved in the US through Mr Big's smuggling of 17th-century gold coins from British territories in the Caribbean. The novel deals with the themes of the ongoing East–West struggle of the Cold War, including British and American relations, Britain's position in the world, race relations, and the struggle between good and evil. As with Casino Royale, Live and Let Die was broadly well received by the critics. The initial print run of 7,500 copies quickly sold out and a second print run was ordered within the year. US sales, when the novel was released there a year later, were much slower. Following a comic strip adaptation in 1958–59 by John McLusky in the Daily Express, the novel was adapted in 1973 as the eighth film in the Eon Productions Bond series and the first to star Roger Moore as Bond. Major plot elements from the novel were also incorporated into the Bond films For Your Eyes Only in 1981 and Licence to Kill in 1989. ## Plot The British Secret Service agent James Bond is sent by his superior, M, to New York City to investigate "Mr Big", real name Buonaparte Ignace Gallia. Bond's target is an agent of the Soviet counterintelligence organisation SMERSH, and an underworld voodoo leader who is suspected of selling 17th-century gold coins to finance Soviet spy operations in America. These gold coins have been turning up in the Harlem section of New York City and in Florida and are suspected of being part of a treasure that was buried in Jamaica by the pirate Henry Morgan. In New York, Bond meets up with his counterpart in the CIA, Felix Leiter. The two visit some of Mr Big's nightclubs in Harlem, but are captured. Bond is interrogated by Mr Big, who uses his fortune-telling employee, Solitaire (so named because she excludes men from her life), to determine if Bond is telling the truth. Solitaire lies to Mr Big, supporting Bond's cover story. Mr Big decides to release Bond and Leiter, and has one of Bond's fingers broken. On leaving, Bond kills several of Mr Big's men; Leiter is released with minimal physical harm by a gang member, sympathetic because of a shared appreciation of jazz. Solitaire later leaves Mr Big and contacts Bond; the couple travel by train to St. Petersburg, Florida, where they meet Leiter. While Bond and Leiter are scouting one of Mr Big's warehouses used for storing exotic fish, Solitaire is kidnapped by Mr Big's minions. Leiter later returns to the warehouse by himself, but is either captured and fed to a shark or tricked into standing on a trap door over the shark tank through which he falls; he survives, but loses an arm and a leg. Bond finds him in their safe house with a note pinned to his chest "He disagreed with something that ate him". Bond then investigates the warehouse himself and discovers that Mr Big is smuggling gold coins by hiding them in the bottom of fish tanks holding poisonous tropical fish, which he is bringing into the US. 
He is attacked in the warehouse by "the Robber", Mr Big's gunman, and in the resultant gunfight Bond outwits the Robber and causes him to fall into the shark tank. Bond continues his mission in Jamaica, where he meets a local fisherman, Quarrel, and John Strangways, the head of the local MI6 station. Quarrel gives Bond training in scuba diving in the local waters. Bond swims through shark- and barracuda-infested waters to Mr Big's island and manages to plant a limpet mine on the hull of his yacht before being captured once again by Mr Big. Bond is reunited with Solitaire; the following morning Mr Big ties the couple to a line behind his yacht and plans to drag them over the shallow coral reef and into deeper water so that the sharks and barracuda that he attracts to the area with regular feedings will eat them. Bond and Solitaire are saved when the limpet mine explodes seconds before they are dragged over the reef: though temporarily stunned and injured on the coral, they are protected from the blast by the reef, and Bond watches as Mr Big, who survived the explosion, is killed by the sharks and barracuda. Quarrel then rescues the couple.

## Background

Between January and March 1952, the journalist Ian Fleming wrote Casino Royale, his first novel, at his Goldeneye estate in Jamaica. Fleming conducted research for Live and Let Die and completed the novel before Casino Royale was published in April 1953, a year before his own second book appeared. Fleming and his wife Ann flew to New York before taking the Silver Meteor train to St. Petersburg in Florida and then flying on to Jamaica. In doing so, they followed the same train route Fleming had taken with his friend Ivar Bryce in July 1943, when Fleming had first visited the island. Once Fleming and his wife arrived at Goldeneye, he started work on the second Bond novel. In May 1963 he wrote an article for Books and Bookmen magazine describing his approach to writing, in which he said: "I write for about three hours in the morning ... and I do another hour's work between six and seven in the evening. I never correct anything and I never go back to see what I have written ... By following my formula, you write 2,000 words a day." As he had done with Casino Royale, Fleming showed the manuscript to his friend, the writer William Plomer, who reacted favourably to the story, telling Fleming that "the new book held this reader like a limpet mine & the denouement was shattering". On a trip to the US in May 1953, Fleming used his five-day travelling time on RMS Queen Elizabeth to correct the proofs of the novel. Fleming intended the book to have a more serious tone than his debut novel, and he initially considered making the story a meditation on the nature of evil. The novel's original title, The Undertaker's Wind, reflects this; the undertaker's wind, which was to act as a metaphor for the story, describes one of Jamaica's winds that "blows all the bad air out of the island". The literary critic Daniel Ferreras Savoye considers the titles of Fleming's novels to have importance individually and collectively; Live and Let Die, he writes, "turns an expression of collective wisdom, in this case fraternal and positive, into its exact opposite, suggesting a materialistic epistemological outlook, individualistic and lucid". 
This is in keeping with the storyline in that Bond brings order without which "the world would quickly turn into the dystopian, barbarian reality feared by [Thomas] Hobbes and celebrated by [Marquis] de Sade." Although Fleming provided no dates within his novels, two writers have identified different timelines based on events and situations within the novel series as a whole. John Griswold and Henry Chancellor—both of whom have written books on behalf of Ian Fleming Publications—put the events of Live and Let Die in 1952; Griswold is more precise, and considers the story to have taken place in January and February that year.

## Development

### Plot inspirations

Much of the novel draws from Fleming's personal experiences: the opening of the book, with Bond's arrival at New York's Idlewild Airport, was inspired by Fleming's own journeys in 1941 and 1953, and the warehouse at which Leiter is attacked by a shark was based on a similar building Fleming and his wife had visited in St. Petersburg, Florida, on their recent journey. He also used his experiences on his two journeys on the Silver Meteor as background for the route taken by Bond and Solitaire. Fleming used the names of some of his friends in the story, including Ivar Bryce for Bond's alias, and Tommy Leiter for Felix Leiter; he borrowed Bryce's middle name, Felix, for Leiter's first name, and part of John Fox-Strangways's surname for the name of the MI6 station chief in Jamaica. Fleming also used the name of the local Jamaican rufous-throated solitaire bird as the name of the book's main female character. Fleming's experiences on his first scuba dive with Jacques Cousteau in 1953 provided much of the description of Bond's swim to Mr Big's boat. The concept of limpet-mining is possibly based on the wartime activities of the elite 10th Light Flotilla, a unit of Italian navy frogmen. Fleming also used, and extensively quoted, information about voodoo from his friend Patrick Leigh Fermor's 1950 book The Traveller's Tree, which had also been partly written at Goldeneye. Fleming had a long-held interest in pirates, from the novels he read as a child through to films such as Captain Blood (1935) with Errol Flynn, which he enjoyed watching. From his Goldeneye home on Jamaica's northern shore, Fleming had visited Port Royal on the south of the island, which was once the home port of Sir Henry Morgan, all of which stimulated Fleming's interest. For the background to Mr Big's treasure island, Fleming appropriated the details of Cabritta Island in Port Maria Bay, which was the true location of Morgan's hoard.

### Characters

Fleming builds the main character in Live and Let Die to make Bond come across as more human than in Casino Royale, becoming "a much warmer, more likeable man from the opening chapter", according to the novelist Raymond Benson, who between 1997 and 2002 wrote a series of Bond novels and short stories. Savoye sees the introduction of a vulnerable side to Bond, identifying the agent's tears towards the end of the story as evidence of this. Similarly, over the course of the book, the American character Leiter develops and also emerges as a more complete and human character, and his and Bond's friendship is evident in the story. Despite the relationship, Leiter is again subordinate to Bond. 
While in Casino Royale his role was to provide technical support and money to Bond, in Live and Let Die the character is secondary to Bond, and the only time he takes the initiative, he loses an arm and a leg, while Bond wins his own battle with the same opponent. Although Fleming had initially intended to kill Leiter off in the story, his American literary agent protested, and the character was saved. Quarrel was Fleming's ideal concept of a black person, and the character was based on his genuine liking for Jamaicans, whom he saw as "full of goodwill and cheerfulness and humour". The relationship between Bond and Quarrel was based on a mutual assumption of Bond's superiority. Fleming described the relationship as "that of a Scots laird with his head stalker; authority was unspoken and there was no room for servility". Fleming's villain was physically abnormal—as many of Bond's later adversaries were. Mr Big is described as being intellectually brilliant, with a "great football of a head, twice the normal size and very nearly round" and skin which was "grey-black, taut and shining like the face of a week-old corpse in the river". For Benson, "Mr Big is only an adequate villain", with little depth. According to the literary analyst LeRoy L. Panek, in his examination of 20th century British spy novels, Live and Let Die was a departure from the "gentleman crook" that appeared in much earlier literature, as the intellectual and organisational skills of Mr Big were emphasised, rather than the behavioural. Within Mr Big's organisation, Panek identifies Mr Big's henchmen as "merely incompetent gunsels" whom Bond can eliminate with relative ease. ## Style Benson analysed Fleming's writing style and identified what he described as the "Fleming Sweep": a stylistic point that sweeps the reader from one chapter to another using 'hooks' at the end of chapters to heighten tension and pull the reader into the next: Benson felt that the "Fleming Sweep never achieves a more engaging rhythm and flow" than in Live and Let Die. The writer and academic Kingsley Amis—who also later wrote a Bond novel—disagrees, and thinks that the story has "less narrative sweep than most". Fleming's biographer, Matthew Parker, considers the novel possibly Fleming's best, as it has a tight plot and is well paced throughout; he thinks the book "establishes the winning formula" for the stories that follow. Savoye, comparing the structure of Live and Let Die with Casino Royale, believes that the two books have open narratives which allow Fleming to continue with further books in the series. Savoye finds differences in the structure of the endings, with Live and Let Die's promise of future sexual encounters between Bond and Solitaire to be more credible than Casino Royale's ending, in which Bond vows to battle a super-criminal organisation. Within the novel Fleming uses elements that are "pure Gothic", according to the essayist Umberto Eco. This includes the description of Mr Big's death by shark attack, in which Bond watches as "Half of The Big Man's left arm came out of the water. It had no hand, no wrist, no wrist watch." Eco considers that this is "not just an example of macabre sarcasm; it is an emphasis on the essential by the inessential, typical of the école du regard." Benson considers that Fleming's experiences as a journalist, and his eye for detail, add to the verisimilitude displayed in the novel. 
## Themes Live and Let Die, like other Bond novels, reflects the changing roles of Britain and America during the 1950s and the perceived threat from the Soviet Union to both nations. Unlike Casino Royale, where Cold War politics revolve around British-Soviet tensions, in Live and Let Die Bond arrives in Harlem to protect America from Soviet agents working through the Black Power movement. In the novel, America was the Soviet objective and Bond comments "that New York 'must be the fattest atomic-bomb target on the whole face of the world'." Live and Let Die also gave Fleming a chance to outline his views on what he saw as the increasing American colonisation of Jamaica—a subject that concerned both him and his neighbour Noël Coward. While the American Mr Big was unusual in appropriating an entire island, the rising number of American tourists to the islands was seen by Fleming as a threat to Jamaica; he wrote in the novel that Bond was "glad to be on his way to the soft green flanks of Jamaica and to leave behind the great hard continent of Eldollarado." Bond's briefing also provides an opportunity for Fleming to offer his views on race through his characters. "M and Bond ... offer their views on the ethnicity of crime, views that reflected ignorance, the inherited racialist prejudices of London clubland", according to the cultural historian Jeremy Black. Black also points out that "the frequency of his references and his willingness to offer racial stereotypes [was] typical of many writers of his age". The writer Louise Welsh observes that "Live and Let Die taps into the paranoia that some sectors of white society were feeling" as the civil rights movements challenged prejudice and inequality. That insecurity manifested itself in opinions shared by Fleming with the intelligence industry, that the American National Association for the Advancement of Colored People was a communist front. The communist threat was brought home to Jamaica with the 1952 arrest of the Jamaican politician Alexander Bustamante by the American authorities while he was on official business in Puerto Rico, despite the fact that he was avowedly anti-communist. During the course of the year local Jamaican political parties had also expelled members for being communists. Friendship is another prominent element of Live and Let Die, where the importance of male friends and allies shows through in Bond's relationships with Leiter and Quarrel. The more complete character profiles in the novel clearly show the strong relationship between Bond and Leiter, and this provides a strengthened motive for Bond to chase Mr Big in revenge for the shark attack on Leiter. Live and Let Die continues the theme Fleming examined in Casino Royale, that of evil or, as Fleming's biographer, Andrew Lycett, describes it, "the banality of evil". Fleming uses Mr Big as the vehicle to voice opinions on evil, particularly when he tells Bond that "Mister Bond, I suffer from boredom. I am prey to what the early Christians called 'accidie', the deadly lethargy that envelops those who are sated." This allowed Fleming to build the Bond character as a counter to the accidie, in what the writer saw as a Manichaean struggle between good and evil. Benson considers evil as the main theme of the book, and highlights the discussion Bond has with René Mathis of the French Deuxième Bureau in Casino Royale, in which the Frenchman predicts Bond will seek out and kill the evil men of the world. 
## Publication and reception ### Publication history Live and Let Die was published in hardback by Jonathan Cape on 5 April 1954 and, as with Casino Royale, Fleming designed the cover, which again featured the title lettering prominently. It had an initial print run of 7,500 copies which sold out, and a reprint of 2,000 copies was soon undertaken; by the end of the first year, a total of over 9,000 copies had been sold. In May 1954 Live and Let Die was banned in Ireland by the Irish Censorship of Publications Board. Lycett observed that the ban helped the general publicity in other territories. In October 1957 Pan Books issued a paperback version which sold 50,000 copies in the first year. Live and Let Die was published in the US in January 1955 by Macmillan; there was only one major change in the book, which was that the title of the fifth chapter was changed from "Nigger Heaven" to "Seventh Avenue". Sales in the US were poor, with only 5,000 copies sold in the first year of publication. In 2023 Ian Fleming Publications—the company that administers all Fleming's literary works—had the Bond series edited as part of a sensitivity review to remove or reword some racial or ethnic descriptors. The rerelease of the series was for the 70th anniversary of Casino Royale, the first Bond novel. ### Critical reception Philip Day of The Sunday Times noted "How wincingly well Mr Fleming writes"; the reviewer for The Times thought that "[t]his is an ingenious affair, full of recondite knowledge and horrific spills and thrills—of slightly sadistic excitements also—though without the simple and bold design of its predecessor". Elizabeth L Sturch, writing in The Times Literary Supplement, observed that Fleming was "without doubt the most interesting recent recruit among thriller-writers" and that Live and Let Die "fully maintains the promise of ... Casino Royale." Tempering her praise of the book, Sturch thought that "Mr Fleming works often on the edge of flippancy, rather in the spirit of a highbrow", although overall she felt that the novel "contains passages which for sheer excitement have not been surpassed by any modern writer of this kind". The reviewer for The Daily Telegraph felt that "the book is continually exciting, whether it takes us into the heart of Harlem or describes an underwater swim in shark-infested waters; and it is more entertaining because Mr Fleming does not take it all too seriously himself". George Malcolm Thompson, writing in The Evening Standard, believed Live and Let Die to be "tense; ice-cold, sophisticated; Peter Cheyney for the carriage trade". Writing in The New York Times, Anthony Boucher—a critic described by Fleming's biographer, John Pearson, as "throughout an avid anti-Bond and an anti-Fleming man"—thought that the "high-spots are all effectively described ... but the narrative is loose and jerky". Boucher concluded that Live and Let Die was "a lurid meller contrived by mixing equal parts of Oppenheim and Spillane". In June 1955 Raymond Chandler was visiting the poet Stephen Spender in London when he was introduced to Fleming, who subsequently sent Chandler a copy of Live and Let Die. In response, Chandler wrote that Fleming was "probably the most forceful and driving writer of what I suppose still must be called thrillers in England". ## Adaptations Live and Let Die was adapted as a daily comic strip which was published in The Daily Express and syndicated around the world. The adaptation ran from 15 December 1958 to 28 March 1959. 
The adaptation was written by Henry Gammidge and illustrated by John McLusky, whose drawings of Bond had a resemblance to Sean Connery, the actor who portrayed Bond in Dr. No three years later. Before Live and Let Die had been published, the producer Alexander Korda had read a proof copy of the novel. He thought it was the most exciting story he had read for years, but was unsure whether it was suitable for a film. Nevertheless, he wanted to show the story to the directors David Lean and Carol Reed for their impressions, although nothing came of Korda's initial interest. In 1955, following the television broadcast of an adaptation of Fleming's earlier novel, Casino Royale, Warner Bros. expressed an interest in Live and Let Die, and offered \$500 for an option, against \$5,000 if the film was made. Fleming thought the terms insufficient and turned them down. Live and Let Die, a film based loosely on the novel, was released in 1973; it starred Roger Moore as Bond, and played on the cycle of blaxploitation films produced at the time. The film was directed by Guy Hamilton, produced by Albert R. Broccoli and Harry Saltzman, and is the eighth in the Eon Productions Bond series. Some scenes from the novel were depicted in later Bond films: Bond and Solitaire being dragged behind Mr Big's boat was used in For Your Eyes Only; Felix Leiter was fed to a shark in Licence to Kill, which also adapts Live and Let Die's shoot-out in the warehouse.
9,428
Ernest Hemingway
1,171,081,257
American author and journalist (1899–1961)
[ "1899 births", "1961 deaths", "1961 suicides", "20th-century American dramatists and playwrights", "20th-century American essayists", "20th-century American journalists", "20th-century American male writers", "20th-century American memoirists", "20th-century American non-fiction writers", "20th-century American novelists", "20th-century American poets", "20th-century American screenwriters", "20th-century American short story writers", "20th-century letter writers", "20th-century travel writers", "American Nobel laureates", "American anthologists", "American anti-fascists", "American expatriates in Canada", "American expatriates in Cuba", "American expatriates in France", "American expatriates in Italy", "American expatriates in Spain", "American expatriates in the British Empire", "American expatriates in the United Kingdom", "American fishers", "American hunters", "American letter writers", "American literary theorists", "American male dramatists and playwrights", "American male essayists", "American male journalists", "American male non-fiction writers", "American male novelists", "American male poets", "American male screenwriters", "American male short story writers", "American military personnel of World War I", "American people of the Spanish Civil War", "American psychological fiction writers", "American travel writers", "American war correspondents", "American war correspondents of World War II", "American war novelists", "Bancarella Prize winners", "Burials in Idaho", "Catholics from Idaho", "Converts to Roman Catholicism", "Ernest Hemingway", "French Resistance members", "Hemingway family", "History of Key West, Florida", "Journalists from Illinois", "Lost Generation writers", "Maritime writers", "Modernist writers", "Nobel laureates in Literature", "Novelists from Illinois", "People from Ketchum, Idaho", "Pulitzer Prize for Fiction winners", "Recipients of the Silver Medal of Military Valor", "Suicides by firearm in Idaho", "Survivors of aviation accidents or incidents", "Toronto Star people", "Writers about activism and social change", "Writers from Chicago", "Writers from Oak Park, Illinois", "Writers of historical fiction set in the modern age" ]
Ernest Miller Hemingway (/ˈɜːrnɪst ˈhɛmɪŋweɪ/; July 21, 1899 – July 2, 1961) was an American novelist, short-story writer, and journalist. His economical and understated style—which included his iceberg theory—had a strong influence on 20th-century fiction, while his adventurous lifestyle and public image brought him admiration from later generations. Hemingway produced most of his work between the mid-1920s and the mid-1950s, and he was awarded the 1954 Nobel Prize in Literature. He published seven novels, six short-story collections, and two nonfiction works. Three of his novels, four short-story collections, and three nonfiction works were published posthumously. Many of his works are considered classics of American literature. Hemingway was raised in Oak Park, Illinois. After high school, he was a reporter for a few months for The Kansas City Star before leaving for the Italian Front to enlist as an ambulance driver in World War I. In 1918, he was seriously wounded and returned home. His wartime experiences formed the basis for his novel A Farewell to Arms (1929). In 1921, he married Hadley Richardson, the first of four wives. They moved to Paris, where he worked as a foreign correspondent for the Toronto Star and fell under the influence of the modernist writers and artists of the 1920s' "Lost Generation" expatriate community. Hemingway's debut novel The Sun Also Rises was published in 1926. He divorced Richardson in 1927, and married Pauline Pfeiffer. They divorced after he returned from the Spanish Civil War (1936–1939), which he covered as a journalist and which was the basis for his novel For Whom the Bell Tolls (1940). Martha Gellhorn became his third wife in 1940. He and Gellhorn separated after he met Mary Welsh in London during World War II. Hemingway was present with Allied troops as a journalist at the Normandy landings and the liberation of Paris. He maintained permanent residences in Key West, Florida in the 1930s and in Cuba in the 1940s and 1950s. On a 1954 trip to Africa, he was seriously injured in two plane accidents on successive days, leaving him in pain and ill health for much of the rest of his life. In 1959, he bought a house in Ketchum, Idaho, where, in mid-1961, he died by suicide. ## Life and career Ernest Miller Hemingway was born on July 21, 1899, in Oak Park, Illinois, an affluent suburb just west of Chicago, to Clarence Edmonds Hemingway, a physician, and Grace Hall Hemingway, a musician. His parents were well-educated and well-respected in Oak Park, a conservative community about which resident Frank Lloyd Wright said, "So many churches for so many good people to go to." When Clarence and Grace Hemingway married in 1896, they lived with Grace's father, Ernest Miller Hall, after whom they named their first son, the second of their six children. His sister Marcelline preceded him in 1898, followed by Ursula in 1902, Madelaine in 1904, Carol in 1911, and Leicester in 1915. Grace followed the Victorian convention of not differentiating children's clothing by gender. With only a year separating the two, Ernest and Marcelline resembled one-another strongly. Grace wanted them to appear as twins, so in Ernest's first three years she kept his hair long and dressed both children in similarly frilly feminine clothing. 
Hemingway's mother, a well-known musician in the village, taught her son to play the cello despite his refusal to learn; though later in life he admitted the music lessons contributed to his writing style, evidenced for example in the "contrapuntal structure" of For Whom the Bell Tolls. As an adult Hemingway professed to hate his mother, although biographer Michael S. Reynolds points out that he shared similar energies and enthusiasms. Each summer the family traveled to Windemere on Walloon Lake, near Petoskey, Michigan. There young Ernest joined his father and learned to hunt, fish, and camp in the woods and lakes of Northern Michigan, early experiences that instilled a life-long passion for outdoor adventure and living in remote or isolated areas. Hemingway attended Oak Park and River Forest High School in Oak Park from 1913 until 1917. He was an accomplished athlete involved with a number of sports, including boxing, track and field, water polo, and football. He performed in the school orchestra for two years with his sister Marcelline, and received good grades in English classes. During his last two years at high school he edited the Trapeze and Tabula (the school's newspaper and yearbook), where he imitated the language of sportswriters and used the pen name Ring Lardner Jr.—a nod to Ring Lardner of the Chicago Tribune whose byline was "Line O'Type". Like Mark Twain, Stephen Crane, Theodore Dreiser, and Sinclair Lewis, Hemingway was a journalist before becoming a novelist. After leaving high school he went to work for The Kansas City Star as a cub reporter. Although he stayed there for only six months, he relied on the Star's style guide as a foundation for his writing, such as "Use short sentences. Use short first paragraphs. Use vigorous English. Be positive, not negative." ### World War I In December 1917, after being rejected by the U.S. Army for poor eyesight, Hemingway responded to an International Red Cross and Red Crescent Movement recruitment effort and signed on to be an ambulance driver in Italy. In May 1918, he sailed from New York, and arrived in Paris as the city was under bombardment from German artillery. That June he arrived at the Italian Front. On his first day in Milan, he was sent to the scene of a munitions factory explosion to join rescuers retrieving the shredded remains of female workers. He described the incident in his 1932 non-fiction book Death in the Afternoon: "I remember that after we searched quite thoroughly for the complete dead we collected fragments." A few days later, he was stationed at Fossalta di Piave. On July 8, he was seriously wounded by mortar fire, having just returned from the canteen bringing chocolate and cigarettes for the men at the front line. Despite his wounds, Hemingway assisted Italian soldiers to safety, for which he was decorated with the Italian War Merit Cross, the Croce al Merito di Guerra. He was still only 18 at the time. Hemingway later said of the incident: "When you go to war as a boy you have a great illusion of immortality. Other people get killed; not you ... Then when you are badly wounded the first time you lose that illusion and you know it can happen to you." He sustained severe shrapnel wounds to both legs, underwent an immediate operation at a distribution center, and spent five days at a field hospital before he was transferred for recuperation to the Red Cross hospital in Milan. 
He spent six months at the hospital, where he met and formed a strong friendship with "Chink" Dorman-Smith that lasted for decades and shared a room with future American foreign service officer, ambassador, and author Henry Serrano Villard. While recuperating he fell in love with Agnes von Kurowsky, a Red Cross nurse seven years his senior. When Hemingway returned to the United States in January 1919, he believed Agnes would join him within months and the two would marry. Instead, he received a letter in March with her announcement that she was engaged to an Italian officer. Biographer Jeffrey Meyers writes Agnes's rejection devastated and scarred the young man; in future relationships, Hemingway followed a pattern of abandoning a wife before she abandoned him. Hemingway returned home early in 1919 to a time of readjustment. Before the age of 20, he had gained from the war a maturity that was at odds with living at home without a job and with the need for recuperation. As Reynolds explains, "Hemingway could not really tell his parents what he thought when he saw his bloody knee." He was not able to tell them how scared he had been "in another country with surgeons who could not tell him in English if his leg was coming off or not." In September, he took a fishing and camping trip with high school friends to the back-country of Michigan's Upper Peninsula. The trip became the inspiration for his short story "Big Two-Hearted River", in which the semi-autobiographical character Nick Adams takes to the country to find solitude after returning from war. A family friend offered him a job in Toronto, and with nothing else to do, he accepted. Late that year he began as a freelancer and staff writer for the Toronto Star Weekly. He returned to Michigan the following June and then moved to Chicago in September 1920 to live with friends, while still filing stories for the Toronto Star. In Chicago, he worked as an associate editor of the monthly journal Cooperative Commonwealth, where he met novelist Sherwood Anderson. When St. Louis native Hadley Richardson came to Chicago to visit the sister of Hemingway's roommate, Hemingway became infatuated. He later claimed, "I knew she was the girl I was going to marry." Hadley, red-haired, with a "nurturing instinct", was eight years older than Hemingway. Despite the age difference, Hadley, who had grown up with an overprotective mother, seemed less mature than usual for a young woman her age. Bernice Kert, author of The Hemingway Women, claims Hadley was "evocative" of Agnes, but that Hadley had a childishness that Agnes lacked. The two corresponded for a few months and then decided to marry and travel to Europe. They wanted to visit Rome, but Sherwood Anderson convinced them to visit Paris instead, writing letters of introduction for the young couple. They were married on September 3, 1921; two months later Hemingway was hired as a foreign correspondent for the Toronto Star, and the couple left for Paris. Of Hemingway's marriage to Hadley, Meyers claims: "With Hadley, Hemingway achieved everything he had hoped for with Agnes: the love of a beautiful woman, a comfortable income, a life in Europe." ### Paris Carlos Baker, Hemingway's first biographer, believes that while Anderson suggested Paris because "the monetary exchange rate" made it an inexpensive place to live, more importantly it was where "the most interesting people in the world" lived. 
In Paris, Hemingway met American writer and art collector Gertrude Stein, Irish novelist James Joyce, American poet Ezra Pound (who "could help a young writer up the rungs of a career") and other writers. The Hemingway of the early Paris years was a "tall, handsome, muscular, broad-shouldered, brown-eyed, rosy-cheeked, square-jawed, soft-voiced young man." He and Hadley lived in a small walk-up at 74 rue du Cardinal Lemoine in the Latin Quarter, and he worked in a rented room in a nearby building. Stein, who was the bastion of modernism in Paris, became Hemingway's mentor and godmother to his son Jack; she introduced him to the expatriate artists and writers of the Montparnasse Quarter, whom she referred to as the "Lost Generation"—a term Hemingway popularized with the publication of The Sun Also Rises. A regular at Stein's salon, Hemingway met influential painters such as Pablo Picasso, Joan Miró, and Juan Gris. He eventually withdrew from Stein's influence, and their relationship deteriorated into a literary quarrel that spanned decades. While living in Paris in 1922, Hemingway befriended artist Henry Strater who painted two portraits of him. Ezra Pound met Hemingway by chance at Sylvia Beach's bookshop Shakespeare and Company in 1922. The two toured Italy in 1923 and lived on the same street in 1924. They forged a strong friendship, and in Hemingway, Pound recognized and fostered a young talent. Pound introduced Hemingway to James Joyce, with whom Hemingway frequently embarked on "alcoholic sprees". During his first 20 months in Paris, Hemingway filed 88 stories for the Toronto Star newspaper. He covered the Greco-Turkish War, where he witnessed the burning of Smyrna, and wrote travel pieces such as "Tuna Fishing in Spain" and "Trout Fishing All Across Europe: Spain Has the Best, Then Germany". Hemingway was devastated on learning that Hadley had lost a suitcase filled with his manuscripts at the Gare de Lyon as she was traveling to Geneva to meet him in December 1922. In the following September the couple returned to Toronto, where their son John Hadley Nicanor was born on October 10, 1923. During their absence, Hemingway's first book, Three Stories and Ten Poems, was published. Two of the stories it contained were all that remained after the loss of the suitcase, and the third had been written early the previous year in Italy. Within months a second volume, in our time (without capitals), was published. The small volume included six vignettes and a dozen stories Hemingway had written the previous summer during his first visit to Spain, where he discovered the thrill of the corrida. He missed Paris, considered Toronto boring, and wanted to return to the life of a writer, rather than live the life of a journalist. Hemingway, Hadley and their son (nicknamed Bumby) returned to Paris in January 1924 and moved into a new apartment on the rue Notre-Dame des Champs. Hemingway helped Ford Madox Ford edit The Transatlantic Review, which published works by Pound, John Dos Passos, Baroness Elsa von Freytag-Loringhoven, and Stein, as well as some of Hemingway's own early stories such as "Indian Camp". When In Our Time was published in 1925, the dust jacket bore comments from Ford. "Indian Camp" received considerable praise; Ford saw it as an important early story by a young writer, and critics in the United States praised Hemingway for reinvigorating the short story genre with his crisp style and use of declarative sentences. Six months earlier, Hemingway had met F. 
Scott Fitzgerald, and the pair formed a friendship of "admiration and hostility". Fitzgerald had published The Great Gatsby the same year: Hemingway read it, liked it, and decided his next work had to be a novel. With his wife Hadley, Hemingway first visited the Festival of San Fermín in Pamplona, Spain, in 1923, where he became fascinated by bullfighting. It is at this time that he began to be referred to as "Papa", even by much older friends. Hadley would much later recall that Hemingway had his own nicknames for everyone and that he often did things for his friends; she suggested that he liked to be looked up to. She did not remember precisely how the nickname came into being; however, it certainly stuck. The Hemingways returned to Pamplona in 1924 and a third time in June 1925; that year they brought with them a group of American and British expatriates: Hemingway's Michigan boyhood friend Bill Smith, Donald Ogden Stewart, Lady Duff Twysden (recently divorced), her lover Pat Guthrie, and Harold Loeb. A few days after the fiesta ended, on his birthday (July 21), he began to write the draft of what would become The Sun Also Rises, finishing eight weeks later. A few months later, in December 1925, the Hemingways left to spend the winter in Schruns, Austria, where Hemingway began revising the manuscript extensively. Pauline Pfeiffer joined them in January and against Hadley's advice, urged Hemingway to sign a contract with Scribner's. He left Austria for a quick trip to New York to meet with the publishers, and on his return, during a stop in Paris, began an affair with Pfeiffer, before returning to Schruns to finish the revisions in March. The manuscript arrived in New York in April; he corrected the final proof in Paris in August 1926, and Scribner's published the novel in October. The Sun Also Rises epitomized the post-war expatriate generation, received good reviews and is "recognized as Hemingway's greatest work". Hemingway himself later wrote to his editor Max Perkins that the "point of the book" was not so much about a generation being lost, but that "the earth abideth forever"; he believed the characters in The Sun Also Rises may have been "battered" but were not lost. Hemingway's marriage to Hadley deteriorated as he was working on The Sun Also Rises. In early 1926, Hadley became aware of his affair with Pfeiffer, who came to Pamplona with them that July. On their return to Paris, Hadley asked for a separation; in November she formally requested a divorce. They split their possessions while Hadley accepted Hemingway's offer of the proceeds from The Sun Also Rises. The couple were divorced in January 1927, and Hemingway married Pfeiffer in May. Pfeiffer, who was from a wealthy Catholic family in Arkansas, moved to Paris to work for Vogue magazine. Before their marriage, Hemingway converted to Catholicism. They honeymooned in Le Grau-du-Roi, where he contracted anthrax, and he planned his next collection of short stories, Men Without Women, which was published in October 1927, and included his boxing story "Fifty Grand". Cosmopolitan magazine editor-in-chief Ray Long praised "Fifty Grand", calling it, "one of the best short stories that ever came to my hands ... the best prize-fight story I ever read ... a remarkable piece of realism." By the end of the year Pauline, who was pregnant, wanted to move back to America. John Dos Passos recommended Key West, and they left Paris in March 1928. 
Hemingway suffered a severe injury in their Paris bathroom when he pulled a skylight down on his head thinking he was pulling on a toilet chain. This left him with a prominent forehead scar, which he carried for the rest of his life. When Hemingway was asked about the scar, he was reluctant to answer. After his departure from Paris, Hemingway "never again lived in a big city". ### Key West and the Caribbean Hemingway and Pauline traveled to Kansas City, Missouri, where their son Patrick was born on June 28, 1928. Pauline had a difficult delivery; Hemingway fictionalized a version of the event as a part of A Farewell to Arms. After Patrick's birth, Pauline and Hemingway traveled to Wyoming, Massachusetts, and New York. In the winter, he was in New York with Bumby, about to board a train to Florida, when he received a cable telling him that his father had killed himself. Hemingway was devastated, having earlier written to his father telling him not to worry about financial difficulties; the letter arrived minutes after the suicide. He realized how Hadley must have felt after her own father's suicide in 1903, and he commented, "I'll probably go the same way." Upon his return to Key West in December, Hemingway worked on the draft of A Farewell to Arms before leaving for France in January. He had finished it in August but delayed the revision. The serialization in Scribner's Magazine was scheduled to begin in May, but as late as April, Hemingway was still working on the ending, which he may have rewritten as many as seventeen times. The completed novel was published on September 27. Biographer James Mellow believes A Farewell to Arms established Hemingway's stature as a major American writer and displayed a level of complexity not apparent in The Sun Also Rises. (The story was turned into a play by war veteran Laurence Stallings that was the basis for the film starring Gary Cooper.) In Spain in mid-1929, Hemingway researched his next work, Death in the Afternoon. He wanted to write a comprehensive treatise on bullfighting, explaining the toreros and corridas complete with glossaries and appendices, because he believed bullfighting was "of great tragic interest, being literally of life and death." During the early 1930s, Hemingway spent his winters in Key West and summers in Wyoming, where he found "the most beautiful country he had seen in the American West" and hunted deer, elk, and grizzly bear. He was joined there by Dos Passos, and in November 1930, after bringing Dos Passos to the train station in Billings, Montana, Hemingway broke his arm in a car accident. The surgeon tended the compound spiral fracture and bound the bone with kangaroo tendon. Hemingway was hospitalized for seven weeks, with Pauline tending to him; the nerves in his writing hand took as long as a year to heal, during which time he suffered intense pain. His third child, Gloria Hemingway, was born a year later on November 12, 1931, in Kansas City as "Gregory Hancock Hemingway". Pauline's uncle bought the couple a house in Key West with a carriage house, the second floor of which was converted into a writing studio. While in Key West, Hemingway frequented the local bar Sloppy Joe's. He invited friends—including Waldo Peirce, Dos Passos, and Max Perkins—to join him on fishing trips and on an all-male expedition to the Dry Tortugas. Meanwhile, he continued to travel to Europe and to Cuba, and—although in 1933 he wrote of Key West, "We have a fine house here, and kids are all well"—Mellow believes he "was plainly restless". 
In 1933, Hemingway and Pauline went on safari to Kenya. The 10-week trip provided material for Green Hills of Africa, as well as for the short stories "The Snows of Kilimanjaro" and "The Short Happy Life of Francis Macomber". The couple visited Mombasa, Nairobi, and Machakos in Kenya; then moved on to Tanganyika Territory, where they hunted in the Serengeti, around Lake Manyara, and west and southeast of present-day Tarangire National Park. Their guide was the noted "white hunter" Philip Percival who had guided Theodore Roosevelt on his 1909 safari. During these travels, Hemingway contracted amoebic dysentery that caused a prolapsed intestine, and he was evacuated by plane to Nairobi, an experience reflected in "The Snows of Kilimanjaro". On Hemingway's return to Key West in early 1934, he began work on Green Hills of Africa, which he published in 1935 to mixed reviews. Hemingway bought a boat in 1934, named it the Pilar, and began sailing the Caribbean. In 1935 he first arrived at Bimini, where he spent a considerable amount of time. During this period he also worked on To Have and Have Not, published in 1937 while he was in Spain, the only novel he wrote during the 1930s. ### Spanish Civil War In 1937, Hemingway left for Spain to cover the Spanish Civil War for the North American Newspaper Alliance (NANA), despite Pauline's reluctance to have him working in a war zone. He and Dos Passos both signed on to work with Dutch filmmaker Joris Ivens as screenwriters for The Spanish Earth. Dos Passos left the project after the execution of José Robles, his friend and Spanish translator, which caused a rift between the two writers. Hemingway was joined in Spain by journalist and writer Martha Gellhorn, whom he had met in Key West a year earlier. Like Hadley, Martha was a St. Louis native, and like Pauline, she had worked for Vogue in Paris. Of Martha, Kert explains, "she never catered to him the way other women did". In July 1937 he attended the Second International Writers' Congress, the purpose of which was to discuss the attitude of intellectuals to the war, held in Valencia, Barcelona and Madrid and attended by many writers including André Malraux, Stephen Spender and Pablo Neruda. Late in 1937, while in Madrid with Martha, Hemingway wrote his only play, The Fifth Column, as the city was being bombarded by Francoist forces. He returned to Key West for a few months, then back to Spain twice in 1938, where he was present at the Battle of the Ebro, the last republican stand, and he was among the British and American journalists who were some of the last to leave the battle as they crossed the river. ### Cuba In early 1939, Hemingway crossed to Cuba in his boat to live in the Hotel Ambos Mundos in Havana. This was the separation phase of a slow and painful split from Pauline, which began when Hemingway met Martha Gellhorn. Martha soon joined him in Cuba, and they rented "Finca Vigía" ("Lookout Farm"), a 15-acre (61,000 m<sup>2</sup>) property 15 miles (24 km) from Havana. Pauline and the children left Hemingway that summer, after the family was reunited during a visit to Wyoming; when his divorce from Pauline was finalized, he and Martha were married on November 20, 1940, in Cheyenne, Wyoming. Hemingway moved his primary summer residence to Ketchum, Idaho, just outside the newly built resort of Sun Valley, and moved his winter residence to Cuba. 
He had been disgusted when a Parisian friend allowed his cats to eat from the table, but he became enamored of cats in Cuba and kept dozens of them on the property. Descendants of his cats live at his Key West home. Gellhorn inspired him to write his most famous novel, For Whom the Bell Tolls, which he began in March 1939 and finished in July 1940. It was published in October 1940. His pattern was to move around while working on a manuscript, and he wrote For Whom the Bell Tolls in Cuba, Wyoming, and Sun Valley. It became a Book-of-the-Month Club choice, sold half a million copies within months, was nominated for a Pulitzer Prize and, in the words of Meyers, "triumphantly re-established Hemingway's literary reputation". In January 1941, Martha was sent to China on assignment for Collier's magazine. Hemingway went with her, sending in dispatches for the newspaper PM, but in general he disliked China. A 2009 book by former KGB officer Alexander Vassiliev suggests during that period he may have been recruited to work for NKVD "on ideological grounds" under the code name "Argo". They returned to Cuba before the declaration of war by the United States that December, when he convinced the Cuban government to help him refit the Pilar, which he intended to use to ambush German submarines off the coast of Cuba. ### World War II Hemingway was in Europe from May 1944 to March 1945. When he arrived in London, he met Time magazine correspondent Mary Welsh, with whom he became infatuated. Martha had been forced to cross the Atlantic in a ship filled with explosives because Hemingway refused to help her get a press pass on a plane, and she arrived in London to find him hospitalized with a concussion from a car accident. She was unsympathetic to his plight; she accused him of being a bully and told him that she was "through, absolutely finished". The last time that Hemingway saw Martha was in March 1945 as he was preparing to return to Cuba, and their divorce was finalized later that year. Meanwhile, he had asked Mary Welsh to marry him on their third meeting. Hemingway accompanied the troops to the Normandy Landings wearing a large head bandage, according to Meyers, but he was considered "precious cargo" and not allowed ashore. The landing craft came within sight of Omaha Beach before coming under enemy fire and turning back. Hemingway later wrote in Collier's that he could see "the first, second, third, fourth and fifth waves of [landing troops] lay where they had fallen, looking like so many heavily laden bundles on the flat pebbly stretch between the sea and first cover". Mellow explains that, on that first day, none of the correspondents were allowed to land and Hemingway was returned to the Dorothea Dix. Late in July, he attached himself to "the 22nd Infantry Regiment commanded by Col. Charles "Buck" Lanham, as it drove toward Paris", and Hemingway became de facto leader to a small band of village militia in Rambouillet outside of Paris. Paul Fussell remarks: "Hemingway got into considerable trouble playing infantry captain to a group of Resistance people that he gathered because a correspondent is not supposed to lead troops, even if he does it well." This was in fact in contravention of the Geneva Convention, and Hemingway was brought up on formal charges; he said that he "beat the rap" by claiming that he only offered advice. On August 25, he was present at the liberation of Paris as a journalist; contrary to the Hemingway legend, he was not the first into the city, nor did he liberate the Ritz. 
In Paris, he visited Sylvia Beach and Pablo Picasso with Mary Welsh, who joined him there; in a spirit of happiness, he forgave Gertrude Stein. Later that year, he observed heavy fighting in the Battle of Hürtgen Forest. On December 17, 1944, he had himself driven to Luxembourg in spite of illness to cover The Battle of the Bulge. As soon as he arrived, however, Lanham handed him to the doctors, who hospitalized him with pneumonia; he recovered a week later, but most of the fighting was over. In 1947, Hemingway was awarded a Bronze Star for his bravery during World War II. He was recognized for having been "under fire in combat areas in order to obtain an accurate picture of conditions", with the commendation that "through his talent of expression, Mr. Hemingway enabled readers to obtain a vivid picture of the difficulties and triumphs of the front-line soldier and his organization in combat". ### Cuba and the Nobel Prize Hemingway said he "was out of business as a writer" from 1942 to 1945 during his residence in Cuba. In 1946 he married Mary, who had an ectopic pregnancy five months later. The Hemingway family suffered a series of accidents and health problems in the years following the war: in a 1945 car accident, he "smashed his knee" and sustained another "deep wound on his forehead"; Mary broke first her right ankle and then her left in successive skiing accidents. A 1947 car accident left Patrick with a head wound and severely ill. Hemingway sank into depression as his literary friends began to die: in 1939 William Butler Yeats and Ford Madox Ford; in 1940 F. Scott Fitzgerald; in 1941 Sherwood Anderson and James Joyce; in 1946 Gertrude Stein; and the following year in 1947, Max Perkins, Hemingway's long-time Scribner's editor, and friend. During this period, he suffered from severe headaches, high blood pressure, weight problems, and eventually diabetes—much of which was the result of previous accidents and many years of heavy drinking. Nonetheless, in January 1946, he began work on The Garden of Eden, finishing 800 pages by June. During the post-war years, he also began work on a trilogy tentatively titled "The Land", "The Sea" and "The Air", which he wanted to combine in one novel titled The Sea Book. However, both projects stalled, and Mellow says that Hemingway's inability to continue was "a symptom of his troubles" during these years. In 1948, Hemingway and Mary traveled to Europe, staying in Venice for several months. While there, Hemingway fell in love with the then 19-year-old Adriana Ivancich. The platonic love affair inspired the novel Across the River and into the Trees, written in Cuba during a time of strife with Mary, and published in 1950 to negative reviews. The following year, furious at the critical reception of Across the River and Into the Trees, he wrote the draft of The Old Man and the Sea in eight weeks, saying that it was "the best I can write ever for all of my life". The Old Man and the Sea became a book-of-the-month selection, made Hemingway an international celebrity, and won the Pulitzer Prize in May 1953, a month before he left for his second trip to Africa. In January 1954, while in Africa, Hemingway was almost fatally injured in two successive plane crashes. He chartered a sightseeing flight over the Belgian Congo as a Christmas present to Mary. On their way to photograph Murchison Falls from the air, the plane struck an abandoned utility pole and "crash landed in heavy brush". Hemingway's injuries included a head wound, while Mary broke two ribs. 
The next day, attempting to reach medical care in Entebbe, they boarded a second plane that exploded at take-off, with Hemingway suffering burns and another concussion, this one serious enough to cause leaking of cerebral fluid. They eventually arrived in Entebbe to find reporters covering the story of Hemingway's death. He briefed the reporters and spent the next few weeks recuperating and reading his erroneous obituaries. Despite his injuries, Hemingway accompanied Patrick and his wife on a planned fishing expedition in February, but pain caused him to be irascible and difficult to get along with. When a bushfire broke out, he was again injured, sustaining second-degree burns on his legs, front torso, lips, left hand and right forearm. Months later in Venice, Mary reported to friends the full extent of Hemingway's injuries: two cracked discs, a kidney and liver rupture, a dislocated shoulder and a broken skull. The accidents may have precipitated the physical deterioration that was to follow. After the plane crashes, Hemingway, who had been "a thinly controlled alcoholic throughout much of his life, drank more heavily than usual to combat the pain of his injuries." In October 1954, Hemingway received the Nobel Prize in Literature. He modestly told the press that Carl Sandburg, Isak Dinesen and Bernard Berenson deserved the prize, but he gladly accepted the prize money. Mellow says Hemingway "had coveted the Nobel Prize", but when he won it, months after his plane accidents and the ensuing worldwide press coverage, "there must have been a lingering suspicion in Hemingway's mind that his obituary notices had played a part in the academy's decision." Because he was suffering pain from the African accidents, he decided against traveling to Stockholm. Instead he sent a speech to be read, defining the writer's life: > Writing, at its best, is a lonely life. Organizations for writers palliate the writer's loneliness but I doubt if they improve his writing. He grows in public stature as he sheds his loneliness and often his work deteriorates. For he does his work alone and if he is a good enough writer he must face eternity, or the lack of it, each day. From the end of the year in 1955 to early 1956, Hemingway was bedridden. He was told to stop drinking to mitigate liver damage, advice he initially followed but then disregarded. In October 1956, he returned to Europe and met Basque writer Pio Baroja, who was seriously ill and died weeks later. During the trip, Hemingway became sick again and was treated for "high blood pressure, liver disease, and arteriosclerosis". In November 1956, while staying in Paris, he was reminded of trunks he had stored in the Ritz Hotel in 1928 and never retrieved. Upon re-claiming and opening the trunks, Hemingway discovered they were filled with notebooks and writing from his Paris years. Excited about the discovery, when he returned to Cuba in early 1957, he began to shape the recovered work into his memoir A Moveable Feast. By 1959 he ended a period of intense activity: he finished A Moveable Feast (scheduled to be released the following year); brought True at First Light to 200,000 words; added chapters to The Garden of Eden; and worked on Islands in the Stream. The last three were stored in a safe deposit box in Havana, as he focused on the finishing touches for A Moveable Feast. Author Michael Reynolds claims it was during this period that Hemingway slid into depression, from which he was unable to recover. 
The Finca Vigía became crowded with guests and tourists, as Hemingway, beginning to become unhappy with life there, considered a permanent move to Idaho. In 1959 he bought a home overlooking the Big Wood River, outside Ketchum, and left Cuba—although he apparently remained on easy terms with the Castro government, telling The New York Times he was "delighted" with Castro's overthrow of Batista. He was in Cuba in November 1959, between returning from Pamplona and traveling west to Idaho, and the following year for his 61st birthday; however, that year he and Mary decided to leave after hearing the news that Castro wanted to nationalize property owned by Americans and other foreign nationals. On July 25, 1960, the Hemingways left Cuba for the last time, leaving art and manuscripts in a bank vault in Havana. After the 1961 Bay of Pigs Invasion, the Finca Vigía was expropriated by the Cuban government, complete with Hemingway's collection of "four to six thousand books". President Kennedy arranged for Mary Hemingway to travel to Cuba where she met Fidel Castro and obtained her husband's papers and painting in return for donating Finca Vigía to Cuba. ### Idaho and suicide Hemingway continued to rework the material that was published as A Moveable Feast through the 1950s. In mid-1959, he visited Spain to research a series of bullfighting articles commissioned by Life magazine. Life wanted only 10,000 words, but the manuscript grew out of control. He was unable to organize his writing for the first time in his life, so he asked A. E. Hotchner to travel to Cuba to help him. Hotchner helped him trim the Life piece down to 40,000 words, and Scribner's agreed to a full-length book version (The Dangerous Summer) of almost 130,000 words. Hotchner found Hemingway to be "unusually hesitant, disorganized, and confused", and suffering badly from failing eyesight. Hemingway and Mary left Cuba for the last time on July 25, 1960. He set up a small office in his New York City apartment and attempted to work, but he left soon after. He then traveled alone to Spain to be photographed for the front cover of Life magazine. A few days later, the news reported that he was seriously ill and on the verge of dying, which panicked Mary until she received a cable from him telling her, "Reports false. Enroute Madrid. Love Papa." He was, in fact, seriously ill, and believed himself to be on the verge of a breakdown. Feeling lonely, he took to his bed for days, retreating into silence, despite having the first installments of The Dangerous Summer published in Life in September 1960 to good reviews. In October, he left Spain for New York, where he refused to leave Mary's apartment, presuming that he was being watched. She quickly took him to Idaho, where physician George Saviers met them at the train. Hemingway was constantly worried about money and his safety. He worried about his taxes and that he would never return to Cuba to retrieve the manuscripts that he had left in a bank vault. He became paranoid, thinking that the FBI was actively monitoring his movements in Ketchum. The FBI had opened a file on him during World War II, when he used the Pilar to patrol the waters off Cuba, and J. Edgar Hoover had an agent in Havana watch him during the 1950s. Unable to care for her husband, Mary had Saviers fly Hemingway to the Mayo Clinic in Minnesota at the end of November for hypertension treatments, as he told his patient. 
The FBI knew that Hemingway was at the Mayo Clinic, as an agent later documented in a letter written in January 1961. Hemingway was checked in under Saviers's name to maintain anonymity. Meyers writes that "an aura of secrecy surrounds Hemingway's treatment at the Mayo" but confirms that he was treated with electroconvulsive therapy (ECT) as many as 15 times in December 1960 and was "released in ruins" in January 1961. Reynolds gained access to Hemingway's records at the Mayo, which document ten ECT sessions. The doctors in Rochester told Hemingway the depressive state for which he was being treated may have been caused by his long-term use of Reserpine and Ritalin. Of the ECT therapy, Hemingway told Hotchner, "What is the sense of ruining my head and erasing my memory, which is my capital, and putting me out of business? It was a brilliant cure, but we lost the patient." Hemingway was back in Ketchum in April 1961, three months after being released from the Mayo Clinic, when Mary "found Hemingway holding a shotgun" in the kitchen one morning. She called Saviers, who sedated him and admitted him to the Sun Valley Hospital; once the weather cleared, Saviers again flew with his patient to Rochester. Hemingway underwent three electroshock treatments during that visit. He was released at the end of June and was home in Ketchum on June 30. Two days later he "quite deliberately" shot himself with his favorite shotgun in the early morning hours of July 2, 1961. He had unlocked the basement storeroom where his guns were kept, gone upstairs to the front entrance foyer, and shot himself with the "double-barreled shotgun that he had used so often it might have been a friend", which was purchased from Abercrombie & Fitch. Mary was sedated and taken to the hospital, returning home the next day where she cleaned the house and saw to the funeral and travel arrangements. Bernice Kert writes that it "did not seem to her a conscious lie" when she told the press that his death had been accidental. In a press interview five years later, Mary confirmed that he had shot himself. Family and friends flew to Ketchum for the funeral, officiated by the local Catholic priest, who believed that the death had been accidental. An altar boy fainted at the head of the casket during the funeral, and Hemingway's brother Leicester wrote: "It seemed to me Ernest would have approved of it all." He is buried in the Ketchum cemetery. Hemingway's behavior during his final years had been similar to that of his father before he killed himself; his father may have had hereditary hemochromatosis, whereby the excessive accumulation of iron in tissues culminates in mental and physical deterioration. Medical records made available in 1991 confirmed that Hemingway had been diagnosed with hemochromatosis in early 1961. His sister Ursula and his brother Leicester also killed themselves. Hemingway's health was further complicated by heavy drinking throughout most of his life. A memorial to Hemingway just north of Sun Valley is inscribed on the base with a eulogy Hemingway had written for a friend several decades earlier:

> Best of all he loved the fall
> the leaves yellow on cottonwoods
> leaves floating on trout streams
> and above the hills
> the high blue windless skies
> ...Now he will be a part of them forever.

## Writing style

The New York Times wrote in 1926 of Hemingway's first novel, "No amount of analysis can convey the quality of The Sun Also Rises.
It is a truly gripping story, told in a lean, hard, athletic narrative prose that puts more literary English to shame." The Sun Also Rises is written in the spare, tight prose that made Hemingway famous, and, according to James Nagel, "changed the nature of American writing". In 1954, when Hemingway was awarded the Nobel Prize for Literature, it was for "his mastery of the art of narrative, most recently demonstrated in The Old Man and the Sea, and for the influence that he has exerted on contemporary style." Henry Louis Gates believes Hemingway's style was fundamentally shaped "in reaction to [his] experience of world war". After World War I, he and other modernists "lost faith in the central institutions of Western civilization" by reacting against the elaborate style of 19th-century writers and by creating a style "in which meaning is established through dialogue, through action, and silences—a fiction in which nothing crucial—or at least very little—is stated explicitly." Because he began as a writer of short stories, Baker believes Hemingway learned to "get the most from the least, how to prune language, how to multiply intensities and how to tell nothing but the truth in a way that allowed for telling more than the truth." Hemingway called his style the iceberg theory: the facts float above water; the supporting structure and symbolism operate out of sight. The concept of the iceberg theory is sometimes referred to as the "theory of omission". Hemingway believed the writer could describe one thing (such as Nick Adams fishing in "Big Two-Hearted River") though an entirely different thing occurs below the surface (Nick Adams concentrating on fishing to the extent that he does not have to think about anything else). Paul Smith writes that Hemingway's first stories, collected as In Our Time, showed he was still experimenting with his writing style, and when he wrote about Spain or other countries he incorporated foreign words into the text, which sometimes appears directly in the other language (in italics, as occurs in The Old Man and the Sea) or in English as literal translations. He also often used bilingual puns and crosslingual wordplay as stylistic devices. In general, he avoided complicated syntax. About 70 percent of the sentences are simple sentences without subordination—a simple childlike grammar structure. Jackson Benson believes Hemingway used autobiographical details as framing devices about life in general—not only about his life. For example, Benson postulates that Hemingway used his experiences and drew them out with "what if" scenarios: "what if I were wounded in such a way that I could not sleep at night? What if I were wounded and made crazy, what would happen if I were sent back to the front?" Writing in "The Art of the Short Story", Hemingway explains: "A few things I have found to be true. If you leave out important things or events that you know about, the story is strengthened. If you leave or skip something because you do not know it, the story will be worthless. The test of any story is how very good the stuff that you, not your editors, omit." The simplicity of the prose is deceptive. Zoe Trodd believes Hemingway crafted skeletal sentences in response to Henry James's observation that World War I had "used up words". Hemingway offers a "multi-focal" photographic reality. His iceberg theory of omission is the foundation on which he builds. The syntax, which lacks subordinating conjunctions, creates static sentences. 
The photographic "snapshot" style creates a collage of images. Many types of internal punctuation (colons, semicolons, dashes, parentheses) are omitted in favor of short declarative sentences. The sentences build on each other, as events build to create a sense of the whole. Multiple strands exist in one story; an "embedded text" bridges to a different angle. He also uses other cinematic techniques of "cutting" quickly from one scene to the next; or of "splicing" a scene into another. Intentional omissions allow the reader to fill the gap, as though responding to instructions from the author and create three-dimensional prose. Hemingway habitually used the word "and" in place of commas. This use of polysyndeton may serve to convey immediacy. Hemingway's polysyndetonic sentence—or in later works his use of subordinate clauses—uses conjunctions to juxtapose startling visions and images. Benson compares them to haikus. Many of Hemingway's followers misinterpreted his lead and frowned upon all expression of emotion; Saul Bellow satirized this style as "Do you have emotions? Strangle them." However, Hemingway's intent was not to eliminate emotion, but to portray it more scientifically. Hemingway thought it would be easy, and pointless, to describe emotions; he sculpted collages of images in order to grasp "the real thing, the sequence of motion and fact which made the emotion and which would be as valid in a year or in ten years or, with luck and if you stated it purely enough, always". This use of an image as an objective correlative is characteristic of Ezra Pound, T. S. Eliot, James Joyce, and Marcel Proust. Hemingway's letters refer to Proust's Remembrance of Things Past several times over the years, and indicate he read the book at least twice. ## Themes Hemingway's writing includes themes of love, war, travel, wilderness, and loss. Critic Leslie Fiedler sees the theme he defines as "The Sacred Land"—the American West—extended in Hemingway's work to include mountains in Spain, Switzerland and Africa, and to the streams of Michigan. The American West is given a symbolic nod with the naming of the "Hotel Montana" in The Sun Also Rises and For Whom the Bell Tolls. According to Stoltzfus and Fiedler, in Hemingway's work, nature is a place for rebirth and rest; and it is where the hunter or fisherman might experience a moment of transcendence at the moment they kill their prey. Nature is where men exist without women: men fish; men hunt; men find redemption in nature. Although Hemingway does write about sports, such as fishing, Carlos Baker notes the emphasis is more on the athlete than the sport. At its core, much of Hemingway's work can be viewed in the light of American naturalism, evident in detailed descriptions such as those in "Big Two-Hearted River". Hemingway often wrote about Americans abroad. In Hemingway’s Expatriate Nationalism, Jeffrey Herlihy describes "Hemingway's Transnational Archetype" as one that involves characters who are "multilingual and bicultural, and have integrated new cultural norms from the host community into their daily lives by the time plots begin." In this way, "foreign scenarios, far from being mere exotic backdrops or cosmopolitan milieus, are motivating factors in-character action." Donald Monk comments that Hemingway's use of "expatriation comes to be not so much a psychological as a metaphysical reality. It guarantees his world-view of his heroes, based on a type of rootless outsider." 
Fiedler believes Hemingway inverts the American literary theme of the evil "Dark Woman" versus the good "Light Woman". The dark woman—Brett Ashley of The Sun Also Rises—is a goddess; the light woman—Margot Macomber of "The Short Happy Life of Francis Macomber"—is a murderess. Robert Scholes says early Hemingway stories, such as "A Very Short Story", present "a male character favorably and a female unfavorably". According to Rena Sanderson, early Hemingway critics lauded his male-centric world of masculine pursuits, and the fiction divided women into "castrators or love-slaves". Feminist critics attacked Hemingway as "public enemy number one", although more recent re-evaluations of his work "have given new visibility to Hemingway's female characters (and their strengths) and have revealed his own sensitivity to gender issues, thus casting doubts on the old assumption that his writings were one-sidedly masculine." Nina Baym believes that Brett Ashley and Margot Macomber "are the two outstanding examples of Hemingway's 'bitch women.'" The theme of women and death is evident in stories as early as "Indian Camp". The theme of death permeates Hemingway's work. Young believes the emphasis in "Indian Camp" was not so much on the woman who gives birth or the father who kills himself, but on Nick Adams who witnesses these events as a child, and becomes a "badly scarred and nervous young man". Hemingway sets the events in "Indian Camp" that shape the Adams persona. Young believes "Indian Camp" holds the "master key" to "what its author was up to for some thirty-five years of his writing career". Stoltzfus considers Hemingway's work to be more complex with a representation of the truth inherent in existentialism: if "nothingness" is embraced, then redemption is achieved at the moment of death. Those who face death with dignity and courage live an authentic life. Francis Macomber dies happy because the last hours of his life are authentic; the bullfighter in the corrida represents the pinnacle of a life lived with authenticity. In his paper The Uses of Authenticity: Hemingway and the Literary Field, Timo Müller writes that Hemingway's fiction is successful because the characters live an "authentic life", and the "soldiers, fishers, boxers and backwoodsmen are among the archetypes of authenticity in modern literature". The theme of emasculation is prevalent in Hemingway's work, notably in God Rest You Merry, Gentlemen and The Sun Also Rises. Emasculation, according to Fiedler, is a result of a generation of wounded soldiers; and of a generation in which women such as Brett gained emancipation. This also applies to the minor character, Frances Clyne, Cohn's girlfriend in the beginning of The Sun Also Rises. Her character supports the theme not only because the idea was presented early on in the novel but also the impact she had on Cohn in the start of the book while only appearing a small number of times. In God Rest You Merry, Gentlemen, the emasculation is literal, and related to religious guilt. Baker believes Hemingway's work emphasizes the "natural" versus the "unnatural". In "An Alpine Idyll" the "unnaturalness" of skiing in the high country late spring snow is juxtaposed against the "unnaturalness" of the peasant who allowed his wife's dead body to linger too long in the shed during the winter. The skiers and peasant retreat to the valley to the "natural" spring for redemption. Descriptions of food and drink feature prominently in many of Hemingway's works. 
In the short story "Big Two-Hearted River" Hemingway describes a hungry Nick Adams cooking a can of pork and beans and a can of spaghetti over a fire in a heavy cast iron pot. The primitive act of preparing the meal in solitude is a restorative act and one of Hemingway's narratives of post-war integration. Susan Beegel reports that Charles Stetler and Gerald Locklin read Hemingway's The Mother of a Queen as both misogynistic and homophobic, and Ernest Fontana thought that a "horror of homosexuality" drove the short story "A Pursuit Race". Beegel found that "despite the academy's growing interest in multiculturalism ... during the 1980s ... critics interested in multiculturalism tended to ignore the author as 'politically incorrect.'", listing just two "apologetic articles on [his] handling of race". Barry Gross, comparing Jewish characters in literature of the period, commented that "Hemingway never lets the reader forget that Cohn is a Jew, not an unattractive character who happens to be a Jew but a character who is unattractive because he is a Jew." ## Influence and legacy Hemingway's legacy to American literature is his style: writers who came after him either emulated or avoided it. After his reputation was established with the publication of The Sun Also Rises, he became the spokesperson for the post-World War I generation, having established a style to follow. His books were burned in Berlin in 1933, "as being a monument of modern decadence", and disavowed by his parents as "filth". Reynolds asserts the legacy is that "[Hemingway] left stories and novels so starkly moving that some have become part of our cultural heritage." Benson believes the details of Hemingway's life have become a "prime vehicle for exploitation", resulting in a Hemingway industry. Hemingway scholar Hallengren believes the "hard-boiled style" and the machismo must be separated from the author himself. Benson agrees, describing him as introverted and private as J. D. Salinger, although Hemingway masked his nature with braggadocio. During World War II, Salinger met and corresponded with Hemingway, whom he acknowledged as an influence. In a letter to Hemingway, Salinger claimed their talks "had given him his only hopeful minutes of the entire war" and jokingly "named himself national chairman of the Hemingway Fan Clubs". The extent of his influence is seen from the enduring and varied tributes to Hemingway and his works. 3656 Hemingway, a minor planet discovered in 1978 by Soviet astronomer Nikolai Chernykh, was named for Hemingway, and in 2009, a crater on Mercury was also named in his honor. The Kilimanjaro Device by Ray Bradbury featured Hemingway being transported to the top of Mount Kilimanjaro, while the 1993 motion picture Wrestling Ernest Hemingway explored the friendship of two retired men, played by Robert Duvall and Richard Harris, in a seaside Florida town. His influence is further evident from the many restaurants bearing his name and the proliferation of bars called "Harry's", a nod to the bar in Across the River and Into the Trees. Hemingway's son Jack (Bumby) promoted a line of furniture honoring his father, Montblanc created a Hemingway fountain pen, and multiple lines of clothing inspired by Hemingway have been produced. 
In 1977, the International Imitation Hemingway Competition was created to acknowledge his distinct style and the comical efforts of amateur authors to imitate him; entrants are encouraged to submit one "really good page of really bad Hemingway" and the winners are flown to Harry's Bar in Italy. Mary Hemingway established the Hemingway Foundation in 1965, and in the 1970s she donated her husband's papers to the John F. Kennedy Library. In 1980, a group of Hemingway scholars gathered to assess the donated papers, subsequently forming the Hemingway Society, "committed to supporting and fostering Hemingway scholarship", publishing The Hemingway Review. Numerous awards have been established in Hemingway's honor to recognize significant achievement in the arts and culture, including the Hemingway Foundation/PEN Award and the Hemingway Award. In 2012, he was inducted into the Chicago Literary Hall of Fame. Almost exactly 35 years after Hemingway's death, on July 1, 1996, his granddaughter Margaux Hemingway died in Santa Monica, California. Margaux was a supermodel and actress, co-starring with her younger sister Mariel in the 1976 movie Lipstick. Her death was later ruled a suicide. Three houses associated with Hemingway are listed on the U.S. National Register of Historic Places: the Ernest Hemingway Cottage on Walloon Lake, Michigan, designated in 1968; the Ernest Hemingway House in Key West, designated in 1968; and the Ernest and Mary Hemingway House in Ketchum, designated in 2015. Hemingway's childhood home in Oak Park and his Havana residence were also converted into museums. On April 5, 2021, Hemingway, a three-episode, six-hour documentary recapitulating his life, labors, and loves, debuted on PBS. It was co-produced and directed by Ken Burns and Lynn Novick.

## Selected works

The following is the list of books that Ernest Hemingway completed during his lifetime. While much of his work was published posthumously, those posthumous works were finished without his supervision, unlike the books listed below.

- Three Stories and Ten Poems (1923)
- in our time (1924)
- In Our Time (1925)
- The Torrents of Spring (1926)
- The Sun Also Rises (1926)
- Men Without Women (1927)
- A Farewell to Arms (1929)
- Death in the Afternoon (1932)
- Winner Take Nothing (1933)
- Green Hills of Africa (1935)
- To Have and Have Not (1937)
- The Fifth Column and the First Forty-Nine Stories (1938)
- For Whom the Bell Tolls (1940)
- Across the River and into the Trees (1950)
- The Old Man and the Sea (1952)

## See also

- Family tree showing Ernest Hemingway's parents, siblings, wives, children and grandchildren
41,934,646
A Quiet Night In
1,169,025,299
null
[ "2014 British television episodes", "British LGBT-related television episodes", "Burglary in fiction", "Inside No. 9 episodes", "Television episodes about murder", "Television episodes about theft", "Transgender-related television episodes", "Works about couples" ]
"A Quiet Night In" is the second episode of the British dark comedy television anthology series Inside No. 9. It first aired on 12 February 2014 on BBC Two. Written by Reece Shearsmith and Steve Pemberton, it stars the writers as a pair of hapless burglars attempting to break into the large, modernist house of a couple—played by Denis Lawson and Oona Chaplin—to steal a painting. Once the burglars make it into the house, they encounter obstacle after obstacle, while the lovers, unaware of the burglars' presence, argue. The episode progresses almost entirely without dialogue, relying instead on physical comedy and slapstick, though more sinister elements are present in the plot. In addition to Pemberton, Shearsmith, Lawson and Chaplin, "A Quiet Night In" also starred Joyce Veheary and Kayvan Novak. Shearsmith and Pemberton had originally considered including a dialogue-free segment in their television series Psychoville, but ultimately did not; they found the format of Inside No. 9 appropriate for revisiting the idea. Both journalists and those involved with the episode's production commented on the casting of Chaplin, a grandchild of the silent film star Charlie Chaplin, in an almost entirely dialogue-free episode, though her casting was not a deliberate homage. Critics generally responded positively to the episode, and a particularly laudatory review by David Chater was published in The Times, prompting a complaint from a reader who found the episode more traumatic than comedic. On its first airing, "A Quiet Night In" was watched by 940,000 viewers (4.8% of the market). "A Quiet Night In" was submitted to the British Academy of Film and Television Arts for the 2015 awards, but it was not nominated. Pemberton and Shearsmith have said that they have no plans to do further silent episodes, but have compared "A Quiet Night In" to the highly-experimental "Cold Comfort" from Inside No. 9's second series, a sentiment echoed by some television critics. ## Production Writers Steve Pemberton and Reece Shearsmith, who had previously co-written and starred in The League of Gentlemen and Psychoville, took inspiration for Inside No. 9 from "David and Maureen", episode 4 of the first series of Psychoville, which was in turn inspired by Alfred Hitchcock's Rope. "David and Maureen" took place entirely in a single room, and was filmed in only two shots. At the same time, the concept of Inside No. 9 was a "reaction" to Psychoville, with Shearsmith saying that "we'd been so involved with labyrinthine over-arcing, we thought it would be nice to do six different stories with a complete new house of people each week. That's appealing, because as a viewer you might not like this story, but you've got a different one next week." As an anthology series with horror themes, Inside No. 9 also pays homage to Tales of the Unexpected, The Twilight Zone and Alfred Hitchcock Presents. The format of Inside No. 9 allowed Pemberton and Shearsmith to explore ideas which are less practical for other approaches to storytelling, such as the possibility of a script with little dialogue. Prior to writing "A Quiet Night In", Shearsmith had spoken with directors, including Ben Wheatley, about the possibility of producing television without speech. The directors had expressed doubts, Shearsmith explained, because the success of dialogue-free television comes down entirely to the visuals and filming. "A Quiet Night In" was inspired by an idea Shearsmith and Pemberton had discussed for Psychoville. 
The writers had considered omitting dialogue from a ten-minute section in an episode, or even from the whole episode. Pemberton explained that this was not possible as there were "too many good jokes" which they wanted to fit into the sequence. This episode, like "A Quiet Night In", dealt with a break-in. Inside No. 9, for Pemberton, offered the "perfect vehicle" for revisiting the possibility of dialogue-free television. Shearsmith said that, at the start of the writing process, the pair did not have the intention of scripting the entire episode without dialogue, and that it would be "great" to have ten minutes without it. However, Pemberton said it was easier to write once they had entered the correct "mindset". Once half an episode had been written, Pemberton said, the pair thought "we've just got to keep going". The only dialogue in the episode is right at the end; "what a great thing to get to the end and just have one line of dialogue", Pemberton suggested, comparing the concept to that of the Mel Brooks film Silent Movie. The story of "A Quiet Night In" revolves around a break-in, which, combined with an argument between the people living in the house, means that the characters all have a reason to be silent. At 18 pages of stage directions, the script contained every joke in the episode, an exercise in planning atypical for Shearsmith and Pemberton. The story contains multiple "reveals"; Pemberton explained that he and Shearsmith "hope there's an 'oh my God' moment. There is always a desire to wrong-foot the viewer. That's what you strive to do". Pemberton said that writing for a silent episode "makes you inventive in a completely different way". The episode was filmed at the White Lodge, in Oxted, Surrey. The episode's burglars are played by the writers; the pair were quoted as saying "we didn't want to dominate [the series], so we sometimes play fairly minor characters. But we know that, say, if we were writing something about two burglars, we'd be the burglars." Pemberton suggested that a partial influence for the episode may have been the children's television series Brum. He said that he and Shearsmith had "always wanted to be a couple of robbers in that, so that might be where the idea came from". Both writers agreed that their roles were "great to perform", and Pemberton described the resulting episode by saying that it "worked out better than [they] could have dreamed". As the format of Inside No. 9 requires new characters each week, the writers were able to attract actors who may have been unwilling to commit to an entire series. In addition to Pemberton and Shearsmith, "A Quiet Night In" starred Denis Lawson, Joyce Veheary, Oona Chaplin and Kayvan Novak. Pemberton commented on the appropriateness of casting Chaplin, a grandchild of the silent film star Charlie Chaplin, in an episode with little dialogue. Shearsmith stressed that the episode should not be considered a silent film in the same way as Charlie Chaplin's, elsewhere saying that the casting was "almost an accident but maybe a little nod". Bruce Dessau, writing in The Independent, described the casting choice as "a satisfying nod to silent cinema". Both Oona Chaplin and the Inside No. 9 executive producer Jon Plowman stressed, however, that there was no significance in the casting. Chaplin also said that her character was very unlike herself, explaining that the "big boobs, the heels, the blonde wig ... freed [her] up amazingly". 
## Plot Inside a large, modernist house, Gerald (Lawson) turns on Rachmaninoff's Piano Concerto No. 2 and sits down to soup brought by his housekeeper, Kim (Veheary). Through the windows behind him, burglars Eddie (Pemberton) and Ray (Shearsmith) are seen. Ray enters the house, then lets in Eddie while Gerald is using the toilet. Eddie is shocked to see that the pair have come to steal an almost completely white painting. Ray starts to dismantle the painting while Eddie keeps watch; he tries to guide a Yorkshire Terrier out of the patio window, but inadvertently lets in an Irish Wolfhound. As Ray releases the wolfhound, Eddie accidentally throws the terrier into the window, so Ray stuffs the dog into an umbrella stand. Sabrina (Chaplin) walks down the stairs, and Ray puts the painting back and hides. Sabrina turns down Gerald's music to watch EastEnders. Gerald returns, sitting away from Sabrina. He turns up his music and the pair fight over the television remote, before leaving through the patio door and arguing, though their voices are muffled. Ray cuts away the canvas and replaces it with kitchen roll. When Sabrina reenters, she unknowingly stands on the canvas. Kim picks it up, mistaking it for laundry, and heads into a laundry room as Sabrina walks upstairs. Eddie follows Kim and she sprays something into his eyes. Ray knocks out Kim and sees the canvas in a laundry basket, which is sent up a laundry chute. He runs upstairs, while Gerald remains outside. Sabrina packs a holdall, including the contents of the laundry basket. She locks the case and heads into an en suite. Downstairs, Gerald retrieves a pistol and heads back outside. Ray attempts to steal the key from Sabrina's discarded trousers and he sees that Sabrina is a transgender woman. On the patio, Gerald points the gun into his mouth, as Eddie stumbles around in the lounge area, having accidentally pushed chilli peppers into his eyes. Ray hides under Sabrina's bed as she reenters the room; lying on a sex doll with both breasts and a penis, he is almost discovered. Eddie washes his face, and Gerald starts to play "Without You". Sabrina makes her way downstairs, taking the holdall's key. Sabrina and Gerald dance. Ray drags the case to the top of the stairs and meets Eddie. Gerald lays Sabrina down on the sofa, places a cushion over her face and shoots her. Gerald turns off the music as the doorbell is heard. Answering the door, Gerald sees a man (Novak) who holds up a sign reading "Hello, my name is Paul. I am deaf & dumb." The reverse of the sign reads "Do you need any cleaning products today?" Gerald heads inside and hides Sabrina's body as Paul waits. Gerald splashes his soup onto the blood and invites Paul to clean it. Ray runs down the stairs and meets Paul; he proceeds to buy rope before returning upstairs. Paul continues to clean, but sees the bullethole in the cushion, and then the suitcase being lowered outside the window. Gerald heads outside to investigate, but Eddie and Ray drop the case on his head. The burglars run past Paul and look out to see the canvas in the pool, before both being shot by Paul. Paul rings someone and says "Hello, it's me. Yeah, it's done." He looks to the fake painting, and says "I've got it right here. Yeah, it's fine. Not a peep out of anyone." He takes down the painting and walks out, as the real canvas is seen sinking in the pool. ## Analysis The style of "A Quiet Night In" is experimental and represents a creative risk. 
While Pemberton and Shearsmith's characters provide comedy, the relationship of Lawson and Chaplin's characters adds an element of darkness. The two storylines are brought together with the violence towards the end of the episode, resulting in the juxtaposition of elements reminiscent of both the Chuckle Brothers (slapstick) and Quentin Tarantino (bloody violence). Though the comedy remains black, the comedic style of the episode differs considerably from that of "Sardines", the previous instalment of Inside No. 9. "A Quiet Night In" offers a kind of "sadistic slapstick" humour; physical comedy, toilet humour and buffoonery are utilised, with the episode effectively becoming a farce. "A Quiet Night In" builds upon silent comedy tropes and norms, but, for the comedy critic Bruce Dessau, the tone is closer to that of Kill List or Sightseers than to the work of Buster Keaton. The episode features various twists, and these are generally in keeping with Pemberton and Shearsmith's typical approach, though one is reminiscent of the Farrelly brothers.

## Reception

Critics generally responded positively to "A Quiet Night In". David Chater, writing for The Times, gave a highly laudatory review, saying the episode was "the funniest, cleverest, most imaginative and original television I have seen for as long as I can remember – one of those fabulous programmes where time stands still and the world around you disappears". He chose not to reveal too much about the plot for fear of "spoiling the fun". Chater later described the episode as "mindboggling in its originality", and "one of the funniest, most imaginative programmes shown on television in the past 15 years". Jane Simon, writing for the Daily Mirror, called the episode a "triumph", while writers for Metro described the episode as "quality comedy", and journalists writing for The Sunday Times characterised it as a "brilliantly conceived and choreographed mime". Jack Seale, writing for the Radio Times, also stressed how the episode was "beautifully choreographed", praising Pemberton and Shearsmith's "willingness to attempt difficult concepts". Dessau considered the episode "genius", and described the twist ending as "genuinely unexpected".

In The Observer, Mike Bradley called "A Quiet Night In" a "priceless silent farce", but, in the newspaper's sister publication The Guardian, Luke Holland was more critical. He said the episode was "an almost wordless half-hour of physical comedy", and that "it plays out like a French farce, its comedic strokes far broader" than those of "Sardines". "If you find two men silently mime-arguing about how long it takes to have a poo funny", he continued, "you're on sturdy ground here". Later, a review by Phelim O'Neill of the Inside No. 9 series 1 boxset published on theguardian.com described "A Quiet Night In" as "engaging, tense, funny, frightening – and accessibly experimental". Rebecca McQuillan of The Herald compared the episode favourably to "Last Gasp", saying that "A Quiet Night In" was "something close to comedy genius". An anonymous review in the South African newspaper The Saturday Star picked out "A Quiet Night In" as the strongest episode of the first series.

After the episode had aired, The Times received an email complaint about Chater's positive review of the episode, which was discussed by the journalist Rose Wild. Part of the complaint read:

> I told my husband how it was supposed to be the funniest thing ever, but we were horrified!
> I'll never be able to forget the little dog being thrown against the window and then stabbed to death by an umbrella – nor the gay man killed by his lover, nor what they had under the bed – nor the deaf man killing the thieves. Having thieves tiptoe comically around the house before having their heads blown off did not make up for my trauma.

In response, Wild said: "I am sorry if we left any permanent damage. In our defence, we did say 'black' comedy." Wild agreed with the reader's comment that she and her husband "must be very different kinds of people" from Chater.

### Viewing figures

On its first airing, the episode received 940,000 viewers (4.8% of the market). This was lower than the 1 million (5.6% of viewers) of the series' debut, "Sardines", and lower than the 1.8 million (7.4%) of Line of Duty, which immediately preceded "A Quiet Night In" in most UK listings. A repeat, shown on 26 May on BBC2, attracted 900,000 viewers, which was 4% of the audience. On this occasion, the episode followed The Fast Show Special. The series average, based upon the viewing figures of the first broadcast of each episode, was 904,000 viewers, or 4.9% of the audience, lower than the slot average of 970,000 (5.1% of the audience).

## Legacy

"A Quiet Night In" was submitted to the British Academy of Film and Television Arts (BAFTA), but was not nominated for a 2015 BAFTA award. In an interview with Digital Spy, Shearsmith said that this surprised him, saying "I was upset, I did think it was a shame that it's not been recognised. You want people to have seen it and to have recognised the work, and innovation, but I think people are doing that. I get told that every day on Twitter, or in meetings." A number of journalists expressed surprise that Inside No. 9 had received no BAFTA nominations, with Julia Raeside, of The Guardian, describing "A Quiet Night In" as "one of the most inspired pieces of mute theatre I've seen on television".

In 2015, Shearsmith said that he and Pemberton had no intention of writing any further silent episodes, as they would not want viewers to think they had run out of ideas, while Pemberton separately said that the pair had no desire to do what would be an inferior version of "A Quiet Night In". "Cold Comfort", the fourth episode of the second series of Inside No. 9, was compared to "A Quiet Night In" by Pemberton, Shearsmith and some critics. "Cold Comfort" was also filmed in an experimental style, with most of the episode shot from fixed cameras and displayed on a split screen. Despite this, Pemberton felt that "Cold Comfort", with its focus on listening and its largely static presentation, could be seen as the "polar opposite" of "A Quiet Night In".

In June 2016, there was a screening of "A Quiet Night In" at Arnolfini as part of Bristol's Slapstick Festival. The one-off event, entitled "A Quiet Night In with Reece & Steve", also featured Pemberton and Shearsmith discussing the episode on-stage with Robin Ince, followed by a question and answer session with the writers. In an interview with Craig Jones of the Bristol Post, Shearsmith said that he was "very excited to come to Bristol", and that he and Pemberton had been wanting to be involved with Slapstick Festival for some time. He said that "It is a lovely thing to be part of and it is great to see how respected slapstick still remains."
1,248,450
Lesley J. McNair
1,173,067,913
United States Army officer (1883–1944)
[ "1883 births", "1944 deaths", "Civilian Conservation Corps people", "Commandants of the United States Army Command and General Staff College", "Deaths by airstrike during World War II", "Friendly fire incidents of World War II", "Military personnel from Minnesota", "Military personnel killed by friendly fire", "People from Wadena County, Minnesota", "Recipients of the Distinguished Service Medal (US Army)", "Recipients of the Legion of Honour", "South High School (Minnesota) alumni", "United States Army Field Artillery Branch personnel", "United States Army War College alumni", "United States Army generals", "United States Army generals of World War I", "United States Army generals of World War II", "United States Army personnel killed in World War II", "United States Military Academy alumni" ]
Lesley James McNair (May 25, 1883 – July 25, 1944) was a senior United States Army officer who served during World War I and World War II. He attained the rank of lieutenant general during his lifetime; after he was killed in action during World War II, he received a posthumous promotion to general.

A Minnesota native and 1904 graduate of the United States Military Academy, McNair was a Field Artillery officer with a background in the Ordnance Department. A veteran of the Veracruz occupation and Pancho Villa Expedition, during World War I he served as assistant chief of staff for training with the 1st Division, and then chief of artillery training on the staff at the American Expeditionary Forces headquarters. His outstanding performance resulted in his promotion to temporary brigadier general; at age 35, he was the Army's youngest general officer.

McNair's experience of more than 30 years with equipment and weapons design and testing, his administrative skills, and his success in the areas of military education and training led to his World War II assignment as commander of Army Ground Forces. In this position, McNair became the "unsung architect of the U.S. Army", and played a leading role in the organizational design, equipping, and training of Army units in the United States before they departed for overseas combat. While historians continue to debate some of McNair's decisions and actions, including the individual replacement system for killed and wounded soldiers, and a controversy over the use of tanks or tank destroyers as anti-tank weapons, his concentration on advanced officer education, innovative weapons systems, improved doctrine, realistic combat training, and development of combined arms tactics enabled the Army to modernize and perform successfully on the World War II battlefield, where the mobility of mechanized forces replaced the static defenses of World War I as the primary tactical consideration.

He was killed by friendly fire while in France to act as commander of the fictitious First United States Army Group, part of the Operation Quicksilver deception that masked the actual landing sites for the Invasion of Normandy. During Operation Cobra, an Eighth Air Force bomb landed in his foxhole near Saint-Lô when the Army attempted to use heavy bombers for close air support of infantry operations as part of the Battle of Normandy.

## Early life

McNair was born in Verndale, Minnesota, on May 25, 1883, the son of James (1846–1932) and Clara (Manz) McNair (1853–1925). He was the second-born of their six children, and the first son. His siblings who lived to adulthood were: sister Nora (1881–1971), the wife of Harry Jessup; brother Murray Manz McNair (1888–1976); and sister Irene (1890–1979), the wife of Harry R. Naftalin.

McNair attended school in Verndale through the ninth grade, the highest available locally; his parents then relocated to Minneapolis so McNair and his siblings could complete high school. After graduating from South High School in 1897, he competed successfully for an appointment to the United States Naval Academy. While he was on the Naval Academy waiting list as an alternate, he began studies at the Minnesota School of Business in Minneapolis, where he concentrated primarily on mechanical engineering and statistics courses. Frustrated with the wait to start at the Naval Academy, in 1900 McNair competed for appointment to the United States Military Academy and took an examination offered by U.S. Senators Cushman Kellogg Davis and Knute Nelson.
Initially selected as an alternate in July 1900, he was quickly accepted as a member of the class that began that August. While at West Point, his fellow students nicknamed him "Whitey" for his ash blond hair, a nickname they continued to use for the rest of his life. The description of McNair which accompanied his photo in West Point's yearbook for his senior year refers to him as "Pedestrian Whitey" and details an incident when he had to walk from Newburgh to West Point, a distance of 11 miles (18 km), after having missed the last train while returning from visiting his fiancée in New York City; the yearbook also contains an anonymously authored poem, "'Whitey's' Record Walk", about the same incident. Several of McNair's classmates also went on to prominent careers in the Army, including George R. Allin, Charles School Blakely, Robert M. Danford, Pelham D. Glassford, Edmund L. Gruber, Henry Conger Pratt, Henry J. Reilly, Joseph Stilwell, and Innis P. Swift.

McNair graduated in 1904, and was commissioned as a second lieutenant. The top five or six graduates usually chose the engineer branch; McNair's class standing (11th of 124) was high enough to earn him the artillery branch, the second choice of most high-ranking graduates.

## Early career

McNair was first assigned as a platoon leader with the 12th Battery of Mountain Artillery at Fort Douglas, Utah. While there, he requested duty with the Ordnance Department, and passed a qualifying examination. After approval of his transfer request, he was first assigned to Sandy Hook Proving Ground, New Jersey, where he began a lifelong interest in testing and experimenting with new equipment and weapons. Initially, McNair's Ordnance testing centered on improving the mountain guns used by units including his 12th Battery to provide artillery support for troops in rugged terrain where limbers and caissons could not travel.

After assignment to the staff of the Army's Chief of Ordnance from 1905 to 1906, McNair was assigned to the Watertown Arsenal, where he completed self-directed academic studies in metallurgy and other scientific topics. In this posting, he gained experience with both laboratory and practical methods of experimentation, including analyzing bronze, steel, and cast iron to determine the best materials to use in manufacturing cannons and other weapons. In addition, he gained firsthand experience with the uses and applications of several foundry machines, including forges, steam hammers, lathes, planing machines, and boring machines. His business college background in statistical analysis and engineering (including technical drawing) helped make him successful at testing and experimentation; as a result of his experience at Watertown, for the rest of his career the Army frequently relied on him to oversee boards that developed and tested weapons and other equipment, and made recommendations on which items were most suitable for procurement and fielding.

He was promoted to temporary first lieutenant in July 1905, and permanent first lieutenant in January 1907. In May 1907, McNair was promoted to temporary captain. (His higher temporary ranks applied in the Ordnance branch, but not in the Artillery.)

## Pre-World War I

In 1909, McNair returned to the Artillery branch and was assigned to the 4th Field Artillery Regiment at Fort Russell, Wyoming. Assigned to command of Battery C, he earned accolades for his leadership skills and technical expertise. Jacob L.
Devers, who was assigned to Battery C after his 1909 West Point graduation, recalled McNair as an outstanding commander who set a superior personal example, and knew how to motivate his subordinates to perform to a high standard. During his battery command, McNair also worked with mixed success on recommendations to modernize pack howitzers, pack saddles, ammunition carriers, and other equipment (much of it of his own design) for the Army's mule-transported mountain artillery. In 1909, McNair commanded the 4th Artillery's Battery D at Fort Riley, Kansas, during tests designed to determine the battle-worthiness of different types of defensive works if they were attacked by various types of cannons and howitzers. In 1911, he received three patents (998,711; 998,712; 1,007,223) for his designs of new artillery projectiles.

McNair's skills in technical drawing, engineering, prototype building, and statistical analysis began to be known Army-wide; in 1912 the commandant of the Field Artillery School requested him by name for assignment to his staff. Instructors at the school had spent more than a year gathering information on 7,000 rounds fired during tests and field exercises; because the school was short-staffed, the commandant called on McNair to compile the data into firing tables that would make it easier for Artillery crews Army-wide to plan and control indirect fire. While carrying out this assignment in 1913, he also spent seven months in France to observe and gather information on the French Army's artillery training, education, and employment.

In April 1914, McNair was promoted to permanent captain, and from April to November 1914 he took part in the Veracruz Expedition as the 4th Field Artillery Regiment's commissary officer. Assigned following another by-name request, this time from regimental commander Lucien Grant Berry, McNair was responsible for procuring, storing, maintaining, accounting for, and distributing the regiment's materiel, including equipment and weapons. During 1915 and 1916, he was again assigned to the Field Artillery School, where he continued to work on procedures for the implementation of the firing table data he had published. He also continued to experiment with different types of artillery pieces so he could make manufacturing and procurement recommendations as the Army began to prepare for possible involvement in World War I. He returned to the 4th Field Artillery Regiment for the Pancho Villa Expedition on the Texas-Mexico border; initially an unassigned staff officer available for additional duties, he later commanded a battery.

## World War I

In April 1917, the United States entered World War I. The next month, McNair was promoted to major and assigned to temporary duty as an instructor for the officer training camp at Leon Springs, Texas. After his Leon Springs assignment, McNair was assigned to the newly created 1st Division, then located at Camp Stewart, in El Paso County, Texas. Assigned to the division headquarters as assistant chief of staff for training, McNair was given responsibility for the organization's pre-deployment mobilization, individual soldier training, and collective unit training. When the division departed for France in June, McNair shared quarters aboard ship with the division's assistant chief of staff for operations, George C. Marshall. During their long ocean voyage, they forged a personal and professional bond that they maintained for the rest of their careers.
McNair was promoted to temporary lieutenant colonel in August 1917, soon after his arrival in France, and was assigned to the American Expeditionary Forces (AEF) headquarters as chief of artillery training and tactics in the AEF staff's training division (G-5). He was promoted to temporary colonel in June 1918, and temporary brigadier general in October; at age 35, he was the youngest general officer in the Army. He continued to impress his superiors with his technical and tactical expertise, and at the end of the war he received the Army Distinguished Service Medal from John J. Pershing and the French Legion of Honor (Officer) from Philippe Pétain. The citation for the Army DSM reads:

> The President of the United States of America, authorized by Act of Congress, July 9, 1918, takes pleasure in presenting the Army Distinguished Service Medal to Brigadier General Lesley James McNair, United States Army, for exceptionally meritorious and distinguished services to the Government of the United States, in a duty of great responsibility during World War I. As the Senior Artillery Officer of the Training Section, General Staff, General McNair displayed marked ability in correctly estimating the changing conditions and requirements of military tactics. He was largely responsible for impressing upon the American Army sound principles for the use of artillery and for improving methods for the support of Infantry, so necessary to the proper cooperation of the two arms.

In June 1919, seven months after the Armistice with Germany which brought an end to the war, McNair was named to the AEF board that was charged with studying the problem of how to provide adequate mobile indirect fire support to infantry during combat. This panel, called the Lassiter Board after its chairman, Major General William Lassiter, was one of several formed by the AEF to review wartime plans and activities, and make recommendations for the Army's future equipment, doctrine, and training.

## Post-World War I

### School of the Line

McNair remained with the Lassiter Board only briefly because he received orders assigning him to the faculty of the Army's School of the Line. The school had ceased operations during the war and was being re-formed at Fort Leavenworth to offer professional education to field grade officers on planning and overseeing the execution of operations at division level and above. McNair reverted to his permanent rank of major, and remained on the faculty until 1921. In addition to receiving accolades for his work to help design and field the course curriculum, McNair also played a key role in another task traditionally assigned to the School of the Line: developing and promulgating the Field Service Regulations, the Army's main document for codifying training and readiness doctrine. While on the School of the Line's faculty, he was one of the main authors of the 1923 revision of Field Service Regulations. The officers who were assigned to the faculty and were responsible for restarting the School of the Line after World War I, including McNair, all received credit for having attended the course.

### Hawaiian Department

In 1921, McNair was posted to Fort Shafter and assigned as assistant chief of staff for operations (G-3) at the headquarters of the Army's Hawaiian Department. While in Hawaii, he became a participant in the Army's ongoing debate over the best methods for providing coastal defense, a debate which engaged proponents of the Coast Artillery branch and the Army Air Service.
Hawaiian Department commander Major General Charles Pelot Summerall assigned McNair to the project because of his reputation for objectivity in carrying out analysis and experimentation with military weapons and equipment. McNair created a committee made up of himself, two coast artillery officers, and an aviation officer to investigate the strengths and weaknesses of the two branches, especially with regard to defending Army and Navy bases on Oahu, and to make recommendations on the best way to employ coast artillery and military aircraft. The McNair board carried out numerous tests of coast artillery and bomber aircraft in a variety of conditions, and compiled tables and charts to depict the results. The panel concluded that coastal artillery was sufficient for shore defense, provided that adequate listening and lighting equipment for detecting and illuminating enemy ships and planes was available. It also concluded that bombers were less accurate, but more effective at destroying enemy ships at longer distances from shore, provided they could overcome obstacles including inclement weather.

In addition to his work on the coastal defense problem, McNair was also responsible for directing a review of War Plan Orange, the Army and Navy joint defense plan for countering an attack on Hawaii by Japan. This possibility was a major concern of U.S. military leaders during the years between World War I and World War II. Among McNair's contributions to updating this plan was the creation of several contingency plans ("branches and sequels" in Army parlance) to augment the main war plan. These contingency plans included: using chemical weapons to defend against a Japanese attack, declaring martial law in Hawaii, and determining how to maintain a defense against Japanese invaders while waiting for reinforcements from the U.S. mainland.

In 1924 and 1925, McNair and Summerall defended McNair's work when it was criticized during the continuing debate about the future of the Army Air Service. The head of the Air Service, Major General Mason Patrick, argued that the McNair board's findings underestimated the capabilities of bomber aircraft, and that the data the board had compiled was inaccurate. In response, Major General Frank W. Coe, the chief of the Coast Artillery Corps, pointed out that McNair's panel included both Coast Artillery and Air Service officers, and that experiments with aircraft had included coast artillery officers as observers. In addition, the aircrews that participated in the McNair board experiments had the opportunity to provide input and voice concern over the board's methods and conclusions. Coe concluded his argument by recommending that the McNair board's findings be approved by the Army as its official position on the issue of coastal artillery versus bombers for shore defense.

Coe's recommendation was not followed; subsequent panels and committees continued to investigate and debate the issue. In addition, the debate continued on the larger question of whether the Air Service should remain a component of the Army or become a separate branch of the military. McNair's involvement in the issue continued during the 1925 court-martial of Brigadier General Billy Mitchell, whose zealous advocacy of creating a separate Air Force resulted in accusations of insubordination. Mitchell based his public assertions about non-aviation officers being ignorant of aviation matters on events he falsely claimed to have witnessed in Hawaii during the McNair board's experiments.
Summerall was so incensed at the questioning of his and McNair's integrity that he attempted to be appointed as president of the court-martial. During Mitchell's trial, Major General Robert Courtney Davis, the Army's adjutant general, ordered Summerall and McNair to provide testimony. They refuted Mitchell's claims that during his time in Hawaii in 1923 the Hawaiian Department had no plan to defend Oahu from Japanese attack. They also demonstrated that Mitchell was incorrect in stating that the Air Service was not treated fairly in the distribution of resources in Hawaii; in fact, Summerall had reallocated funding, equipment and other items from other branches to the Air Service. Mitchell was convicted, and sentenced to a five-year suspension from active duty. He resigned from the Army so he could continue to advocate for the creation of a separate Air Force. Despite the controversy, McNair's work enhanced his reputation as an objective and innovative thinker, planner, and leader, and he remained under consideration for positions of increasing rank and responsibility.

### Purdue University

In 1924, McNair was appointed commandant and professor of military science and tactics for the Reserve Officers' Training Corps (ROTC) program at Purdue University. In accordance with the National Defense Act of 1920, ROTC offered a two-year course of instruction for freshmen and sophomores, which was compulsory at many universities, including Purdue. The program also offered advanced instruction for juniors and seniors who desired to continue military training and possibly earn a commission in the Army Reserve, National Guard, or Regular Army. In addition to following this academic model, since 1919 Purdue had organized its ROTC cadets as a motorized field artillery unit, a circumstance which played to McNair's strengths.

Purdue's president, Edward C. Elliott, was a strong advocate at the national level for ROTC, and a leading voice in opposition to the pacifist movement which gained strength and influence following World War I. McNair became an advocate for military preparedness generally and ROTC specifically, and also argued in opposition to the pacifists. Already a prolific author of professional journal articles on technical military subjects, he penned numerous articles and letters in favor of military training and readiness, and in opposition to the pacifist movement. He also continued to write on Army-specific subjects, including articles that argued for reforming the Army's officer promotion system to replace seniority with merit as the primary consideration.

McNair also effected several positive changes to Purdue's ROTC program. As outlined by Elliott, Purdue ROTC had been the subject of several rapid leadership changes which had resulted in disorganization and low morale. McNair's leadership, technical expertise, and administrative abilities resulted in enhanced student participation and improved morale, and developed the program into the Army's largest light artillery unit. When the Chief of Field Artillery attempted to reassign McNair to Fort Bragg, North Carolina, to lead revisions to the Army's field artillery regulations, Elliott protested; his request to keep McNair until the end of the usual four-year assignment for ROTC professors was granted, and McNair remained at Purdue until 1928.

### Army War College

In 1928, McNair was promoted to lieutenant colonel and entered the United States Army War College, the highest-level formal education program for Army officers.
In the 1920s the curriculum had been revised so that the program of instruction concentrated on economic, industrial, and logistics issues related to large-scale wartime mobilizations, as well as the doctrine, strategy, and tactics requirements associated with organizing, training, deploying, and employing large-scale units for combat (typically division and above). In addition to completing seminars on staff functions (G-1 for personnel, G-2 for intelligence, G-3 for operations and training, and G-4 for logistics), McNair and his War College classmates served on committees that studied war plans and suggested improvements, reviewed regulations and proposed updates, and studied and discussed strategic-level foreign and defense policy issues to improve their understanding. Among his classmates were several officers who became prominent during World War II, including: Simon Bolivar Buckner Jr., Roy Geiger, Oscar Griswold, Clarence R. Huebner, Troy H. Middleton, and Franklin C. Sibert.

Upon graduating, McNair received an evaluation of "superior", with a recommendation from the commandant that he be considered for high command or a senior staff position on the War Department General Staff. In addition, the commandant forwarded to the War Department McNair's final research project on ways for the department to maximize efficiency when allocating funding for unit training, calling it "a study of exceptional merit made at the War College".

### U.S. Army Field Artillery School

After his War College graduation, McNair was assigned as assistant commandant of the Field Artillery Center and School. In this position, he worked with the school's Gunnery Department to address field artillery doctrine issues that had lingered since World War I, including limited mobility, inadequate communications, and overly detailed fire direction techniques. Successive Gunnery Department directors Jacob Devers, Carlos Brewer, and Orlando Ward recognized that continuing improvement to innovations including machine guns and tanks made the static trench warfare of World War I unlikely to be repeated. As a result, they experimented with new techniques, including increasing the speed of artillery support to mobile armor and infantry by empowering Artillery-qualified fire support officers attached to those formations to direct artillery fire. In addition, they pioneered techniques to enhance accuracy, including forward observers (FOs) who could direct rounds onto targets based on seeing their impact, rather than the unobserved timed fire and rolling barrages that had prevailed in World War I. McNair supported these innovations, and prevented interference from senior officers who tried to block them.

Devers, Brewer, and Ward also argued for directing artillery from the battalion level rather than the battery level. In their view, massing artillery and centrally controlling it from a brigade fire direction center (FDC) enabled senior commanders to rapidly provide direct support to the areas on the battlefield where it was most needed. McNair had advocated this tactic since World War I; he again agreed with the Gunnery Department, and worked to help implement this doctrinal change while also protecting the Gunnery Department from outside interference. Over time, improvements to communications equipment and procedures and changes to doctrine enabled implementation of many of these changes, and they largely became the standard by which field artillery units conducted operations in World War II.
At the completion of McNair's assignment as assistant commandant in 1933, he received the highest marks on his efficiency report, along with recommendations for promotion to colonel, and assignment to regimental or brigade command.

### Battalion command

McNair commanded 2nd Battalion, 16th Field Artillery Regiment at Fort Bragg from July 1 through October 1, 1933, when organizational changes re-designated the unit as 2nd Battalion, 83rd Field Artillery Regiment. McNair led the unit through this renaming and reorganization and commanded it until August 1934.

### Civilian Conservation Corps

In August 1934, McNair was assigned to command of Civilian Conservation Corps (CCC) District E, part of the Seventh Corps Area. Headquartered at Camp Beauregard, Louisiana, District E was composed of thousands of CCC members based at 33 camps throughout Louisiana and Mississippi. As with other Regular Army officers who took part in organizing and operating the CCC, McNair's work to plan, direct, and supervise activities over a wide area gave him practical experience at mobilizing, housing, feeding, providing medical care for, supervising, and improving the physical and mental resilience of thousands of young members. This experience working with large bodies of men was an asset as McNair ascended into the Army's senior ranks. In addition, McNair benefited from the experience of working with civilian government leaders to plan and direct CCC activities, which was also put to good use in his later assignments as one of the Army's highest-level commanders. He was promoted to colonel in May 1935.

### Executive officer to the Chief of Field Artillery

With his promotion to colonel, McNair was assigned as executive officer for the Army's Chief of Field Artillery. In addition to carrying out the usual administrative duties of the position, such as managing the chief's appointment calendar and handling his correspondence, McNair was able to continue experimenting with and testing field artillery equipment and weapons, including trips to Aberdeen Proving Ground in Maryland to test the Hotchkiss Anti-Tank Gun and Hotchkiss Anti-Aircraft Gun. He also studied and authored articles on the use of autogyros for field artillery targeting and indirect fire observation, which anticipated the use of helicopters in modern warfare. In January 1937, McNair was promoted to brigadier general.

### Brigade command

In March 1937, McNair was assigned to command of the 2nd Field Artillery Brigade, a unit of the 2nd Infantry Division, then based at Fort Sam Houston, Texas. The Army continued to experiment with equipment, weapons, and organizational design as it moved towards mechanization and modernization; the Army Chief of Staff directed tests of the triangular division concept (as opposed to the square division of World War I) through creation of a Proposed Infantry Division (PID). The 2nd Infantry Division was selected to conduct the tests, and McNair performed additional duty as the PID's chief of staff. In this position, he managed and supervised the PID's design, field tests, after-action reviews, and reports and recommendations to the War Department. The triangular division model was adopted, and became the Army's standard design for infantry divisions in World War II. In his annual performance appraisal, his division commander, Major General James K. Parsons, rated McNair as superior, and recommended him for assignment as a corps or army chief of staff.
On the question of how McNair compared to his peers (unique to the appraisals of general officers), Parsons rated McNair second of the 40 generals personally known to him. McNair remained in command of the 2nd Field Artillery Brigade until March 1939, when he was appointed to serve as commandant of the Command and General Staff College.

### Command and General Staff College

The Chief of Staff of the Army, General Malin Craig, selected McNair to command the Command and General Staff College because he believed its teaching methods needed to be updated, and that the combat unit planning and reporting processes it taught needed to be streamlined. Craig felt that McNair's background made him ideally suited to lead this effort; in addition, Craig's deputy, Brigadier General George Marshall, believed that the Command and General Staff College program of instruction was too rigid and too focused on a staff process geared towards leadership of Regular Army units. In Marshall's view, the curriculum needed to be overhauled to reflect the likely needs of the World War II army, including faster, more flexible methods for planning and execution of large-scale mobilizations, and incorporation of processes for training draftees and members of the National Guard, who would report for duty with less training and experience than members of the Regular Army. In addition, Marshall wanted to ensure that graduates were prepared to plan and execute the offensive maneuver-based operations Army leaders anticipated would characterize World War II, as opposed to the defensive trench warfare of World War I. In addition to modernizing the curriculum, McNair reduced the course length to accommodate the civilian schedules of National Guard and Reserve officers, many of whom would otherwise be unable to attend.

While working to update and streamline the curriculum, McNair updated the Army's core doctrine, the Field Service Regulations. He began his service as commandant in time to finalize publication of the 1939 edition, which was divided into three Field Manuals (FMs): FM 100–5, Operations; FM 100–10, Administration; and FM 100–15, Large Units. Because of criticism of the 1939 edition, McNair almost immediately began work on an update, with Marshall, now the Army's chief of staff, directing that it be published no later than January 1, 1941. The work on the 1941 edition was still in progress when McNair was again reassigned; when it was published it became the primary doctrinal document for the Army's World War II activities.

His efficiency reports continued to reflect his superior performance; for the first appraisal he received while commandant, Craig rated McNair second of the 41 brigadier generals he knew. In an evaluation of his performance in his additional duty as commander of the Fort Leavenworth post, Major General Percy Poe Bishop, commander of the Seventh Corps Area, ranked McNair fifth of the 31 brigadier generals he knew. By the time of his second appraisal as commandant, the War Department had waived the requirement for written evaluations of officers supervised directly by the Army Chief of Staff, but in his second role as commander of Fort Leavenworth, Bishop rated McNair as superior in all areas, recommended him for a high level command in combat, and ranked him first of the more than 30 brigadier generals Bishop knew.

## World War II

### General Headquarters, United States Army

In July 1940, McNair began his new assignment as chief of staff for General Headquarters, United States Army (GHQ), the organization the Army created to oversee World War II mobilization, organization, equipping, and training. Marshall was appointed to command GHQ in addition to his duties as Chief of Staff of the Army; in order to concentrate on his primary role, he largely delegated responsibility for running GHQ to McNair. As part of this working relationship, Marshall provided McNair broad advice and guidance, and McNair obtained approval from Marshall for the most important decisions.

As GHQ's responsibilities increased following U.S. entry into the war, McNair's responsibilities were encroached upon by members of the War Department staff; for instance, the logistics staff section (G-4) retained authority over corps area commands in matters involving billeting, equipping, and supplying soldiers undergoing mobilization training, which limited GHQ's ability to plan these activities and oversee their execution. In response, McNair proposed establishing GHQ's unity of command over the Army's four field armies and eight corps areas; under his concept, each corps area headquarters would have responsibility for all administrative functions within its area of responsibility, enabling GHQ and the field armies, corps, and divisions to focus on organizing, training, and administering the mobilized troop units that were preparing to go overseas. Though Marshall was initially receptive, members of the War Department General Staff disagreed with McNair's proposal, and Marshall concurred with them.

The small GHQ staff Marshall assembled included representatives from each of the Army's major field branches – Infantry, Field Artillery, Cavalry, Coast Artillery, Armor, Engineers, and Signal – along with liaison officers representing the National Guard and Army Reserve. As operations tempo increased, the staff expanded to include functional area representatives (G-1, G-2, G-3, and G-4). Among the individuals who served on the GHQ staff were Lloyd D. Brown, who later succeeded Omar Bradley as commander of the 28th Infantry Division, and Mark W. Clark, who went on to command the 15th Army Group. McNair's National Guard liaison was Kenneth Buchanan, who later served as assistant division commander of both the 28th and 9th Infantry Divisions, and commanded the Illinois National Guard as a major general after the war.

As GHQ chief of staff, McNair played a leading role in planning and conducting the 1940 and 1941 Louisiana Maneuvers and Carolina Maneuvers, large-scale war games that enabled the Army to observe and draw conclusions with respect to training, doctrine, leadership, and other items of interest, which in turn led to changes in doctrine, equipment, and weapons. In addition, these maneuvers were used to identify which senior officers were most capable, enabling the Army to assign the best performers to command and top-level staff positions, and relieve or reassign those perceived as less capable. McNair was promoted to temporary major general in September 1940, and temporary lieutenant general in June 1941.

The War Department also assigned GHQ operational responsibilities, including planning for the defenses of facilities in Iceland, Greenland, and Alaska. McNair generally delegated the responsibility for this aspect of GHQ activities to his deputy, Brigadier General Harry J.
Malony, so that he could concentrate on GHQ's organizational and training responsibilities, but maintained overall control of each role. McNair established the GHQ on the site of the Army War College (now the location of the National War College) at Washington Barracks (now Fort Lesley J. McNair), as the college had been closed for the duration of the war. McNair continued to command Army Ground Forces from this location rather than moving into the Pentagon when its construction was completed in 1943.

### Army Ground Forces

#### AGF creation and operations

In March 1942, the Army eliminated the General Headquarters in favor of three new functional commands: Army Ground Forces (AGF), commanded by McNair; the Army Air Forces (AAF), commanded by Lieutenant General Henry H. Arnold; and the Services of Supply (later the Army Service Forces or ASF), commanded by Lieutenant General Brehon B. Somervell. McNair's task at AGF was to expand the Army's ground forces from their March 1942 strength of 780,000 officers and men to more than 2.2 million by July 1943, and more than 8 million by 1945. His duties grew significantly, and included all Army boards, formal schools, training centers, and mobilization camps, as well as special activities having to do with the four combat arms (Infantry, Field Artillery, Cavalry, and Coast Artillery). As part of this reorganization, the Army eliminated the branch chief positions which had been responsible for these arms, transferring their authority to McNair. In addition, he had authority over four new "quasi-arms" which did not fall under the traditional combat arms – Airborne, Armor, Tank Destroyer, and Antiaircraft Artillery.

The Army intended for this reorganization to end inter-branch rivalries and competition for pre-eminence and resources; in fact it met with mixed success as advocates for each branch continued to argue among themselves. Additionally, competition for authority and resources also emerged between the War Department General Staff and the three functional commands, and between the functional commands themselves. Accustomed to working with minimal delegation and a small staff, McNair largely concentrated on the task at hand, and avoided the rivalries as much as possible. Despite this approach, he was still involved in controversies with the other functional commands, including an ongoing rivalry over allocation of the best qualified recruits and draftees.

#### Personnel recruiting and training

##### Training

McNair organized and supervised instruction in basic soldiering skills, to help individuals become proficient generalists prepared for more complex unit training. Once individuals were qualified, units carried out collective training, beginning with the lowest level, and continuing to build through successive echelons until divisions, corps, and armies were capable of carrying out large-scale simulated force-against-force exercises. He insisted that training be conducted in realistic conditions, including the use of live ammunition, or simulators that replicated live ammunition, so that soldiers and commanders would be prepared to fight once they deployed overseas.

##### National Guard

McNair had difficulty implementing a training program for National Guard units, chiefly because their members had little prior military experience beyond monthly drills and short annual training periods.
He recommended a wholesale demobilization of the National Guard; Marshall and the Secretary of War disagreed, partly because they anticipated political backlash, and partly because the manpower the Guard provided was in demand. McNair also found National Guard senior commanders wanting; in his view, National Guard officers should not progress beyond the grade of colonel, and professional full-time officers should command at the division level and above. In this, he was mostly successful; all but two National Guard division commanders were replaced by regular Army officers, and most National Guard generals carried out stateside assignments, or non-combat overseas assignments. Some served in roles below division commander (such as assistant division commander), or carried out administrative and training roles, such as provost marshal or school commandant. In addition, some accepted reductions in rank so they could serve in overseas assignments.

##### Fielding army divisions

Original War Department estimates called for the Army to raise, equip, train, and deploy as many as 350 divisions. Later estimates revised this number downward to between 200 and 220. One factor that enabled this downward revision was that U.S. divisions were better equipped in some areas of their organizations than those of their adversaries, particularly later in the war; for example, every type of U.S. division was completely motorized or mechanized, while equivalent German divisions often relied on as many as 4,000 horses for transporting soldiers, supplies, and artillery. A variety of other factors, including the entry of the Soviet Union into the war on the side of the Allies after Adolf Hitler broke his non-aggression pact with Joseph Stalin by launching Operation Barbarossa, the need to ensure that enough farmers and agricultural workers were available for food production, and the need to maintain a U.S. workforce large enough to handle the production of weapons, vehicles, ammunition, and other equipment, caused Army Chief of Staff Marshall to decide that maintaining the Army's ground combat strength at 90 divisions would strike the balance between fielding too few soldiers to defeat the Axis powers and drawing so many men into the Army that the civilian workforce would be left short-handed. To accomplish this goal, McNair and the AGF staff created new division manning and equipping tables, which reduced the number of soldiers required to man a division. This initiative had accomplished Marshall's goal by 1945, enabling the Army to field 89 divisions with the same number of soldiers it would have required to man only 73 divisions in 1943.

The Army's effort to create and field a number of divisions sufficient to achieve victory also included the creation of airborne units. The Army had begun testing and experimenting with airborne formations in 1940; by 1943, William C. Lee, an early proponent of airborne forces, had convinced McNair of the need for division-sized airborne organizations. Though McNair and a few other Army leaders sometimes advocated for creation of all-purpose light divisions that could be adapted for unique missions, the Army did create some specialized divisions, including airborne. This initiative included activating the Airborne Command to oversee the organization and training of airborne units, conversion of the 82nd and 101st Infantry Divisions to airborne, and preparation of these divisions for paratrooper and glider missions in Europe.
Another initiative McNair incorporated into the effort to field the Army's World War II divisions was the creation of light divisions. Recognizing that the rugged terrain of the Italian mountains and Pacific jungles required specialized units, the AGF reorganized three existing divisions as the 89th Light Division (Truck), 10th Light Division (Pack, Alpine), and 71st Light Division (Pack, Jungle). The results of pre-deployment training demonstrated that the 71st and 89th Divisions were too small to sustain themselves in combat, so they were converted back to regular infantry divisions. The 10th Division's early training results were also less than encouraging, but its identity as a mountain division was retained, and after completing training it served in combat in the Italian mountains. Overall, the Army's wartime division organization and reorganizations have been judged a success by historians, in that they provided an adequate number of units to win the war, while ensuring that agricultural and industrial production could continue.

##### Individual replacement system

As AGF commander, McNair worked to solve the problem of how to integrate replacement soldiers into units that had sustained casualties. Rather than adopt the model of replacing units that had sustained high casualties with new, full-strength units, Marshall and McNair cited the need to allocate space on transport ships to equipment and supplies as the reason to provide individual replacement soldiers to units while the units remained in combat. In practice, the individual replacement system caused difficulties for both the replacement soldiers and the units they joined, especially during later stages of the war. New soldiers could have difficulty being accepted by the veterans of their units, since they were replacing buddies who had been killed or wounded, and had not shared the veterans' combat experiences. In addition, because new soldiers joined units that were already in the fight, there was often no time to teach them the tactics and techniques that increased their chances of surviving on the battlefield.

Though soldiers were supposed to be allocated to requesting units from replacement depots based on their qualifications and the priority of the unit, McNair found that in practice many commanders in the combat theaters used replacement soldiers to form new units, or personally selected individual replacements from personnel centers without regard to their qualifications. Assigning soldiers to units for which they were not qualified, such as armor crewmen to infantry units, negated the training they had received before going overseas. To address these concerns, McNair advocated faster qualification of replacement soldiers by reducing their training from 24 weeks to 13. The War Department reduced the training requirement to 17 weeks, but mandated continued use of the individual replacement system. Because the AGF had responsibility for implementing the individual replacement system, McNair attempted improvements, including directing the establishment of the Classification and Replacement Division within his command, and streamlining the physical, psychological, and mental criteria used to determine fitness for service. Issues with the ASF's management of replacement centers within the United States led AGF to establish two new ones at Fort Meade, Maryland, and Fort Ord, California.
Because infantry soldiers suffered disproportionately high casualties in combat, McNair argued for assigning to the AGF the recruits and draftees deemed high quality (typically those with the most education and highest aptitude test scores), but only partially succeeded. In addition, he undertook several initiatives to improve the morale and esprit de corps of infantry soldiers, and enhance the goodwill of the civilian population towards the infantry, including creation of the Expert Infantryman Badge, and implementation of the "Soldier for a Day Initiative", which gave civilian government and business leaders the opportunity to interact with mobilized soldiers before they left the United States for combat assignments. These initiatives were not always successful; by late 1944 and early 1945, the number of units fighting continuously or nearly continuously caused the replacement system to break down. As a result, rear echelon soldiers were often pulled from their duties to fill vacancies in front line combat units, and training for some replacement soldiers and units was cut short so they could be rushed into combat. Some units were worn down to the point of combat ineffectiveness. In others, low morale, fatigue, and sickness became more prevalent.

##### Recruitment efforts

Because of the difficulty in attracting to the AGF those trainees deemed to be "high-quality", McNair attempted to recruit through improved public relations. One part of this effort was the creation of an office attached to the AGF staff, the Special Information Section (SIS). Approximately 12 officers and enlisted soldiers with experience as writers and editors worked in the SIS to promote improved public appreciation for AGF soldiers, especially the infantry. In addition, the SIS worked with civilian writers and editors, musicians, cartoonists, film makers, and artists who worked in other mediums to enhance the prominence of infantrymen in their work. As part of this initiative, McNair wrote personally to several leading magazine and newspaper publishers to ask for their aid.

In another effort to inform the public of the Army's personnel needs and improve the way the AGF was perceived, on Armistice Day in November 1942, McNair delivered a radio address over the Blue Network. In his remarks, titled "The Struggle is for Survival", McNair described the fighting capability and ruthless attitude of soldiers in the Japanese and German armies, and stated that only similar qualities in American ground troops – by implication, meaning not "the more genteel forms of warfare" practiced by the AAF and ASF – would see the Allies through to victory. The public response to McNair's remarks was largely favorable, though he did receive some criticism for extreme language that seemed to suggest an unfeeling attitude towards death and destruction. More importantly, McNair's radio address did little to improve recruiting into the Army Ground Forces; the public may have developed a better appreciation for infantry, armor, and artillery, but volunteers and draftees continued to be attracted to the AAF and ASF, and those commands continued to lobby for the bulk of new high-quality service members to be assigned to them.

##### African-American soldiers

During World War II, War Department regulations for African Americans required that they be admitted into the Army in numbers proportionate to their share of the population.
In addition, the Army was required to establish segregated African American units in each major branch of the service, and give qualified African Americans the opportunity to earn officer commissions. The AGF worked to reconcile these requirements with its mission of producing trained soldiers and units that were capable of meeting and defeating the enemy in battle. To that end, the AGF activated and trained African American units in all major branches of the ground forces, and black soldiers who graduated from officer candidate schools (OCS) were assigned to African American units as they received their commissions. At the peak of the Army's expansion in June 1943, there were nearly 170,000 African Americans training at AGF facilities, or about 10.5 percent of its personnel strength. These figures were in line with the War Department requirements; at the time, African Americans accounted for between 10 and 11 percent of the U.S. population. Beginning with a small number of African American units in the Regular Army and National Guard, the AGF organized and trained many more, including the African-American 92nd and 93rd Infantry Divisions, and the 2nd Cavalry Division. By 1943, the AGF had created nearly 300 African American units, including the 452nd Anti-Aircraft Artillery Battalion, the 555th Parachute Infantry Battalion, and the 761st Tank Battalion.

The War Department General Staff initially suggested incorporating African Americans into units with white soldiers at a ratio of 1 black to 10.6 whites. Based on difficulties with completing training, in large part because the abilities of the recruits were lagging as a result of having grown up in the segregated education system and culture then prevalent in the United States, McNair advocated for separate battalions of African American soldiers, arguing that they could be more effectively employed in this manner. In McNair's construct, such units could be deployed for functions including guarding lines of communication in rear areas and near front lines, or maintained as a reserve by division and corps commanders and committed where they were most needed during combat. The proposal that was adopted was closer to what McNair had advocated, and the AGF lengthened the initial training period. African American officers generally filled company grade positions as they completed OCS, with white officers holding the field grade and senior command positions. In cases where there were not enough African Americans, white officers filled the company grade positions as well.

#### Anti-tank weapons and doctrine

Developing and employing anti-tank weapons and creating suitable doctrine proved to be an ongoing challenge, for which some historians have faulted McNair. Marshall favored creation of self-propelled anti-tank weapons; McNair had long favored towed weapons, including the M3 gun. McNair recognized the limitations of the anti-tank weapons then available, and favored a defensive approach for their use, advocating that units emplace and camouflage them, but official doctrine called for a more offensive mindset. It also called for anti-tank units to conduct independent operations; McNair favored a combined arms approach. He believed the use of anti-tank weapons was an economical and efficient means to defeat enemy tanks, and would free up U.S. tanks for wider offensive operations.
When the M3 anti-tank gun proved to be a less than optimum anti-tank weapon, the Army began development of tank destroyers – self-propelled tracked vehicles with a gun capable of engaging a tank, but which were faster and more maneuverable because they had thinner armor than a tank. When it appeared that the Infantry and Cavalry branch chiefs might subsume proposed tank destroyer units into their own task organizations, Marshall attempted to continue progress on tank destroyer development without generating active dissent by approving creation of the Tank Destroyer Center at Camp Hood. In practice, the separate Tank Destroyer Center meant that commanders of armor and infantry units had little or no experience with anti-tank weapons, or the most effective way to employ them. After having initially fielded the anti-tank weapon it had available – the M3 towed gun – the Army began to field self-propelled anti-tank weapons, first the M3 Gun Motor Carriage, and then the M10 tank destroyer. As a result of the Army's inability to resolve the questions of equipment and doctrine, it continued to struggle with efforts to develop viable anti-tank doctrine, and efforts to employ anti-tank guns or tank destroyers as part of combined arms team often proved ineffective. The ongoing debate within the Army about which type of anti-tank or tank destroyer weapon to use, and what design specifications ought to be included, also hampered the AGF's abilities to field the weapons and provide adequate training. As a result, during their initial employment and use, AGF observers in combat theaters identified several issues, including some as basic as training ammunition for the M3 Gun Motor Carriage being used unknowingly during combat, which obviously limited its effectiveness. Over time, commanders in combat resorted to field expedient measures to solve the problems, including learning to employ anti-tank guns and later tank destroyers in mutually supporting positions, and integrating them with infantry and armor units to maximize their effectiveness as part of a combined arms effort. The AGF in turn incorporated these lessons learned into mobilization training, so that over time soldiers and units deploying for combat were making use of the most up to date doctrine and tactics. #### Tanks At the start of World War II, the United States fielded the M3 Lee and M4 Sherman as its primary medium tanks. After-action reports from the North African Campaign and other engagements convinced commanders including Jacob L. Devers that the U.S. needed to deploy a heavier tank with more firepower in order to counter German Tiger I and Panther tanks. During his assignment as Chief of Armor earlier in the war, Devers had rejected the M6 heavy tank for being under-powered and under-gunned for its weight and size. As a result, the Army's Ordnance Department had overseen the creation of the T20 Medium Tank. In 1942, Devers advocated the immediate shipment of 250 T20s to the European Theater. McNair opposed this request, still convinced that smaller but heavily armed self-propelled tank destroyers could be employed faster and more effectively, especially when considering factors such as available space on cargo ships transporting weapons and equipment to Europe. The Army attempted development of an improved version of the T20, the T23, but design flaws kept it from being moved into production. 
In December 1943, Devers and other commanders with tank experience succeeded in convincing George Marshall of the need for a tank with more armor and firepower than the M3 and M4. An improved prototype, the T26, was produced as the M26 Pershing, and the Army ordered 250 Pershings. McNair was opposed, stating that the M4 was adequate, and arguing that tank-on-tank battles requiring the U.S. to employ heavier tanks with bigger guns were unlikely to occur. The Pershings were fielded, but arrived in Europe too late to have an effect on the conduct of the war. McNair's views on the employment of tanks also factored into reorganizations of the Army's armored divisions. The Armored Force had been created in 1940, and grew to include 16 divisions, though McNair unsuccessfully recommended reducing the number to six. The Armored Force created an armored corps headquarters in 1942, but it was deactivated at McNair's instigation after only a few months. In addition to arguing against the need for an armored corps, McNair believed the task organization for an armored division to be too large and unwieldy, again presuming that tanks would primarily serve as an exploitation force for rapid advances and as infantry support, but were not likely to engage in tank-on-tank battles. As a result, he played a key role in downsizing the armored divisions in 1942 and 1943, with the 1943 reorganization reducing the divisions by 4,000 soldiers and between 130 and 140 tanks. The downsizing enabled the creation of separate tank battalions, which could be deployed to support infantry divisions on an as-needed basis. (The downsizing did not affect the 2nd or 3rd Armored Divisions, which maintained their "heavy" task organization.) ## Death and burial In 1943, McNair traveled to North Africa on an inspection tour of AGF troops to acquire firsthand information about the effectiveness of training and doctrine, with the goal of making improvements in AGF's mobilization and training process. On April 23, he was observing front line troops in action in Tunisia when he sustained fragmentation wounds to his arm and head; a company first sergeant standing nearby was killed. McNair deployed to the European Theater in 1944; his assignment was initially undetermined, and Marshall and Dwight Eisenhower, the supreme commander in Europe, considered him for command of the Fifteenth United States Army or the fictitious First United States Army Group (FUSAG). With Lieutenant General George S. Patton, the FUSAG commander, slated to take command of the actual Third United States Army after the invasion, the Army required another commander with a recognizable name and sufficient prestige to continue the Operation Quicksilver deception that masked the actual landing sites for Operation Overlord, the Invasion of Normandy. Eisenhower requested McNair as Patton's FUSAG successor, and Marshall approved. In July 1944, McNair was in France to observe troops in action during Operation Cobra, and add to the FUSAG deception by making the Germans believe he was in France to exercise command. He was killed near Saint-Lô on July 25, when errant bombs of the Eighth Air Force fell on the positions of 2nd Battalion, 120th Infantry, where McNair was observing the fighting. In one of the first Allied efforts to use heavy bombers in support of ground combat troops, several planes dropped their bombs short of their targets. Over 100 U.S. soldiers were killed, and nearly 500 wounded. 
McNair was buried at the Normandy American Cemetery and Memorial in Normandy, France; his funeral was kept secret to maintain the FUSAG deception, and was attended only by his aide and Generals Omar Bradley, George S. Patton, Courtney Hodges, and Elwood Quesada. When his death was reported by the press, initial accounts indicated he had been killed by enemy fire; not until August were the actual circumstances reported in the news media. McNair is the highest-ranking military officer buried at the Normandy cemetery. Along with Frank Maxwell Andrews, Simon Bolivar Buckner Jr., and Millard Harmon, he was one of four American lieutenant generals who died in World War II. McNair's tombstone originally indicated his rank of lieutenant general. In 1954, Buckner and he were posthumously promoted to general by an act of Congress. The American Battle Monuments Commission (ABMC) did not upgrade McNair's gravestone; upon being informed in 2010 that the original marker was still in place, the ABMC replaced McNair's headstone with one that indicates the higher rank. ## Hearing loss While generally in excellent physical condition even as he aged, McNair began to experience hearing loss early in his career. The condition progressed, and included tinnitus, but did not interfere with his work; physical examinations indicated he had no trouble with tasks including speaking on the telephone. By the time he reached the ranks of the Army's senior commanders, his hearing loss was severe enough that he compensated by reading lips, and by forgoing participation in events where his difficulty in hearing would pose an obstacle, such as large conferences. By the late 1930s, he worried that his hearing condition might result in his mandatory retirement for medical reasons. Instead, Marshall issued a waiver which allowed him to continue to serve. His hearing loss may have prevented him from obtaining a field command during World War II, but Marshall was unwilling to do without his abilities as an organizer and trainer. ## Legacy ### Reputation McNair was held in high regard by his contemporaries, as evidenced by performance appraisals that routinely gave him the highest possible ratings. Marshall also held McNair in high esteem, as demonstrated by the fact that McNair was a top stateside commander during World War II, was considered for a top command in Europe, and was ultimately selected to command the First United States Army Group as part of a deception plan that required a general with a good reputation and high name recognition for successful execution. In addition, soon after McNair was assigned to serve as commandant of the Command and General Staff School, Marshall learned that he would become the Chief of Staff of the Army. In a letter to McNair, Marshall wrote: "You at the head of Leavenworth are one of the great satisfactions I have at the moment in visualizing the responsibilities of the next couple of years." Mark W. Clark served under McNair as operations officer (G-3) at AGF Headquarters before ascending to the general officer ranks. In his autobiography, Clark referred to McNair as "one of the most brilliant, selfless and devoted soldiers" he had ever encountered. 
McNair's primary legacy was his role as "the brains of the Army", in that his involvement in unit design (the triangular division), education (the Command and General Staff Officer Course), doctrine (updating the Field Service Manual and revising field artillery methods and procedures), weapons design and procurement (experimentation with field artillery, anti-tank weapons, and anti-aircraft weapons), and training of soldiers and units (as commander of Army Ground Forces), especially in the era between World Wars I and II, made him one of the primary architects of the Army as the United States employed it for World War II. Another enduring McNair legacy was his training method of beginning with basic soldier skills and then building through successive echelons until large units became proficient during exercises and war games that closely simulated combat. These concepts remain the Army's core principles for planning, executing, and overseeing individual and collective training. ### Historical debate McNair's decisions and actions during World War II continue to be debated by historians, particularly the individual replacement system and the difficulties with resolving the issue of tanks versus anti-tank guns and tank destroyers. With respect to the individual replacement program, historian Stephen Ambrose described it as inefficient, wasteful, and a contributor to unnecessary casualties. However, some recent assessments have viewed it more favorably. As one example, a 2013 essay by Robert S. Rush concluded "Success results NOT from rotating organizations in and out of combat but from sustaining those organizations while in combat." On the questions of fielding anti-tank guns and tank destroyers as the primary anti-armor weapons, and fielding light and medium tanks instead of a heavy tank capable of matching those in German armor units, historians including Mark Calhoun argue that McNair recognized the limitations of the anti-tank weapons then available, and the difficulties with providing better ones given time and production constraints, and so worked to develop a doctrine that made the best possible use of the weapons that were available. Other historians, including Steve Zaloga, argue that McNair's opposition to development and fielding of heavy tanks represented a "World War I mindset" that hindered the overall performance of the US Army during World War II. ### Awards and honors McNair received the honorary degree of LL.D. from Purdue University in 1941 and the University of Maine in 1943. The Lesley J. McNair Bridge was a temporary structure over the Rhine River in Cologne, Germany; it stood from 1945 until it was dismantled after the erection of a permanent bridge in 1946. Washington Barracks in Washington, D.C., was renamed Fort Lesley J. McNair in 1948. Roads and buildings on several U.S. Army posts carry the name "McNair", including McNair Avenue and McNair Hall (Fort Sill), McNair Road (Joint Base Myer–Henderson Hall), McNair Drive (Fort Monroe), and McNair Hall (Fort Leavenworth). McNair Barracks in Berlin, Germany, was named for Lesley J. McNair. The facility was closed as a U.S. military installation in 1994, and has since been redeveloped, but retains a museum which details its use as a U.S. base. McNair Kaserne in Höchst, Germany, was also named in his honor. Home to several Army Signal Corps units, it was closed and turned over to the German government in 1992, and has since been redeveloped as residential and commercial space. 
Veterans of Foreign Wars Post 5263 in Lawton, Oklahoma (near the Fort Sill Artillery Center and School) is named for him. McNair was inducted into the Fort Leavenworth Hall of Fame, which was created in 1969 and is managed by the United States Army Combined Arms Center. The Fort Leavenworth Hall of Fame honors soldiers who were stationed at Fort Leavenworth, and significantly contributed to the Army's history, heritage, and traditions. In 2005, McNair was inducted into the U.S. Army's Force Development Hall of Fame. There is a collection of McNair papers at the National Archives and Records Administration. Purdue University maintains another folder of papers related to McNair's service there. ## Family McNair married Clare Huster (1882–1973) of New York City on June 15, 1905. They were the parents of a son, Douglas (1907–1944). ### Clare McNair After McNair's death, Clare McNair was employed by the United States Department of State to investigate working conditions for female members of the United States Foreign Service Auxiliary, the organization created to augment the Foreign Service during World War II. She traveled to several foreign locations, including North Africa, Europe, and Latin America to interview employees and observe working conditions in order to make recommendations for improvements. ### Douglas C. McNair Douglas Crevier McNair was born in Boston, Massachusetts, on April 17, 1907, while his father was stationed at the Watertown Arsenal. He was a 1928 graduate of West Point, and became an artillery officer after initially qualifying in the infantry branch. The younger McNair advanced through command and staff positions to become chief of staff of the 77th Infantry Division with the rank of colonel. He was killed in action on the island of Guam on August 6, 1944, only 12 days after the death of his father. He died when two other 77th Division soldiers and he became involved in a skirmish with Japanese soldiers while scouting locations for a new division command post. Douglas McNair posthumously received the Silver Star, Legion of Merit, and Purple Heart. First interred on Guam, in 1949 he was buried at the National Memorial Cemetery of the Pacific in Hawaii. The 77th Division named its temporary encampment near Agat, Guam, "Camp McNair" in his honor. Another Camp McNair, this one near Fujiyoshida, Japan, served as a U.S. military training facility from the late 1940s until the 1970s, and was used extensively during the Korean War. In addition, the McNair Village housing development at Fort Hood, Texas, was also named for him. ## Military awards Lesley McNair's awards and decorations included: Note: Two Distinguished Service Medals, one Purple Heart, and the World War II Victory Medal were awarded posthumously. ## Dates of rank ## Works by McNair (Partial list)
3,319,963
Hell Is Other Robots
1,172,252,285
null
[ "1999 American television episodes", "Dance animation", "Fiction about the Devil", "Futurama (season 1) episodes", "Musical television episodes", "Religion in science fiction", "Television episodes about drugs", "Television episodes set in hell", "Works based on the Faust legend" ]
"Hell Is Other Robots" is the ninth episode in the first season of the American animated television series Futurama. It originally aired on the Fox network in the United States on May 18, 1999. The episode was written by Eric Kaplan and directed by Rich Moore. Guest stars in this episode include the Beastie Boys as themselves and Dan Castellaneta voicing the Robot Devil. The episode is one of the first to focus heavily on Bender. In the episode, he develops an addiction to electricity. When this addiction becomes problematic, Bender joins the Temple of Robotology, but after Fry and Leela tempt Bender with alcohol and prostitutes, he quits the Temple of Robotology and is visited by the Robot Devil for sinning, and Bender is sent to Robot Hell. Finally Fry and Leela come to rescue him, and the three escape. The episode introduces the Robot Devil, Reverend Lionel Preacherbot and the religion of the Temple of Robotology, a spoof on the Church of Scientology. The episode received positive reviews, and was one of four featured on the DVD boxed set of Matt Groening's favorite episodes: "Monster Robot Maniac Fun Collection". ## Plot After a Beastie Boys concert, Bender attends a party with his old friend, Fender, a giant guitar amp. At the party, Bender and the other robots abuse electricity by "jacking on," and Bender develops an addiction. After receiving a near-lethal dose from an electrical storm, Bender realizes he has a problem and searches for help. He joins the Temple of Robotology, accepting the doctrine of eternal damnation in Robot Hell should he sin. After baptizing him in oil, the Reverend Lionel Preacherbot welds the symbol of Robotology to Bender's case. As Bender begins to annoy his co-workers with his new religion, Fry and Leela decide they want the "old Bender" back. They fake a delivery to Atlantic City, New Jersey and tempt Bender with alcohol, prostitutes and easy targets for theft. He eventually succumbs, rips off the Robotology symbol and throws it away, causing it to beep ominously. While seducing three female robots in his Trump Trapezoid room, Bender is interrupted by a knock at his room door. He opens the door and is knocked unconscious. He awakens to see the Robot Devil and finds himself in Robot Hell. The Robot Devil reminds Bender that he agreed to be punished for sinning when he joined Robotology. After discovering Bender is missing, Fry and Leela track him down using Nibbler's sense of smell. They eventually find the entrance to Robot Hell in an abandoned amusement park. A musical number starts as the Robot Devil begins detailing Bender's punishment. As the song ends, Fry and Leela arrive and try to reason with the Robot Devil on Bender's behalf. The Robot Devil tells them that the only way to win back Bender's soul is to beat him in a fiddle-playing contest, as required under the "Fairness in Hell Act of 2275". The Robot Devil goes first, playing Antonio Bazzini's "La Ronde des Lutins". Leela responds, having experience in playing the drums, but after a few notes it is clear Leela's fiddle-playing is pathetic, so she assaults the Robot Devil with the fiddle instead. As Fry, Leela, and Bender flee the Robot Devil's clutches, Bender steals the wings off a flying torture robot, attaches them to his back, and airlifts Fry and Leela to safety. Leela drops the heavy golden fiddle onto the Robot Devil's head, making them light enough to escape. Bender promises to never be too good or too evil, but to remain as he was before joining the Temple of Robotology. 
Over the closing credits, a remix of the show's theme song plays instead of the original version. ## Production "Hell Is Other Robots" lampoons drug addiction and religious conversion. In the DVD commentary for the episode, David X. Cohen, Matt Groening and Eric Kaplan all agreed that they felt comfortable enough with each of the Futurama characters to begin to take them in new and strange directions. Cohen noted that Bender's addiction is a perfect example of something they could do with a robot character which they could not get away with had it been a human character. One person at the studio refused to work on this episode because they did not agree with the portrayal of some of the religious content. Cohen also noted that the writing team had begun to loosen up during this episode, which gave it a feel similar to the series' later episodes. Kaplan claimed that before editing, there was enough material to make a three-part episode. Cohen and Ken Keeler traveled to New York to work with the Beastie Boys for their role. They waited three days for the Beastie Boys to call and say they were willing to record but eventually gave up and returned to the studios in Los Angeles. The audio tracks were recorded later. Adam "MCA" Yauch was unavailable at the time of the recording so only Adam "King Adrock" Horovitz and Michael "Mike D" Diamond voice themselves in the episode, with Horovitz also voicing Yauch. The Beastie Boys perform three songs in the episode: their 1998 hit single "Intergalactic", "Super Disco Breakin", and a brief a cappella version of "Sabotage". It was initially requested that they perform "Fight for Your Right" but they declined. The episode also contains Futurama's first original musical number. The lyrics to "Welcome to Robot Hell" were written by Kaplan and Keeler and the music was written by Keeler and Christopher Tyng. When praised for his performance in the audio commentary, John DiMaggio, the voice of Bender, noted that the most difficult part of the performance was singing in a lower octave rather than keeping up with the song's fast pace. ## Themes This episode is one of very few that focuses on the religious aspects of the Futurama universe. In most episodes, it is indicated that the Planet Express crew, along with most beings in the year 3000, are "remarkably unreligious". It introduces two of the religious figures of Futurama, The Robot Devil and Reverend Lionel Preacherbot, both of whom make appearances in later episodes. Preacherbot, who speaks in a manner typical of inner-city African-American pastor stereotypes, converts Bender to the religion Robotology. This leads to a series of events that are similar in many ways to the experiences of real world religious converts. Mark Pinsky states that the episode has a "double-edged portrayal of religion" as it portrays both an improvement in Bender's character but also some of the "less pleasant characteristics of the newly pious". The Robot Devil is introduced after Bender's fallback into sin. Pinsky writes in The Gospel According to The Simpsons, that while explaining to Bender his claim on his soul, the Robot Devil uses logic similar to that used by many Southern Baptists: "Bender tried to plead his case, without success. 'You agreed to this when you joined our religion,' the devil replies, in logic any Southern Baptist would recognize. 
'You sin, you go to robot hell – for all eternity.'" By the end of the episode, Bender has returned to his old ways and states that he will no longer try to be either too good or too bad, a parody and contradiction of the Book of Revelation statement that one should not be lukewarm in his faith. ## Cultural references This episode contains a large amount of religious parody, with references to many religiously themed works of fiction. The episode's title is itself a parody of the famous line "Hell is other people" from Jean-Paul Sartre's one act play No Exit, though the episode has no other resemblance to the play. The punishments in Robot Hell are similar to the levels and rationale which are portrayed in Dante's The Divine Comedy, specifically the Inferno. The "Fairness in Hell Act", where the damned may engage in a fiddle battle to save his soul and win a solid gold fiddle, is taken directly from The Charlie Daniels Band song "The Devil Went Down to Georgia". Jokes poking fun at New Jersey are included because writer Cohen and actor DiMaggio both grew up there. The Temple of Robotology is a spoof of the Church of Scientology, and according to series creator Groening he received a call from the Church of Scientology concerned about the use of a similar name. Groening's The Simpsons had previously parodied elements of Scientology in the season nine episode "The Joy of Sect". In a review of the episode, TV Squad later posed the question: "Is the Temple of Robotology a poke at the Church of Scientology?" When TV Squad asked actor Billy West about this, he jokingly sidestepped the issue. ## Reception "Hell Is Other Robots" is one of four episodes featured in the DVD boxed set Monster Robot Maniac Fun Collection, Groening's favorite episodes from the first four seasons. The DVD includes audio commentary from Groening and DiMaggio, as well as a full-length animatic of the episode. In an article on the DVD release, Winston-Salem Journal described "Hell Is Other Robots" as one of Futurama's best episodes. Dan Castellaneta's performance as the Robot Devil in this episode and "The Devil's Hands are Idle Playthings" was described as a "bravura appearance". In a review of Futurama's first-season DVD release, the South Wales Echo highlighted the episode along with "Fear of a Bot Planet" as "crazy episodes" of the series. Brian Cortis of The Age gave the episode a rating of three stars out of four. Writing in The Observer after Futurama's debut in the United States but before it aired in the United Kingdom, Andrew Collins wrote favorably of the series and highlighted "Hell Is Other Robots" and "Love's Labors Lost in Space". He noted that the jokes "come thick and fast". John G. Nettles of PopMatters wrote: "'Hell is Other Robots' is a terrific introduction to Bender and Futurama's irreverent humor, sly social satire, and damn catchy musical numbers." TV Squad wrote that the series' funnier material appears in "Robot Hell – after Bender is 'born again' in the Temple of Robotology." David Johnson of DVD Verdict described "Hell Is Other Robots" as "not one of my favorites", criticizing the episode for focusing a large amount on the character of Bender. Johnson concluded his review by rating the episode a "B". The episode led to a Dark Horse Comics book, Futurama Pop-Out People: Hell Is Other Robots. ## See also - List of fictional religions - Religion in Futurama
14,233,768
2008 Monaco Grand Prix
1,156,735,779
Formula One motor race
[ "2008 Formula One races", "2008 in Monégasque sport", "May 2008 sports events in Europe", "Monaco Grand Prix" ]
The 2008 Monaco Grand Prix (formally the Formula 1 Grand Prix de Monaco 2008) was a Formula One motor race held on 25 May 2008 at the Circuit de Monaco; contested over 76 laps, it was the sixth race of the 2008 Formula One World Championship. The race was won by the season's eventual Drivers' Champion, Lewis Hamilton, for the McLaren team. BMW Sauber driver Robert Kubica finished second, and Felipe Massa, who started from pole position, was third in a Ferrari. Conditions were wet at the start of the race. Massa maintained his lead into the first corner, but his teammate Kimi Räikkönen was passed for second by Hamilton, who had started in third position on the grid. Hamilton suffered a punctured tyre on lap six, forcing him to make a pit stop from which he re-entered the race in fifth place. As the track dried and his rivals made their own pit stops Hamilton became the race leader, a position he held until the end of the race. Kubica's strategy allowed him to pass Massa during their second pit stops, after the latter's Ferrari was forced to change from wet to dry tyres. Räikkönen dropped back from fifth position to ninth after colliding with Adrian Sutil's Force India late in the race. Sutil had started from 18th on the grid and was in fourth position before the incident, which allowed Red Bull driver Mark Webber to finish fourth, ahead of Toro Rosso driver Sebastian Vettel in fifth. The race was Hamilton's second win of the season, his first in Monaco, and the result meant that he led the Drivers' Championship, three points ahead of Räikkönen and four ahead of Massa. Ferrari maintained their lead in the Constructors' Championship, 16 points ahead of McLaren and 17 ahead of BMW Sauber, with 12 races of the season remaining. ## Background The Grand Prix was contested by 20 drivers, in ten teams of two. The teams, also known as "constructors", were Ferrari, McLaren-Mercedes, Renault, Honda, Force India-Ferrari, BMW Sauber, Toyota, Red Bull-Renault, Williams-Toyota and Toro Rosso-Ferrari. Tyre supplier Bridgestone brought four different tyre types to the race: two dry-weather tyre compounds, the softer marked by a single white stripe down one of the grooves, and two wet-weather compounds, the extreme wet also marked by a single white stripe. Before the race, Ferrari driver Kimi Räikkönen led the Drivers' Championship with 35 points, and his teammate Felipe Massa was second with 28 points. McLaren driver Lewis Hamilton was third, also on 28 points, with one win fewer than Massa; BMW driver Robert Kubica was fourth on 24 points, ahead of his teammate Nick Heidfeld on 20 points. In the Constructors' Championship, Ferrari were leading with 63 points, 19 points ahead of BMW Sauber, and 21 points ahead of McLaren-Mercedes. Ferrari had responded to Hamilton's win in the opening round of the season in Australia by winning each of the subsequent four races, giving them a commanding lead in the Constructors' Championship. Ferrari's dominance had been highlighted by two one-two finishes: Massa ahead of Räikkönen in Bahrain and Räikkönen's win over Massa at the following round in Spain. A strong drive despite an unfavourable strategy had helped Hamilton to split the Ferrari drivers on the podium in Turkey, coming second, behind Massa but ahead of Räikkönen. Ferrari had not won in Monaco since 2001, and had been unable to match McLaren's pace in 2007. 
After tests at the Circuit Paul Ricard in mid-May, Räikkönen said his team had developed a strong car for the tight, low-speed Monaco circuit, but they still expected a challenge from McLaren and BMW Sauber. Hamilton claimed that McLaren would be competitive at Monaco, but Kubica played down his team's chances, saying "I think a win will be very difficult." Toro Rosso's main 2008 car, the STR3, was also introduced that weekend; the team had used a modified version of their 2007 car, the STR2, for the opening five races. Originally due to be introduced at the previous race in Turkey, a crash in testing had left the team short of spare parts, delaying the car's race debut. As the STR3 used a different transmission than that used in the STR2, Toro Rosso driver Sebastian Vettel was forced to take a five-place penalty on the grid for an unscheduled change of gearbox. His teammate Sébastien Bourdais escaped a similar penalty because he had failed to finish the race in Turkey, allowing him a free gearbox change. ## Practice Three practice sessions were held before the Sunday race – two on Thursday, and one on Saturday. The Thursday morning and afternoon sessions each lasted 90 minutes; the third session, on Saturday morning, lasted for an hour. One hour into the first session, officials noticed a loose drain cover in the run through Beau Rivage, and suspended practice. Ferrari and McLaren took the top four spots after the resumption – Räikkönen ahead of Hamilton, McLaren driver Heikki Kovalainen, and Massa. Behind Williams driver Nico Rosberg, Kubica was the best of the BMW Saubers in sixth; his teammate Heidfeld was forced to retire because of an engine failure, and stopped his car in Casino Square after just 13 laps. Despite further delays during the second session – both Renault drivers crashed in separate incidents at Sainte Devote, requiring the marshals to sweep the track of debris – Hamilton again proved strong, fastest ahead of Rosberg, Räikkönen, Massa, Kovalainen and Kubica. Apart from the Renault drivers, two more cars struck the barriers: Toyota driver Jarno Trulli scraped the wall out of Piscine; Adrian Sutil's Force India lost its front wing after tagging the barrier at Rascasse. While light rain fell on Saturday morning, Kovalainen set the fastest time in the final session, before losing control at Piscine and damaging the rear of his car. The rain increased as the marshals cleared the debris, and in the ensuing poor track conditions Kovalainen's time remained unbeaten. Hamilton managed second fastest, ahead of Räikkönen, Rosberg, Kubica and Massa. ## Qualifying Saturday afternoon's qualifying session was divided into three parts. In the first 20-minute period, cars finishing 16th or lower were eliminated. The second qualifying period lasted for 15 minutes, at the end of which the fastest ten cars went into the final period, to determine their grid positions for the race. Cars failing to make the final period were allowed to be refuelled before the race but those competing in it were not, and so carried more fuel than they had done in the earlier qualifying sessions. Massa clinched his third pole position of the season with a time of 1:15.787, and was joined on the front row by teammate Räikkönen. Hamilton took third place on the grid, with a qualifying time just 0.052 seconds slower than Massa's. Kovalainen edged out Kubica to take fourth, the latter struggling to get heat into his tyres for his final run. 
Rosberg's attacking style took him to sixth; Renault driver Fernando Alonso, Trulli and Red Bull driver Mark Webber occupied the next three places. Webber's teammate David Coulthard ended his second session in the barriers after the tunnel, after his car jerked sideways at the crest under braking. Although Coulthard was unhurt, the position of his stricken car and the subsequent caution flags surrounding it denied many drivers the opportunity to make their final flying laps in the session. Honda driver Jenson Button, who took 12th behind Timo Glock of Toyota, blamed the disruption for his performance, having prepared his car specifically for the final run. Heidfeld suffered from similar tyre problems to his teammate and managed 13th; Williams driver Kazuki Nakajima and Honda driver Rubens Barrichello took the next two places ahead of Bourdais in the new STR3. Renault driver Nelson Piquet Jr. lined up from 17th and Vettel managed 18th before his gearbox penalty demoted him to 19th. The Force Indias of Sutil and Giancarlo Fisichella qualified slowest; Fisichella's penalty for a change of gearbox after morning practice meant he started from 20th on the grid. ### Qualifying classification - – David Coulthard, Sebastian Vettel and Giancarlo Fisichella were each given five place grid penalties for changing their gearboxes. ## Race On Sunday morning Coulthard became the third driver to incur a gearbox penalty, after changing his transmission following his crash during qualifying. The penalty moved him from 10th to 15th on the grid. In contrast to Saturday's dry qualifying session, frequent showers soaked the track on Sunday morning, making racing slippery and potentially hazardous. Although the showers subsided by early afternoon, they resumed 20 minutes before the start, the changeable conditions forcing teams to delay tyre choices for as long as possible. By the time of the three-minute warning most drivers had opted for the standard wet tyre; only Piquet started the race on the extreme wet. Kovalainen's engine stalled as the cars set off on their formation lap; his car was pushed back to the pit lane by his mechanics, where it was restarted and a new steering wheel fitted to solve the problem. Massa held his lead into the first corner at Sainte Devote, while behind him Hamilton used the pit lane exit to pass Räikkönen down the inside. Kovalainen's vacated fourth position was filled by Kubica, as Alonso moved into fifth, passing Rosberg. The latter made a pit stop soon after for a new front wing after making contact with Alonso at the hairpin, promoting Trulli to sixth. These positions were maintained for several laps, but the distances between the cars increased, in part because the spray thrown up by their wheels made close racing difficult. The conditions proved to be crucial when Hamilton made contact with barriers on the outside of Tabac on lap six, necessitating his return to the pit lane for a new set of tyres. The McLaren mechanics fuelled the car for a longer second stint and Hamilton emerged in fifth, the distances between his competitors' cars resulting in him losing only three places during the pit stop. Alonso had a similar accident to Hamilton's at Massenet two laps later, and emerged in seventh after pitting and taking on extreme wets. Massa's lead – 12 seconds over the second-placed Räikkönen by lap six – was reduced to nothing when the safety car was deployed on the eighth lap. 
Coulthard and Bourdais had crashed into the barriers at Massenet just seconds apart, requiring the marshals to separate the cars and lift them off the track. When the safety car pulled off after three laps, Massa consolidated his lead over his teammate. Räikkönen, however, was called into the pit lane for a drive-through penalty for failing to have his tyres attached by the three-minute warning before the race, dropping him back to fourth. Kubica, now second, took the lead when Massa slid off down the escape road at Sainte Devote, the undamaged Ferrari rejoining in second as the rain eased. The two swapped positions again after their pit stops – Kubica on lap 26 and Massa on lap 33 – but Hamilton, carrying more fuel than his rivals, took the lead. As a dry line appeared on the track, Hamilton extended his lead over Massa, from 13 to 37 seconds, by the time he made a pit stop for the second time on lap 54. His timing proved fortunate, as he changed to dry tyres just as such a strategy emerged as the strongest, and he rejoined 13 seconds ahead of Massa. The Ferrari's own pit stop two laps later dropped Massa behind Kubica. Hamilton's lead was reduced when the safety car was deployed again on lap 62 after Rosberg crashed at Piscine, hitting both sides of the track and scattering debris. Rosberg was unhurt. Of the 20 cars which started the race, Sutil had gained the most places by the second safety car period. The Force India started from 18th, rose one place after Kovalainen was pushed back to the pit lane, passed Piquet on the second lap and Bourdais a lap later. Sutil benefited as Button, Rosberg, Glock and Trulli made pit stops for repairs over the following laps, to sit in 11th by lap 14. Alonso, on extreme wet tyres behind Heidfeld, attempted to pass the BMW at the hairpin, but succeeded only in damaging his front wing and forming a stationary queue behind him. Sutil took advantage of the situation and passed four cars to take seventh, behind Heidfeld and Webber. He also benefited by overtaking three cars under yellow flags, and would have been penalised had he finished the race. Running a long first stint until lap 53, he moved to fourth as the cars ahead of him made pit stops. Vettel gained 12 places from the start, battling with Barrichello and Nakajima early on before jumping several places on his first stint as his rivals made pit stops, eventually switching to dry tyres when he stopped on lap 52. Webber improved three places from the start to sit in sixth, ahead of Vettel, Barrichello and Nakajima. The safety car was withdrawn on lap 68. Later on the same lap, Räikkönen lost control of his car under braking out of the tunnel, and by the time he regained control his speed was too great to avoid a collision with Sutil. The collision damaged the Ferrari's front wing, and Räikkönen made a pit stop for a replacement before resuming, but the damage to Sutil's rear suspension led to his retirement. Webber benefited from the incident and moved to fourth place, while Räikkönen dropped back to ninth. The slow pace during the opening laps meant the race ended after two hours, or 76 laps, rather than the 78 laps originally scheduled. Hamilton, despite suffering a slow puncture on the final lap, crossed the finish line to take his first win in Monaco. Kubica followed in second, ahead of Massa and Webber. Vettel took the STR3 to its first points in its maiden race, coming fifth, ahead of Barrichello and Nakajima.
Kovalainen recovered from his stall to finish eighth, one place ahead of Räikkönen, who set the fastest lap of the race on lap 74, a 1:16.689. He was followed by Alonso, Button, Glock and Trulli. Heidfeld was the last of the classified finishers, in 14th position, four laps behind Hamilton. Fisichella retired with a gearbox failure after 36 laps; Piquet – two laps after he switched to dry tyres on lap 46 – finished his race in the barriers at Sainte Devote. Sutil, Rosberg, Coulthard and Bourdais were the four other retirements. ### Post-race The top three finishers appeared in Prince Albert II of Monaco's Royal box to collect their trophies. In the subsequent press conference Hamilton said that conditions early in the race made driving difficult: "When the weather is like this, when it starts to rain and we had an idea it was going to start to dry, the important thing is to keep it on the track but I can't explain how difficult it was for all of us. You were aquaplaning all the time and you were tip-toeing almost." Hamilton also said that his crash at Tabac had been the result of a stream of water running over the track, causing his car to oversteer and resulting in a puncture to his right rear tyre. Nevertheless, he praised his team and strategy for helping him take the victory. Kubica said that tyre problems in his middle stint meant he was unable to overtake Massa, who was driving a heavier car. Noticing Glock's success on dry tyres, Kubica asked his team to make a similar change, resulting in his pass on Massa. During the post-race interview Massa said that although he was fuelled to the end of the race after his first pit stop, the drying track forced him to pit again for new tyres: "It was a shame that we made a little mistake on the strategy but it is good to be on the podium." Sutil expressed his disappointment after being knocked out of the race when he was on course to record his team's first points: > I can't believe it, it was so close. It feels like a pain in my heart. It is like a dream gone to a nightmare – suddenly you are in the car and it looks all fantastic, then you have to accept it is not going to happen ... It was after the restart after the final safety car that Kimi had a problem under braking and crashed into the back of my car. The race was over and it was a real shock. A few tears came out as the adrenaline was high – I just can't explain it. Räikkönen apologised to Sutil for their collision, blaming cold brakes for his loss of control. Mike Gascoyne, Force India's technical chief, called for the stewards to investigate the incident, but after deliberation no action was taken. "The frustration is that if that was a Force India driver hitting a world champion we'd expect to get a one or two-race ban, but the other way round nothing ever seems to happen" said Gascoyne. Sutil was called to the stewards' room and reprimanded for passing three cars under yellow flags on lap 13. Because he had retired, he was issued with merely a warning, but had he finished the race he would have been given a 25-second time penalty, which would have dropped him out of the point-scoring positions. Hamilton's win was praised by Jackie Stewart, a three-time winner of the Monaco Grand Prix. "At his age, Lewis can win this race many times," he said. "This is the first, I hope, of many victories for him in Monaco so that he can join the greats of Formula One." Damon Hill, the 1996 World Champion, said Hamilton "did very well indeed. 
I was most impressed, and the race as a whole was also a great advertisement for what F1 should be about." As of 2023 this was McLaren's most recent victory at this Grand Prix. The race result left Hamilton leading the Drivers' Championship with 38 points. Räikkönen, who failed to score in Monaco, was second on 35 points, one point ahead of Massa and three ahead of Kubica. Heidfeld was sixth on 20 points. In the Constructors' Championship, Ferrari maintained their lead with 69 points, McLaren jumped to second on 53 points, and BMW Sauber dropped to third on 52 points, with 12 races of the season remaining. ### Race classification ## Championship standings after the race Drivers' Championship standings Constructors' Championship standings - Note: Only the top five positions are included for both sets of standings. ## See also - 2008 Monaco GP2 Series round
32,712
Vega
1,172,829,916
Brightest star in the constellation Lyra
[ "A-type main-sequence stars", "Bayer objects", "Bright Star Catalogue objects", "Castor Moving Group", "Circumstellar disks", "Delta Scuti variables", "Durchmusterung objects", "Flamsteed objects", "Gliese and GJ objects", "Henry Draper Catalogue objects", "Hipparcos objects", "Hypothetical planetary systems", "Lambda Boötis stars", "Lyra", "Northern pole stars", "Vega" ]
Vega is the brightest star in the northern constellation of Lyra. It has the Bayer designation α Lyrae, which is Latinised to Alpha Lyrae and abbreviated Alpha Lyr or α Lyr. This star is relatively close at only 25 light-years (7.7 parsecs) from the Sun, and one of the most luminous stars in the Sun's neighborhood. It is the fifth-brightest star in the night sky, and the second-brightest star in the northern celestial hemisphere, after Arcturus. Vega has been extensively studied by astronomers, leading it to be termed "arguably the next most important star in the sky after the Sun". Vega was the northern pole star around 12,000 BCE and will be so again around the year 13,727, when its declination will be . Vega was the first star other than the Sun to have its image and spectrum photographed. It was one of the first stars whose distance was estimated through parallax measurements. Vega has functioned as the baseline for calibrating the photometric brightness scale and was one of the stars used to define the zero point for the UBV photometric system. Vega is only about a tenth of the age of the Sun, but since it is 2.1 times as massive, its expected lifetime is also one tenth of that of the Sun; both stars are at present approaching the midpoint of their main sequence lifetimes. Compared with the Sun, Vega has a lower abundance of elements heavier than helium. Vega is also a variable star that varies slightly in brightness. It is rotating rapidly with a velocity of 236 km/s at the equator. This causes the equator to bulge outward due to centrifugal effects, and, as a result, there is a variation of temperature across the star's photosphere that reaches a maximum at the poles. From Earth, Vega is observed from the direction of one of these poles. Based on observations of more infrared radiation than expected, Vega appears to have a circumstellar disk of dust. This dust is likely to be the result of collisions between objects in an orbiting debris disk, which is analogous to the Kuiper belt in the Solar System. Stars that display an infrared excess due to dust emission are termed Vega-like stars. In 2021, a candidate ultra-hot Neptune on a 2.43-day orbit around Vega was discovered with the radial velocity method; another possible signal, of roughly Saturn mass and with a period of about 200 days, was also reported. ## Nomenclature α Lyrae (Latinised to Alpha Lyrae) is the star's Bayer designation. The traditional name Vega (earlier Wega) comes from a loose transliteration of the Arabic word wāqi' (Arabic: واقع) meaning "falling" or "landing", via the phrase an-nasr al-wāqi' (Arabic: النّسر الْواقع), "the falling eagle". In 2016, the International Astronomical Union (IAU) organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN, which included Vega for this star. It is now so entered in the IAU Catalog of Star Names. ## Observation Vega can often be seen near the zenith in the mid-northern latitudes during the evening in the Northern Hemisphere summer. From mid-southern latitudes, it can be seen low above the northern horizon during the Southern Hemisphere winter. With a declination of +38.78°, Vega can only be viewed at latitudes north of 51° S. Therefore, it does not rise at all anywhere in Antarctica or in the southernmost part of South America, including Punta Arenas, Chile (53° S).
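Both visibility limits quoted in this section follow from Vega's declination alone. As an illustrative sketch (simple spherical geometry only, ignoring atmospheric refraction and extinction, which shift the limits slightly), a star with declination δ rises at least briefly for observers at latitudes north of δ − 90° and remains circumpolar at latitudes north of 90° − δ:

```python
# Rough visibility limits for a star, from its declination alone.
# Spherical geometry only; refraction and extinction are ignored.
DECLINATION = 38.78  # Vega's declination in degrees, as quoted in the text

southern_limit = DECLINATION - 90.0      # south of this latitude the star never rises
circumpolar_limit = 90.0 - DECLINATION   # north of this latitude the star never sets

print(f"never rises south of {southern_limit:.2f} deg (about 51 deg S)")
print(f"circumpolar north of {circumpolar_limit:.2f} deg (about 51 deg N)")
```

Both thresholds come out at about 51 degrees, matching the figures given in the surrounding text.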
At latitudes to the north of 51° N, Vega remains continuously above the horizon as a circumpolar star. Around July 1, Vega reaches midnight culmination when it crosses the meridian at that time. Each night the positions of the stars appear to change as the Earth rotates. However, when a star is located along the Earth's axis of rotation, it will remain in the same position and thus is called a pole star. The direction of the Earth's axis of rotation gradually changes over time in a process known as the precession of the equinoxes. A complete precession cycle requires 25,770 years, during which time the pole of the Earth's rotation follows a circular path across the celestial sphere that passes near several prominent stars. At present the pole star is Polaris, but around 12,000 BCE the pole was pointed only five degrees away from Vega. Through precession, the pole will again pass near Vega around 14,000 CE. Vega is the brightest of the successive northern pole stars. In 210,000 years, Vega will become the brightest star in the night sky, and will peak in brightness in 290,000 years with an apparent magnitude of –0.81. This star lies at a vertex of a widely spaced asterism called the Summer Triangle, which consists of Vega plus the two first-magnitude stars Altair, in Aquila, and Deneb in Cygnus. This formation is the approximate shape of a right triangle, with Vega located at its right angle. The Summer Triangle is recognizable in the northern skies for there are few other bright stars in its vicinity. ## Observational history Astrophotography, the photography of celestial objects, began in 1840 when John William Draper took an image of the Moon using the daguerreotype process. On 17 July 1850, Vega became the first star (other than the Sun) to be photographed, when it was imaged by William Bond and John Adams Whipple at the Harvard College Observatory, also with a daguerreotype. In August 1872, Henry Draper took a photograph of Vega's spectrum, the first photograph of a star's spectrum showing absorption lines. Similar lines had already been identified in the spectrum of the Sun. In 1879, William Huggins used photographs of the spectra of Vega and similar stars to identify a set of twelve "very strong lines" that were common to this stellar category. These were later identified as lines from the Hydrogen Balmer series. Since 1943, the spectrum of this star has served as one of the stable anchor points by which other stars are classified. The distance to Vega can be determined by measuring its parallax shift against the background stars as the Earth orbits the Sun. The first person to publish a star's parallax was Friedrich G. W. von Struve, when he announced a value of 0.125 arcsecond (0.125′′) for Vega. Friedrich Bessel was skeptical about Struve's data, and, when Bessel published a parallax of 0.314′′ for the star system 61 Cygni, Struve revised his value for Vega's parallax to nearly double the original estimate. This change cast further doubt on Struve's data. Thus most astronomers at the time, including Struve, credited Bessel with the first published parallax result. However, Struve's initial result was actually close to the currently accepted value of 0.129′′, as determined by the Hipparcos astrometry satellite. The brightness of a star, as seen from Earth, is measured with a standardized, logarithmic scale. This apparent magnitude is a numerical value that decreases in value with increasing brightness of the star. 
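Two of the quantitative points made in this section, the conversion of a parallax into a distance and the logarithmic nature of the magnitude scale, can be illustrated with the standard textbook relations. The short sketch below uses figures quoted in this article and is not a procedure drawn from the cited sources:

```python
# Distance from trigonometric parallax: d [parsecs] = 1 / p [arcseconds].
parallax_arcsec = 0.129              # Hipparcos value quoted above
distance_pc = 1.0 / parallax_arcsec
distance_ly = distance_pc * 3.2616   # one parsec is about 3.26 light-years
print(f"{distance_pc:.2f} pc ~= {distance_ly:.1f} ly")   # ~7.75 pc, ~25.3 ly

# Pogson's relation for the magnitude scale: m1 - m2 = -2.5 * log10(F1 / F2),
# so a difference of 5 magnitudes corresponds to a factor of 100 in flux.
flux_ratio = 10 ** (-0.4 * (6.0 - 0.0))  # sixth-magnitude star relative to magnitude 0
print(f"flux ratio (m = 6 vs m = 0): {flux_ratio:.4f}")  # ~0.004, i.e. about 1/250
```

The first calculation reproduces the 25-light-year (7.7-parsec) distance quoted in the lead, and the second shows that a sixth-magnitude star, roughly the naked-eye limit, delivers about 1/250 of the flux of a zero-magnitude star such as Vega.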
The faintest stars visible to the unaided eye are sixth magnitude, while the brightest in the night sky, Sirius, is of magnitude −1.46. To standardize the magnitude scale, astronomers chose Vega and several similar stars and averaged their brightness to represent magnitude zero at all wavelengths. Thus, for many years, Vega was used as a baseline for the calibration of absolute photometric brightness scales. However, this is no longer the case, as the apparent magnitude zero point is now commonly defined in terms of a particular numerically specified flux. This approach is more convenient for astronomers, since Vega is not always available for calibration and varies in brightness. The UBV photometric system measures the magnitude of stars through ultraviolet, blue and yellow filters, producing U, B and V values, respectively. Vega is one of six A0V stars that were used to set the initial mean values for this photometric system when it was introduced in the 1950s. The mean magnitudes for these six stars were defined as: U − B = B − V = 0. In effect, the magnitude scale has been calibrated so that the magnitude of these stars is the same in the yellow, blue and ultraviolet parts of the electromagnetic spectrum. Thus, Vega has a relatively flat electromagnetic spectrum in the visual region—wavelength range 350–850 nanometers, most of which can be seen with the human eye—so the flux densities are roughly equal; 2,000–4,000 Jy. However, the flux density of Vega drops rapidly in the infrared, and is near 100 Jy at 5 micrometers. Photometric measurements of Vega during the 1930s appeared to show that the star had a low-magnitude variability on the order of ±0.03 magnitude (around ±2.8% luminosity). This range of variability was near the limits of observational capability for that time, and so the subject of Vega's variability has been controversial. The magnitude of Vega was measured again in 1981 at the David Dunlap Observatory and showed some slight variability. Thus it was suggested that Vega showed occasional low-amplitude pulsations associated with a Delta Scuti variable. This is a category of stars that oscillate in a coherent manner, resulting in periodic pulsations in the star's luminosity. Although Vega fits the physical profile for this type of variable, other observers have found no such variation. Thus the variability was thought to possibly be the result of systematic errors in measurement. However, a 2007 article surveyed these and other results, and concluded that "A conservative analysis of the foregoing results suggests that Vega is quite likely variable in the 1–2% range, with possible occasional excursions to as much as 4% from the mean". Also, a 2011 article affirms that "The long-term (year-to-year) variability of Vega was confirmed". Vega became the first solitary main-sequence star beyond the Sun known to be an X-ray emitter when in 1979 it was observed from an imaging X-ray telescope launched on an Aerobee 350 from the White Sands Missile Range. In 1983, Vega became the first star found to have a disk of dust. The Infrared Astronomical Satellite (IRAS) discovered an excess of infrared radiation coming from the star, and this was attributed to energy emitted by the orbiting dust as it was heated by the star. ## Physical characteristics Vega's spectral class is A0V, making it a blue-tinged white main-sequence star that is fusing hydrogen to helium in its core. 
Since more massive stars use their fusion fuel more quickly than smaller ones, Vega's main-sequence lifetime is roughly one billion years, a tenth of the Sun's. The current age of this star is about 455 million years, or up to about half its expected total main-sequence lifespan. After leaving the main sequence, Vega will become a class-M red giant and shed much of its mass, finally becoming a white dwarf. At present, Vega has more than twice the mass of the Sun and its bolometric luminosity is about 40 times the Sun's. Because it is rotating rapidly, approximately once every 16.5 hours, and seen nearly pole-on, its apparent luminosity, calculated assuming it was the same brightness all over, is about 57 times the Sun's. If Vega is variable, then it may be a Delta Scuti type with a period of about 0.107 day. Most of the energy produced at Vega's core is generated by the carbon–nitrogen–oxygen cycle (CNO cycle), a nuclear fusion process that combines protons to form helium nuclei through intermediary nuclei of carbon, nitrogen and oxygen. This process becomes dominant at a temperature of about 17 million K, which is slightly higher than the core temperature of the Sun, but is less efficient than the Sun's proton–proton chain fusion reaction. The CNO cycle is highly temperature sensitive, which results in a convection zone about the core that evenly distributes the 'ash' from the fusion reaction within the core region. The overlying atmosphere is in radiative equilibrium. This is in contrast to the Sun, which has a radiation zone centered on the core with an overlying convection zone. The energy flux from Vega has been precisely measured against standard light sources. At 5,480 Å, the flux density is 3,650 Jy with an error margin of 2%. The visual spectrum of Vega is dominated by absorption lines of hydrogen; specifically by the hydrogen Balmer series with the electron at the n=2 principal quantum number. The lines of other elements are relatively weak, with the strongest being ionized magnesium, iron and chromium. The X-ray emission from Vega is very low, demonstrating that the corona for this star must be very weak or non-existent. However, as the pole of Vega is facing Earth and a polar coronal hole may be present, confirmation of a corona as the likely source of the X-rays detected from Vega (or the region very close to Vega) may be difficult as most of any coronal X-rays would not be emitted along the line of sight. Using spectropolarimetry, a magnetic field has been detected on the surface of Vega by a team of astronomers at the Observatoire du Pic du Midi. This is the first such detection of a magnetic field on a spectral class A star that is not an Ap chemically peculiar star. The average line of sight component of this field has a strength of −0.6±0.3 gauss (G). This is comparable to the mean magnetic field on the Sun. Magnetic fields of roughly 30 G have been reported for Vega, compared to about 1 G for the Sun. In 2015, bright starspots were detected on the star's surface—the first such detection for a normal A-type star, and these features show evidence of rotational modulation with a period of 0.68 day. ### Rotation Vega has a rotation period of 12.5 hours, much faster than the Sun's rotational period but similar to, and slightly slower than, those of Jupiter and Saturn. Because of that, Vega is significantly oblate like those two planets. 
When the radius of Vega was measured to high accuracy with an interferometer, it resulted in an unexpectedly large estimated value of 2.73±0.01 times the radius of the Sun. This is 60% larger than the radius of the star Sirius, while stellar models indicated it should only be about 12% larger. However, this discrepancy can be explained if Vega is a rapidly rotating star that is being viewed from the direction of its pole of rotation. Observations by the CHARA array in 2005–06 confirmed this deduction. The pole of Vega—its axis of rotation—is inclined no more than five degrees from the line-of-sight to the Earth. At the high end of estimates for the rotation velocity for Vega is 236.2±3.7 km/s along the equator, much higher than the observed (i.e. projected) rotational velocity because Vega is seen almost pole-on. This is 88% of the speed that would cause the star to start breaking up from centrifugal effects. This rapid rotation of Vega produces a pronounced equatorial bulge, so the radius of the equator is 19% larger than the polar radius. (The estimated polar radius of this star is 2.362±0.012 solar radii, while the equatorial radius is 2.818±0.013 solar radii.) From the Earth, this bulge is being viewed from the direction of its pole, producing the overly large radius estimate. The local surface gravity at the poles is greater than at the equator, which produces a variation in effective temperature over the star: the polar temperature is near 10,000 K, while the equatorial temperature is about 8,152 K. This large temperature difference between the poles and the equator produces a strong gravity darkening effect. As viewed from the poles, this results in a darker (lower-intensity) limb than would normally be expected for a spherically symmetric star. The temperature gradient may also mean that Vega has a convection zone around the equator, while the remainder of the atmosphere is likely to be in almost pure radiative equilibrium. By the Von Zeipel theorem, the local luminosity is higher at the poles. As a result, if Vega were viewed along the plane of its equator instead of almost pole-on, then its overall brightness would be lower. As Vega had long been used as a standard star for calibrating telescopes, the discovery that it is rapidly rotating may challenge some of the underlying assumptions that were based on it being spherically symmetric. With the viewing angle and rotation rate of Vega now better known, this will allow improved instrument calibrations. ### Element abundance In astronomy, those elements with higher atomic numbers than helium are termed "metals". The metallicity of Vega's photosphere is only about 32% of the abundance of heavy elements in the Sun's atmosphere. (Compare this, for example, to a threefold metallicity abundance in the similar star Sirius as compared to the Sun.) For comparison, the Sun has an abundance of elements heavier than helium of about Z<sub>Sol</sub> = 0.0172±0.002. Thus, in terms of abundances, only about 0.54% of Vega consists of elements heavier than helium. Nitrogen is slightly more abundant, oxygen is only marginally less abundant and sulfur abundance is about 50% of solar. On the other hand, Vega has only 10% to 30% of the solar abundance for most other major elements with barium and scandium below 10%. The unusually low metallicity of Vega makes it a weak Lambda Boötis star. However, the reason for the existence of such chemically peculiar, spectral class A0–F0 stars remains unclear. 
One possibility is that the chemical peculiarity may be the result of diffusion or mass loss, although stellar models show that this would normally only occur near the end of a star's hydrogen-burning lifespan. Another possibility is that the star formed from an interstellar medium of gas and dust that was unusually metal-poor. The observed helium to hydrogen ratio in Vega is 0.030±0.005, which is about 40% lower than the Sun. This may be caused by the disappearance of a helium convection zone near the surface. Energy transfer is instead performed by the radiative process, which may be causing an abundance anomaly through diffusion. ### Kinematics The radial velocity of Vega is the component of this star's motion along the line-of-sight to the Earth. Movement away from the Earth will cause the light from Vega to shift to a lower frequency (toward the red), or to a higher frequency (toward the blue) if the motion is toward the Earth. Thus the velocity can be measured from the amount of shift of the star's spectrum. Precise measurements of this blueshift give a value of −13.9±0.9 km/s. The minus sign indicates a relative motion toward the Earth. Motion transverse to the line of sight causes the position of Vega to shift with respect to the more distant background stars. Careful measurement of the star's position allows this angular movement, known as proper motion, to be calculated. Vega's proper motion is 202.03±0.63 milliarcseconds (mas) per year in right ascension—the celestial equivalent of longitude—and 287.47±0.54 mas/y in declination, which is equivalent to a change in latitude. The net proper motion of Vega is 327.78 mas/y, which results in angular movement of a degree every 11,000 years. In the galactic coordinate system, the space velocity components of Vega are (U, V, W) = (−16.1±0.3, −6.3±0.8, −7.7±0.3) km/s, for a net space velocity of 19 km/s. The radial component of this velocity—in the direction of the Sun—is −13.9 km/s, while the transverse velocity is 9.9 km/s. Although Vega is at present only the fifth-brightest star in the night sky, the star is slowly brightening as proper motion causes it to approach the Sun. Vega will make its closest approach in an estimated 264,000 years at a perihelion distance of 13.2 ly (4.04 pc). Based on this star's kinematic properties, it appears to belong to a stellar association called the Castor Moving Group. However, Vega may be much older than this group, so the membership remains uncertain. This group contains about 16 stars, including Alpha Librae, Alpha Cephei, Castor, Fomalhaut and Vega. All members of the group are moving in nearly the same direction with similar space velocities. Membership in a moving group implies a common origin for these stars in an open cluster that has since become gravitationally unbound. The estimated age of this moving group is 200±100 million years, and they have an average space velocity of 16.5 km/s. ## Possible planetary system ### Infrared excess One of the early results from the Infrared Astronomy Satellite (IRAS) was the discovery of excess infrared flux coming from Vega, beyond what would be expected from the star alone. This excess was measured at wavelengths of 25, 60 and 100 μm, and came from within an angular radius of 10 arcseconds (10′′) centered on the star. At the measured distance of Vega, this corresponded to an actual radius of 80 astronomical units (AU), where an AU is the average radius of the Earth's orbit around the Sun. 
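Several of the figures quoted in this section follow from short calculations: the net proper motion combines the two components (with the right-ascension term projected by the cosine of the declination), the space velocity is the quadrature sum of the (U, V, W) components, and the 80 AU radius follows from the small-angle relation between arcseconds and parsecs. A minimal sketch, assuming a declination of about +38.8° and a distance of about 7.68 pc for Vega (neither value is quoted in this excerpt, and catalogues differ on whether the cosine factor is already folded into the quoted proper motion):

```python
import math

# Net proper motion: project the right-ascension component onto the sky with
# cos(declination), then combine with the declination component in quadrature.
mu_ra = 202.03            # mas/yr in right ascension (as quoted)
mu_dec = 287.47           # mas/yr in declination (as quoted)
dec_deg = 38.78           # Vega's declination, assumed here (not in the text)

mu_net = math.hypot(mu_ra * math.cos(math.radians(dec_deg)), mu_dec)
print(mu_net)                       # ~327.8 mas/yr, matching the quoted net value
print(3.6e6 / mu_net)               # one degree = 3.6 million mas -> ~11,000 years

# Net space velocity from the galactic (U, V, W) components.
u, v, w = -16.1, -6.3, -7.7         # km/s
print(math.sqrt(u*u + v*v + w*w))   # ~19 km/s

# Small-angle relation: radius in AU ~= angular radius in arcsec * distance in pc.
distance_pc = 7.68                  # assumed distance to Vega
print(10 * distance_pc)             # the 10-arcsecond IRAS excess region -> ~77 AU
```

The same arcsecond-to-AU relation converts the wider angular extents of the excess infrared emission reported by later infrared imaging into the physical radii quoted below.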
It was proposed that this radiation came from a field of orbiting particles with a dimension on the order of a millimetre, as anything smaller would eventually be removed from the system by radiation pressure or drawn into the star by means of Poynting–Robertson drag. The latter is the result of radiation pressure creating an effective force that opposes the orbital motion of a dust particle, causing it to spiral inward. This effect is most pronounced for tiny particles that are closer to the star. Subsequent measurements of Vega at 193 μm showed a lower than expected flux for the hypothesized particles, suggesting that they must instead be on the order of 100 μm or less. To maintain this amount of dust in orbit around Vega, a continual source of replenishment would be required. A proposed mechanism for maintaining the dust was a disk of coalesced bodies that were in the process of collapsing to form a planet. Models fitted to the dust distribution around Vega indicate that it is a 120-astronomical-unit-radius circular disk viewed from nearly pole-on. In addition, there is a hole in the center of the disk with a radius of no less than 80 AU. Following the discovery of an infrared excess around Vega, other stars have been found that display a similar anomaly that is attributable to dust emission. As of 2002, about 400 of these stars have been found, and they have come to be termed "Vega-like" or "Vega-excess" stars. It is believed that these may provide clues to the origin of the Solar System. ### Debris disks By 2005, the Spitzer Space Telescope had produced high-resolution infrared images of the dust around Vega. It was shown to extend out to 43′′ (330 AU) at a wavelength of 24 μm, 70′′ (543 AU) at 70 μm and 105′′ (815 AU) at 160 μm. These much wider disks were found to be circular and free of clumps, with dust particles ranging from 1–50 μm in size. The estimated total mass of this dust is 3×10<sup>−3</sup> times the mass of the Earth (around 7.5 times more massive than the asteroid belt). Production of the dust would require collisions between asteroids in a population corresponding to the Kuiper Belt around the Sun. Thus the dust is more likely created by a debris disk around Vega, rather than from a protoplanetary disk as was earlier thought. The inner boundary of the debris disk was estimated at 11′′±2′′, or 70–100 AU. The disk of dust is produced as radiation pressure from Vega pushes debris from collisions of larger objects outward. However, continuous production of the amount of dust observed over the course of Vega's lifetime would require an enormous starting mass—estimated as hundreds of times the mass of Jupiter. Hence it is more likely to have been produced as the result of a relatively recent breakup of a moderate-sized (or larger) comet or asteroid, which then further fragmented as the result of collisions between the smaller components and other bodies. This dusty disk would be relatively young on the time scale of the star's age, and it will eventually be removed unless other collision events supply more dust. Observations, first with the Palomar Testbed Interferometer by David Ciardi and Gerard van Belle in 2001 and then later confirmed with the CHARA array at Mt. Wilson in 2006 and the Infrared Optical Telescope Array at Mt. Hopkins in 2011, revealed evidence for an inner dust band around Vega. Originating within 8 AU of the star, this exozodiacal dust may be evidence of dynamical perturbations within the system. 
This may be caused by an intense bombardment of comets or meteors, and may be evidence for the existence of a planetary system.

### Possible planets

Observations from the James Clerk Maxwell Telescope in 1997 revealed an "elongated bright central region" that peaked at 9′′ (70 AU) to the northeast of Vega. This was hypothesized as either a perturbation of the dust disk by a planet or else an orbiting object that was surrounded by dust. However, images by the Keck telescope had ruled out a companion down to magnitude 16, which would correspond to a body with more than 12 times the mass of Jupiter. Astronomers at the Joint Astronomy Centre in Hawaii and at UCLA suggested that the image may indicate a planetary system still undergoing formation.

Determining the nature of the planet has not been straightforward; a 2002 paper hypothesizes that the clumps are caused by a roughly Jupiter-mass planet on an eccentric orbit. Dust would collect in orbits that have mean-motion resonances with this planet—where their orbital periods form integer fractions with the period of the planet—producing the resulting clumpiness. In 2003, it was hypothesized that these clumps could be caused by a roughly Neptune-mass planet having migrated from 40 to 65 AU over 56 million years, an orbit large enough to allow the formation of smaller rocky planets closer to Vega. The migration of this planet would likely require gravitational interaction with a second, higher-mass planet in a smaller orbit. Using a coronagraph on the Subaru Telescope in Hawaii in 2005, astronomers were able to further constrain the size of a planet orbiting Vega to no more than 5–10 times the mass of Jupiter.

The issue of possible clumps in the debris disc was revisited in 2007 using newer, more sensitive instrumentation on the Plateau de Bure Interferometer. The observations showed that the debris ring is smooth and symmetric. No evidence was found of the blobs reported earlier, casting doubt on the hypothesized giant planet. The smooth structure has been confirmed in follow-up observations by Hughes et al. (2012) and the Herschel Space Telescope. Although a planet has yet to be directly observed around Vega, the presence of a planetary system cannot yet be ruled out. Thus there could be smaller, terrestrial planets orbiting closer to the star. The inclination of planetary orbits around Vega is likely to be closely aligned to the equatorial plane of this star. From the perspective of an observer on a hypothetical planet around Vega, the Sun would appear as a faint 4.3-magnitude star in the Columba constellation.

In 2021, a paper analyzing 10 years of spectra of Vega detected a candidate 2.43-day periodic signal, statistically estimated to have only a 1% chance of being a false positive. Based on the amplitude of the signal, the authors estimated a minimum mass of 21.9±5.1 Earth masses; because Vega itself is seen nearly pole-on, with its rotation axis inclined only 6.2° from Earth's line of sight, the candidate planet's orbit may be similarly inclined, which would imply an actual mass of 203±47 Earth masses. The researchers also detected a fainter signal with a period of 196.4 (+1.6/−1.9) days, which could correspond to a minimum mass of 80±21 Earth masses (740±190 Earth masses at a 6.2° inclination), but it is too weak to claim as a real detection with the available data.

## Etymology and cultural significance

The name is believed to be derived from the Arabic term Al Nesr al Waki النسر الواقع, which appeared in the Al Achsasi al Mouakket star catalogue and was translated into Latin as Vultur Cadens, "the falling eagle/vulture".
The constellation was represented as a vulture in ancient Egypt, and as an eagle or vulture in ancient India. The Arabic name then appeared in the western world in the Alfonsine tables, which were drawn up between 1215 and 1270 by order of King Alfonso X. Medieval astrolabes of England and Western Europe used the names Wega and Alvaca, and depicted it and Altair as birds. Among the northern Polynesian people, Vega was known as whetu o te tau, the year star. For a period of history it marked the start of their new year when the ground would be prepared for planting. Eventually this function became denoted by the Pleiades. The Assyrians named this pole star Dayan-same, the "Judge of Heaven", while in Akkadian it was Tir-anna, "Life of Heaven". In Babylonian astronomy, Vega may have been one of the stars named Dilgan, "the Messenger of Light". To the ancient Greeks, the constellation Lyra was formed from the harp of Orpheus, with Vega as its handle. For the Roman Empire, the start of autumn was based upon the hour at which Vega set below the horizon. In Chinese, 織女 (Zhī Nǚ), meaning Weaving Girl (asterism), refers to an asterism consisting of Vega, ε Lyrae and ζ<sup>1</sup> Lyrae. Consequently, the Chinese name for Vega is 織女一 (Zhī Nǚ yī, English: the First Star of Weaving Girl). In Chinese mythology, there is a love story of Qixi (七夕) in which Niulang (牛郎, Altair) and his two children (β Aquilae and γ Aquilae) are separated from their mother Zhinü (織女, lit. "weaver girl", Vega) who is on the far side of the river, the Milky Way. However, one day per year on the seventh day of the seventh month of the Chinese lunisolar calendar, magpies make a bridge so that Niulang and Zhinü can be together again for a brief encounter. The Japanese Tanabata festival, in which Vega is known as Orihime (織姫), is also based on this legend. In Zoroastrianism, Vega was sometimes associated with Vanant, a minor divinity whose name means "conqueror". The indigenous Boorong people of northwestern Victoria, Australia, named it as Neilloan, "the flying loan". In the Srimad Bhagavatam, Shri Krishna tells Arjuna, that among the Nakshatras he is Abhijit, which remark indicates the auspiciousness of this Nakshatra. Medieval astrologers counted Vega as one of the Behenian stars and related it to chrysolite and winter savory. Cornelius Agrippa listed its kabbalistic sign under Vultur cadens, a literal Latin translation of the Arabic name. Medieval star charts also listed the alternate names Waghi, Vagieh and Veka for this star. W. H. Auden's 1933 poem "A Summer Night (to Geoffrey Hoyland)" famously opens with the couplet, "Out on the lawn I lie in bed,/Vega conspicuous overhead". Vega became the first star to have a car named after it with the French Facel Vega line of cars from 1954 onwards, and later on, in America, Chevrolet launched the Vega in 1971. Other vehicles named after Vega include the ESA's Vega launch system and the Lockheed Vega aircraft.
483,130
North Carolina-class battleship
1,156,382,031
US Navy fast battleship class (1937–1947)
[ "Battleship classes", "North Carolina-class battleships", "World War II battleships of the United States" ]
The North Carolina class were a pair of fast battleships, North Carolina and Washington, built for the United States Navy in the late 1930s and early 1940s. In planning a new battleship class in the 1930s, the US Navy was heavily constrained by international treaty limitations, which included a requirement that all new capital ships have a standard displacement of under 35,000 LT (35,600 t). This restriction meant that the navy could not construct a ship with the firepower, armor, and speed that it desired, and the resulting need to balance these qualities meant that the navy considered fifty widely varying designs. Eventually, the General Board of the United States Navy declared its preference for a battleship with a speed of 30 knots (56 km/h; 35 mph), faster than any in US service, with a main battery of nine 14-inch (356 mm)/50 caliber Mark B guns. The board believed that these ships would be balanced enough to effectively take on a multitude of roles. However, the acting Secretary of the Navy authorized a modified version of a different design, which in its original form had been rejected by the General Board. This called for a 27-knot (50 km/h; 31 mph) ship with twelve 14-inch guns in quadruple turrets and protection against guns of the same caliber. In a major departure from traditional American design practices, this design prioritized firepower at the cost of speed and protection. After the ships had been ordered, the United States invoked a so-called "escalator clause" in the international treaty to increase the class' main armament to nine 16-inch (406 mm)/45 caliber Mark 6 guns.

Both North Carolina and Washington saw extensive service during the Second World War in a variety of roles, primarily in the Pacific Theater where they escorted fast carrier task forces, such as during the Battle of the Philippine Sea, and conducted shore bombardments. Washington also participated in a surface engagement, the Naval Battle of Guadalcanal, where its radar-directed main batteries fatally damaged the Japanese battleship Kirishima. Both battleships were damaged during the war, with North Carolina taking a torpedo hit in 1942 and Washington colliding with Indiana in 1944. After the end of the war, both ships remained in commission for a brief time before being laid up in reserve. In the early 1960s, North Carolina was sold to the state of North Carolina as a museum ship, and Washington was broken up for scrap.

## Background

After the end of the First World War, several navies continued and expanded naval construction programs that they had started during the conflict. The United States' 1916 program called for six Lexington-class battlecruisers and six South Dakota-class battleships; in December 1918, the administration of President Woodrow Wilson called for building an additional ten battleships and six battlecruisers. The 1919–1920 General Board proposals planned for slightly smaller, but still significant, acquisitions beyond the 1916 plan: two battleships and a battlecruiser for the fiscal year 1921, and three battleships, a battlecruiser, four aircraft carriers and thirty destroyers between the fiscal years 1922 and 1924. The United Kingdom was in the final stages of ordering eight capital ships (the G3 battlecruisers, with the first's keel laying in 1921, and N3-class battleships, to be laid down beginning in 1922).
Imperial Japan was, by 1920, attempting to build up to an 8-8 standard of eight battleships and eight battlecruisers or cruisers with the Nagato, Tosa, Amagi, Kii and Number 13 classes. Two ships from these designs were to be laid down per year until 1928. With the staggering costs associated with such programs, the United States' Secretary of State Charles Evans Hughes invited delegations from the major maritime powers—France, Italy, Japan, and the United Kingdom—to come together in Washington, D.C. to discuss, and hopefully end, the naval arms race. The subsequent Washington Naval Conference resulted in the 1922 Washington Naval Treaty. Along with many other provisions, it limited all future battleships to a standard displacement of 35,000 long tons (36,000 t) and a maximum gun caliber of 16 inches. It also decreed that the five countries could not construct another capital ship for ten years and could not replace any ship that survived the treaty until it was at least twenty years old. The 1936 Second London Naval Treaty kept many of the Washington treaty's requirements but restricted gun size on new warships to 14 inches. The treaties heavily influenced the design of the North Carolina class, as can be attested to in the long quest to find a ship that incorporated everything the US Navy considered necessary while remaining under 35,000 long tons. ## Design ### Early The General Board began preparations for a new class of battleships in May–July 1935, and three design studies were submitted to them. "A" would be 32,150 long tons (32,670 t) armed with nine 14-inch (356 mm) guns in triple turrets, all forward of the bridge; capable of 30 knots; and armored against 14-inch shells. "B" and "C" would both be over 36,000 long tons (37,000 t), able to reach 30.5 kn (56.5 km/h; 35.1 mph), and armored against 14-inch shells. The major difference between the two was the planned main battery, as "B" had twelve 14-inch guns in triple turrets, while "C" had eight 16-inch/45-caliber guns in dual turrets. "A" was the only one to remain within the 35,000-ton displacement limit set in the Washington Naval Treaty and reaffirmed in the Second London Naval Treaty. When the Bureau of Ordnance introduced a "super-heavy" 16-inch shell, the ships were redesigned in an attempt to provide protection against it, but this introduced severe weight problems; two of the designs were nearly 5,000 long tons (5,100 t) over the treaty limit. Although these original three studies were all "fast" battleships, the General Board was not committed to the higher maximum speeds. It posed questions to the Naval War College, asking for their opinion as to whether the new class should be a "conventional" 23-knot (43 km/h; 26 mph) ship with an eight-nine, 16-inch main battery, or rather one akin to "A", "B" or "C". Five more design studies were produced in late September 1935, which had characteristics of 23–30.5 knots, eight or nine 14- or 16-inch guns, and a standard displacement between 31,500 and 40,500 long tons (32,000 and 41,100 t). Designs "D" and "E" were attempts at fast battleships with 16-inch guns and protections against the same, but their displacement was greater than the Washington Naval Treaty allowed. Design "F" was a radical attempt at a hybrid battleship-carrier, with three catapults mounted fore and eight 14-inch guns aft. 
It was reportedly favored by President Franklin Delano Roosevelt, but as aircraft launched from catapults were necessarily inferior to most carrier- or land-based aircraft because of the floats used to land, nothing came of the design. Designs "G" and "H" were slower 23-knot ships with nine 14-inch guns; in particular, "H" was thought to be a very well balanced design by the Preliminary Design section of the Bureau of Construction and Repair. However, the General Board finally decided to use faster ships, which "G" and "H" were not. These studies demonstrated the difficulty the designers faced with a displacement of 35,000 tons. They could choose a faster ship, able to steam at 30 knots, but that would force them to mount a lighter armament and armor than contemporary foreign battleships. Alternatively, they could choose a lower maximum speed and mount heavier guns, but fitting in adequate protection against newer 16-inch guns would be extremely difficult. The Preliminary Design section drew up five more studies in October, based upon "A" with additional armor or a scaled-down "B"; all used 14-inch guns and called for at least 30 knots. Two called for four turrets, but they would be too heavy and mount less armor. Another, "K," would have a 15-inch (380 mm) belt and 5.25-inch (133 mm) deck, giving it a 19,000–30,000 yd (9.4–14.8 nmi; 17–27 km) immune zone against the United States' super-heavy 14-inch shell. While "K" was liked by the naval constructors, its designed standard displacement of 35,000 tons left little room for error, modifications, or improvements . The final two designs, "L" and "M," would use quadruple turrets to save weight (similar to the French Dunkerque) while still mounting 12 guns. Many officers in the United States Navy supported the construction of three or four fast battleships for carrier escorts and to counter Japan's Kongō class. These included the acting Secretary of the Navy and Chief of Naval Operations Admiral William Standley, the president of the Naval War College Admiral William S. Pye, a small majority (9–7) of senior officers at sea, and five of six line officers engaged in strategic planning as part of the War Plans Division, although at least one officer believed that an aerial attack would also be capable of sinking the Kongōs. With the above recommendations, the General Board selected "K" to undergo further development. ### Final At least 35 different final designs were proposed. All numbered with Roman numerals ("I" through "XVI-D"), the first five were completed on 15 November 1935. They were the first to employ so-called "paper" weight reductions: not counting certain weights towards the ship's 35,000 long ton treaty limit that were not specifically part of the definition of standard displacement. In this case, even though there was designed storage room for 100 shells per main battery gun and an extra 100 rounds, the weight of the rounds did not figure toward the treaty-mandated limit. These final designs varied greatly in everything but their standard displacements and speeds. Just one was over the treaty displacement limit; every other design called for 35,000 long tons. Only five planned for a top speed of under 27 kn (50 km/h; 31 mph); of those, only one was lower than 26.5 kn (49.1 km/h): "VII", with 22 kn (41 km/h). "VII" returned to a lower speed to obtain more firepower (twelve 14-inch guns in triple turrets) and protection; as such, the design called for only 50,000 shp (37,000 kW) and a length of only 640 ft (200 m). 
Most other plans called for 710 ft (220 m) or 725 ft (221 m), although a few had lengths between 660 ft (200 m) and 690 ft (210 m). Several different gun mountings were examined, including eight, nine, ten, eleven, and twelve 14-inch guns; eight 14-inch guns in two quadruple turrets, and even one design with two quadruple 16-inch guns. One specific design, "XVI," was a 27-knot (50 km/h), 714 ft-long (218 m) ship with twelve 14-inch guns, a 11.2-inch (284.5 mm) belt, and a deck 5.1 to 5.6-inches (129.5 to 142.2 mm) thick. Produced on 20 August 1936, the Bureau of Ordnance found many problems in it. For example, model tests showed that at high speeds, waves generated by the hull would leave certain lower parts of the ship uncovered by water or adequate armor, including around the explosive magazines, and the Bureau believed that hits around this part of the hull were easily possibly when fighting at ranges between 20,000 and 30,000 yd (9.9 and 14.8 nmi; 18 and 27 km). Other problems included the design's defense against aircraft-dropped bombs, as the Bureau thought the formula used to calculate its effectiveness was not realistic, and the tapering of a fore bulkhead below the waterline could worsen underwater shell hits because the mostly unarmored bow could easily be penetrated. The proposed solutions for these issues were all impractical: added patches of armor around the magazines could neutralize the effectiveness of the ship's torpedo-defense system, and deepening the belt near the bow and stern would put the ships over the 35,000 long ton limit. The General Board detested this design, saying it was "not ... a true battleship" due to its speed and armor problems. To address these problems, a final set of designs was presented by the Preliminary Design section in October 1936. Designated "XVI-B" through "XVI-D," they were all modifications of the "XVI" plan. These added an extra 11 feet (3.4 m) of length to "XVI" for greater speed, but the resulting weight increase meant that only eleven 14-inch guns could be mounted with a thin 10.1-inch (260 mm) belt. Another gun could be traded for a 13.5-inch (342.9 mm) belt, and yet another could be swapped for more speed and an extra tenth of an inch of belt armor; this became design "XVI-C". The General Board liked "XVI-C" very much, seeing in it a ship that had enough protection to fight—and survive—in a battle line formed with the US' older battleships while also having enough speed to operate in a detached wing with aircraft carrier or cruiser commerce raiding groups. However, one member of the Board, Admiral Joseph Reeves—one of the principal developers of the United States' aircraft carrier strategy—disliked "XVI-C" because he believed that it was not fast enough to work with the 33-knot (61 km/h; 38 mph) fast carriers, and it was not powerful enough to justify its cost. Instead, he advocated a development of the previously rejected "XVI", adding additional underwater protection and patches of armor within the ship to make the magazines immune to above- and below-water shell hits from 19,000 yd (9.4 nmi; 17 km) and beyond. The immune zone's outer limit was increased from 28,200 yd (13.9 nmi; 25.8 km) to 30,000 yd (15 nmi; 27 km). After further revisions, Reeves went to Standley, the Chief of Naval Operations, who approved "XVI" in its newly modified form over the hopes of the General Board, who still thought that "XVI-C" should be built. 
Standley's only addition to the characteristics was to be able to switch from quadruple 14-inch to triple 16 in (406 mm) turrets if the "escalator clause" in the Second London Naval Treaty was invoked. With these parameters now set, "XVI" would become the basis of the North Carolina class' as-built design despite additional back and forth over the design's final particulars. These included an increase in armor; something allowed by the finding of more on-paper weight savings; the armor's slope was increased from 10° to 13°, and eventually settled at 15°; a months-long debate on the propulsion machinery's layout was finally concluded, and other minor changes. ### The "escalator clause" Although the Second London Naval Treaty stipulated that warship guns could be no larger than 14 inches, a so-called "escalator clause" was included at the urging of American negotiators in case any country that had signed the Washington Naval Treaty refused to adhere to this new limit. The provision allowed signatory countries of the Second London Treaty—France, the United Kingdom and the United States—to raise the limit from 14 to 16 inches if Japan or Italy still refused to sign after 1 April 1937. When figuring potential configurations for the North Carolinas, designers focused most of their planning on 14-inch weaponry; Standley's requirement meant that a switch from 14- to 16-inch, even after the ships' keels had been laid, was possible. Japan formally rejected the 14-inch limit on 27 March 1937, meaning that the "escalator clause" could be invoked. There were hurdles that still needed to be overcome, though: Roosevelt was under heavy political pressure and, as a result, was reluctant to allow the 16-inch gun. > I am not willing that the United States be the first naval power to adopt the 16 in. gun. ... Because of the international importance of the United States not being the first to change the principles laid down in the Washington and London Treaties, it seems to me that the plans for the two new battleships should contemplate the ... 14-inch gun. Admiral Reeves also came out strongly in favor of the larger weapon. In a two-page letter to Secretary of the Navy Claude A. Swanson and indirectly to Roosevelt, Reeves argued that the 16-inch gun's significantly greater armor penetration was of paramount importance, drawing examples from the First World War's Battle of Jutland, where some battleships were able to survive ten or twenty hits from large guns, but other battlecruisers were blown up in three to seven hits because the shells were able to cut through the armor protecting magazines and turrets. Reeves also argued that the larger gun would favor the "indirect method" of shooting then being developed, where airplanes would be used to relay targeting information to allied battleships so that they could bombard targets that were out of their sight or over the horizon, because new battleships being built by foreign powers would have more armor. Reeves believed that if the 14-inch gun was adopted, it would not be able to penetrate this larger amount of protection, whereas the 16-inch would be able to break through. In a final vain attempt, Roosevelt's Secretary of State Cordell Hull sent a telegram on 4 June to the Ambassador to Japan Joseph Grew instructing him that the United States would still accept a cap of 14-inch guns if he could get Japan to as well. 
The Japanese replied that they could not accept this unless the number of battleships was also limited; they wanted the United States and the United Kingdom to agree to having an equal number of battleships with Japan, but this was a condition that the two countries refused to accept. On 24 June, the two North Carolinas were ordered with the 14-inch weapons, but on 10 July, Roosevelt directed that they be armed with triple 16-inch instead. ## Specifications ### General characteristics The North Carolina was 713 feet 5.25 inches (217.456 m) long at the waterline and 728 feet 8.625 inches (222.113 m) long overall. The maximum beam was 108 feet 3.875 inches (33.017 m) while waterline beam was 104 feet 6 inches (31.85 m) due to the inclination of the armor belt. In 1942, the standard displacement was 36,600 long tons (37,200 t) while full load displacement was 44,800 long tons (45,500 t), while maximum draft was 35 feet 6 inches (10.82 m). At design combat displacement of 42,329 long tons (43,008 t), the mean draft was 31 feet 7.313 inches (9.635 m) and (GM) metacentric height was 8.31 feet (2.53 m). As designed, the crew complement was 1,880 with 108 officers and 1,772 enlisted. By 1945, the considerable increase in anti-aircraft armament and their crew accommodations had increased full load displacement to 46,700 long tons (47,400 t), while crew complement increased to 2,339 with 144 officers and 2,195 enlisted. After the end of World War II, the crew complement was reduced to 1,774. The North Carolina class hull feature a bulbous bow and had an unusual stern design for the time by placing the two inboard propulsion shafts in skegs. This was theorized to improve flow conditions to the propellers. Initial model basin testing for various stern configurations suggested that the skeg arrangement could reduce resistance, although later testing during the design process of the Montana-class battleship would indicate an increase in drag. The skegs improved the structural strength of the stern by acting as girders and also provided structural continuity for the torpedo bulkheads. However, the skegs also contributed to severe vibration problems with the class that required extensive testing and modifications to mitigate. The problem was particularly acute near the aft main battery director, which required additional reinforcing braces due to the vibrations. Nevertheless, skegs would be improved and incorporated in the designs of all subsequent American battleships, with vibration problems largely eliminated on the Iowa class battleships. ### Armament North Carolina and Washington were principally armed with nine 16-inch (406 mm)/45 caliber (cal) Mark 6 guns and twenty 5-inch (127 mm)/38 cal Mark 12 guns. Their lighter armament consisted of varying numbers of 1.1-inch (28 mm)/75 caliber, .50 caliber machine guns, Bofors 40 mm and Oerlikon 20 mm. #### Main battery Mounted on both the North Carolina class and the follow-up South Dakota class, the nine 16 in/45 were improved versions of the guns mounted on the Colorado-class battleships, hence the designation of "Mark 6". A major alteration from the older guns was the Mark 6's ability to fire a new 2,700-pound (1,200-kilogram) armor-piercing (AP) shell developed by the Bureau of Ordnance. At full charge with a brand-new gun, the heavy shell would be expelled at a muzzle velocity of 2,300 ft/s (700 m/s). At a reduced charge, the same shell would be fired at 1,800 ft/s (550 m/s). 
Barrel life—the approximate number of rounds a gun could fire before needing to be relined or replaced—was 395 shells when using AP, although if only practice shells were used this figure was significantly higher: 2,860. Turning at 4 degrees a second, each turret could train to 150 degrees on either side of the ship. The guns could be elevated to a maximum inclination of 45 degrees; turrets one and three could depress to −2 degrees, but due to its superfiring position, the guns on turret two could only depress to 0 degrees. Each gun was 736 in (18,700 mm) long overall; its bore and rifling length were 715.2-inch (18,170 mm) and 616.9-inch (15,670 mm), respectively. Maximum range with the heavy AP shell was obtained at an inclination of 45 degrees: 36,900 yd (33,700 m). At the same elevation a lighter 1,900-pound (860-kilogram) high capacity (HC) shell would travel 40,180 yards (20 nautical miles; 37 kilometres). The guns weighed 85.85 long tons (192,300 lb; 87,230 kg) not including the breech; the turrets weighed 1,403–1,437 long tons (3,143,000–3,219,000 lb; 1,426,000–1,460,000 kg). When firing the same shell, the 16-inch/45 Mark 6 had a slight advantage over the 16-inch/50 Mark 7 when hitting deck armor—a shell from a 45 cal gun would be slower, meaning that it would have a steeper trajectory as it descended. At 36,000 yards (18 nautical miles; 33 kilometres), a shell from a 45 cal would strike a ship at an angle of 47.5 degrees, as opposed to 38 degrees with the 50 cal. #### Secondary battery The North Carolinas carried ten Mark 28 Mod 0 enclosed base ring mounts, each supporting twin 5-inch/38-caliber Mark 12 guns Originally designed to be mounted on destroyers built in the 1930s, these guns were so successful that they were added to a myriad of American ships during the Second World War, including every major ship type and many smaller warships constructed between 1934 and 1945. They were considered to be "highly reliable, robust and accurate" by the Navy's Bureau of Ordnance. The 5-inch/38 functioned as a dual purpose gun. However, this did not mean that it possessed inferior anti-air abilities; as established during 1941 gunnery tests conducted on board North Carolina, the gun possessed the ability to consistently shoot down aircraft flying at 12,000–13,000 feet (2.3–2.5 miles; 3.7–4.0 kilometres), which was twice as far as the effective range of the earlier single purpose 5-inch/25 anti-air gun. Each 5-inch/38 weighed almost 4,000 lb (1,800 kg) without the breech. The entire mount weighed 156,295 pounds (70,894 kilograms). It was 223.8 in (5,680 mm) long overall, had a bore length of 190 in (4,800 mm), and had a rifling length of 157.2 in (3,990 mm). The gun could fire shells at about 2,500–2,600 ft/s (760–790 m/s); about 4,600 could be fired before the barrel needed to be replaced. Minimum and maximum elevations were −15 and 85 degrees, respectively. The guns' elevation could be raised or lowered at about 15 degrees per second. Loading was possible at any elevation. The mounts closest to the bow and stern could aim from −150 to 150 degrees; the others were restricted to −80 to 80 degrees. They could be turned at about 25 degrees per second. #### Smaller weaponry The remaining weaponry on board the two North Carolinas was composed of differing numbers of 1.1"/75 caliber guns, .50 caliber machine guns, Bofors 40 mm and Oerlikon 20 mm cannons. 
Although the ships were originally designed to carry only four quadruple 1.1 in and twelve .50 caliber, this was greatly increased and upgraded during the war. On both ships, two more quadruple sets of 1.1 in guns were added in place of two searchlights amidships. After it was torpedoed in 1942, North Carolina had these removed and ten quadruple sets of 40 mm guns added. Fourteen were present by June 1943, while a fifteenth mount was added on top of the third main turret that November. Washington retained its six 1.1 in quads until the middle of 1943, when ten quad 40 mm guns replaced them. By August, it had fifteen. The two ships carried these through to the close of the war. The .50 caliber machine guns did not have the range or power needed to combat modern aircraft and were scheduled for replacement by equal numbers of 20 mm guns, but nothing immediately came of the proposal. In fact, both North Carolina and Washington carried 20 mm and .50 caliber guns for most of 1942. In April, North Carolina had, respectively, forty and twelve, while Washington had twenty and twelve. Two months later, the number of 20 mm guns remained the same, but twelve .50 caliber guns had been added. By September, Washington had twenty more 20 mm guns added, for a total of forty, but five were removed—along with all of the .50 caliber guns—shortly thereafter when two quadruple sets of 1.1 in guns were added. In its refit after being torpedoed, North Carolina had an additional six 20 mm guns added and all of its .50 caliber weapons removed. Washington had sixty-four 20 mm weapons by April 1943, prior to one single mount being replaced by a quadruple mount, and North Carolina had fifty-three by March 1944. In April 1945, North Carolina was assigned to have fifty-six 20 mm, while Washington was assigned seventy-five. In August 1945, the ships both had eight twin 20 mm mounts; North Carolina also carried twenty single, while Washington carried one quad and sixty-three single. ### Electronics Both North Carolina and Washington, designed prior to radar, were originally fitted with many fire-control and navigational optical range-finders. The former lasted until 1944, when it was replaced by a Mark 27 microwave radar supplemented by a Mark 3 main armament fire control radar. The range-finders were removed in favor of additional 20 mm guns sometime between the end of 1941 and mid-1942. In addition, the ships were commissioned with two Mark 38 directors and were originally fitted with a CXAM air search, two Mark 3s and three Mark 4 secondary armament. By November 1942, North Carolina had an additional Mark 4 and a SG surface search radar added. The normal battleship configuration was present aboard North Carolina in April 1944, with SK and SG radars (air and surface search, respectively), a backup SG, and Mark 8s to direct its main battery. All of the Mark 4s remained for the secondary battery, and one of the older Mark 3s was still present, possibly as a backup for the Mark 8s. An SK-2 dish replaced the older SK radar and Mark 12s and 22s superseded the Mark 4s in September of that year. Aside from never receiving an SK-2, Washington was the recipient of similar upgrades. Both ships underwent extensive refits near the end or after the war. North Carolina received a secondary air search set (SR) and a SCR-720 zenith search radar on the forward funnel. 
At the end of the war, it had an SP surface-search, a SK-2 air-search, a Mark 38 main battery fire control system with Mark 13 and 27 radars, a Mark 37 secondary battery fire control system with Mark 12, 22 and 32 radars, and a Mark 57 smaller weaponry fire control system, with a Mark 34 radar. In March 1946, Washington had a SK fore and a SR aft, a SG both fore and aft, and a TDY jammer (which could scramble radar on other ships). ### Propulsion The ships in the North Carolina class were equipped with four General Electric geared turbines and eight Babcock & Wilcox three-drum express type boilers. The ships' powerplant incorporated several recent developments in turbine equipment, including double helical reduction gears and high-pressure steam technology. North Carolina's boilers supplied steam at 575 psi (3,960 kPa) and as hot as 850 °F (454 °C). To meet the design requirement of 27 kn (50 km/h; 31 mph), the engine system was originally designed to supply 115,000 shp (85,755 kW), but the new technologies increased this output to 121,000 shp (90,000 kW). Despite this increase, the maximum speed for the ships did not change, since the modifications to the powerplant were incorporated later in the design process. The turbines that had already been installed could not fully take advantage of the higher pressure and temperature steam, and so the level of efficiency was not as high as it should have been. When going astern, the engines provided 32,000 shp (24,000 kW). The engine system was divided into four engine rooms, all on the centerline. Each room contained a turbine and two boilers, without any division between the boilers and turbines. This was done to limit the risk of capsizing should the ship sustain heavy flooding in the engine rooms. The engine rooms alternated in their layout: the first and third engine rooms were arranged with the turbine on the starboard side and its corresponding boilers on the port, this was reversed in the second and fourth rooms. The forward-most engine room powered the starboard outer shaft, the second turbine drove the outer screw on the port side, the third engine supplied power to the inner starboard propeller, and the fourth turbine drove the port-side inner screw. All four screws had four blades; the two outer propellers were 15 ft 4 in (4.67 m) in diameter and the inner pair were 16 ft 7.5 in (5.067 m) wide. Steering was controlled by a pair of rudders. At the time of their commissioning, the ships had a top speed of 28 knots (52 km/h; 32 mph), though by 1945, with the addition of other equipment, such as anti-aircraft weaponry, their maximum speed was reduced to 26.8 knots (49.6 km/h; 30.8 mph). The increases in weight also reduced the ships' cruising range. In 1941, the ships could steam for 17,450 nautical miles (32,320 km; 20,080 mi) at a cruising speed of 15 knots (28 km/h; 17 mph); by 1945, the range at that speed was reduced to 16,320 nmi (30,220 km; 18,780 mi). At 25 knots (46 km/h; 29 mph), the range was considerably lower, at 5,740 nmi (10,630 km; 6,610 mi). Electrical power was supplied by eight generators. Four were turbo-generators designed for naval use; these provided 1,250 kilowatts each. The other four were diesel generators that supplied 850 kilowatts each. Two smaller diesel generators—each provided 200 kilowatts—supplied emergency power should the main system be damaged. Total electrical output was 8,400 kilowatts, not including the emergency generators, at 450 volts on an alternating current. 
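As a quick cross-check, the quoted generator ratings sum to the stated ship-service electrical output, and the cruising ranges can be turned into endurance times. The day figures below are derived here purely for illustration and do not appear in the text.

```python
# Cross-check the electrical plant total and convert the quoted ranges into
# endurance at constant speed (illustrative arithmetic only).
ship_service_turbo_kw = 4 * 1250    # four naval turbo-generators
ship_service_diesel_kw = 4 * 850    # four diesel generators
print(ship_service_turbo_kw + ship_service_diesel_kw)   # 8,400 kW, as quoted

def endurance_days(range_nmi: float, speed_knots: float) -> float:
    """Hours of steaming at a constant speed, expressed in days."""
    return range_nmi / speed_knots / 24

print(endurance_days(17_450, 15))   # ~48 days on the 1941 design range
print(endurance_days(16_320, 15))   # ~45 days by 1945, after wartime weight growth
print(endurance_days(5_740, 25))    # ~9.6 days at 25 knots
```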
### Armor The North Carolina class incorporated "all or nothing" armor which weighed 41% of the total displacement; it consisted of an "armored raft" that extended from just forward of the first gun turret to just aft of the rear gun turret. They had a main armored belt of Class A armor that was 12-inch (305 mm) thick amidships, inclined at 15°, and backed by 0.75-inch (19 mm) Special Treatment Steel (STS). This tapered down to 6-inch (152 mm) on the lower edge of the belt. The ships had three armored decks; their main deck was 1.45-inch (37 mm) thick. The second, thickest deck was 3.6-inch (91 mm) of Class B armor laminated on 1.4-inch (36 mm) STS for a total of 5-inch (127 mm). In the outboard sections of the hull the plating was 4.1-inch (104 mm) Class B laminated on 1.4-inch (36 mm) STS. The third and thinnest deck was 0.62-inch (16 mm) thick inboard, and .75-inch (19 mm) outboard. The first deck was designed to cause delay-fuzed projectiles to detonate, while the thicker second deck would protect the ships' internals. The third deck was intended to protect against shell splinters that might have penetrated the second deck; it also acted as the upper support for the torpedo bulkheads. The conning tower was connected to the armored citadel by a 14-inch (356 mm) thick communications tube. Armor thickness for the conning tower itself ranged from 16 inches (406 mm) on both sides to 14.7 inches (373 mm) on the front and rear. The roof was 7 inches (178 mm) thick and the bottom was 3.9 inches (99 mm) thick. The main battery turrets were heavily armored: the turret faces were 16-inch (406 mm) thick, the sides were 9-inch (229 mm) thick, the rear sides were 11.8-inch (300 mm) thick, and the roofs were 7-inch (178 mm) thick. Sixteen–inch-thick armor was the maximum width factories were able to produce at the time of the ships' design; by 1939, however, it was possible to create 18-inch (457 mm)-thick plates. These were not installed because it was estimated that the conversion would delay completion of the ships by 6 to 8 months. The barbettes that held the turrets were also strongly protected. The front portion was 14.7 inches (373 mm), the sides increased to 16 in, and the rear portion reduced to 11.5-inch (292 mm). The 5-inch gun turrets, along with their ammunition magazines, were armored with 1.95-inch (50 mm) STS plates. The side protection system incorporated five compartments divided by torpedo bulkheads and a large anti-torpedo bulge that ran the length of the "armored raft". The outer two compartments, the innermost compartment and the bulge would remain empty, while the third and fourth compartments would be filled with liquid. The system was reduced in depth at either end by the forward and rear gun turrets. In these areas, the fifth compartment was deleted; instead, there was an outer empty compartment and two liquid-filled spaces, backed by another empty compartment. To compensate for the reduced underwater protection system, these sections received additional armor plating, up to 3.75-inch (95 mm) in thickness. The complete system was 18.5 ft (5.64 m) deep and designed to withstand warheads of up to 700 lb (320 kg) of TNT. Underwater protection was rounded out by a triple bottom that was 5.75 ft (2 m) deep. The bottom layer was 3 ft (1 m) thick and was kept filled with fluid, while the upper 2.75-foot (1 m) thick layer was kept empty. The triple bottom was also heavily subdivided to prevent catastrophic flooding should the upper layer be penetrated. 
## Service ### Construction Two ships, each to cost about \$50 million, were authorized in January 1937. Five shipyards submitted bids to build one of the two planned ships. Three were privately run corporations: Bethlehem Shipbuilding, New York Shipbuilding and Newport News Shipbuilding. The other two, the New York Naval Shipyard and Philadelphia Naval Shipyard, were run by the government. When bids were reviewed, the privately run shipyards' submissions ranged from \$46 to 50 million, while their government counterparts came in at \$37 million. Newport News was unique among these in refusing any fixed monetary value in favor of a "cost-plus 3+1⁄2%" price, but this led to the rejection of their bid out of hand. The bids from private companies were heavily influenced by the legislation of the New Deal. The Vinson-Trammell Act limited profit from a ship's construction to 10 percent, while the Walsh-Healey Public Contracts Act specified a minimum wage and required working conditions for workers. The latter act greatly affected the ability of the navy to acquire steel, as the text of the law caused friction between executives in the industry, who greatly disliked the forty-hour work week and minimum wage requirements, and their workers—who themselves were embroiled in a separate dispute pitting the union of the skilled workers, the American Federation of Labor, against the union of the unskilled, the Congress of Industrial Organizations. Amid the unrest, the navy ran into difficulties trying to acquire 18 million pounds of steel to build six destroyers and three submarines; many more pounds than this would be needed for the new battleships. The private shipyards, however, had their own labor problems, so much so that one author described the navy's issues as "minimal" compared to their shipbuilding counterparts. This increased the price of the battleships to \$60 million each, so the Bureau of Steam Engineering and Bureau of Construction and Repair recommended to their superiors that the \$37 million tenders from the two navy yards be accepted. This was confirmed by Roosevelt, as the private shipyards' bids were seen as unjustly inflated. The contracts for North Carolina and Washington—names had been officially chosen on 3 May 1937—were sent to the New York and Philadelphia yards, respectively, on 24 June 1937. Shortly after this announcement, Roosevelt was bombarded with heavy lobbying from citizens and politicians from Camden and the state of New Jersey, in an ultimately futile attempt to have the construction of North Carolina shifted to Camden's New York Shipbuilding; such a contract would keep many men employed in that area. Roosevelt refused, saying that the disparity in price was too great. Instead, the company was awarded two destroyer tenders in December 1937, Dixie and Prairie. Construction of the North Carolina class was slowed by the aforementioned material issues, the changes made to the basic design after this date—namely the substitution of 16-inch for 14-inch guns—and the need to add both length and strength to the slipways already present in the navy yards. Increased use of welding was proposed as a possible way to reduce weight and bolster the structural design, as it could have reduced the ships' structural weight by 10%, but it was used in only about 30% of the ship. The costs associated with welding and an increase in the time of construction made it impractical. 
### North Carolina USS North Carolina (BB-55) was laid down on 27 October 1937, the first battleship begun by the United States since the never-completed South Dakota class of the early 1920s. Although North Carolina was launched on 13 June 1940 and commissioned on 9 April 1941, it did not go on active duty because of acute longitudinal vibrations from its propeller shafts. A problem shared with its sister Washington and some other ships like Atlanta, it was only cured after different propellers were tested aboard North Carolina, including four-bladed and cut-down versions of the original three-bladed. This testing required it to be at sea, and the many resulting trips out of New York Harbor to the Atlantic Ocean caused it to be nicknamed "The Showboat". After a shakedown cruise in the Caribbean Sea and participation in war exercises, North Carolina transited the Panama Canal en route to the Pacific War. Joining Task Force (TF) 16, the battleship escorted the aircraft carrier Enterprise during the invasions of Guadalcanal and Tulagi on 7 August 1942, and continued to accompany the carrier when it moved to be southeast of the Solomons. The Battle of the Eastern Solomons began when Japanese carriers were spotted on 24 August; although American planes were able to strike first by sinking the light carrier Ryūjō, a strike group from a different force, formed around the fleet carriers Shōkaku and Zuikaku, attacked TF 16. In an intense eight-minute battle, North Carolina shot down 7–14 aircraft and was relatively undamaged, though there were seven near-misses and one crewman was killed by strafing. Enterprise took three bomb hits. North Carolina then joined the carrier Saratoga's screen, and protected it while support was rendered to American troops fighting on Guadalcanal. Although it dodged one torpedo on 6 September, it was not able to avoid another on the 15th. Out of a six-torpedo salvo from the Japanese submarine I-19, three hit the carrier Wasp, one hit O'Brien, one missed, and one struck North Carolina. A 660 lb (300 kg) warhead hit on the port side 20 ft (6.1 m) below the waterline at a point that was just behind the number one turret. It created a 32 ft × 18 ft (9.8 m × 5.5 m) hole, allowed about 970 long tons (990 t) of water into the ship—which had to be offset with counter-flooding, meaning that another 480 long tons (490 t) entered—killed five men, and wounded twenty. Although North Carolina could steam at 24 knots (44 km/h; 28 mph) soon after the explosion, it was later forced to slow to 18 knots (33 km/h; 21 mph) to ensure that temporary shoring did not fail. Structural damage beneath the first turret rendered it unable to fire unless in absolute need, and the main search radar failed. As this was the first torpedo to strike a modern American battleship, there was a large amount of interest from various officers and bureaus within the navy in learning more about it. The conclusions were seen as a vindication by some who believed that too much had been sacrificed in the design of the North Carolinas—the torpedo defense system had come close to breaking near one of the most important areas of the ship (a magazine), after all—and the General Board called for the fifth and sixth Iowa-class battleships, Illinois and Kentucky, to have a torpedo bulge added outside their magazines. However, the new Bureau of Ships opposed this on the basis that the system performed as it was supposed to; in any case, no modifications were made. 
Repaired and refitted at the facilities in Pearl Harbor, North Carolina operated as a carrier escort for Enterprise and Saratoga for the remainder of 1942 and the majority of 1943 while they provided cover for supply and troop movements in the Solomons. In between, it received advanced fire control and radar gear in March, April and September 1943 at Pearl Harbor. In November, North Carolina escorted Enterprise while the carrier launched strikes upon Makin, Tarawa and Abemama. On 8 December it bombarded Nauru before returning to carrier screening; it accompanied Bunker Hill while that carrier launched attacks on Kavieng, New Ireland. Joining Task Force 58 in January 1944, North Carolina escorted aircraft carriers as the flagship of Vice Admiral Willis A. Lee, Commander, Battleships, Pacific Fleet (ComBatPac) for much of the year, providing support for airborne strikes on Kwajalein, Namur, Truk (twice), Saipan, Tinian, Guam, Palau, Woleai, and Hollandia in January–April. Also in April, North Carolina destroyed defensive installations on Ponape before setting course for Pearl Harbor for repairs to a damaged rudder. With repairs completed, the battleship joined with Enterprise on 6 June for assaults within the Marianas; as part of these, North Carolina used its main battery to bombard Saipan and Tanapag. In late June, North Carolina was one of the American ships which took part in the so-called "Marianas Turkey Shoot", in which a majority of the attacking Japanese aircraft were shot down at little cost to the American defenders. Problems with its propeller shafts then caused the battleship to sail to the Puget Sound Navy Yard to receive an overhaul. It returned to active duty in November and to its carrier escort tasks in time to be hit by a typhoon. North Carolina protected carriers while they provided air cover for invasion fleets and launched attacks on Leyte, Luzon, and the Visayas. Surviving another typhoon, one which sank three destroyers, North Carolina continued escort duty when naval aircraft struck Formosa, Indo-China, China, the Ryukyus and Honshu in January and February 1945. During the invasion of Iwo Jima, the battleship provided bombardment support for troops ashore. During the assault on Okinawa, North Carolina screened carriers and bombarded targets ashore. Although it was able to shoot down three kamikazes on 6 April, it was also struck by a 5-inch (127 mm) shell during that time in a friendly fire incident; three were killed and forty-four injured. The battleship shot down a plane on the 7th and two on the 17th. After receiving another overhaul from 9 May to 28 June, this one in the naval yard at Pearl Harbor, North Carolina operated as both a carrier escort and shore bombardier for the remainder of the war. Of note was a 17 July bombardment of the industrial area in Hitachi, Ibaraki, in company with the battleships Alabama, Missouri, Wisconsin and HMS King George V, along with smaller warships. In August, members of North Carolina's crew and Marine contingent were sent ashore to assist in occupying Japan. After the official surrender, these men were brought back aboard and the battleship sailed to Okinawa. As part of Operation "Magic Carpet", soldiers were embarked to be returned to the United States. Passing through the locks of the Panama Canal on 8 October, it anchored in Boston on the 17th. After an overhaul in the New York Naval Yard, it participated in exercises off New England before beginning a midshipman training cruise in the Caribbean.
North Carolina was decommissioned in Bayonne, New Jersey on 27 June 1947; it remained in the reserve fleet until 1 June 1960, when it was struck from the Naval Vessel Register. Instead of the scrapping that faced most of the United States' battleships, North Carolina was sold to the state of North Carolina for \$250,000 on 8 August 1961 to be a museum ship. It was dedicated in Wilmington on 29 April 1962 as a memorial to the citizens of the state who died in the Second World War. Listed on the United States' National Register of Historic Places and designated as a National Historic Landmark on 1 January 1986, it remains there today, maintained by the USS North Carolina Battleship Commission.

### Washington

USS Washington (BB-56) was laid down on 14 June 1938, launched on 1 June 1940 and commissioned on 15 May 1941 at the Philadelphia Naval Shipyard. Although commissioned, its engines had not yet been run at full power—like its sister, Washington had major problems with longitudinal vibrations, which were only tempered after many tests conducted aboard North Carolina. The fixes made it possible to run builder's trials, which Washington did on 3 August 1941; with the ship loaded to about 44,400 long tons (45,100 t), the propulsion plant was run up to 123,850 shp (92,350 kW). It repeated the performance in February 1942, achieving 127,100 and 120,000 shp (94,800 and 89,500 kW). In early 1942 Rear Admiral John W. Wilcox chose Washington as the flagship of Task Force 39. On 26 March 1942, Washington, along with Wasp, Wichita, Tuscaloosa and various smaller ships, sailed to bolster the British Home Fleet. During the voyage, Wilcox fell into the ocean; he was seen soon after by the destroyer Wilson, face down in the water, but due to rough seas the destroyer's crew was unable to retrieve the body. It is not known what exactly happened; he could have simply been caught by a wave and washed overboard, but there has been speculation that he suffered a heart attack. The force reached the main anchorage of the Home Fleet, Scapa Flow, on 4 April. Washington and the other ships of TF 39 participated in exercises with the Home Fleet until late April. Along with certain British units, the task force departed the British Isles as TF 99. They escorted some of the Arctic convoys which were carrying vital cargo to the Soviet Union. During one of these convoy operations, an accompanying British battleship, HMS King George V, accidentally rammed a destroyer, cutting it in two. Directly behind King George V, Washington passed through the same stretch of sea and received damage from exploding depth charges. Though damage to the hull was minimal—limited to only one leaking fuel tank—many devices on board the ship were damaged, including the main battery range finders, circuit breakers, three fire-control radars and the search radar. The American ships then put in at an Icelandic port, Hvalfjörður, until 15 May; they returned to Scapa Flow on 3 June. On 4 June, Washington hosted the commander of US naval forces in Europe, Admiral Harold Rainsford Stark, who set up a temporary headquarters on the ship for the next few days. On 7 June, King George VI of the United Kingdom inspected the battleship. Washington left the North Sea bound for the United States on 14 July with an escort of four destroyers; upon arrival at the New York Naval Yard on the 23rd, it was given a full overhaul which took a month to be completed.
It set sail for the Panama Canal and the Pacific Ocean on 23 August and reached its destination, Tonga Island, on 14 September, where it became the flagship of Admiral Willis "Ching" Lee. Over the coming months, Washington would be focused upon ensuring the safe arrival of supply convoys for the men fighting on Guadalcanal. On 13 November, three formations of Japanese ships were discovered on course for Guadalcanal, one of them aiming to bombard Henderson Field while night gave them protection from aircraft. The first Japanese bombardment force was driven back by an American cruiser-destroyer force. On 14 November, the Japanese organized another sortie to neutralize the airfield. Washington, South Dakota, and four destroyers were sent to intercept the Japanese force that night. The Japanese force, composed of the fast battleship Kirishima, two heavy cruisers, two light cruisers, and nine destroyers, initially sank three US destroyers and inflicted significant topside damage to South Dakota. However, Washington remained undetected and at midnight fired on Kirishima from 5,800 yd (5 km; 3 nmi), point-blank range for Washington's 16-inch/45-caliber guns. Washington fired seventy-five 16-inch and one hundred and seven 5-inch rounds during the melee, scoring twenty main battery and seventeen secondary battery hits, knocking out Kirishima's steering and main battery and causing uncontrollable progressive flooding. Kirishima capsized at 03:25 on the morning of 15 November 1942, with 212 crewmen lost. Radar-directed fire from Washington's secondary battery also damaged the destroyer Ayanami so severely that it had to be scuttled. Soon after the battle, the Japanese began evacuating Guadalcanal. Until April 1943, Washington stayed near its base in New Caledonia, providing protection for convoys and battle groups that were supporting the Solomons campaign. Returning to Pearl Harbor, it practiced for battle and underwent an overhaul before returning to the combat zone in late July. From August to the end of October, Washington operated out of Efate. It then joined with four battleships and six destroyers as Task Group (TG) 53.2 for exercises; Enterprise, Essex and Independence also participated. The task group then voyaged to the Gilbert Islands to add firepower to the strikes then hitting them. Departing in late November, Washington first steamed to Makin to provide protection for ships there, then Ocean Island to prepare to bombard Nauru with its sister North Carolina, all four South Dakota-class battleships, and the carriers Bunker Hill and Monterey. All of the capital ships struck before dawn on 8 December; the aircraft carriers struck again soon after. The ships then sailed back to Efate, arriving on 12 December. On Christmas Day, Washington, North Carolina, and four destroyers left Efate for gunnery practice. By late January, it was made part of TG 50.1 to escort the fast carriers in that group as they launched strikes on Taroa and Kwajalein. It also moved in to hit Kwajalein with its guns on 30 January. Before dawn on 1 February, Washington collided with Indiana when the latter left formation to fuel four destroyers. Indiana had radioed that it was going to make a turn to port out of the formation, but soon after starting the turn, its captain ordered a reversal, back to starboard. About seven minutes later, it came into view of lookouts aboard Washington at a range of 1,000 yards (3,000 ft; 914 m).
Although crews on both ships frantically tried to avoid the other, it was to no avail; Washington gave Indiana a glancing blow, scraping down a large aft portion of the ship's starboard side. Washington's fore end was severely damaged, with about 60 ft (18 m) of its bow hanging down and into the water. Ten men, six from Washington, were killed or listed as missing. After temporary reinforcements to the damaged section, it was forced to sail to Pearl Harbor to be fitted with a false bow to make possible a voyage to Puget Sound. Once there, it received a full overhaul, along with a new bow; this work lasted from March until April. Washington did not enter the war zone again until late May. Washington next participated in the Mariana and Palau Islands campaign, serving again as a carrier escort ship, though it was detached on 13 June to fire on Japanese positions on Saipan and Tinian. With the sortie of a majority of the remaining ships in the Imperial Japanese Navy spotted by American submarines, Washington, along with six other battleships, four heavy cruisers and fourteen destroyers covered the aircraft carriers of TF 58; on 19 June, when massed Japanese air strikes attacked, the Battle of the Philippine Sea began. Able to beat off the attacks, Washington refueled and continued escorting carriers until it formed a new task group with three battleships and escorts. After a lengthy stop at Enewetak Atoll, it supported troops assaulting Peleliu and Angaur before returning to screening duties. This duty lasted from 10 October 1944 to 17 February 1945. The battleship bombarded Iwo Jima from 19–22 February in support of the invasion there before escorting carriers which sent aircraft raids against Tokyo and targets on the island of Kyūshū. On 24 March and 19 April, Washington bombarded Okinawa; it then departed for Puget Sound to receive a refit, having been in action for the majority of the time since its refit in March–April 1944. This lasted through V-J Day and the subsequent formal ceremony aboard Missouri, so Washington received orders to voyage to Philadelphia, where it arrived on 17 October. Here it was modified to have an additional 145 bunks so it could participate in Operation Magic Carpet. Sailing to Southampton with a reduced crew of 84 officers and 835 men, it brought 185 army officers and 1,479 enlisted men back to the United States; this was the only voyage it would make in support of the operation. The battleship was placed into reserve at Bayonne, New Jersey on 27 June 1947, after only a little more than six years of service. Washington was never reactivated. Struck from the Naval Vessel Register on 1 June 1960—exactly twenty years to the day after its launch—it was sold on 24 May 1961 to be scrapped.

## Post-war alteration proposals

North Carolina and Washington remained on active duty in the years immediately after the war, possibly because their crew accommodations were more comfortable and less cramped than those of the four South Dakotas. The ships received alterations during this period; the Ship Characteristics Board (SCB) directed in June 1946 that four of the quadruple-mounted 40 mm guns be removed, though only two were actually taken off each ship. The 20 mm weapons were also reduced at some point so that both ships were decommissioned with sixteen twin mounts. North Carolina and Washington were decommissioned on 27 June 1947 and subsequently moved to the reserve fleet.
In May 1954, the SCB created a class improvement project for the North Carolinas which included twenty-four 3-inch/50 guns directed by six Mark 56s. A month later, the SCB chairman voiced his belief that the North Carolinas and South Dakotas would be excellent additions to task forces—if they could be faster. The Bureau of Ships then considered and discarded designs that would move these ships at 31 kn (57 km/h; 36 mph), four knots faster than their current attainable speed. In order for a North Carolina to obtain 31 knots, 240,000 shp (180,000 kW) would be required. This, in turn, would necessitate the installation of an extremely large power plant, one which would not fit into the ship even if the third turret was removed. If the outer external belt armor were removed, 216,000 shp (161,000 kW) would still be required. However, whether or not the belt was removed, all of the hull form aft would have to be greatly modified to accept larger propellers. The last strike against the project was the high estimated cost of \$40 million, which did not include the cost of activating battleships that had been out of commission for ten years. Later calculations showed that the North Carolinas could be lightened from 44,377 long tons (45,089 t) to around 40,541 long tons (41,192 t), at which 210,000 shp would suffice. At the trial displacement figure of 38,400 long tons (39,000 t), even 186,000 shp (139,000 kW) would be enough; the 210,000 figure was derived from a 12.5% overestimation to account for a fouled bottom or bad weather. A power plant similar to the one used in the Iowa class (generating 212,000 shp (158,000 kW)) would be enough, and if the third turret was removed there would be no problems with weight, but there was not enough space within the North Carolinas: the existing power plant measured 176 ft × 70 ft × 24 ft (53.6 m × 21.3 m × 7.3 m), while the Iowa's measured 256 ft × 72 ft × 26 ft (78.0 m × 21.9 m × 7.9 m). Lastly, there would be an issue with the propellers; the Iowa class's were 19 ft (5.8 m) wide, while the North Carolinas' were 17 ft (5.2 m). In the end, no conversions were undertaken. Designs for helicopter carriers also included a plan for a conversion of the North Carolinas. At a cost of \$30,790,000, the ships would have been able to embark 28 helicopters, 1,880 troops, 530 long tons (540 t) of cargo and 200,000 US gal (760,000 L) of oil. All of the 16-inch and 5-inch guns would have been removed, though the number one turret would have remained so that weights added on the stern half of the ship could be balanced. In their place, the ships would have received sixteen 3-inch guns in twin mounts. Displacement would be lowered slightly to a fully loaded weight of about 41,930 long tons (42,600 t), while speed would not have changed. It was estimated that the ships could serve for about fifteen to twenty years at a cost of about \$440,000 a year for maintenance. However, it was found that a purpose-built helicopter carrier would be more economical, so the plans were shelved.
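The 12.5 percent service margin quoted in the speed study above can be checked directly against the two power figures given there. A minimal worked calculation follows; it assumes only that the margin is applied to the 186,000 shp estimate at trial displacement, which is how the passage describes the derivation.

```latex
% Consistency check of the quoted shaft-horsepower figures.
% Assumption: the 12.5% allowance (fouled bottom / bad weather) is applied
% to the 186,000 shp required at trial displacement.
P_{\text{trial}} = 186{,}000\ \text{shp}, \qquad
P_{\text{design}} = 1.125 \times P_{\text{trial}} = 209{,}250\ \text{shp} \approx 210{,}000\ \text{shp}
```

This rounds to the 210,000 shp figure cited for the lightened ships.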
51,448,968
Hawker Hurricane in Yugoslav service
1,129,853,530
Royal Yugoslav Air Force plane (1938–1941)
[ "Hawker Hurricane", "Royal Yugoslav Air Force", "Yugoslav Air Force" ]
The Royal Yugoslav Air Force (Serbo-Croatian Latin: Vazduhoplovstvo Vojske Kraljevine Jugoslavije, VVKJ) operated the British Hawker Hurricane Mk I from 1938 to 1941. Between 1938 and 1940, the VVKJ obtained 24 Hurricane Mk Is from early production batches, marking the first foreign sale of the aircraft. Twenty additional aircraft were built by Zmaj under licence in Yugoslavia. When the country was drawn into World War II by the German-led Axis invasion of April 1941, a total of 41 Hurricane Mk Is were in service as fighters. They achieved some successes against Luftwaffe aircraft, but all Yugoslav Hurricanes were destroyed or captured during the 11-day invasion. In mid-1944, the Yugoslav Partisans formed two Royal Air Force squadrons, Nos. 351 and 352, which both operated Hurricane fighter-bombers. No. 351 Squadron flew Hurricane Mk IICs during training and was later equipped with Hurricane Mk IVs, and No. 352 briefly flew Hurricane Mk IICs during training before re-equipping with Supermarine Spitfire Mk Vs. Both squadrons operated as part of No. 281 Wing RAF of the Balkan Air Force, conducting ground attack missions in support of Partisan operations until the end of the war. Hurricanes remained in service with the post-war Yugoslav Air Force until the early 1950s.

## Acquisition

In early 1938, the VVKJ placed an order with Hawker Aircraft for twelve Hawker Hurricane Mk I fighters, the first foreign purchase of the aircraft. The British government was willing to supply excess Hurricanes to nations that were likely to oppose German expansion because the rate of production of the aircraft slightly exceeded the capability of the RAF to introduce it at the time. The first of these aircraft destined for Yugoslav service was No. 205 (formerly L1751), which was flown from the United Kingdom to Belgrade via France and Italy, arriving on 15 December 1938. The first batch of aircraft were fitted with a Rolls-Royce Merlin II engine driving a two-blade wooden propeller. The initial order was followed by a second order of twelve (N2718–N2729), which were fitted with Merlin III engines driving a three-blade variable-pitch propeller, and were delivered in February and March 1940. At the same time, the Yugoslav government applied to build more under licence. Once the negotiations were successfully concluded, production lines were established at the Rogožarski plant in Belgrade and the Zmaj factory in nearby Zemun. The two plants were expected to build forty and sixty of the aircraft respectively, at a rate of twelve per month. Of the locally built aircraft, only twenty were completed by Zmaj; the Rogožarski plant did not produce any. A total of 44 aircraft were put into service with the VVKJ.

## Operational service

### Royal Yugoslav Air Force

Once in service, Hurricane Mk Is were used to equip the 52nd Fighter Group of the 2nd Fighter Regiment based at Knić, and the 33rd and 34th Fighter Groups of the 4th Fighter Regiment based at Bosanski Aleksandrovac. Hurricanes were also operated by the Independent Fighter Squadron of the 81st Bomber Group and by the Air Training School, both based at Mostar. All of these aircraft were deployed in the fighter/interceptor role. Immediately prior to the German-led Axis invasion of Yugoslavia in April 1941, 41 of the original 44 Hurricanes were serviceable.
At 06:45 on 6 April 1941, the Luftwaffe launched Operation Retribution, a series of concerted bombing attacks on Belgrade that coincided with air and ground attacks throughout the country. Several waves of German aircraft approached Belgrade during the day, initially Junkers Ju 87 "Stuka" dive-bombers escorted by fighters. At about 08:00, Hurricanes of the 52nd Fighter Group engaged the second wave as it departed after bombing the city; one of the dive-bombers was shot down by three pilots from the 163rd Squadron. For the remainder of the first day of the invasion, the Hurricanes of the VVKJ saw little action, despite constant patrolling between Čačak, Kraljevo, and Kragujevac. Few of the aircraft had radio sets, so the fighters usually arrived too late to take part in the fighting. Two 4th Fighter Regiment machines were tasked with escorting Bristol Blenheim Mk I light bombers to attack targets in Austria, but they lost sight of the bombers in cloud cover. One of the pilots then attempted to intercept some German Messerschmitt Bf 109 fighters. The following day, the 2nd Fighter Regiment continued to patrol over central Serbia, protecting factories at Kraljevo and Kragujevac from potential German air attacks that never occurred. The 4th Fighter Regiment was also active, patrolling over Bosnia and Croatia, but saw little action except for attempts to intercept German reconnaissance aircraft. Two of the Air Training School Hurricanes scrambled to intercept a small formation of Junkers Ju 88 bombers as they flew over Mostar towards Sarajevo. They were both hit by return fire, wounding one pilot and forcing him to crash-land. The other pilot was continuing to pursue the bombers when he was attacked by a group of German Bf 109s. He was severely wounded, bailed out, and eventually died from loss of blood. Another Hurricane was also claimed by the Germans on 7 April. On 8 April, the main VVKJ effort was directed towards a German ground thrust through the Kačanik gorge in southern Kosovo. By this time, the VVKJ had lost over 60 per cent of its aircraft; 70 aircraft from the remnants of various bomber and fighter units were assembled for the attack. Hurricanes of the 52nd Fighter Group made up the last wave of attacks on the 13 kilometres (8 mi) of closely packed convoys. Flying beneath low cloud in very poor weather, both squadrons strafed the Germans. One Hurricane was hit and crash-landed by the road, its pilot evading capture. In the north of the country, patrolling Hurricanes of the 4th Regiment clashed with German fighters on several occasions, without result. Other 4th Regiment Hurricanes escorted another bomber mission against targets in southern Austria. The remaining five Hurricanes of the 105th Squadron relocated to Veliki Radinci, where other surviving aircraft had also concentrated. The following day, heavy snow fell in some areas of Yugoslavia, grounding the 52nd Fighter Group Hurricanes at Knić. In the north, a major air battle developed between Rovine and Bosanski Aleksandrovac, involving Hurricanes of both the 106th and 108th Squadrons. The Yugoslavs shot down one or two Bf 109s, but lost two Hurricanes in the process. By 10 April, the VVKJ was disintegrating quickly in the face of the successful German onslaught, but Hurricanes of the 105th Squadron were able to get airborne despite the continuing poor weather, patrolling for and engaging German aircraft without result.
At Knić, rumours of the approach of German ground forces led the 164th Squadron to attempt to fly its Hurricanes to a safer airfield. Five machines got into the air, but almost immediately two of them collided, and another flew into a mountain wreathed in fog. The two surviving pilots returned safely to Knić. Meanwhile, the Hurricanes of the 163rd Squadron had been rendered unserviceable by their crews to prevent their capture and use by the Germans. When it became clear that the rumours of approaching German forces were unfounded, desperate attempts were made to return the aircraft to flying condition. The same day, Hurricanes of the 4th Regiment scored a victory over a Bf 109 while chasing reconnaissance aircraft over Bosnia. On 11 April, 4th Regiment Hurricanes shot down a Messerschmitt Bf 110 heavy fighter over Nova Gradiška. The following day, two or three of the remaining 105th Squadron Hurricanes were burned by ground crews at Veliki Radinci, while other machines from the same squadron flew to Bijeljina, where other aircraft had concentrated. This move had only just been completed when a large formation of German Bf 110s swept over the airfield and destroyed over two dozen aircraft, including all but one of the 105th Squadron Hurricanes. At Knić, approaching German vehicles caused the last two 163rd Squadron Hurricanes to scramble, flying to Zemun. The first pilot landed safely, but was immediately captured by armed Volksdeutsche. The other pilot, seeing that Zemun was not safe, broke off and tried to fly to Bjeljina, but ran out of fuel and was killed when he tried to crash-land near Valjevo. Even at this late stage, the 4th Regiment still had five or six airworthy Hurricanes. They continued to chase reconnaissance aircraft throughout the day, shooting down one Ju 88 near Banja Luka. When German bombers again attacked the airfields around Mostar, the one remaining Hurricane of the Air Training School attacked a Ju 88 but was hit by return fire, forcing the pilot to bail out. The following day saw 4th Regiment Hurricanes again patrolling over Bosnia, downing another Bf 110, this time over Banja Luka. One aircraft was destroyed on 13 April, the pilot being badly wounded. On 14 April, one Hurricane was still serviceable at Nikšić in Montenegro. This aircraft clashed with Italian fighters on 14 or 15 April and was hit 37 times. Despite this, it was still airworthy on 16 April when its pilot attempted to fly it to Greece, but he was forced to return due to poor weather, after which the aircraft was abandoned. At least two Hurricanes were captured intact by the invading forces, but this marked the end of the Hawker Hurricane in VVKJ service, as all opposition to the Axis invaders ceased on 18 April, following the Yugoslav Supreme Command's unconditional surrender the previous day. ### Yugoslav Partisans #### Establishment The raising of Yugoslav Partisan-manned squadrons within the Royal Air Force (RAF) was discussed between the Partisan leader Josip Broz Tito and the head of the British mission to the Partisans, Brigadier Fitzroy Maclean, on 12 March 1944. As a result of this discussion, an agreement was concluded later that month for the RAF to train Yugoslav personnel who would man two squadrons, one of fighters and one of fighter-bombers. After completing training, these two squadrons were to conduct operations exclusively over Yugoslavia. 
It was agreed that the new squadrons would largely be staffed by former VVKJ personnel who had fled the country during the invasion and had later agreed to join the Partisans. The first squadron was raised at an airfield near Benghazi, Libya, as No. 352 (Yugoslav) Squadron RAF. Members took their Partisan oaths on 21 May 1944. Until late June, this squadron was equipped with Harvard training aircraft and Hurricane Mk IICs, which were then replaced by Supermarine Spitfire Mk Vs, which it operated until the end of the war. The Hurricane Mk IICs were handed over to a second Partisan-manned squadron, raised as No. 351 (Yugoslav) Squadron RAF, which was also established as a fighter-bomber unit in Libya on 1 July 1944. During its work-up training, No. 351 Squadron was re-equipped with Hurricane Mk IVs. It completed training, including ground-attack practice runs, by 23 September. By 2 October, the squadron had been transferred to an airfield near Cannae in Italy to join No. 281 Wing RAF of the Balkan Air Force, a combined Allied organisation. The move was accompanied by complaints from the Partisan Supreme Command that the Hurricane was inferior to the Spitfire now being flown by No. 352 Squadron, and also the Hawker Typhoon. The complaints were ignored by the RAF, and the squadron operated Hurricane Mk IVs until the end of the war, as did No. 6 Squadron RAF, a British-manned squadron that also flew missions over Yugoslavia, even though the aircraft had been taken out of frontline service in the European theatre in March 1944. No. 351 Squadron was cleared for combat operations on 13 October, and from 18 October the squadron generally had 4–8 aircraft based at an airfield on the now Allied-held island of Vis in the Adriatic. #### October–December 1944 No. 351 Squadron flew its first mission on 13 October 1944; it involved six aircraft attacking an Axis supply convoy near the village of Aržano. On 20 October, aircraft from the squadron, supported by Spitfires from No. 352 Squadron, conducted rocket and strafing attacks on enemy columns near Metković that were withdrawing in the face of the advancing Partisan 26th Dalmatian Division. The mission was a success, but one aircraft was lost to ground fire. Nine days later, Hurricanes, escorted by a pair of Spitfires from No. 352 Squadron, flew a patrol over the island of Rab and adjacent areas of the Adriatic, but were unable to positively identify any targets. On 4 November, aircraft from No. 351 Squadron, again escorted by two Spitfires, were tasked with interdicting road communications between Bihać and Knin. One aircraft developed engine trouble and had to return to base, but the rest continued with the mission. They ran into heavy anti-aircraft fire near Knin, and one aircraft was shot down, the pilot bailing out and being captured by the Germans. The next mission, on 9 November, was hampered by extremely poor weather over the target area near Trebinje. One aircraft flew into a mountain, killing the pilot, and another suffered engine trouble and crash-landed near Trebinje, the pilot escaping unhurt. On 3 December 1944, No. 351 Squadron carried out a successful rocket attack against Axis coastal defences on the Island of Lošinj, launching from Vis. This was followed by a period of scouting and reconnaissance over several Yugoslav regions, hitting targets of opportunity, sometimes escorted and supported by the Spitfires of No. 352 Squadron. No. 
351 Squadron ranged widely, interdicting rail and road routes in eastern and western Bosnia and throughout Dalmatia, and attacking Axis maritime traffic off the Adriatic coast and islands. As the year drew to a close, operations became severely hampered by the worsening weather. #### January–March 1945 In January and February 1945, much better co-ordination was achieved between the two RAF squadrons and Partisan ground and maritime forces. This was done through the deployment of aviation liaison sections with the main Partisan formations, initially with the 8th Corps and then also the 5th Corps. On 4 January, four Hurricanes flew from Vis to attack an enemy convoy travelling between Mostar and Sarajevo. The convoy was located near Jablanica and seven trucks were destroyed. The aircraft went on to attack the railway station in Jablanica, damaging one locomotive and ten wagons. One Hurricane was damaged by anti-aircraft fire during the mission. On 22 January, Hurricanes escorted by Spitfires attacked a ship of 1,000 tonnes (980 long tons) off the island of Rab, firing sixteen rockets. Due to the minimal Luftwaffe presence over much of Yugoslavia, in many cases the Spitfire escorts from No. 352 Squadron were not needed to protect against enemy aircraft, so they also engaged ground and maritime targets alongside the Hurricanes of No. 351 Squadron. In February, both squadrons provided support for the liberation of major towns, including Široki Brijeg, Nevesinje and Mostar, and patrolled and attacked targets of opportunity across Bosnia and Dalmatia. Specifically, they supported the Mostar operation of the 8th Corps, the work of the 11th Corps clearing the enemy from islands in the northern Adriatic, and also 5th Corps operations. Despite the presence of liaison sections with ground forces, procedures were not yet streamlined, and several friendly fire incidents occurred during the Mostar operation. In support of 11th Corps operations, Hurricanes attacked German headquarters, defences and naval traffic on and around the islands of Pag and Krk. On 7 February, Hurricanes of No. 351 Squadron were supporting 4th and 11th Corps and attacked a column of German trucks and wagons on the road between Gospić and Bihać when two of the aircraft collided, causing minor damage. Both aircraft crash-landed in friendly territory and were written off, but the pilots escaped unharmed. During February, No. 351 Squadron re-deployed to Zemunik on the mainland near Zadar. In early March, the formation of the 4th Army was accompanied by the development of even closer coordination and liaison with the two squadrons. Air marker panels began to be used to show the forward line of friendly troops and to identify friendly vehicles, and liaison teams were deployed with the commanders of lower-level formations to communicate directly with supporting aircraft. Operations continued across Bosnia and Dalmatia in March, and were extended to include support to advances in the Lika region and during the capture of Sarajevo and Bihać. As Axis forces withdrew west towards Zagreb, the Hurricanes of No. 351 Squadron continued to harry them, ambushing convoys and rocketing artillery positions. Between 1 January and 31 March 1945, the Hurricanes of No. 351 Squadron not only flew from Cannae and Vis, but also from the airfields at Zemunik and Prkos on the mainland. In the same period the squadron lost four aircraft and suffered damage to fifteen others. 
Of the lost aircraft, two were destroyed as a result of a collision, one was lost after engine failure, and only one was lost to anti-aircraft fire. Four aircraft were damaged by anti-aircraft fire, the remainder being damaged by fragments from their own rockets. #### April–May 1945 From the beginning of April 1945, the combat operations of No. 351 Squadron were focussed on supporting the offensives by the 4th Army in the Lika and Gorski kotar regions, along the Croatian coastline and in Istria. In particular, there was hard fighting in the islands of the northern Adriatic. On 5 April, one Hurricane was lost near Babin Potok when it flew into a mountain while supporting the 19th Dalmatian Division's attack on elements of the 11th Ustaše Division, resulting in the death of the pilot. During this period, all operations of No. 351 Squadron were carried out from the airfield at Zemunik. Two aircraft were destroyed and eighteen damaged. Between 2 and 8 May, which proved to be the last week of the war, the RAF did not permit the squadron to operate due to political considerations regarding the future status of Trieste. During its existence, No. 351 Squadron flew 227 combat missions: 119 ground attack sorties, 87 reconnaissance missions, 19 maritime interdictions, and two search-and-rescue missions. Of the 23 pilots that passed through the squadron, four were killed and one captured. The squadron lost nine aircraft, and 38 others suffered damage, mainly from anti-aircraft fire. The squadron was released from RAF control on 16 May 1945. ### Yugoslav Air Force After the war, the Balkan Air Force's 16 surviving Hurricanes continued to be used by the Yugoslav Air Force (Serbo-Croatian: Jugoslovensko ratno vazduhoplovstvo; JRV), the air arm of Tito's new communist government. The Hurricanes flew with the 1st Fighter Regiment in 1945, followed by the Reconnaissance Aviation Regiment in 1947–1948, and the 103rd Reconnaissance Aviation Regiment between 1948 and 1951. Hurricanes remained in service with the post-war Yugoslav Air Force until the early 1950s. ## Surviving example A Hawker Hurricane Mk IV that flew as part of No. 351 Squadron is on display at the Museum of Aviation in Belgrade. This aircraft, serial number 20925, was manufactured in 1943, and remained in operation until it was withdrawn from service on 18 August 1952.
14,914,225
Krulak–Mendenhall mission
1,172,108,729
US government mission to South Vietnam in 1963
[ "1963 in Vietnam", "1963 in international relations", "Buddhist crisis", "Presidency of John F. Kennedy", "South Vietnam–United States relations", "United States National Security Council" ]
The Krulak–Mendenhall mission was a fact-finding expedition dispatched by the Kennedy administration to South Vietnam in early September 1963. The stated purpose of the expedition was to investigate the progress of the war by the South Vietnamese regime and its US military advisers against the Viet Cong insurgency. The mission was led by Victor Krulak and Joseph Mendenhall. Krulak was a major general in the United States Marine Corps, while Mendenhall was a senior Foreign Service Officer experienced in dealing with Vietnamese affairs. The four-day whirlwind trip was launched on September 6, 1963, the same day as a National Security Council (NSC) meeting, and came in the wake of increasingly strained relations between the United States and South Vietnam. Civil unrest gripped South Vietnam as Buddhist demonstrations against the religious discrimination of President Ngô Đình Diệm's Catholic regime escalated. Following the raids on Buddhist pagodas on August 21 that left a death toll ranging up to a few hundred, the US authorized investigations into a possible coup through a cable to US Ambassador Henry Cabot Lodge Jr. In their submissions to the NSC, Krulak presented an optimistic report on the progress of the war, while Mendenhall presented a bleak picture of military failure and public discontent. Krulak disregarded the popular support for the Viet Cong, feeling that the Vietnamese soldiers' efforts in the field would not be affected by the public's unease with Diệm's policies. Mendenhall focused on gauging the sentiment of urban Vietnamese and concluded that Diệm's policies increased the possibility of religious civil war, and were causing the South Vietnamese to believe that life under the Viet Cong would improve the quality of their lives. The divergent reports led US President John F. Kennedy to ask his two advisers "You two did visit the same country, didn't you?" The inconclusive report was the subject of bitter and personal debate among Kennedy's senior advisers. Various courses of action towards Vietnam were discussed, such as fostering a regime change or taking a series of selective measures designed to cripple the influence of Ngô Đình Nhu, Diệm's brother and chief political adviser. Nhu and his wife Madame Ngô Đình Nhu were seen as the major causes of the political problems in South Vietnam. The inconclusive result of Krulak and Mendenhall's expedition resulted in a follow-up mission, the McNamara–Taylor mission. ## Background After the Huế Phật Đản shootings on May 8, civil unrest broke out in South Vietnam. Nine Buddhists were shot by the Roman Catholic regime of President Ngô Đình Diệm after defying a government ban on the flying of Buddhist flags on Vesak, the birthday of Gautama Buddha and marching in an anti-government protest. Following the shootings, Buddhist leaders began to lobby Diệm for religious equality and compensation and justice for the families of the victims. With Diệm remaining recalcitrant, the protests escalated. The self-immolation of Buddhist monk Thích Quảng Đức at a busy Saigon intersection became a public relations disaster for the Diệm regime, as photos of the event made front-page headlines worldwide and became a symbol of Diệm's policies. As protests continued, the Army of the Republic of Vietnam (ARVN) Special Forces loyal to Diệm's brother Ngô Đình Nhu conducted the Xá Lợi Pagoda raids on August 21, leaving a death toll estimated to be up to several hundred and causing extensive damage under the declaration of martial law. 
Universities and high schools were closed amid mass pro-Buddhist protests. In the meantime, the fight against the Viet Cong insurgency had begun to lose intensity amid rumors of sectarian infighting amongst ARVN troops. This was compounded by the plotting of a coup by various ARVN officers, which distracted attention from the insurgency. After the pagoda raids, the Kennedy administration sent Cable 243 to the US Embassy, Saigon, ordering an exploration of alternative leadership possibilities. ## Initiation and expedition At the end of the National Security Council (NSC) meeting on September 6, it was agreed that the priority was to obtain more information on the situation in Vietnam. US Secretary of Defense Robert McNamara proposed sending Marine Corps Major General Victor Krulak on an immediate fact-finding trip. The NSC agreed that Joseph Mendenhall—a Foreign Service Officer with Vietnam experience—would accompany him and the pair began the mission later that day. On their return trip to Washington, D.C., Krulak and Mendenhall were to bring John Mecklin and Rufus Phillips back from Saigon to report. Mecklin was the United States Information Service (USIS) director, while Phillips served as the director of rural programs for United States Operations Mission (USOM) and as an advisor for the Strategic Hamlet Program. The State Department sent the Saigon embassy a detailed cable containing questions about Vietnamese public opinion across all strata of society. In Krulak's words, the objective was to observe "the effect of recent events upon the attitudes of the Vietnamese in general, and upon the war effort against the Viet Cong". In a four-day trip, the two men traveled throughout Vietnam before returning to Washington to file their reports. Krulak visited 10 locations in all four Corps zones of the ARVN and spoke with US Ambassador Henry Cabot Lodge, Jr., the head of US forces in Vietnam General Paul Harkins and his staff, 87 US advisors and 22 ARVN officers. Mendenhall went to Saigon, Huế, Da Nang and several other provincial cities, talking primarily to Vietnamese friends. Their estimates of the situation were the opposite. Mecklin wrote afterwards that it "was a remarkable assignment, to travel twenty-four thousand miles and assess a situation as complex as Vietnam and return in just four days. It was a symptom of the state the US Government was in". The mission was marked by the tension between its leaders. Mendenhall and Krulak intensely disliked one another, speaking to each other only when necessary. Mecklin and Krulak became embroiled in a dispute during the return flight. Krulak disapproved of Mecklin's decision to bring television footage that had been censored by the Diệm regime back to the US, believing the action was a violation of sovereignty. After a long and bitter argument aboard the aircraft, Krulak called upon Mecklin to leave the film in Alaska during a refueling stop at Elmendorf Air Force Base, further suggesting that the USIS director remain with the film in Alaska. ## Report and debriefing The NSC reconvened on the morning of September 10 to hear the delegation's reports. Mendenhall had experience in Vietnamese affairs, having served under the previous US Ambassador Elbridge Durbrow. Durbrow had urged Diệm on a number of occasions to implement political reform. Krulak was a Marine known for his belief in using military action to achieve foreign affairs objectives. 
His temperament earned him the nickname "Brute", which originated from his wrestling career at the Naval Academy. The Deputy Secretary of Defense Roswell Gilpatric noted that Mendenhall was regarded "with great suspicion on the Virginia side of the river [the Pentagon, headquarters of the Defense Department]", whereas Krulak was "universally liked and trusted in the Pentagon, both on the civilian and military side". The backgrounds of Krulak and Mendenhall were reflected in their opposite analyses of the war. Krulak gave a highly optimistic analysis of military progress and discounted the effect of the Buddhist crisis on the ARVN's fight against the Viet Cong. His conclusion was that "[t]he shooting war is still going ahead at an impressive pace. It has been affected adversely by the political crisis, but the impact is not great." Krulak asserted that a substantial amount of fighting was still required, particularly in the Mekong Delta, which was regarded as the Viet Cong's strongest region. Krulak asserted that all levels of the ARVN officer corps were conscious of the Buddhist crisis but he believed that most had not allowed religious beliefs to negatively affect their internal military relationships to a substantial degree. He believed that the ARVN officers were obedient and could be expected to carry out any order they regarded as lawful. Krulak further asserted that the political crisis had not significantly damaged bilateral military ties. Turning to the Vietnamese view of their leaders, Krulak reported that there was dissatisfaction among the officers, which he believed was mainly directed at Ngô Đình Nhu, the younger brother of Diệm who was widely seen as the power behind the regime. Krulak believed that most officers wanted to see the back of Nhu but that few were willing to resort to a coup. Krulak reported that three US advisers strongly criticized the Nhus and advocated the pair's departure from South Vietnam to avoid a public relations disaster at the United Nations. Krulak felt that these problems were outweighed by what he believed to be a successful military effort and that the war would be won irrespective of the political leadership. He predicted that the ARVN had little ability to facilitate an improvement in governance and felt that they would not flex whatever muscle they had. Krulak optimistically concluded,

> Excluding the very serious political and military factors external to Vietnam, the Viet Cong war will be won if the current US military and sociological programs are pursued, irrespective of the grave defects in the ruling regime.

Mendenhall disagreed and argued that the anti-Diệm sentiment had reached a level where the collapse of civilian rule was possible. He reported a "reign of terror" in Saigon, Huế and Da Nang, observing that the popular hatred usually reserved for the Nhus had spread to the generally respected Diệm. Mendenhall asserted that many Vietnamese had come to believe that life under Diệm was worse than being ruled by the Viet Cong. Mendenhall thought that a civil war on religious grounds was possible. He predicted that the war could only be won with a regime change, otherwise South Vietnam would collapse into sectarian infighting or a massive communist offensive. The contradictory nature of the reports prompted Kennedy's famous query, "You two did visit the same country, didn't you?"
### Debate Krulak attempted to explain the contrasting assessments by pointing out that Mendenhall had surveyed urban areas, while he ventured into the countryside "where the war is". Krulak asserted that political issues in Saigon would not hamper military progress, stating "We can stagger through to win the war with Nhu remaining in control." Assistant Secretary of State Roger Hilsman asserted that the difference between the contrasting reports "was the difference between a military and a political view". During the debate over the differences in outlook, Mendenhall asserted that Saigon had suffered "a virtually complete breakdown" following the pagoda raids. Mendenhall reported that Vietnamese public servants feared being seen with Americans. He recalled one visit when he had to remain quiet while his Vietnamese host crept around the room, searching for hidden microphones. Mendenhall asserted that "Saigon was heavy with an atmosphere of fear and hate" and that the people feared Diệm more than the Viet Cong. He reported many public servants no longer slept at home due to a fear of midnight arrests by Nhu's secret police. Many officials had recently spent the bulk of their day negotiating the release of their children, who had been incarcerated for participating in pro-Buddhist protests. Mendenhall asserted that internal turmoil was now a higher priority than the war against the communists. Mendenhall denounced Saigon's reconciliation and goodwill gestures towards the Buddhists as a public relations stunt. He reported that monks from provincial areas who had been arrested in Saigon for demonstrating were not returned to their places of origin as promised. Mendenhall noted that when the monks were released, Diệm's officials retained their identification papers. This resulted in their re-arrest upon attempting to leave the capital. The monks were then branded as Viet Cong because they did not have government identification papers. As news of such tactics spread across the capital, some monks sought refuge in the Saigon homes of ARVN officers. Mendenhall insisted that the United States was responsible for the situation because it had helped the Ngo family gain power, armed and funded it. He reasoned that as Diệm used the arms against his own people, Washington also shared responsibility. He stated that "a refusal to act would be just as much interference in Vietnam's affairs as acting". According to the Pentagon Papers, "the critical failure of both reports was to understand the fundamental political role that the army was coming to play in Vietnam". The papers concluded the ARVN was the only institution capable of deposing and replacing Diệm. Diệm and Nhu fully realized the potential threat, responding with the divide and conquer paradigm. They usurped the prerogative of senior officer promotion and appointed generals based on loyalty to the palace, giving orders directly to officers. This action caused deep distrust among the senior officers and fragmented their power. Krulak failed to realize that if the situation deteriorated to the point where discontent with Diệm posed the possibility of a communist victory, the generals would intervene in politics because of what would happen to them under communist rule. Neither Krulak nor Mendenhall seemed to anticipate that if a military junta came to power, the divisive effect of Diệm's promotion politics would manifest itself as the generals vied for power. 
Neither of the pair put any emphasis on the detrimental effects that would have been caused by political infighting among the generals. During the NSC meeting, Frederick Nolting—who preceded Lodge as US Ambassador to South Vietnam—took issue with Mendenhall's analysis. Regarded as a Diệm apologist, Nolting pointed out that Mendenhall had been pessimistic about South Vietnam for several years. Mecklin reinforced and pushed Mendenhall's view further, calling on the administration to apply direct pressure on Saigon by suspending non-military aid, in an attempt to cause a regime change. In Mecklin's words:

> This would unavoidably be dangerous. There was no way to be sure how events would develop. It was possible, for example, that the Vietnamese forces might fragment into warring factions, or that the new government would be so incompetent and/or unstable that the effort against the Viet Cong would collapse. The US should therefore resolve now to introduce American combat forces if necessary to prevent a Communist triumph midst the debris of the Diệm regime.

The Pentagon Papers opined that Mecklin understood the pitfalls of a military junta that Krulak and Mendenhall had overlooked. Regardless, Mecklin concluded that the US should proceed in fostering a regime change, accept the consequences, and contemplate the introduction of US combat troops to stop a possible Viet Cong victory. The NSC meeting then heard Phillips' bleak prognosis of the situation in the Mekong Delta. He claimed that the Strategic Hamlet Program was a shambles in the delta, stating that the hamlets were "being chewed to pieces by the Viet Cong". When it was noted that Phillips had recently witnessed a battle in the delta, Kennedy asked Phillips for his assessment. Phillips replied: "Well, I don't like to contradict General Krulak, but I have to tell you, Mr. President, that we're not winning the war, particularly in the delta. The troops are paralysed, they're in the barracks, and this is what is actually going on in one province that's right next to Saigon." Phillips asserted that removing Nhu was the only way to improve the situation, and that the only means of removing Nhu was to bring in Colonel Edward Lansdale, the CIA operative who had consolidated Diệm's position a decade earlier, a proposal that Kennedy dismissed. Phillips recommended three measures:

- Terminate aid to the ARVN Special Forces of Colonel Lê Quang Tung, who took his orders directly from the palace and not the army command. Tung had led the raids on Buddhist pagodas on August 21 in which hundreds were killed and widespread physical destruction occurred. The Special Forces were used mainly for repressing dissidents rather than fighting communists.
- Cut funds to the Motion Picture Center, which produced hagiographic films about the Nhus.
- Pursue covert actions aimed at dividing and discrediting Tung and Major General Tôn Thất Đính. Đính was the military governor of Saigon and the Commander of the ARVN III Corps. Đính was the youngest general in the history of the ARVN, primarily due to his loyalty to the Ngô family.

In the ensuing debate, Kennedy asked Phillips what would happen if Nhu responded to the cuts by diverting money away from the army to prop up his personal schemes. When Kennedy asked if Nhu would blame the US for any resulting military deterioration, Phillips replied that the ARVN would revolt, because the ARVN officers who were on Viet Cong hit lists would not allow the communists to run loose.
Phillips said that if Nhu tried to divert military aid away from the troops to prop up his personal schemes, the Americans could deliver the money straight to the countryside in suitcases. ### Robust disagreement The meeting became confrontational when Krulak interrupted Phillips, asserting that American military advisers on the ground rejected the USOM officer's assessments. Phillips conceded that although the overall military situation had improved, this was not the case in the crucial delta areas. Phillips noted that the provincial military adviser in Long An Province adjacent to Saigon, had reported that the Viet Cong had overrun 200 Strategic Hamlets in the previous week, forcing the villagers to dismantle the settlement. McNamara shook his head at the radically divergent reports. When Krulak derided Phillips, Assistant Secretary of State W. Averell Harriman could no longer restrain himself and called the general "a damn fool". Phillips diplomatically took over from Harriman and asserted that it was a battle for hearts and minds rather than pure military metrics. Mecklin generated more disquiet by advocating the use of American combat troops to unseat the Diệm regime and win the war. He asserted that "the time had come for the US to apply direct pressure to bring about a change of government, however distasteful". Mecklin asserted that there would be a backlash if aid was simply cut, so US troops would have to directly fix the problem. Mecklin later wrote to USIS head Edward R. Murrow to insist that US troops would welcome combat in the case of a communist escalation. On the journey back to the United States, he had asserted that the use of American combat forces would encourage the coup and lift morale against the Viet Cong. He also called for the engineering of a coup. He called for the US to show more intent. The pessimism expressed by Phillips and Mecklin surprised Nolting who said that Phillips' account "surprised the hell out of me. I couldn't believe my ears." Nolting asserted that Mecklin was psychologically vulnerable to being brainwashed because he had recently split with his wife. At the time, Mecklin was living with journalists David Halberstam and Neil Sheehan of The New York Times and UPI respectively. Halberstam and Sheehan both won Pulitzer Prizes and were strident critics of Diệm. ## Aftermath One strategy that received increasing consideration in NSC meetings—as well as at the US Embassy, Saigon and in Congress—was a suspension of non-military aid to Diệm. After the erroneous Voice of America broadcast on August 26, which announced an aid suspension, Lodge was given the discretion on August 29 to suspend aid if it would facilitate a coup. In the meantime, the US Senate began to pressure the administration to take action against Diệm. Hilsman was lobbied by the Senate Subcommittee on the Far East. Senator Frank Church informed the administration of his intention to introduce a resolution condemning Diệm's anti-Buddhist repression and calling for the termination of aid unless religious equality was instituted. This resulted in Church agreeing to temporarily delay the introduction of the bill to avoid embarrassing the administration. While the delegation was in Vietnam, the strategy of using a selective aid suspension to pressure Diệm into ending religious discrimination was actively discussed at the State Department. In a television interview on September 8, AID Director David Bell warned that Congress might cut aid to South Vietnam if Diệm did not change his policies. 
On September 9, Kennedy backed away from Bell's comments, stating, "I don't think we think that [a reduction in aid to Saigon] would be helpful at this time." On September 11, the day after Krulak and Mendenhall presented their reports, Lodge reversed his position. In a long cable to Washington, he advocated considering a suspension of non-military aid to spark the toppling of Diệm. Lodge concluded that the US could not get what it wanted from Diệm and had to force events to come to a head. After another White House meeting on the same day, Senator Church was informed that his bill was acceptable, and he introduced the legislation into the Senate.

The National Security Council reconvened on September 17 to consider two of Hilsman's proposals for dealing with Diệm. The plan favored by Hilsman and his State Department colleagues was the "pressures and persuasion track". This involved an escalating series of measures at both the public and private levels, including selective aid suspension and pressure on Diệm to remove Nhu from power. The alternative was the "reconciliation with a rehabilitated GVN track", which involved a public appearance of acquiescence to Diệm's recent actions and an attempt to salvage as much as possible from the situation. Both proposals assumed that an ARVN coup was not forthcoming. Because the Krulak–Mendenhall report had been inconclusive, a follow-up mission, the McNamara–Taylor mission, led by McNamara and Chairman of the Joint Chiefs of Staff Maxwell D. Taylor, was sent to Vietnam.
239,413
Arctic tern
1,172,323,336
Bird that breeds in the Arctic and sub-Arctic and migrates to the Antarctic
[ "Birds described in 1763", "Birds of the Arctic", "Holarctic birds", "Sterna", "Taxa named by Erik Pontoppidan" ]
The Arctic tern (Sterna paradisaea) is a tern in the family Laridae. This bird has a circumpolar breeding distribution covering the Arctic and sub-Arctic regions of Europe (as far south as Brittany), Asia, and North America (as far south as Massachusetts). The species is strongly migratory, seeing two summers each year as it migrates along a convoluted route from its northern breeding grounds to the Antarctic coast for the southern summer and back again about six months later. Recent studies have shown average annual round-trip lengths of about 70,900 km (44,100 mi) for birds nesting in Iceland and Greenland and about 48,700 km (30,300 mi) for birds nesting in the Netherlands. These are by far the longest migrations known in the animal kingdom. The Arctic tern nests once every one to three years (depending on its mating cycle).

Arctic terns are medium-sized birds. They have a length of 28–39 cm (11–15 in) and a wingspan of 65–75 cm (26–30 in). Their plumage is mainly grey and white, with a red/orange beak and feet, a white forehead, a black nape and crown (streaked white), and white cheeks. The grey mantle is 305 mm (12.0 in), and the scapulae are fringed brown, some tipped white. The upper wing is grey with a white leading edge, and the collar is completely white, as is the rump. The deeply forked tail is whitish, with grey outer webs.

Arctic terns are long-lived birds, with many reaching fifteen to thirty years of age. They eat mainly fish and small marine invertebrates. The species is abundant, with an estimated two million individuals. While the trend in the number of individuals in the species as a whole is not known, exploitation in the past has reduced this bird's numbers in the southern reaches of its range.

## Etymology

The genus name Sterna is derived from Old English "stearn", "tern". The specific epithet paradisaea is from Late Latin paradisus, "paradise". The Scots names pictarnie, tarrock and their many variants are believed to be onomatopoeic, derived from the distinctive call. Because the Arctic tern is difficult to distinguish from the common tern, all of these informal names are shared between the two species.

## Distribution and migration

The Arctic tern has a continuous worldwide circumpolar breeding distribution; there are no recognized subspecies. It can be found in coastal regions of the cooler temperate parts of North America and Eurasia during the northern summer. During the southern summer, it can be found at sea, reaching the northern edge of the Antarctic ice.

The Arctic tern is famous for its migration; it flies from its Arctic breeding grounds to the Antarctic and back again each year. The shortest distance between these areas is 19,000 km (12,000 mi). The long journey ensures that this bird sees two summers per year and more daylight than any other creature on the planet. One example of this bird's remarkable long-distance flying abilities involves an Arctic tern ringed as an unfledged chick on the Farne Islands, Northumberland, UK, in the northern summer of 1982, which reached Melbourne, Australia, in October, just three months after fledging, a journey of more than 22,000 km (14,000 mi). Another example is that of a chick ringed in Labrador, Canada, on 23 July 1928; it was found in South Africa four months later.

A 2010 study using tracking devices attached to the birds showed that the above examples are not unusual for the species. In fact, it found that previous research had seriously underestimated the annual distances travelled by the Arctic tern.
Eleven birds that bred in Greenland or Iceland covered 70,900 km (44,100 mi) on average in a year, with a maximum of 81,600 km (50,700 mi). The difference from previous estimates is due to the birds taking meandering courses rather than following a straight route, as was previously assumed; they follow a somewhat convoluted course in order to take advantage of prevailing winds. The average Arctic tern lives about 30 years and will, based on the above research, travel some 2.4 million km (1.5 million mi) during its lifetime, the equivalent of more than three round trips from the Earth to the Moon.

A 2013 tracking study of half a dozen Arctic terns breeding in the Netherlands showed average annual migrations of c. 48,700 km (30,300 mi). On their way south, these birds roughly followed the coastlines of Europe and Africa. Arctic terns usually migrate sufficiently far offshore that they are rarely seen from land outside the breeding season.

## Description and taxonomy

The Arctic tern is a medium-sized bird around 33–36 cm (13–14 in) from the tip of its beak to the tip of its tail. The wingspan is 76–85 cm (30–33 in) and the weight 86–127 g (3.0–4.5 oz). The beak is dark red, as are the short legs and webbed feet. Like most terns, the Arctic tern has high-aspect-ratio wings and a deeply forked tail.

The adult plumage is grey above, with a black nape and crown and white cheeks. The upperwings are pale grey, with the area near the wingtip being translucent. The tail is white, and the underparts are pale grey. Both sexes are similar in appearance. The winter plumage is similar, but the crown is whiter and the bill is darker. Juveniles differ from adults in having a black bill and legs, "scaly"-looking wings, a mantle with dark feather tips, a dark carpal wing bar, and short tail streamers. During their first summer, juveniles also have a whiter forecrown.

The species has a variety of calls, the two most common being the alarm call, made when possible predators (such as humans or other mammals) enter the colonies, and the advertising call. While the Arctic tern is similar to the common and roseate terns, its colouring, profile, and call are slightly different. Compared to the common tern, it has a longer tail and a mono-coloured bill, while the main differences from the roseate are its slightly darker colour and longer wings. The Arctic tern's call is more nasal and rasping than that of the common, and is easily distinguishable from that of the roseate.

This bird's closest relatives are a group of South Polar species: the South American (Sterna hirundinacea), Kerguelen (S. virgata), and Antarctic (S. vittata) terns. The immature plumages of the Arctic tern were originally described as separate species, Sterna portlandica and Sterna pikei.

## Reproduction

Breeding begins around the third or fourth year. Arctic terns mate for life and, in most cases, return to the same colony each year. Courtship is elaborate, especially in birds nesting for the first time. It begins with a so-called "high flight", in which a female chases the male to a high altitude and then slowly descends. This display is followed by "fish flights", in which the male offers fish to the female. Courtship on the ground involves strutting with a raised tail and lowered wings, after which both birds usually fly and circle each other. Both sexes agree on a site for a nest, and both will defend it. During this time, the male continues to feed the female. Mating occurs shortly after this.
Breeding takes place in colonies on coasts, islands, and occasionally inland on tundra near water. The species often forms mixed flocks with the common tern. It lays from one to three eggs per clutch, most often two. It is one of the most aggressive terns, fiercely defensive of its nest and young. It will attack humans and large predators, usually striking the top or back of the head. Although it is too small to cause serious injury to an animal of a human's size, it can still draw blood, and it is capable of repelling many raptorial birds, polar bears, and smaller mammalian predators such as foxes and cats.

The nest is usually a depression in the ground, which may or may not be lined with bits of grass or similar materials. The eggs are mottled and camouflaged. Both sexes share incubation duties. The young hatch after 22–27 days and fledge after 21–24 days. If the parents are disturbed and flush from the nest frequently, the incubation period can be extended to as long as 34 days.

When hatched, the chicks are downy. Neither altricial nor precocial, they begin to move around and explore their surroundings within one to three days after hatching, although they usually do not stray far from the nest. Chicks are brooded by the adults for the first ten days after hatching, and both parents care for them. Chick diets always include fish, and parents selectively bring larger prey items to chicks than they eat themselves. Males bring more food than females. The parents feed the chicks for roughly a month before slowly weaning them. After fledging, the juveniles learn to feed themselves, including the difficult method of plunge-diving. They fly south to winter with the help of their parents.

Arctic terns are long-lived birds that spend considerable time raising only a few young, and are thus said to be K-selected. A 1957 study in the Farne Islands estimated an annual survival rate of 82%.

## Ecology and behaviour

The diet of the Arctic tern varies depending on location and time, but is usually carnivorous. In most cases, it eats small fish or marine crustaceans. Fish comprise the most important part of the diet, accounting for more of the biomass consumed than any other food. Prey are typically immature (1–2-year-old) shoaling fish such as herring, cod, sandlances, and capelin. Among the marine crustaceans eaten are amphipods, crabs, and krill. Sometimes, these birds also eat molluscs, marine worms, or berries, and on their northern breeding grounds, insects.

Arctic terns sometimes dip down to the water's surface to catch prey swimming close to it. They may also chase insects in the air when breeding. It is also thought that Arctic terns may, in spite of their small size, occasionally engage in kleptoparasitism by swooping at other birds to startle them into releasing their catches. Several species are targeted: conspecifics, other terns (such as the common tern), and some auk and grebe species.

While nesting, Arctic terns are vulnerable to predation by cats and other animals. Besides being a competitor for nesting sites, the larger herring gull steals eggs and hatchlings. Camouflaged eggs help prevent this, as do isolated nesting sites. Scientists have experimented with bamboo canes erected around tern nests; although they found fewer predation attempts in the caned areas than in the control areas, the canes did not reduce the probability of predation success per attempt. While feeding, skuas, gulls, and other tern species will often harass the birds and steal their food.
## Conservation status

The total population of the Arctic tern is estimated at more than two million individuals, with more than half of the population in Europe. The breeding range is very large, and although the population is considered to be decreasing, the species is evaluated as of least concern by the IUCN. Arctic terns are among the species to which the Agreement on the Conservation of African-Eurasian Migratory Waterbirds applies.

The population in New England was reduced in the late nineteenth century by hunting for the millinery trade. Exploitation continues in western Greenland, where the population of the species has been reduced greatly since 1950. In Iceland, the Arctic tern has been regionally uplisted to Vulnerable as of 2018, owing to the crash of sandeel (Ammodytes spp.) stocks.

In the southern part of its range, the Arctic tern has been declining in numbers, largely because of a lack of food. However, most of the species' range is extremely remote, and there is no apparent trend in the species as a whole. The Arctic tern's dispersal pattern is affected by changing climatic conditions, and its ability to feed in its Antarctic wintering areas depends on sea-ice cover; however, unlike breeding species, it is able to move to a different area if necessary, and it can therefore be used as a control when investigating the effect of climate change on breeding species.

## Cultural depictions

The Arctic tern has appeared on the postage stamps of several countries and dependent territories. The territories include Åland, Alderney, and the Faroe Islands; the countries include Canada, Finland, Iceland, and Cuba.